US20100105364A1 - Mobile terminal and control method thereof - Google Patents

Mobile terminal and control method thereof

Info

Publication number
US20100105364A1
US20100105364A1
Authority
US
United States
Prior art keywords
web page
information
mobile terminal
controller
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/433,133
Other versions
US9129011B2
Inventor
Seung-Jin Yang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Assigned to LG ELECTRONICS INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: YANG, SEUNG-JIN
Publication of US20100105364A1
Application granted
Publication of US9129011B2
Legal status: Active
Adjusted expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G06F 16/3331 - Query processing
    • G06F 16/3332 - Query translation
    • G06F 16/3334 - Selection or weighting of terms from queries, including natural language queries
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04B - TRANSMISSION
    • H04B 1/00 - Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B 1/38 - Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B 1/40 - Circuits
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 1/00 - Substation equipment, e.g. for use by subscribers
    • H04M 1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72445 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting Internet browser applications
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 3/00 - Automatic or semi-automatic exchanges
    • H04M 3/42 - Systems providing special services or facilities to subscribers
    • H04M 3/487 - Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M 3/493 - Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M 3/4938 - Interactive information services comprising a voice browser which renders and interprets, e.g. VoiceXML
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 2250/00 - Details of telephonic subscriber devices
    • H04M 2250/22 - Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 2250/00 - Details of telephonic subscriber devices
    • H04M 2250/74 - Details of telephonic subscriber devices with voice recognition means

Definitions

  • the present invention relates to a mobile terminal and corresponding method for searching objects on a displayed web page.
  • Mobile terminals now provide many additional services besides the basic call service. For example, users can now access the Internet, play games, watch videos, listen to music, capture images and videos, record audio files, etc. Mobile terminals also now provide broadcasting programs such that users can watch television shows, sporting programs, videos, etc.
  • Mobile terminals also provide web browsing functions. However, because the mobile terminal display is small in size, it is difficult to select items or links displayed on a particular web page. It is also difficult to search for information using web browsing functions.
  • one object of the present invention is to address the above-noted and other problems.
  • Another object of the present invention is to provide a mobile terminal and corresponding method for performing an information search through a voice command in a web browsing mode.
  • Yet another object of the present invention is to provide a mobile terminal and corresponding method for entering an information search mode from a web browsing mode through a voice command.
  • Another object of the present invention is to provide a mobile terminal and corresponding method for inputting a search word on an information search window through a voice command in a web browsing mode.
  • Still another object of the present invention is to provide a mobile terminal and corresponding method for indicating a search execution through a voice command in a web browsing mode.
  • Another object of the present invention is to provide a mobile terminal and corresponding method for selecting through a voice command information searched in a web browsing mode, and displaying a web page relating to the selected information.
  • Still yet another object of the present invention is to provide a mobile terminal and corresponding method for searching information on a currently displayed web page based on a voice command.
  • the present invention provides in one aspect a mobile terminal including a wireless communication unit configured to access a web page, a display unit configured to display the accessed web page, a receiving unit configured to receive input voice information, and a controller configured to convert the input voice information into text information, to search the displayed web page for objects that include the converted text information, and to control the display unit to distinctively display found objects that include the converted text information from other information displayed on the web page.
  • the present invention provides a method of controlling a mobile terminal, and which includes displaying an accessed web page on a display of the mobile terminal, receiving input voice information, converting the input voice information into text information, searching the displayed web page for objects that include the converted text information, and distinctively displaying found objects that include the converted text information from other information displayed on the web page.
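As a rough illustration of this claimed flow, the sketch below (in Java; all class and method names are hypothetical, and the speech-to-text step is reduced to a plain string argument) searches the objects of a displayed page for the converted text and flags matches for distinctive display:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the claimed search flow, assuming a hypothetical
// WebObject type for links/text items parsed from the displayed page.
public class WebPageVoiceSearch {

    static class WebObject {
        final String text;
        boolean highlighted;          // the "distinctively displayed" flag
        WebObject(String text) { this.text = text; }
    }

    // Search the displayed page for objects containing the converted text
    // and mark them so the display unit can render them distinctively.
    static List<WebObject> search(List<WebObject> pageObjects, String convertedText) {
        List<WebObject> found = new ArrayList<>();
        String needle = convertedText.toLowerCase();
        for (WebObject obj : pageObjects) {
            if (obj.text.toLowerCase().contains(needle)) {
                obj.highlighted = true;   // e.g., change color, underline, enlarge
                found.add(obj);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        List<WebObject> page = List.of(
                new WebObject("Weather today"),
                new WebObject("Sports news"),
                new WebObject("Weather forecast"));
        for (WebObject o : search(page, "weather")) {
            System.out.println("highlighted: " + o.text);
        }
    }
}
```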
  • FIG. 1 is a block diagram of a mobile terminal according to an embodiment of the present invention.
  • FIG. 2A is a front perspective view of the mobile terminal according to an embodiment of the present invention.
  • FIG. 2B is a rear perspective view of the mobile terminal according to an embodiment of the present invention.
  • FIGS. 3A and 3B are front views showing an operation state of the mobile terminal according to an embodiment of the present invention.
  • FIG. 4 is a conceptual view showing a proximity depth measured by a proximity sensor.
  • FIG. 5 is a flowchart illustrating a menu voice control method in a mobile terminal according to an embodiment of the present invention.
  • FIG. 6A includes overviews of display screens illustrating a method for activating a voice recognition function in a mobile terminal according to an embodiment of the present invention.
  • FIGS. 6B and 6C include overviews of display screens illustrating a method for outputting help information in a mobile terminal according to an embodiment of the present invention.
  • FIG. 7A is a flowchart illustrating a method for recognizing a voice command in a mobile terminal according to an embodiment of the present invention.
  • FIG. 7B is an overview illustrating a method for recognizing a voice command in a mobile terminal according to an embodiment of the present invention.
  • FIG. 8 includes overviews of display screens illustrating a method for displaying a menu in cooperation with a rate of voice recognition in a mobile terminal according to an embodiment of the present invention.
  • FIG. 9 includes overviews of display screens illustrating a method for recognizing a voice command in a mobile terminal according to an embodiment of the present invention.
  • FIG. 10 is an overview illustrating an organization of databases used for recognizing a voice command in a mobile terminal according to an embodiment of the present invention.
  • FIG. 11 is an overview showing a web browser of a mobile terminal according to an embodiment of the present invention.
  • FIG. 13 is an overview showing a method for setting a database of objects displayed on a web page according to an embodiment of the present invention.
  • FIG. 14 is an overview showing a method for entering an information search mode from a web browsing mode in a mobile terminal according to an embodiment of the present invention.
  • FIGS. 15A to 15C are overviews showing a method for indicating that a mobile terminal has entered an information search mode according to an embodiment of the present invention.
  • FIGS. 16A and 16B are overviews showing a method for inputting a search word in an information search mode according to an embodiment of the present invention.
  • FIGS. 18A and 18B are exemplary views showing a method for indicating information search according to an embodiment of the present invention.
  • FIG. 19 is a flowchart showing a method for searching a user's desired information in a web page according to an embodiment of the present invention.
  • FIGS. 20A and 20B are overviews showing a method for selecting specific information among information obtained as a result of an information search according to an embodiment of the present invention.
  • the mobile terminal described in the specification can include a cellular phone, a smart phone, a laptop computer, a digital broadcasting terminal, personal digital assistants (PDA), a portable multimedia player (PMP), a navigation system and so on.
  • FIG. 1 is a block diagram of a mobile terminal 100 according to an embodiment of the present invention.
  • the mobile terminal 100 includes a radio communication unit 110 , an audio/video (A/V) input unit 120 , a user input unit 130 , a sensing unit 140 , an output unit 150 , a memory 160 , an interface 170 , a controller 180 , and a power supply 190 .
  • Not all of the components shown in FIG. 1 are essential parts and the number of components included in the mobile terminal can be varied.
  • the radio communication unit 110 includes at least one module that enables radio communication between the mobile terminal 100 and a radio communication system or between the mobile terminal 100 and a network in which the mobile terminal 100 is located.
  • the radio communication unit 110 includes a broadcasting receiving module 111 , a mobile communication module 112 , a wireless Internet module 113 , a local area communication module 114 and a position information module 115 .
  • the broadcasting receiving module 111 receives broadcasting signals and/or broadcasting related information from an external broadcasting management server through a broadcasting channel.
  • the broadcasting channel can include a satellite channel and a terrestrial channel.
  • the broadcasting management server can be a server that generates and transmits broadcasting signals and/or broadcasting related information or a server that receives previously created broadcasting signals and/or broadcasting related information and transmits the broadcasting signals and/or broadcasting related information to a terminal.
  • the broadcasting signals can include not only TV broadcasting signals, radio broadcasting signals and data broadcasting signals but also signals in the form of a combination of a TV broadcasting signal and a radio broadcasting signal.
  • the broadcasting related information can be information on a broadcasting channel, a broadcasting program or a broadcasting service provider.
  • the broadcasting related information can also be provided through a mobile communication network. In this instance, the broadcasting related information can be received by the mobile communication module 112 .
  • the broadcasting related information can also exist in various forms.
  • the broadcasting related information can exist in the form of an electronic program guide (EPG) of the digital multimedia broadcasting (DMB) system or in the form of an electronic service guide (ESG) of the digital video broadcast-handheld (DVB-H) system.
  • the broadcasting receiving module 111 receives broadcasting signals using various broadcasting systems.
  • the broadcasting receiving module 111 can receive digital broadcasting signals using digital broadcasting systems such as the digital multimedia broadcasting-terrestrial (DMB-T) system, the digital multimedia broadcasting-satellite (DMB-S) system, the media forward link only (MediaFLO) system, the DVB-H system and the integrated services digital broadcast-terrestrial (ISDB-T) system.
  • the broadcasting receiving module 111 can also be constructed to be suited to broadcasting systems providing broadcasting signals other than the above-described digital broadcasting systems.
  • the broadcasting signals and/or broadcasting related information received through the broadcasting receiving module 111 can also be stored in the memory 160 .
  • the mobile communication module 112 transmits/receives a radio signal to/from at least one of a base station, an external terminal, and a server on a mobile communication network.
  • the radio signal can include a voice call signal, a video telephony call signal or data in various forms according to transmission and receiving of text/multimedia messages.
  • the wireless Internet module 113 corresponds to a module for wireless Internet access and can be included in the mobile terminal 100 or externally attached to the mobile terminal 100 .
  • Wireless LAN (WLAN) (Wi-Fi), wireless broadband (Wibro), world interoperability for microwave access (Wimax), high speed downlink packet access (HSDPA) and so on can be used as wireless Internet techniques.
  • the local area communication module 114 corresponds to a module for local area communication.
  • Bluetooth, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB) and ZigBee can be used as a local area communication technique.
  • the position information module 115 confirms or obtains the position of the mobile terminal 100 .
  • a global positioning system (GPS) module is a representative example of the position information module 115 .
  • the GPS module 115 can calculate information on distances between one point (object) and at least three satellites and information on the time when the distance information is measured, and apply trigonometry to the obtained distance information to obtain three-dimensional position information on the point (object) according to latitude, longitude and altitude coordinates at a predetermined time.
  • a method of calculating position and time information using three satellites and correcting the calculated position and time information using another satellite is also used.
  • the GPS module 115 continuously calculates the current position in real time and calculates velocity information using the position information.
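The underlying distance measurement follows from the signal's time of flight, d = c·Δt. A minimal sketch of that step is below (the travel times are made-up values for demonstration); the resulting distances, together with the measurement time, are the inputs to the trigonometric position fix described above:

```java
// Illustrative only: the distance to each satellite is the speed of
// light multiplied by the signal's travel time.
public class SatelliteDistance {
    static final double SPEED_OF_LIGHT = 299_792_458.0; // m/s

    static double distanceMeters(double travelTimeSeconds) {
        return SPEED_OF_LIGHT * travelTimeSeconds;
    }

    public static void main(String[] args) {
        double[] travelTimes = {0.0712, 0.0695, 0.0731}; // seconds, hypothetical
        for (double t : travelTimes) {
            System.out.printf("distance: %.0f m%n", distanceMeters(t));
        }
    }
}
```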
  • the A/V input unit 120 is used to input an audio signal or a video signal and includes a camera 121 and a microphone 122 .
  • the camera 121 processes image frames of still images or moving images obtained by an image sensor in a video telephony mode or a photographing mode.
  • the processed image frames can be displayed on a display unit 151 included in the output unit 150 .
  • the image frames processed by the camera 121 can be stored in the memory 160 or transmitted to an external device through the radio communication unit 110 .
  • the mobile terminal 100 can also include at least two cameras according to constitution of the terminal.
  • the microphone 122 receives an external audio signal in a call mode, a recording mode or a speech recognition mode and processes the received audio signal into electric audio data.
  • the audio data can also be converted into a form that can be transmitted to a mobile communication base station through the mobile communication module 112 and output in the call mode.
  • the microphone 122 can employ various noise removal algorithms for removing noise generated when the external audio signal is received.
  • the user input unit 130 receives input data for controlling the operation of the terminal from a user.
  • the user input unit 130 can include a keypad, a dome switch, a touch pad (constant voltage/capacitance), jog wheel, jog switch and so on.
  • the sensing unit 140 senses the current state of the mobile terminal 100 , such as open/close state of the mobile terminal 100 , the position of the mobile terminal 100 , whether a user touches the mobile terminal 100 , the direction of the mobile terminal 100 and acceleration/deceleration of the mobile terminal 100 and generates a detection signal for controlling the operation of the mobile terminal 100 .
  • the sensing unit 140 can sense whether a slide phone is opened or closed when the mobile terminal 100 is the slide phone.
  • the sensing unit 140 can sense whether the power supply 190 supplies power and whether the interface 170 is connected to an external device.
  • the sensing unit 140 can also include a proximity sensor 141 .
  • the output unit 150 generates visual, auditory or tactile output and can include the display unit 151 , an audio output module 152 , an alarm 153 and a haptic module 154 .
  • the display unit 151 displays information processed by the mobile terminal 100 .
  • the display unit 151 displays a user interface (UI) or graphic user interface (GUI) related to a telephone call when the mobile terminal is in the call mode.
  • the display unit 151 also displays a captured and/or received image, UI or GUI when the mobile terminal 100 is in the video telephony mode or the photographing mode.
  • the display unit 151 can also include at least one of a liquid crystal display, a thin film transistor liquid crystal display, an organic light-emitting diode display, a flexible display and a three-dimensional display. Some of these displays can be of a transparent type or a light transmission type, which is referred to as a transparent display.
  • the transparent display also includes a transparent liquid crystal display.
  • the rear structure of the display unit 151 can also be of the light transmission type. According to this structure, a user can see an object located behind the body of the mobile terminal 100 through an area of the body of the mobile terminal 100 , which is occupied by the display unit 151 .
  • the mobile terminal 100 can include at least two display units 151 according to constitution of the terminal.
  • the mobile terminal 100 can include a plurality of displays that are arranged on a single face at a predetermined distance or integrated. Otherwise, the plurality of displays can be arranged on different sides.
  • When the display unit 151 and a sensor sensing touch (referred to as a touch sensor hereinafter) form a layered structure (referred to as a touch screen hereinafter), the display unit 151 can be used as an input device in addition to an output device.
  • the touch sensor can be in the form of a touch film, a touch sheet and a touch pad, for example.
  • the touch sensor can be constructed such that it converts a variation in pressure applied to a specific portion of the display unit 151 or a variation in capacitance generated at a specific portion of the display unit 151 into an electric input signal.
  • the touch sensor can also be constructed such that it can sense pressure of touch as well as the position and area of touch.
  • the proximity sensor 141 can be located in an internal region of the mobile terminal 100 , surrounded by the touch screen, or near the touch screen.
  • the proximity sensor 141 senses an object approaching a predetermined sensing face or an object located near the proximity sensor 141 using an electromagnetic force or infrared rays without having mechanical contact. Further, the proximity sensor 141 has a lifetime longer than that of a contact sensor and has a wide application.
  • the proximity sensor 141 also includes a transmission type photo-electric sensor, a direct reflection type photo-electric sensor, a mirror reflection type photo-electric sensor, a high-frequency oscillating proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, an infrared proximity sensor, etc.
  • a capacitive touch screen is constructed such that a proximity of a pointer is detected through a variation in an electric field according to the proximity of the pointer.
  • In this instance, the touch screen (touch sensor) can be classified as a proximity sensor.
  • In the following description, an action of moving the pointer toward the touch screen while the pointer is not in contact with the touch screen, such that the location of the pointer on the touch screen is recognized, is referred to as a proximity touch, and an action of bringing the pointer into contact with the touch screen is referred to as a contact touch.
  • a proximity touch point of the pointer on the touch screen means a point of the touch screen to which the pointer corresponds perpendicularly to the touch screen when the pointer proximity-touches the touch screen.
  • the proximity sensor 141 senses a proximity touch and a proximity touch pattern (for example, a proximity touch distance, a proximity touch direction, a proximity touch velocity, a proximity touch time, a proximity touch position, a proximity touch moving state, etc.). Information corresponding to the sensed proximity touch action and proximity touch pattern can also be displayed on the touch screen.
  • the audio output module 152 can output audio data received from the radio communication unit 110 or stored in the memory 160 in a call signal receiving mode, a telephone call mode, a recording mode, a speech recognition mode or a broadcasting receiving mode.
  • the audio output module 152 also outputs audio signals related to functions (for example, a call signal incoming tone, a message incoming tone, etc.) performed in the mobile terminal 100 .
  • the audio output module 152 can include a receiver, a speaker, a buzzer, etc.
  • the alarm 153 outputs a signal for indicating generation of an event of the mobile terminal 100 .
  • Examples of events generated in the mobile terminal 100 include receiving a call signal, receiving a message, input of a key signal, input of touch, etc.
  • the alarm 153 can also output signals in forms different from video signals or audio signals, for example, a signal for indicating a generation of an event through vibration.
  • the video signals or the audio signals can also be output through the display unit 151 or the audio output module 152 .
  • the haptic module 154 generates various haptic effects that the user can feel.
  • a representative example of the haptic effect is vibration.
  • the intensity and pattern of vibration generated by the haptic module 154 can also be controlled. For example, different vibrations can be combined and output or sequentially output.
  • the haptic module 154 can also generate a variety of haptic effects including an effect of stimulus according to arrangement of pins vertically moving for a contact skin face, an effect of stimulus according to a jet force or sucking force of air through a jet hole or a sucking hole, an effect of stimulus rubbing the skin, an effect of stimulus according to contact of an electrode, an effect of stimulus using an electrostatic force and an effect according to reproduction of cold and warmth using an element capable of absorbing or radiating heat in addition to vibrations. Further, the haptic module 154 can not only transmit haptic effects through direct contact but also allow the user to feel haptic effects through kinesthetic sense of his or her fingers or arms.
  • the mobile terminal 100 can also include at least two or more haptic modules 154 according to constitution of the mobile terminal.
  • the memory 160 stores a program for the operation of the controller 180 and temporarily stores input/output data (for example, phone book, messages, still images, moving images, etc.).
  • the memory 160 can also store data about vibrations and sounds in various patterns, which are output when a touch input is applied to the touch screen.
  • the memory 160 can include at least one of a flash memory, a hard disk type memory, a multimedia card micro type memory, a card type memory (for example, SD or XD memory), a random access memory (RAM), a static RAM (SRAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), a programmable ROM (PROM), a magnetic memory, a magnetic disk and an optical disk.
  • the mobile terminal 100 can also operate in relation to a web storage performing the storing function of the memory 160 on the Internet.
  • the interface 170 serves as a path to all external devices connected to the mobile terminal 100 .
  • the interface 170 receives data from the external devices or power and transmits the data or power to the internal components of the mobile terminal 100 or transmits data of the mobile terminal 100 to the external devices.
  • the interface 170 can also include a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device having a user identification module, an audio I/O port, a video I/O port, an earphone port, etc., for example.
  • an identification module is a chip that stores information for authenticating the authority to use the mobile terminal 100 and can include a user identity module (UIM), a subscriber identity module (SIM) and a universal subscriber identity module (USIM).
  • a device (referred to as an identification device hereinafter) including the identification module can be manufactured in the form of a smart card. Accordingly, the identification device can be connected to the mobile terminal 100 through a port.
  • the interface 170 can serve as a path through which power from an external cradle is provided to the mobile terminal 100 when the mobile terminal 100 is connected to the external cradle, or as a path through which various command signals input by the user through the cradle are transmitted to the mobile terminal 100 .
  • the various command signals or power input from the cradle can be used as a signal for confirming whether the mobile terminal 100 is correctly set in the cradle.
  • the controller 180 controls the overall operation of the mobile terminal.
  • the controller 180 performs control and processing for voice communication, data communication and video telephony.
  • the controller 180 includes a multimedia module 181 for playing multimedia.
  • the multimedia module 181 can be included in the controller 180 or separated from the controller 180 .
  • the controller 180 can perform a pattern recognition process capable of recognizing handwriting input or picture-drawing input applied to the touch screen as characters or images.
  • the power supply 190 receives external power and internal power and provides power required for the operations of the components of the mobile terminal under the control of the controller 180 .
  • various embodiments of the present invention can be implemented in a computer or similar device readable recording medium using software, hardware or a combination thereof, for example.
  • the embodiments of the present invention can be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and electrical units for executing functions.
  • In some cases, the embodiments can also be implemented by the controller 180 itself.
  • embodiments such as procedures or functions can be implemented with a separate software module executing at least one function or operation.
  • Software codes can be implemented according to a software application written in an appropriate software language. Furthermore, the software codes can be stored in the memory 160 and executed by the controller 180 .
  • FIG. 2A is a front perspective view of a mobile terminal or a handheld terminal 100 according to an embodiment of the present invention.
  • the handheld terminal 100 has a bar type terminal body.
  • the present invention is not limited to a bar type terminal and can be applied to terminals of various types including a slide type, folder type, swing type and swivel type terminals having at least two bodies that are relatively movably combined.
  • the terminal body includes a case (a casing, a housing, a cover, etc.) forming the exterior of the terminal 100 .
  • the case is divided into a front case 101 and a rear case 102 .
  • Various electronic components are also arranged in the space formed between the front case 101 and the rear case 102 .
  • At least one middle case can be additionally arranged between the front case 101 and the rear case 102 .
  • the cases can also be formed of plastics through injection molding or made of a metal material such as stainless steel (STS) or titanium (Ti).
  • the display unit 151 , the audio output unit 152 , the camera 121 , user input units 131 and 132 of the user input unit 130 ( FIG. 1 ), the microphone 122 and the interface 170 are arranged in the terminal body, specifically, in the front case 101 .
  • the display unit 151 occupies most of the main face of the front case 101 .
  • the audio output unit 152 and the camera 121 are arranged in a region in proximity to one of both ends of the display unit 151 , and the user input unit 131 and the microphone 122 are located in a region in proximity to the other end of the display unit 151 .
  • the user input unit 132 and the interface 170 are arranged on the sides of the front case 101 and the rear case 102 .
  • the user input unit 130 is operated to receive commands for controlling the operation of the handheld terminal 100 and can include the plurality of operating units 131 and 132 .
  • the operating units 131 and 132 can be referred to as manipulating portions and can employ any tactile manner in which a user operates them while experiencing a tactile feeling.
  • the operating units 131 and 132 can also receive various inputs.
  • the first operating unit 131 receives commands such as start, end and scroll.
  • the second operating unit 132 receives commands such as control of the volume of sound output from the audio output unit 152 or conversion of the display unit 151 to a touch recognition mode.
  • FIG. 2B is a rear perspective view of the handheld terminal shown in FIG. 2A according to an embodiment of the present invention.
  • a camera 121 ′ is additionally attached to the rear side of the terminal body, that is, the rear case 102 .
  • the camera 121 ′ has a photographing direction opposite to that of the camera 121 shown in FIG. 2A and can have pixels different from those of the camera 121 shown in FIG. 2A .
  • the camera 121 has a low pixel count such that it can capture an image of the face of a user and transmit the image to a receiving part for video telephony, while the camera 121 ′ has a high pixel count because it captures an image of a general object and does not immediately transmit the image in many instances.
  • the cameras 121 and 121 ′ can also be attached to the terminal body such that they can be rotated or pop-up.
  • a flash bulb 123 and a mirror 124 are additionally arranged in proximity to the camera 121 ′.
  • the flash bulb 123 lights an object when the camera 121 ′ takes a picture of the object, and the mirror 124 is used for the user to look at his/her face when the user wants to photograph himself/herself using the camera 121 ′.
  • An audio output unit 152 ′ is additionally provided on the rear side of the terminal body.
  • the audio output unit 152 ′ can thus achieve a stereo function with the audio output unit 152 shown in FIG. 2A and be used for a speaker phone mode when the terminal is used for a telephone call.
  • a broadcasting signal receiving antenna 124 is also attached to the side of the terminal body in addition to an antenna for telephone calls.
  • the antenna 124 constructing a part of the broadcasting receiving module 111 shown in FIG. 1 can be set in the terminal body such that the antenna 124 can be pulled out of the terminal body.
  • the power supply 190 for providing power to the handheld terminal 100 is set in the terminal body.
  • the power supply 190 can be included in the terminal body or detachably attached to the terminal body.
  • a touch pad 135 for sensing touch is also attached to the rear case 102 .
  • the touch pad 135 can be of a light transmission type, like the display unit 151 . In this instance, if the display unit 151 outputs visual information through both sides thereof, the visual information can be recognized through the touch pad 135 . The information output through both sides of the display unit 151 can also be controlled by the touch pad 135 . Alternatively, a display can be additionally attached to the touch pad 135 such that a touch screen can be arranged even in the rear case 102 .
  • the touch pad 135 also operates in connection with the display unit 151 of the front case 101 .
  • the touch pad 135 can be located in parallel with the display unit 151 behind the display unit 151 , and can be identical to or smaller than the display unit 151 in size. Interoperations of the display unit 151 and the touch pad 135 will now be described with reference to FIGS. 3A and 3B .
  • FIGS. 3A and 3B are front views of the handheld terminal 100 for explaining an operating state of the handheld terminal according to an embodiment of the present invention.
  • the display unit 151 can display various types of visual information in the form of characters, numerals, symbols, graphics or icons. To input the information, at least one of the characters, numerals, symbols, graphics and icons is displayed in a predetermined arrangement in the form of a keypad. This keypad can be referred to as a ‘soft key’.
  • FIG. 3A shows that a touch applied to a soft key is input through the front side of the terminal body.
  • the display unit 151 can be operated through the overall area thereof. Otherwise, the display unit 151 can be divided into a plurality of regions and operated. In the latter instance, the display unit 151 can be constructed such that the plurality of regions interoperate.
  • an output window 151 a and an input window 151 b are respectively displayed in upper and lower parts of the display unit 151 .
  • the input window 151 b displays soft keys 151 c that represent numerals used to input numbers such as telephone numbers.
  • When a soft key 151 c is touched, a numeral corresponding to the touched soft key is displayed on the output window 151 a.
  • When the first operating unit 131 is operated, connection of a call corresponding to the telephone number displayed on the output window 151 a is attempted.
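A minimal sketch of this soft-key flow, with hypothetical class and method names (the patent defines no API): touching a numeric soft key appends its numeral to the output window, whose contents are then used for the call attempt:

```java
// Illustrative soft keypad: each touched digit is echoed to the output window.
public class SoftKeypad {
    private final StringBuilder outputWindow = new StringBuilder();

    void onSoftKeyTouched(char digit) {
        outputWindow.append(digit);          // echo the touched numeral
    }

    String dialedNumber() { return outputWindow.toString(); }

    public static void main(String[] args) {
        SoftKeypad pad = new SoftKeypad();
        for (char c : "5551234".toCharArray()) pad.onSoftKeyTouched(c);
        System.out.println("output window: " + pad.dialedNumber());
    }
}
```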
  • FIG. 3B shows that a touch applied to soft keys is input through the rear side of the terminal body.
  • FIG. 3B also shows the terminal body in a landscape orientation, while FIG. 3A shows it in a portrait orientation. That is, the display unit 151 can be constructed such that the output image is converted according to the orientation of the terminal body. Further, FIG. 3B shows the operation of the handheld terminal in a text input mode. As shown, the display unit 151 displays an output window 135 a and an input window 135 b. A plurality of soft keys 135 c that indicate at least one of characters, symbols and numerals are arranged in the input window 135 b. The soft keys 135 c can also be arranged in the form of QWERTY keys.
  • the display unit 151 or the touch pad 135 can be constructed such that it receives touch input in a scroll manner. That is, the user can scroll the display unit 151 or the touch pad 135 to move an object displayed on the display unit 151 , for example, a cursor or a pointer located on an icon. Furthermore, when the user moves his or her finger on the display unit 151 or the touch pad 135 , the finger moving path can be visually displayed on the display unit 151 . This will be useful to edit an image displayed on the display unit 151 . Also, when the display unit 151 (touch screen) and the touch pad 135 are simultaneously touched in a predetermined period of time, a specific function of the terminal can be executed. This can include when the user clamps the terminal body using the thumb and the index finger. The specific function can include activation or inactivation of the display unit 151 or the touch pad 135 , for example.
  • FIG. 4 is a conceptual view for explaining a proximity depth of the proximity sensor 141 .
  • When a pointer such as the user's finger approaches the touch screen, the proximity sensor 141 located inside or near the touch screen senses the approach and outputs a proximity signal.
  • the proximity sensor 141 can be constructed such that it outputs a proximity signal according to the distance between the pointer approaching the touch screen and the touch screen (referred to as “proximity depth”).
  • the distance in which the proximity signal is output when the pointer approaches the touch screen is referred to as a detection distance.
  • the proximity depth can be known by using a plurality of proximity sensors having different detection distances and comparing proximity signals respectively output from the proximity sensors.
  • FIG. 4 shows the section of the touch screen in which proximity sensors capable of sensing three proximity depths are arranged.
  • Proximity sensors capable of sensing fewer than three or more than four proximity depths can also be arranged in the touch screen.
  • When the pointer completely comes into contact with the touch screen (distance D 0 ), it is recognized as a contact touch.
  • When the pointer is located within a distance D 1 from the touch screen, it is recognized as a proximity touch of a first proximity depth, and when the pointer is located in a range between the distance D 1 and a distance D 2 from the touch screen, it is recognized as a proximity touch of a second proximity depth.
  • the controller 180 can recognize the proximity touch as various input signals according to the proximity distance and proximity position of the pointer with respect to the touch screen and perform various operation controls according to the input signals.
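A sketch of how the controller might map a sensed pointer distance to the proximity depths of FIG. 4 is shown below; the thresholds D1 to D3 are illustrative values, not taken from the patent:

```java
// Illustrative mapping from sensed distance to proximity depth.
public class ProximityDepth {
    static final double D1 = 10.0, D2 = 20.0, D3 = 30.0; // mm, hypothetical

    static String classify(double distanceMm) {
        if (distanceMm <= 0)  return "contact touch";
        if (distanceMm <= D1) return "proximity touch, first depth";
        if (distanceMm <= D2) return "proximity touch, second depth";
        if (distanceMm <= D3) return "proximity touch, third depth";
        return "proximity touch released";
    }

    public static void main(String[] args) {
        for (double d : new double[]{0, 5, 15, 25, 40}) {
            System.out.println(d + " mm -> " + classify(d));
        }
    }
}
```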
  • the mobile terminal is configured such that an algorithm for voice recognition and an algorithm for Speech To Text (STT) are stored in the memory 160 . Further, the voice recognition function and the STT function cooperate together so as to convert a user's voice into a text format. The converted text can also be output on an execution screen of the terminal. Thus, the user can perform functions such as generating text for text messages or mails, etc. by speaking into the terminal.
  • the controller 180 can also activate the voice recognition function and automatically drive the STT function.
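The cooperation of the two algorithms can be pictured as below; both interfaces are hypothetical stand-ins for the voice recognition and STT algorithms stored in the memory 160:

```java
// Hypothetical interfaces: one captures voice, the other converts it to text.
interface VoiceRecognizer { byte[] capture(); }

interface SpeechToText { String transcribe(byte[] audio); }

public class VoiceTextInput {
    // Capture raw voice, then convert it to text for the execution screen.
    static String dictate(VoiceRecognizer mic, SpeechToText stt) {
        byte[] audio = mic.capture();   // raw voice from the microphone 122
        return stt.transcribe(audio);   // converted text shown on screen
    }

    public static void main(String[] args) {
        VoiceRecognizer mic = () -> new byte[]{0, 1, 2};   // canned audio frames
        SpeechToText stt = audio -> "hello world";         // canned transcription
        System.out.println("execution screen shows: " + dictate(mic, stt));
    }
}
```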
  • FIG. 5 is a flowchart illustrating a menu voice control method for a mobile terminal according to an embodiment of the present invention.
  • the controller 180 determines if the voice recognition function has been activated (S 101 ). Further, the voice recognition function may be activated by the user selecting hardware buttons on the mobile terminal, or soft touch buttons displayed on the display 151 . The user may also activate the voice recognition function by manipulating specific menus displayed on the display 151 , by generating a specific sound or sound effects, by short or long-range wireless signals, or by the user's body information such as hand gesture or body gesture.
  • the specific sound or sound effects may include impact sounds having a level more than a specific level. Further, the specific sound or sound effects may be detected using a sound level detecting algorithm.
  • the sound level detecting algorithm is preferably simpler than a voice recognition algorithm, and thus consumes less resources of the mobile terminal. Also, the sound level detecting algorithm (or circuit) may be individually implemented from the voice recognition algorithm or circuit, or may be implemented so as to specify some functions of the voice recognition algorithm.
  • the wireless signals may be received through the wireless communication unit 110 , and the user's hand or body gestures may be received through the sensing unit 140 .
  • the wireless communication unit 110 , the user input unit 130 , and the sensing unit 140 may be referred to as a signal input unit.
  • the voice recognition function may also be terminated in a similar manner.
  • Having the user physically activate the voice recognition function is particularly advantageous, because the user is more aware they are about to use voice commands to control the terminal. That is, because the user has to first perform a physical manipulation of the terminal, he or she intuitively recognizes they are going to input a voice command or instruction into the terminal, and therefore speak more clearly or slowly to thereby activate a particular function. Thus, because the user speaks more clearly or more slowly, for example, the probability of accurately recognizing the voice instruction increases. That is, in an embodiment of the present invention, the activation of the voice recognition function is performed by a physical manipulation of a button on the terminal, rather than activating the voice recognition function by speaking into the terminal.
  • the controller 180 may start or terminate activation of the voice recognition function based on how many times the user touches a particular button or portion of the touch screen, how long the user touches a particular button or portion of the touch screen, etc.
  • the user can also set how the controller 180 is to activate the voice recognition function using an appropriate menu option provided by the present invention. For example, the user can select a menu option on the terminal that includes 1) set activation of voice recognition based on X number of times the voice activation button is selected, 2) set activation of voice recognition based on X amount of time the voice activation button is selected, 3) set activation of voice recognition when the buttons X and Y are selected, etc.
  • the user can then enter the values of X and Y in order to variably set how the controller 180 determines the voice activation function is activated.
  • the user is actively engaged with the voice activation function of their own mobile terminal, which increases the probability that the controller 180 will determine the correct function corresponding to the user's voice instruction, and which allows the user to tailor the voice activation function according to his or her needs.
  • the controller 180 may also maintain the activated state of the voice recognition function while the designated button(s) are touched or selected, and stop the voice recognition function when the designated button(s) are released.
  • the controller 180 can maintain the activation of the voice recognition function for a predetermined time period after the designated button(s) are touched or selected, and stop or terminate the voice recognition function when the predetermined time period ends.
  • the controller 180 can store received voice instructions in the memory 160 while the voice recognition function is maintained in the activated state.
  • When the voice recognition function is activated, a domain of the database used as a reference for recognizing the meaning of the voice command is specified as information relating to specific functions or menus on the terminal (S 102 ).
  • the specified domain of database may be information relating to menus currently displayed on the display 151 , or information relating to sub-menus of one of the displayed menus.
  • By specifying the domain in this manner, the recognition rate for the input voice command is improved. Examples of domains include an e-mail domain, a received calls domain, a multimedia domain, etc.
  • the information relating to sub-menus may be configured as data in a database.
  • the information may be configured in the form of keywords, and a plurality of keywords may correspond to one function or menu.
  • the database can be a plurality of databases according to features of information, and may be stored in the memory 160 . Further, the information in the database(s) may be advantageously updated or renewed through a learning process.
  • Each domain of the respective databases may also be specified into a domain relating to functions or menus being currently output, so as to enhance a recognition rate for a voice command. The domain may also change as menu steps continue to progress.
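A minimal sketch of such a domain-restricted reference database (the domains and keywords are illustrative): restricting candidate keywords to the active menu's domain is what narrows the search space and improves the recognition rate:

```java
import java.util.List;
import java.util.Map;

// Illustrative domain-restricted keyword database.
public class DomainDatabase {
    static final Map<String, List<String>> DOMAINS = Map.of(
            "e-mail",     List.of("compose", "reply", "inbox", "send"),
            "multimedia", List.of("play", "pause", "camera", "broadcast"));

    // Restrict candidate keywords to the active domain before matching.
    static List<String> candidates(String activeDomain) {
        return DOMAINS.getOrDefault(activeDomain, List.of());
    }

    public static void main(String[] args) {
        // While the multimedia menu is displayed, "compose" is not a candidate.
        System.out.println(candidates("multimedia"));
    }
}
```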
  • the controller 180 determines if the user has input a voice command (S 103 ).
  • the controller 180 analyzes a context and content of a voice command or instruction input through the microphone 122 based on a specific database, thereby judging a meaning of the voice command (S 104 ).
  • the controller 180 can determine the meaning of the voice instruction or command based on a language model and an acoustic model of the accessed domain.
  • the language model relates to the words themselves and the acoustic model corresponds to the way the words are spoken (e.g., frequency components of the spoken words or phrases).
  • the controller 180 can effectively determine the meaning of the input voice instructions or command.
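One plausible way to combine the two models is sketched below, with illustrative log-probability scores and weighting; the patent does not specify a scoring formula:

```java
// Illustrative scoring: each candidate phrase gets an acoustic score (how
// well the audio matches) and a language-model score (how plausible the
// words are); the best combined score wins.
public class MeaningJudgment {
    static double combined(double acousticLogProb, double languageLogProb) {
        double languageWeight = 0.7;   // tunable, hypothetical
        return acousticLogProb + languageWeight * languageLogProb;
    }

    public static void main(String[] args) {
        double a = combined(-12.0, -3.0);  // "send message"
        double b = combined(-11.5, -8.0);  // "sand massage"
        System.out.println(a > b ? "send message" : "sand massage");
    }
}
```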
  • the controller 180 may start the process for judging the meaning of the input voice command as soon as the user releases the activation of the voice recognition function (the controller 180 having stored the input voice command in the memory 160 ), or may perform the judging process simultaneously while the voice command is being input.
  • the controller 180 can still perform other functions. For example, if the user performs another action by touching a menu option, etc. or presses a button on the terminal (Yes in S 109 ), the controller 180 performs the corresponding selected function (S 110 ).
  • When the controller 180 determines the meaning of the input voice command in step S 104 , the controller 180 outputs a result value of the meaning (S 105 ). That is, the result value may include control signals for executing menus relating to functions or services corresponding to the determined meaning, for controlling specific components of the mobile terminal, etc. The result value may also include data for displaying information relating to the recognized voice command.
  • the controller 180 may also request the user confirm the output result value is accurate (S 106 ). For instance, when the voice command has a low recognition rate or is determined to have a plurality of meanings, the controller 180 can output a plurality of menus relating to the respective meanings, and then execute a menu that is selected by the user (S 107 ). Also, the controller 180 may ask a user whether to execute a specific menu having a high recognition rate, and then execute or display a corresponding function or menu according to the user's selection or response.
  • the controller 180 can also output a voice message asking the user to select a particular menu or option such as “Do you want to execute a message composing function? Reply with Yes or No.” Then, the controller 180 executes or does not execute a function corresponding to the particular menu or option based on the user's response. If the user does not respond in a particular time period (e.g., five seconds), the controller 180 can also immediately execute the particular menu or option. Thus, if there is no response from the user, the controller 180 may automatically execute the function or menu by judging the non-response as a positive answer.
  • the error processing step may be performed (S 108 ) by again receiving input of a voice command, or may be performed by displaying a plurality of menus having a recognition rate more than a certain level or a plurality of menus that may be judged to have similar meanings. The user can then select one of the plurality of menus. Also, when the number of functions or menus having a recognition rate more than a certain level is less than a preset number (e.g., two), the controller 180 can automatically execute the corresponding function or menu.
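The selection rule described above can be sketched as follows; the threshold and the preset number are illustrative values:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative error-processing rule: keep candidates whose recognition
// rate clears a threshold; if fewer than a preset number remain, execute
// the remaining one automatically, otherwise ask the user to choose.
public class CandidateSelection {
    static class Candidate {
        final String menu;
        final double rate;   // recognition rate in [0, 1]
        Candidate(String menu, double rate) { this.menu = menu; this.rate = rate; }
    }

    static String resolve(List<Candidate> candidates) {
        double threshold = 0.80;   // "recognition rate more than a certain level"
        int presetNumber = 2;      // "preset number (e.g., two)"
        List<Candidate> kept = new ArrayList<>();
        for (Candidate c : candidates) {
            if (c.rate >= threshold) kept.add(c);
        }
        if (kept.isEmpty()) return "re-prompt for a voice command (S108)";
        if (kept.size() < presetNumber) return "auto-execute: " + kept.get(0).menu;
        return "display " + kept.size() + " candidate menus for user selection";
    }

    public static void main(String[] args) {
        System.out.println(resolve(List.of(
                new Candidate("compose message", 0.91),
                new Candidate("compose mail", 0.65))));
    }
}
```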
  • FIG. 6A is an overview showing a method for activating a voice recognition function for a mobile terminal according to an embodiment of the present invention.
  • the user can activate the voice recognition function by touching a soft button 411 .
  • the user can also terminate the voice recognition function by releasing the soft button 411 .
  • the user can activate the voice recognition function by touching the soft button 411 and continuously touching the soft button 411 or the hard button 412 until the voice instruction has been completed. That is, the user can release the soft button 411 or the hard button 412 when the voice instruction has been completed.
  • In this manner, the controller 180 is made aware of when the voice instruction is to be input and when the voice instruction has been completed. As discussed above, because the user is directly involved in this determination, the accuracy of the interpretation of the input voice command is increased.
  • the controller 180 can also be configured to recognize the start of the voice activation feature when the user first touches the soft button 411 , and then recognize the voice instruction has been completed when the user touches the soft button 411 twice, for example. Other selection methods are also possible. Further, as shown in the display screen 410 in FIG. 6A , rather than using the soft button 411 , the voice activation and de-activation can be performed by manipulating the hard button 412 on the terminal.
  • the soft button 411 shown in the display screen 410 can be a single soft button that the user presses or releases to activate/deactivate the voice recognition function or may be a menu button that when selected produces a menu list such as “1. Start voice activation, and 2. Stop voice activation”.
  • the soft button 411 can also be displayed during a standby state, for example.
  • the user can also activate and deactivate the voice recognition function by touching an arbitrary position of the screen.
  • the display screen 430 in FIG. 6A illustrates yet another example in which the user activates and deactivates the voice recognition function by producing a specific sound or sound effects that is/are greater than a specific level. For example, the user may clap their hands together to produce such an impact sound.
  • the voice recognition function may be implemented in two modes.
  • the voice recognition function may be implemented in a first mode for detecting a particular sound or sound effects more than a certain level, and in a second mode for recognizing a voice command and determining a meaning of the voice command. If the sound or sound effects is/are more than a certain level in the first mode, the second mode is activated to thereby recognize the voice command.
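A sketch of this two-mode scheme with an illustrative loudness threshold: the cheap first-mode level check gates the more expensive second-mode recognition:

```java
// Illustrative two-mode activation: a sound-level check runs first, and
// only when the level clears the threshold does full recognition engage.
public class TwoModeActivation {
    static final double LEVEL_THRESHOLD = 0.6;   // normalized loudness, hypothetical

    static boolean firstMode(double soundLevel) {
        return soundLevel > LEVEL_THRESHOLD;      // e.g., a hand clap
    }

    static String secondMode(String utterance) {
        return "recognized command: " + utterance; // full recognition runs here
    }

    public static void main(String[] args) {
        double level = 0.8;                       // measured impact sound
        if (firstMode(level)) {
            System.out.println(secondMode("open browser"));
        }
    }
}
```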
  • the display screen 440 in FIG. 6A illustrates still another method of the user activating and deactivating the voice recognition function.
  • the controller 180 is configured to interpret body movements of the user to start and stop the voice activation function.
  • the controller 180 may be configured to interpret the user moving his hand toward the display as an instruction to activate the voice recognition function, and the user moving his hand away from the display as an instruction to terminate the voice activation function.
  • Short or long-range wireless signals may also be used to start and stop the voice recognition function.
  • In an embodiment of the present invention, the voice recognition function is not continuously executed. That is, when the voice recognition function is continuously maintained in the activated state, the amount of resources consumed on the mobile terminal is increased compared to this embodiment of the present invention.
  • the controller 180 specifies a domain of a specific database that is used as a reference for voice command recognition into a domain relating to a menu list on the display 151 . Then, if a specific menu is selected or executed from the menu list, the domain of the database may be specified into information relating to the selected menu or sub-menus of the specific menu.
  • the controller 180 may output help information relating to sub-menus of the specific menu in the form of a voice message, or pop-up windows or balloons.
  • the controller 180 displays information relating to the sub-menus (e.g., broadcasting, camera, text viewer, game, etc.) of the ‘multimedia menu’ as balloon-shaped help information 441 .
  • the controller 180 can output a voice signal 442 including the help information. The user can then select one of the displayed help options using a voice command or by a touching operation.
  • FIG. 6C illustrates an embodiment of a user selecting a menu item using his or her body movements (in this example, the user's hand gesture).
  • the controller 180 displays the sub-menus 444 related to the menu 443 .
  • the controller 180 can recognize information on the user's body movements via the sensing unit 140, for example.
  • the displayed help information can be displayed so as to have a transparency or brightness controlled according to the user's distance. That is, as the user's hand gets closer, the displayed items can be further highlighted.
  • the controller 180 can be configured to determine the starting and stopping of the voice recognition function based on a variety of different methods. For example, the user can select/manipulate soft or hard buttons, touch an arbitrary position on the touch screen, etc.
  • the controller 180 can also maintain the activation of the voice recognition function for a predetermined amount of time, and then automatically end the activation at the end of the predetermined amount of time. Also, the controller 180 may maintain the activation only while a specific button or touch operation is performed, and then automatically end the activation when the input is released.
  • the controller 180 can also end the activation process when the voice command is no longer input for a certain amount of time.
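  • As a minimal sketch of these activation-lifetime rules (the timing values are assumed examples, not values from the disclosure), the controller could track both the total active time and the time since the last voice input:

      import time

      ACTIVE_WINDOW = 10.0    # assumed: keep recognition active this many seconds
      SILENCE_LIMIT = 5.0     # assumed: end early after this much silence

      class VoiceActivation:
          def __init__(self):
              self.started = None
              self.last_voice = None

          def start(self):
              self.started = self.last_voice = time.monotonic()

          def on_voice_input(self):
              self.last_voice = time.monotonic()   # each command keeps the window alive

          def is_active(self):
              if self.started is None:
                  return False
              now = time.monotonic()
              if now - self.started > ACTIVE_WINDOW:     # predetermined time elapsed
                  self.started = None
                  return False
              if now - self.last_voice > SILENCE_LIMIT:  # no command for a certain time
                  self.started = None
                  return False
              return True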
  • FIG. 7A is a flowchart showing a method for recognizing a voice command in a mobile terminal according to an embodiment of the present invention.
  • when the voice recognition function is activated, the controller 180 specifies a domain of a database that can be used as a reference for voice command recognition into a domain relating to a menu displayed on the display 151, sub-menus of the menu, or a domain relating to a currently-executed function or menu (S 201).
  • the user also inputs the voice command (S 202 ) using either the precise menu name or using a natural language (spoken English, for example).
  • the controller 180 then stores the input voice command in the memory 160 (S 203 ).
  • the controller 180 analyzes a context and content of the voice command based on the specified domain by using a voice recognition algorithm. Also, the voice command may be converted into text-type information for analysis (S 204 ), and then stored in a specific database of the memory 160 . However, the step of converting the voice command into text-type information can be omitted.
  • the controller 180 detects a specific word or keyword of the voice command (S 205 ). Based on the detected words or keywords, the controller 180 analyzes the context and content of the voice command and determines or judges a meaning of the voice command by referring to information stored in the specific database (S 206 ).
  • the database used as a reference includes a specified domain, and functions or menus corresponding to a meaning of the voice command judged based on the database are executed (S 207 ).
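  • The following sketch illustrates steps S 204 to S 207 under stated assumptions: the menu names and keyword sets stand in for the specified domain of the database, and the matching-degree formula is an invented example rather than the disclosed algorithm:

      DOMAIN = {                       # specified domain: currently displayed menus
          "send text": {"send", "text", "message"},
          "camera":    {"camera", "photo", "picture"},
      }

      def judge_meaning(converted_text):
          words = set(converted_text.lower().split())       # S 205: detect keywords
          best, best_rate = None, 0.0
          for menu, keywords in DOMAIN.items():
              rate = len(words & keywords) / len(keywords)  # matching degree
              if rate > best_rate:
                  best, best_rate = menu, rate
          return best, best_rate                            # S 206: judged meaning

      menu, rate = judge_meaning("I want to send text message")
      if menu is not None and rate >= 0.8:                  # S 207: execute the menu
          print("execute:", menu)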
  • the priorities of such information for the voice command recognition may be set to commands related to modifying text, commands related to searching for another party to receive the text message, or commands related to transmitting the message.
  • because the database for voice recognition is specified into information relating to a currently-executed function or menu, the recognition rate and the speed of recognizing the voice command are improved, and the amount of resources used on the terminal is reduced. Further, the recognition rate indicates a matching degree with a name preset to a specific menu.
  • the recognition rate for an input voice command may also be judged by the number of pieces of information relating to specific functions or menus included in the voice command. Therefore, the recognition rate for the input voice command is improved when the information precisely matches a specific function or menu (e.g., menu name) that is included in the voice command.
  • FIG. 7B is an overview showing a method for recognizing a voice command of a mobile terminal according to an embodiment of the present invention.
  • the user inputs a voice command as a natural language composed of six words “I want to send text message.”
  • the recognition rate can be judged based on the number of meaningful words (e.g., send, text, message) relating to a specific menu (e.g., text message).
  • the controller 180 can determine whether the words included in the voice command are meaningful words relating to a specific function or menu based on the information stored in the database. For instance, meaningless words included in the natural language voice command (e.g., I want to send text message) that are irrelevant to the specific menu may be the subject (I) or the preposition (to).
  • the natural language is a language commonly used by people, and has a concept contrary to that of an artificial language. Further, the natural language may be processed by using a natural language processing algorithm.
  • the natural language may or may not include a precise name relating to a specific menu, which sometimes makes it difficult to precisely recognize a voice command. Therefore, according to an embodiment of the present invention, when a voice command has a recognition rate of more than a certain level (e.g., 80%), the controller 180 judges the recognition to be precise. Further, when the controller 180 judges a plurality of menus to have similar meanings, the controller 180 displays the plurality of menus and the user can select one of the displayed menus to have its functions executed. In addition, a menu having a relatively higher recognition rate may be displayed first or distinctively displayed compared to the other menus.
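  • Continuing the sketch above, the plural-meaning case could collect every menu whose recognition rate clears the threshold and order the list so that the menu with the highest rate is displayed first (the threshold and scoring remain assumptions):

      def candidate_menus(converted_text, domain, threshold=0.8):
          words = set(converted_text.lower().split())
          rates = {menu: len(words & kw) / len(kw) for menu, kw in domain.items()}
          # Keep every menu at or above the threshold, highest recognition rate first.
          hits = [(menu, rate) for menu, rate in rates.items() if rate >= threshold]
          return sorted(hits, key=lambda item: item[1], reverse=True)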
  • FIG. 8 is an overview showing a method for displaying menus for a voice recognition rate of a mobile terminal according to an embodiment of the present invention.
  • a menu icon having a higher recognition rate is displayed at a central portion of the display screen 510 , or may be displayed with a larger size or a darker color as shown in the display screen 520 .
  • the menu icon having the higher recognition rate can also be displayed first, followed in order by the menu icons having lower recognition rates.
  • the controller 180 can distinctively display the plurality of menus by changing at least one of the size, position, color, brightness of the menus or by highlighting in the order of a higher recognition rate. The transparency of the menus may also be appropriately changed or controlled.
  • a menu having a higher selection rate by a user may be updated or set to have a higher recognition rate. That is, the controller 180 stores a history of the user selections (S 231) and performs a learning process (S 232) to thereby update a particular recognition rate for a menu option that is selected by a user more than other menu options (S 233). Thus, the number of times a frequently used menu is selected by a user may be applied to the recognition rate of the menu. Therefore, a voice command input in the same or similar manner in pronunciation or content may have a different recognition rate according to how many times a user selects a particular menu. Further, the controller 180 may also store the times at which the user performs particular functions.
  • a user may check emails or missed messages every time they wake up on Mondays through Fridays. This time information may also be used to improve the recognition rate.
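  • A hedged sketch of the learning process (S 231 to S 233): selection counts and habitual usage times nudge a menu's recognition rate upward. The bonus weights are arbitrary illustrations, not disclosed values:

      from collections import Counter
      from datetime import datetime

      selections = Counter()               # S 231: history of user selections
      usage_hours = {}                     # menu -> hours at which it is used

      def record_selection(menu):
          selections[menu] += 1
          usage_hours.setdefault(menu, set()).add(datetime.now().hour)

      def adjusted_rate(menu, base_rate):
          # S 232/S 233: frequent selection and habitual usage times raise the rate.
          bonus = 0.01 * selections[menu]
          if datetime.now().hour in usage_hours.get(menu, set()):
              bonus += 0.05
          return min(1.0, base_rate + bonus)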
  • the state of the terminal (e.g., standby mode, etc.) may also be used to improve the recognition rate.
  • the user may check emails or missed messages when first turning on their mobile terminal, when the terminal is opened from a closed position, etc.
  • FIG. 9 is an overview showing a method for recognizing a voice command of a mobile terminal according to another embodiment of the present invention.
  • the user activates the voice recognition function, and inputs the voice command “I want to send text message.”
  • the controller 180 specifies a domain of a database for voice command recognition into a domain relating to the displayed sub-menus.
  • the controller 180 interprets the voice command (S 241 ) and in this example, displays a plurality of menus that have a probability greater than a particular value (e.g., 80%) (S 242 ).
  • the controller displays four multimedia menus.
  • the controller 180 also distinctively displays a menu having the highest probability (e.g., specific menu option 621 “Send Text” in this example). The user can then select any one of the displayed menus to execute a function corresponding to the selected menu. In the example shown in FIG. 9 , the user selects the Send Text menu option 621 and the controller 180 displays sub menus related to the selected Send Text menu option 621 as shown in the display screen 620 . Also, as shown in step (S 242 ) in the lower portion of FIG. 9 , the controller 180 can also immediately execute a function when only a single menu is determined to be higher than the predetermined probability rate.
  • the controller 180 displays the information related to the text sending as shown in the display screen 620 immediately without the user having to select the Send Text menu option 621 when the Send Text menu option 621 is determined to be the only menu that has a higher recognition rate or probability than a predetermined threshold.
  • the controller 180 can also output balloon-shaped help information related to the sub menus to the user in a voice or text format.
  • the user can set the operation mode for outputting the help using appropriate menu options provided in environment setting menus. Accordingly, a user can operate the terminal of the present invention without having a high level of skill. That is, many older people may not be experienced in operating the plurality of different menus provided with the terminal. However, with the terminal of the present invention, a user who is generally not familiar with the intricacies of the user interfaces provided with the terminal can easily operate the mobile terminal.
  • when the controller 180 recognizes the voice command to have a plurality of meanings (i.e., when a natural language voice command (e.g., “I want to send text message”) does not include a precise menu name, such as when a menu is included in a ‘send message’ category but does not have a precise name among ‘send photo’, ‘send mail’, and ‘outbox’), the controller 180 displays a plurality of menus having a recognition rate of more than a certain value (e.g., 80%).
  • FIG. 10 is an overview showing a plurality of databases used by the controller 180 for recognizing a voice command of a mobile terminal according to an embodiment of the present invention.
  • the databases store information that the controller 180 uses to judge a meaning of a voice command, and may be any number of databases according to information features.
  • the respective databases configured according to information features may be updated through a continuous learning process under control of the controller 180 .
  • the learning process attempts to match a user's voice with a corresponding word. For example, when the word “waiting” pronounced by a user is misunderstood as the word “eighteen”, the user corrects the word “eighteen” into “waiting”. Accordingly, the same pronunciation subsequently input by the user is recognized as “waiting”.
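  • The correction step could be sketched as a simple learned mapping (helper names are hypothetical):

      corrections = {}                         # misrecognized word -> corrected word

      def learn_correction(recognized, corrected):
          corrections[recognized] = corrected  # learned from the user's correction

      def recognize_word(raw_result):
          return corrections.get(raw_result, raw_result)

      learn_correction("eighteen", "waiting")
      print(recognize_word("eighteen"))        # -> waiting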
  • the respective databases include a first database 161 , a second database 162 , a third database 163 , and a fourth database 164 .
  • the first database 161 stores voice information for recognizing a voice input through the microphone in units of phonemes or syllables, or morphemes.
  • the second database 162 stores information (e.g., grammar, pronunciation precision, sentence structure, etc.) for judging an entire meaning of a voice command based on the recognized voice information.
  • the third database 163 stores information relating to menus for functions or services of the mobile terminal, and the fourth database 164 stores a message or voice information to be output from the mobile terminal so as to receive a user's confirmation about the judged meaning of the voice command.
  • the third database 163 may be specified into information relating to menus of a specific category according to a domain preset for voice command recognition.
  • the respective databases may store sound (pronunciation) information, and phonemes, syllables, morphemes, words, keywords, or sentences corresponding to the pronunciation information.
  • the controller 180 can determine or judge the meaning of a voice command by using at least one of the plurality of databases 161 to 164 , and execute menus relating to functions or services corresponding to the judged meaning of the voice command.
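  • The cooperation of the four databases might be sketched as follows; the stored contents are invented for illustration only, and a real implementation would hold far richer data:

      first_db_161  = {"waiting": ["w", "ei", "t", "ing"]}        # sound -> voice units
      second_db_162 = {"stopwords": {"i", "want", "to"}}          # grammar information
      third_db_163  = {"send text": {"send", "text", "message"}}  # menu information
      fourth_db_164 = "Do you want to execute '{menu}'?"          # confirmation output

      def handle_command(text):
          words = set(text.lower().split()) - second_db_162["stopwords"]
          for menu, keywords in third_db_163.items():
              if words & keywords:                  # meaning judged from database 163
                  return fourth_db_164.format(menu=menu)
          return None

      print(handle_command("I want to send text message"))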
  • the present invention can display an operation state or mode having the voice command recognition function or STT function applied thereto by using a specific shape of indicator or icon. Then, upon the output of the indicator or icon, the user can be notified through a specific sound or voice.
  • the mobile terminal according to an embodiment of the present invention includes a web browser function and is configured to access the wireless Internet.
  • the controller 180 displays a default web page (hereinafter, will be referred to as ‘home page’) that has been previously set in an environment setting option.
  • the user can open either a web page of an address directly input in an address window of the web browser, or a web page of an address registered as a bookmark.
  • when the web page corresponds to a separate popup window, the popup window is displayed on an upper layer of the main web page.
  • FIG. 11 is an overview showing a web browser 700 of a mobile terminal according to an embodiment of the present invention.
  • the web browser 700 includes an address input window 710 for inputting an address of a web page, and a plurality of function button regions 720 used to perform a web surfing operation.
  • the function button regions 720 include a previous button 721 for displaying a previously opened web page (e.g., a first web page) of a currently opened web page (e.g., a second web page), and a forward button 722 for displaying a subsequently opened web page (e.g., a third web page) of the currently opened web page.
  • a home button, favorites button, refresh button, etc. may also be included in this region.
  • a web page generally has a resolution of at least 800 pixels in a horizontal direction. Therefore, the mobile terminal includes a display module also having a resolution of 800 pixels in a horizontal direction so as to provide full browsing capabilities.
  • the display module of the mobile terminal includes at most 450 pixels in the vertical direction, which is less than that of a general monitor. Therefore, to view the information on a mobile terminal, the user must often scroll up or down to view information beyond the 450 pixels displayed in the vertical direction.
  • the screen size of the terminal is small compared with its resolution. Therefore, the web page and corresponding information are displayed with a small font and are difficult to read and select.
  • when the display is a touch screen, the user can touch a particular link or item to obtain more information about the selected link or item. However, because the information is displayed with a small size, the user often touches the wrong link or item.
  • the user can view multiple links about different football news (e.g., different teams, live scores, etc.). The user can then touch a particular link to view more information about the selected link.
  • the links and other webpage information are condensed and displayed very close together.
  • the user often inadvertently touches the wrong link. This is particularly disadvantageous because the wrong additional information is accessed, which can take some time in a poor wireless environment, and if the user tries pressing the back page button, the main web page tends to freeze and the user must completely restart the web access function.
  • the touch screen may also be configured to recognize a touch from a stylus or other related touch pen.
  • the user often wants to search for information using the web browsing function on the mobile terminal.
  • FIG. 12 is a flowchart showing a method for searching information through a voice command in a web browsing mode according to an embodiment of the present invention.
  • the mobile terminal of the present invention may access a web page through the wireless Internet. That is, as shown in FIG. 11 , the controller 180 can access the wireless Internet by using the wireless communication unit 110 , and display the web page on a preset region (web page display region) 730 of a web browser (S 301 in FIG. 12 ). Further, when the web browser is executed, the controller 180 can automatically activate a voice recognition function and an STT function.
  • the controller 180 also constructs objects of the displayed web page (e.g., text, images, windows, etc.) as a database.
  • the database may include a plurality of databases according to the types of web pages, and may be stored in the memory 160 .
  • the objects of the database may be specified into objects displayed on a screen. Also, when the web page is enlarged or reduced in size according to a user's instruction, the controller 180 can appropriately reconfigure the database. Therefore, the controller 180 can recognize user voice commands based on the information of the objects of the database.
  • the controller 180 judges or determines a meaning of the voice command (S 303 ). That is, the controller 180 converts the voice command into text using an STT function, and judges the meaning of the voice command based on the converted text.
  • the controller 180 can refer to the database constructed with object information of the web page in order to determine the meaning of the voice command.
  • the user can input the voice command in the form of names (titles) of objects or phrases or sentences including the names of the objects.
  • the user can also enlarge a portion of the webpage before issuing the voice command to have the controller 180 more easily judge the meaning of the voice command, to improve a recognition rate for the voice command, to more specifically specify a scope of an object to be recognized in the voice command manner, etc.
  • when the controller 180 determines that the input voice command has a meaning relating to an information search operation in the web browsing mode (Yes in S 304), the controller 180 enters an information search mode (S 305).
  • the information search mode indicates a mode for searching information relating to a search word input into a search word input window of a web page having an information search function.
  • the information search mode may also indicate a mode for searching contents included in a currently displayed web page.
  • the controller 180 may display information about the entered state. Accordingly, a user can recognize that the mobile terminal has entered an information search mode, and can then input search information. Further, the user can input the search information in the form of words, phrases, or sentences. The user can input the search information by manually typing the information or by inputting the search information using voice commands.
  • the controller 180 converts the voice command into a text using an STT function (S 307 ). The controller 180 can also automatically display the converted text on the search word input window (S 308 ).
  • the controller 180 may output a guide message relating to an operation state of the mobile terminal.
  • the guide message may be a message indicating that the mobile terminal has entered an information search mode, a message indicating that a search word can be input, or a message confirming whether an input search word is correct, etc.
  • the controller 180 performs the information search operation (S 310 ).
  • the search operation is generally not performed by the mobile terminal, but rather upon indication of a search operation, the controller 180 sends a search word and a search instruction to a web server, and receives results about the search word from the web server. The controller 180 then displays the results (S 311 ).
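  • A sketch of this delegation using Python's standard urllib; the URL and query parameter below are placeholders, not an actual search service:

      from urllib.parse import urlencode
      from urllib.request import urlopen

      def web_search(search_word):
          # The terminal only sends the search word; the web server performs the search.
          query = urlencode({"q": search_word})
          with urlopen("https://www.example.com/search?" + query) as response:
              return response.read()               # result web page to display (S 311)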
  • when the searched information is displayed in the form of a web page via the controller 180, the user can select any one of the searched objects displayed on the web page in a voice command manner or in a key or touch input manner. Accordingly, detailed contents of the information can be displayed.
  • FIG. 13 is an overview showing a method for setting a database of objects displayed on a web page according to an embodiment of the present invention.
  • the controller 180 can construct objects of a web page (e.g., text, images, windows, etc.) as a database 165 for recognition of a voice command in a web browsing mode.
  • a voice command that can be input in a web browsing mode may be a command for selecting a specific object of a web page and displaying information linked to the selected object, a command for inputting a search word on a search window of a web page and searching relevant information, or a command for searching contents of a currently displayed web page.
  • the database constructed with the objects of the web page is referred to so as to recognize an object input in a voice command manner, thereby improving a recognition rate for a voice command and a recognition speed.
  • the controller 180 can refer to a source of a web page. For instance, when the web page has a source of ‘HYPER TEXT MARKUP LANGUAGE (HTML)’, the objects (e.g., texts, images, window) and information linked to the objects (e.g., an address of another web page) can be analyzed based on the source.
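  • One way such an object database could be built from an HTML source, sketched here with Python's standard html.parser (the disclosure does not name a parser, and the object layout is an assumption):

      from html.parser import HTMLParser

      class ObjectDB(HTMLParser):
          """Collects (kind, content, linked address) tuples from a web page source."""
          def __init__(self):
              super().__init__()
              self.objects = []
              self._href = None

          def handle_starttag(self, tag, attrs):
              attrs = dict(attrs)
              if tag == "a":
                  self._href = attrs.get("href")   # address linked to the object
              elif tag == "img":
                  self.objects.append(("image", attrs.get("src"), self._href))

          def handle_endtag(self, tag):
              if tag == "a":
                  self._href = None

          def handle_data(self, data):
              text = data.strip()
              if text:
                  self.objects.append(("text", text, self._href))

      db = ObjectDB()
      db.feed('<a href="/news">News Selection</a> <img src="logo.png">')
      print(db.objects)  # [('text', 'News Selection', '/news'), ('image', 'logo.png', None)]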
  • the objects of a database may be specified to objects currently displayed on a screen. Accordingly, when the web page is enlarged or reduced according to a user's instruction, the controller 180 can reconfigure the database.
  • the database may also include two or more databases according to a particular web page, and may be stored in the memory 160 .
  • FIG. 14 is an overview showing a method for entering an information search mode from a web browsing mode in a mobile terminal according to an embodiment of the present invention.
  • the displayed web page includes a search word input window 741 the user can use to search for information.
  • entering an information search mode indicates selecting the search word input window 741 , or alternatively searching desired information from contents of a web page currently displayed on a screen.
  • the search word input window 741 may be selected among objects of the web page when the user inputs a voice command indicating ‘search’ as shown by the reference numeral 742 in FIG. 14 .
  • the search word input window may be automatically activated or selected upon accessing the web page. The mobile terminal may thus enter an information search mode through the various input manners.
  • FIGS. 15A-15C are overviews showing a method for displaying a state that a mobile terminal has entered an information search mode according to an embodiment of the present invention.
  • the controller 180 may inform a user about the entered state, i.e., a state that the search word input window 741 of the web page has been selected.
  • the controller 180 displays an indicator 751 having a specific shape on the search word input window 741 so as to inform a user that the mobile terminal has entered the information search mode.
  • the indicator 751 may be implemented as a still image or as moving images (e.g., flickering effects).
  • the indicator 751 can also have various shapes or sizes.
  • the controller 180 outputs a guide message 753 using voice or text to inform the user that the mobile terminal has entered an information search mode.
  • the controller 180 outputs the message ‘You have entered information search mode’ as indicated by the reference number 752 or the message ‘Please input search word’ as indicated by the reference number 753 .
  • the guide message 753 using text instead of voice may also be displayed in the form of a balloon message. That is, as shown in FIG. 15C , the controller 180 can display a text message 754 in the search word input window 741 to indicate the search mode has been entered.
  • the controller 180 can also advantageously display the search word input window 741 with an enlarged size or a changed color.
  • the controller 180 can advantageously change the color of the search word input window 741 to be red after the mobile terminal enters the information search mode.
  • because the controller 180 distinctively displays the search window 741, the user can quickly see that the search mode has been successfully entered.
  • FIGS. 16A and 16B are overviews showing a method for inputting a search word in an information search mode according to an embodiment of the present invention.
  • the user has recognized the mobile terminal has entered the information search mode via one of the methods shown in FIG. 15 , and may input a search word into the search word input window 741 using a voice command.
  • the controller 180 then converts the input voice command into text using an STT function.
  • the converted text is distinguished from a general voice command. That is, the meaning of the search word is not judged based on the database; rather, the input is merely transcribed in a dictation operation until the search word has been completely input.
  • the controller 180 can also display the converted search word in the input window 741 so the user can verify the search word is correct.
  • the search word may also be displayed on any display region rather than the search word input window 741 .
  • the following description will refer to the input search word being displayed on the search word input window 741 .
  • the user can input one word (e.g., movie) 761 or a phrase or a sentence composed of two or more words (e.g., recent popular movie) 762 .
  • the controller 180 inserts an empty space between the words converted to text, thereby completing a sentence.
  • the controller 180 can determine that the search word has been completely input.
  • the search word may be input with a Boolean operator (e.g., AND, OR).
  • the Boolean operator may be converted into English, unlike the other search words. That is, the search words (e.g., movie, theater) are converted into the text language of each country, whereas the Boolean operator (e.g., AND) is converted only into English (e.g., AND) 765.
  • only a preset specific word may play the role of a Boolean operator. For instance, while converting a search word input in Korean into Korean text, the controller 180 judges whether a Boolean operator has been input. If a Boolean operator has been input, the controller 180 converts the Boolean operator into English.
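  • A sketch of this rule, assuming Korean spoken operators purely as an example:

      BOOLEAN_WORDS = {"그리고": "AND", "또는": "OR"}   # Korean 'and'/'or' (examples)

      def transcribe(spoken_words):
          # Ordinary words stay in the dictated language; operators become English.
          return " ".join(BOOLEAN_WORDS.get(word, word) for word in spoken_words)

      print(transcribe(["movie", "그리고", "theater"]))   # -> movie AND theater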
  • FIG. 17 is an overview showing a method for displaying a search word input in an information search mode according to an embodiment of the present invention.
  • the controller 180 inputs the search word onto the search word input window 741 . That is, the controller 180 displays the search word on the search word input window 741 . Accordingly, the user can check whether or not the search word has been precisely input. Further, the controller 180 can display a search word that has a highest recognition precision among search words input in a voice command manner.
  • the controller 180 can display the candidate search words (e.g., candidate search word 1 , candidate search word 2 , etc.) 772 as shown in the top portion in FIG. 17 .
  • the candidate search words may have priorities determined according to recognition precision, and may be displayed in that order.
  • the candidate search words may also have numbers according to priorities.
  • the user can select the candidate search word 774 or number 773 of the candidate search word in a voice command manner.
  • the user can select the numbers 773 in a key input manner.
  • the user can also select one of the candidate search words in a direct touch manner.
  • the controller 180 displays the selected candidate search word 777 on the search word input window as shown in the lower portion of FIG. 17 .
  • the controller 180 can then indicate that the search word has been completely input by outputting a guide message using text or voice. For example, as shown in the lower portion of FIG. 17, the controller 180 can output a message 775 indicating that the search word has been completely input, such as ‘You have input search word’, and output a corresponding message 776 inquiring whether or not to perform a search operation, such as ‘Do you want to perform search operation?’.
  • the guide messages using text may be displayed in the form of a balloon message.
  • FIGS. 18A and 18B are overviews showing a method for indicating an information search according to an embodiment of the present invention.
  • the user may input a command instructing a search operation.
  • the command may be input in a voice command manner or in a hardware or soft key input manner.
  • the user can request a search operation be performed in a voice command manner by responding (Yes or No) to the guide message 776 asking the user if they want to perform a search operation as shown in FIG. 17 .
  • the user may input a preset word or command “OK” together with a search word (e.g., “mobile terminal”) as indicated by the reference number 781 .
  • the controller 180 may instruct a search operation be performed after a preset time lapses after a search word has been input.
  • the controller 180 can perform the search operation based on a preset voice command “Search” as identified by the reference numeral 783 shown in FIG. 18B .
  • the controller 180 can output a guide message 782 using text or voice (e.g., ‘Search will start’ or ‘Search is being performed.’), or output an indicator having the same meaning. Accordingly, the user can recognize the current state of the mobile terminal to determine that the search is being performed. Also, as discussed above, the search operation is generally not performed by the mobile terminal, but rather the controller 180 sends a search word and a search instruction to a web server, receives results about the search word from the web server, and then displays the results. Accordingly, the user may select any object they desire from the displayed search results. Then, a web page linked to the object is displayed, thereby allowing the user to view details of his or her desired information.
  • FIG. 19 is a flowchart showing a method for searching a user's desired information in a web page according to an embodiment of the present invention. In addition, details about the same operations that have been previously described will be omitted.
  • the controller 180 constructs objects of the displayed web page as a database (S 402 ).
  • the displayed web page can be represented as HTML.
  • the information contained in the HTML can be used to create data objects in the database.
  • the controller 180 searches objects corresponding to the voice command from the database (S 404 ).
  • the controller 180 can search the HTML constructing the web page for objects or text that includes the converted voice command. Further, the voice command is assumed to be a command instructing contents of the web page to be searched.
  • the controller 180 may specify a range of information that can be recognized after being input in a voice command manner into objects displayed on the web page.
  • the controller 180 can also specify the range of information into objects displayed on a current screen among objects displayed on the web page. Then, once objects corresponding to the voice command are searched, the controller 180 displays results of the search (S 405 ). Also, the search results may be the objects or phrases or sentences including the objects.
  • the search results may be distinctively displayed from other information displayed on the web page.
  • the user has requested the currently displayed web page be searched for the phrase “News Selection”.
  • the controller 180 then converts the input voice command into text, and searches the database including objects representing the displayed web page for objects that include any of the terms “News” or “Selection.”
  • the controller 180 distinctively displays the found results 791 and 792 on corresponding positions on the web page with changed object features such that the user can quickly and easily see the objects that were found during the search of the currently displayed web page.
  • the controller 180 can display the search results with an enlarged size, color changes, background color display, transparency changes, font changes, or underlines for highlighting effects, etc.
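  • The in-page search itself might be sketched as follows, reusing the (kind, content, link) object tuples from the parser sketch above; the data and matching rule are illustrative assumptions:

      def search_page(objects, query):
          terms = {t.lower() for t in query.split()}   # e.g., {"news", "selection"}
          return [(kind, content, link)
                  for kind, content, link in objects
                  if content and terms & set(content.lower().split())]

      page_objects = [("text", "News Selection", "/news"),
                      ("text", "Weather", "/weather")]
      for number, hit in enumerate(search_page(page_objects, "News Selection"), 1):
          print(number, hit)                           # numbered results, as in FIG. 20A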
  • the search results may also be displayed through various emphasizing methods rather than the above methods.
  • the search results may be displayed on a specific display region 790 .
  • the specific display region 790 may be displayed on a screen divided into a plurality of parts, or on a web page in an overlaying manner.
  • the search results may also be displayed in an ‘On Screen Display’ (OSD) manner.
  • the search results may be numbered. Then, as shown in FIG. 20A , the user can automatically or manually select one of the search results, by selecting the numbers in a voice command or key input manner, or in a direct touch manner (S 406 in FIG. 19 ). Accordingly, information linked to the selected object can be displayed (S 408 ).
  • FIGS. 20A and 20B also show a method for selecting specific information among information obtained as a result of information search according to an embodiment of the present invention.
  • the controller 180 searches the specific object or phrases including the object in the database, and displays the results of the search. Then, when a command to select one of the search results is input, a corresponding search result is displayed, and information (web page) linked to the search result is automatically displayed. As the selected search result is displayed, the user can check whether or not a desired search result has been selected. When a specific time lapses after the search result is selected, information linked to the search result may be displayed. Here, the selected object may be displayed in a highlighted state by overlaying an indicator having a specific shape, or by changing a color, a size, or a thickness. In addition, within a preset time after the selected search result is displayed, the user may input a command to cancel the selected search result. Upon the input of the command, a displayed state of a web page linked to the selected search result may be canceled.
  • an information search can be performed through a voice command in a web browsing mode, thereby enhancing a user's convenience. Furthermore, the information search can be easily performed even in a web browsing mode of the mobile terminal having a small screen by using both a touch input method and a voice command input method.
  • the user can advantageously search for items or objects on a currently displayed web page.
  • the items or objects can be plain text when the web site includes text information, or can be links to other web sites.
  • the user can enter the term “people”, for example, and the controller 180 will distinctively display all items, text, objects, etc. on the web page that include the term “people”. Therefore, the user does not have to visually search the web site for desired information, which is often tedious and cumbersome.
  • the controller 180 can first enter an information search mode for searching the currently displayed web page before the user speaks the voice information to be used for searching the displayed web page.
  • the information search mode can be entered based on a voice command (e.g., “enter search mode”), a key input (e.g., a separate hard key on the mobile terminal), or a direct touch of a predetermined portion of the displayed web page.
  • the user can selectively determine when the search mode is entered so that the search mode is not inadvertently entered when the user is speaking and does not want the search mode to be entered.
  • the search mode can also be automatically entered as soon as the web page is displayed.

Abstract

A mobile terminal including a wireless communication unit configured to access a web page, a display unit configured to display the accessed web page, a receiving unit configured to receive input voice information, and a controller configured to convert the input voice information into text information, to search the displayed web page for objects that include the converted text information, and to control the display unit to distinctively display found objects that include the converted text information from other information displayed on the web page.

Description

    CROSS-REFERENCE TO A RELATED APPLICATION
  • The present disclosure relates to subject matter contained in priority Korean Application No. 10-2008-0106736, filed on Oct. 29, 2008, which is herein expressly incorporated by reference in its entirety.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a mobile terminal and corresponding method for searching objects on a displayed web page.
  • 2. Background of the Invention
  • Mobile terminals now provide many additional services beside the basic call service. For example, users can now access the Internet, play games, watch videos, listen to music, capture images and videos, record audio files, etc. Mobile terminals also provide broadcasting programs such that users can watch television shows, sporting events, videos, etc.
  • In addition, mobile terminals also provide web browsing functions. However, because the mobile terminal display is small in size, it is difficult to select items or links displayed on a particular web page. It is also difficult to search for information using web browsing functions.
  • SUMMARY OF THE INVENTION
  • Accordingly, one object of the present invention is to address the above-noted and other problems.
  • Another object of the present invention is to provide a mobile terminal and corresponding method for performing an information search through a voice command in a web browsing mode.
  • Yet another object of the present invention is to provide a mobile terminal and corresponding method for entering an information search mode from a web browsing mode through a voice command.
  • Another object of the present invention is to provide a mobile terminal and corresponding method for inputting a search word on an information search window through a voice command in a web browsing mode.
  • Still another object of the present invention is to provide a mobile terminal and corresponding method for indicating a search execution through a voice command in a web browsing mode.
  • Another object of the present invention is to provide a mobile terminal and corresponding method for selecting through a voice command information searched in a web browsing mode, and displaying a web page relating to the selected information.
  • Still yet another object of the present invention is to provide a mobile terminal and corresponding method for searching information on a currently displayed web page based on a voice command.
  • To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described herein, the present invention provides in one aspect a mobile terminal including a wireless communication unit configured to access a web page, a display unit configured to display the accessed web page, a receiving unit configured to receive input voice information, and a controller configured to convert the input voice information into text information, to search the displayed web page for objects that include the converted text information, and to control the display unit to distinctively display found objects that include the converted text information from other information displayed on the web page.
  • In another aspect, the present invention provides a method of controlling a mobile terminal, and which includes displaying an accessed web page on a display of the mobile terminal, receiving input voice information, converting the input voice information into text information, searching the displayed web page for objects that include the converted text information, and distinctively displaying found objects that include the converted text information from other information displayed on the web page.
  • Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention.
  • In the drawings:
  • FIG. 1 is a block diagram of a mobile terminal according to an embodiment of the present invention;
  • FIG. 2A is a front perspective view of the mobile terminal according to an embodiment of the present invention;
  • FIG. 2B is a rear perspective view of the mobile terminal according to an embodiment of the present invention;
  • FIGS. 3A and 3B are front views showing an operation state of the mobile terminal according to an embodiment of the present invention;
  • FIG. 4 is a conceptual view showing a proximity depth measured by a proximity sensor;
  • FIG. 5 is a flowchart illustrating a menu voice control method in a mobile terminal according to an embodiment of the present invention;
  • FIG. 6A includes overviews of display screens illustrating a method for activating a voice recognition function in a mobile terminal according to an embodiment of the present invention;
  • FIGS. 6B and 6C include overviews of display screens illustrating a method for outputting help information in a mobile terminal according to an embodiment of the present invention;
  • FIG. 7A is a flowchart illustrating a method for recognizing a voice command in a mobile terminal according to an embodiment of the present invention;
  • FIG. 7B is an overview illustrating a method for recognizing a voice command in a mobile terminal according to an embodiment of the present invention;
  • FIG. 8 includes overviews of display screens illustrating a method for displaying a menu in cooperation with a rate of voice recognition in a mobile terminal according to an embodiment of the present invention;
  • FIG. 9 includes overviews of display screens illustrating a method for recognizing a voice command in a mobile terminal according to an embodiment of the present invention;
  • FIG. 10 is an overview illustrating an organization of databases used for recognizing a voice command in a mobile terminal according an embodiment of the present invention;
  • FIG. 11 is an overview showing a web browser of a mobile terminal according to an embodiment of the present invention;
  • FIG. 12 is a flowchart showing a method for searching information through a voice command in a web browsing mode according to an embodiment of the present invention;
  • FIG. 13 is an overview showing a method for setting database of objects displayed on a web page according to an embodiment of the present invention;
  • FIG. 14 is an overview showing a method for entering an information search mode from a web browsing mode in a mobile terminal according to an embodiment of the present invention;
  • FIGS. 15A to 15C are overviews showing a method for displaying a state that a mobile terminal has entered an information search mode according to an embodiment of the present invention;
  • FIGS. 16A and 16B are overviews showing a method for inputting a search word in an information search mode according to an embodiment of the present invention;
  • FIG. 17 is an exemplary view showing a method for displaying a search word input in an information search mode according to an embodiment of the present invention;
  • FIGS. 18A and 18B are exemplary views showing a method for indicating information search according to an embodiment of the present invention;
  • FIG. 19 is a flowchart showing a method for searching a user's desired information in a web page according to an embodiment of the present invention; and
  • FIGS. 20A and 20B are overviews showing a method for selecting specific information among information obtained as a result of information search according to an embodiment the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Hereinafter, a mobile terminal relating to the present invention will be described below in more detail with reference to the accompanying drawings. Further, the mobile terminal described in the specification can include a cellular phone, a smart phone, a laptop computer, a digital broadcasting terminal, personal digital assistants (PDA), a portable multimedia player (PMP), a navigation system and so on.
  • FIG. 1 is a block diagram of a mobile terminal 100 according to an embodiment of the present invention. As shown, the mobile terminal 100 includes a radio communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface 170, a controller 180, and a power supply 190. Not all of the components shown in FIG. 1 are essential parts and the number of components included in the mobile terminal can be varied.
  • In addition, the radio communication unit 110 includes at least one module that enables radio communication between the mobile terminal 100 and a radio communication system or between the mobile terminal 100 and a network in which the mobile terminal 100 is located. For example, in FIG. 1, the radio communication unit 110 includes a broadcasting receiving module 111, a mobile communication module 112, a wireless Internet module 113, a local area communication module 114 and a position information module 115.
  • The broadcasting receiving module 111 receives broadcasting signals and/or broadcasting related information from an external broadcasting management server through a broadcasting channel. Further, the broadcasting channel can include a satellite channel and a terrestrial channel. Also, the broadcasting management server can be a server that generates and transmits broadcasting signals and/or broadcasting related information or a server that receives previously created broadcasting signals and/or broadcasting related information and transmits the broadcasting signals and/or broadcasting related information to a terminal. The broadcasting signals can include not only TV broadcasting signals, radio broadcasting signals and data broadcasting signals but also signals in the form of a combination of a TV broadcasting signal and a radio broadcasting signal.
  • In addition, the broadcasting related information can be information on a broadcasting channel, a broadcasting program or a broadcasting service provider. The broadcasting related information can also be provided through a mobile communication network. In this instance, the broadcasting related information can be received by the mobile communication module 112. The broadcasting related information can also exist in various forms. For example, the broadcasting related information can exist in the form of an electronic program guide (EPG) of the digital multimedia broadcasting (DMB) system or in the form of an electronic service guide (ESG) of the digital video broadcast-handheld (DVB-H) system.
  • In addition, the broadcasting receiving module 111 receives broadcasting signals using various broadcasting systems. In particular, the broadcasting receiving module 111 can receive digital broadcasting signals using digital broadcasting systems such as the digital multimedia broadcasting-terrestrial (DMB-T) system, the digital multimedia broadcasting-satellite (DMB-S) system, the media forward link only (MediaFLO) system, the DVB-H system and the integrated services digital broadcast-terrestrial (ISDB-T) system. The broadcasting receiving module 111 can also be constructed to be suited to broadcasting systems providing broadcasting signals other than the above-described digital broadcasting systems. The broadcasting signals and/or broadcasting related information received through the broadcasting receiving module 111 can also be stored in the memory 160.
  • Further, the mobile communication module 112 transmits/receives a radio signal to/from at least one of a base station, an external terminal, and a server on a mobile communication network. The radio signal can include a voice call signal, a video telephony call signal or data in various forms according to transmission and receiving of text/multimedia messages.
  • The wireless Internet module 113 corresponds to a module for wireless Internet access and can be included in the mobile terminal 100 or externally attached to the mobile terminal 100. Wireless LAN (WLAN) (Wi-Fi), wireless broadband (Wibro), world interoperability for microwave access (Wimax), high speed downlink packet access (HSDPA) and so on can be used as a wireless Internet technique. The local area communication module 114 corresponds to a module for local area communication. Bluetooth, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB) and ZigBee can be used as a local area communication technique.
  • In addition, the position information module 115 confirms or obtains the position of the mobile terminal 100. A global positioning system (GPS) module is a representative example of the position information module 115. Further, the GPS module 115 can calculate information on distances between one point (object) and at least three satellites and information on the time when the distance information is measured and apply trigonometry to the obtained distance information to obtain three-dimensional position information on the point (object) according to latitude, longitude and altitude coordinate at a predetermined time. Furthermore, a method of calculating position and time information using three satellites and correcting the calculated position and time information using another satellite is also used. In addition, the GPS module 115 continuously calculates the current position in real time and calculates velocity information using the position information.
  • Referring to FIG. 1, the A/V input unit 120 is used to input an audio signal or a video signal and includes a camera 121 and a microphone 122. The camera 121 processes image frames of still images or moving images obtained by an image sensor in a video telephony mode or a photographing mode. The processed image frames can be displayed on a display unit 151 included in the output unit 150. In addition, the image frames processed by the camera 121 can be stored in the memory 160 or transmitted to an external device through the radio communication unit 110. The mobile terminal 100 can also include at least two cameras according to constitution of the terminal.
  • Further, the microphone 122 receives an external audio signal in a call mode, a recording mode or a speech recognition mode and processes the received audio signal into electric audio data. The audio data can also be converted into a form that can be transmitted to a mobile communication base station through the mobile communication module 112 and output in the call mode. The microphone 122 can employ various noise removal algorithms for removing noise generated when the external audio signal is received.
  • In addition, the user input unit 130 receives input data for controlling the operation of the terminal from a user. The user input unit 130 can include a keypad, a dome switch, a touch pad (constant voltage/capacitance), jog wheel, jog switch and so on. The sensing unit 140 senses the current state of the mobile terminal 100, such as open/close state of the mobile terminal 100, the position of the mobile terminal 100, whether a user touches the mobile terminal 100, the direction of the mobile terminal 100 and acceleration/deceleration of the mobile terminal 100 and generates a detection signal for controlling the operation of the mobile terminal 100. For example, the sensing unit 140 can sense whether a slide phone is opened or closed when the mobile terminal 100 is the slide phone. Furthermore, the sensing unit 140 can sense whether the power supply 190 supplies power and whether the interface 170 is connected to an external device. The sensing unit 140 can also include a proximity sensor 141.
  • In addition, the output unit 150 generates visual, auditory or tactile output and can include the display unit 151, an audio output module 152, an alarm 153 and a haptic module 154. The display unit 151 displays information processed by the mobile terminal 100. For example, the display unit 151 displays a UI or graphic user interface (GUI) related to a telephone call when the mobile terminal is in the call mode. The display unit 151 also displays a captured and/or received image, UI or GUI when the mobile terminal 100 is in the video telephony mode or the photographing mode.
  • The display unit 151 can also include at least one of a liquid crystal display, a thin film transistor liquid crystal display, an organic light-emitting diode display, a flexible display and a three-dimensional display. Some of these displays can be of a transparent type or a light transmission type, which is referred to as a transparent display. The transparent display also includes a transparent liquid crystal display. The rear structure of the display unit 151 can also be of the light transmission type. According to this structure, a user can see an object located behind the body of the mobile terminal 100 through an area of the body of the mobile terminal 100, which is occupied by the display unit 151.
  • Further, the mobile terminal 100 can include at least two display units 151 according to constitution of the terminal. For example, the mobile terminal 100 can include a plurality of displays that are arranged on a single face at a predetermined distance or integrated. Otherwise, the plurality of displays can be arranged on different sides. In addition, when the display unit 151 and a sensor sensing touch (referred to as a touch sensor hereinafter) form a layered structure, which is referred to as a touch screen hereinafter, the display unit 151 can be used as an input device in addition to an output device. The touch sensor can be in the form of a touch film, a touch sheet and a touch pad, for example.
  • Also, the touch sensor can be constructed such that it converts a variation in pressure applied to a specific portion of the display unit 151 or a variation in capacitance generated at a specific portion of the display unit 151 into an electric input signal. The touch sensor can also be constructed such that it can sense pressure of touch as well as the position and area of touch. When touch input is applied to the touch sensor, a signal corresponding to the touch input is transmitted to a touch controller. The touch controller then processes the signal and transmits data corresponding to the processed signal to the controller 180. Accordingly, the controller 180 can detect a touched portion of the display 151.
  • Referring to FIG. 1, the proximity sensor 141 can be located in an internal region of the mobile terminal 100, surrounded by the touch screen, or near the touch screen. The proximity sensor 141 senses an object approaching a predetermined sensing face or an object located near the proximity sensor 141 using an electromagnetic force or infrared rays without having mechanical contact. Further, the proximity sensor 141 has a lifetime longer than that of a contact sensor and has a wide application. The proximity sensor 141 also includes a transmission type photo-electric sensor, a direct reflection type photo-electric sensor, a mirror reflection type photo-electric sensor, a high-frequency oscillating proximity sensor, a capacitive proximity sensor, a magnetic proximity sensor, an infrared proximity sensor, etc.
• In addition, a capacitive touch screen is constructed such that proximity of a pointer is detected through a variation in an electric field according to the proximity of the pointer. In this instance, the touch screen (touch sensor) can be classified as a proximity sensor. For convenience of explanation, an action of moving the pointer close to the touch screen while the pointer is not in contact with the touch screen such that the location of the pointer on the touch screen is recognized is referred to as “proximity touch”, and an action of bringing the pointer into contact with the touch screen is referred to as “contact touch” in the following description. Also, a proximity touch point of the pointer on the touch screen means the point of the touch screen to which the pointer corresponds perpendicularly when the pointer proximity-touches the touch screen.
• Further, the proximity sensor 141 senses a proximity touch and a proximity touch pattern (for example, a proximity touch distance, a proximity touch direction, a proximity touch velocity, a proximity touch time, a proximity touch position, a proximity touch moving state, etc.). Information corresponding to the sensed proximity touch action and proximity touch pattern can also be displayed on the touch screen. Also, the audio output module 152 can output audio data received from the radio communication unit 110 or stored in the memory 160 in a call signal receiving mode, a telephone call mode, a recording mode, a speech recognition mode or a broadcasting receiving mode. The audio output module 152 also outputs audio signals related to functions (for example, a call signal incoming tone, a message incoming tone, etc.) performed in the mobile terminal 100. The audio output module 152 can include a receiver, a speaker, a buzzer, etc.
  • The alarm 153 outputs a signal for indicating generation of an event of the mobile terminal 100. Examples of events generated in the mobile terminal 100 include receiving a call signal, receiving a message, input of a key signal, input of touch, etc. The alarm 153 can also output signals in forms different from video signals or audio signals, for example, a signal for indicating a generation of an event through vibration. The video signals or the audio signals can also be output through the display unit 151 or the audio output module 152.
• In addition, the haptic module 154 generates various haptic effects that the user can feel. A representative example of the haptic effect is vibration. The intensity and pattern of vibration generated by the haptic module 154 can also be controlled. For example, different vibrations can be combined and output or sequentially output. The haptic module 154 can also generate a variety of haptic effects in addition to vibrations, including an effect of stimulus according to an arrangement of pins vertically moving against a contacted skin surface, an effect of stimulus according to a jet force or suction force of air through a jet hole or a suction hole, an effect of stimulus rubbing the skin, an effect of stimulus according to contact of an electrode, an effect of stimulus using an electrostatic force, and an effect according to reproduction of cold and warmth using an element capable of absorbing or radiating heat. Further, the haptic module 154 can not only transmit haptic effects through direct contact but also allow the user to feel haptic effects through a kinesthetic sense of his or her fingers or arms. The mobile terminal 100 can also include at least two haptic modules 154 according to the configuration of the mobile terminal.
• In addition, the memory 160 stores a program for the operation of the controller 180 and temporarily stores input/output data (for example, a phone book, messages, still images, moving images, etc.). The memory 160 can also store data about vibrations and sounds in various patterns, which are output when a touch input is applied to the touch screen. The memory 160 can include at least one of a flash memory, a hard disk type memory, a multimedia card micro type memory, a card type memory (for example, SD or XD memory), a random access memory (RAM), a static RAM (SRAM), a read-only memory (ROM), an electrically erasable programmable ROM (EEPROM), a programmable ROM (PROM), a magnetic memory, a magnetic disk and an optical disk. The mobile terminal 100 can also operate in relation to a web storage performing the storing function of the memory 160 on the Internet.
  • Further, the interface 170 serves as a path to all external devices connected to the mobile terminal 100. The interface 170 receives data from the external devices or power and transmits the data or power to the internal components of the mobile terminal 100 or transmits data of the mobile terminal 100 to the external devices. The interface 170 can also include a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device having a user identification module, an audio I/O port, a video I/O port, an earphone port, etc., for example.
• In addition, an identification module is a chip that stores information for authenticating the authority to use the mobile terminal 100 and can include a user identity module (UIM), a subscriber identity module (SIM) and a universal subscriber identity module (USIM). A device (referred to as an identification device hereinafter) including the identification module can be manufactured in the form of a smart card. Accordingly, the identification device can be connected to the mobile terminal 100 through a port.
• Also, the interface 170 can serve as a path through which power from an external cradle is provided to the mobile terminal 100 when the mobile terminal 100 is connected to the external cradle, or a path through which various command signals input by the user through the cradle are transmitted to the mobile terminal 100. The various command signals or power input from the cradle can be used as a signal for confirming whether the mobile terminal 100 is correctly set in the cradle.
  • The controller 180 controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing for voice communication, data communication and video telephony. In FIG. 1, the controller 180 includes a multimedia module 181 for playing multimedia. The multimedia module 181 can be included in the controller 180 or separated from the controller 180. Further, the controller 180 can perform a pattern recognition process capable of recognizing handwriting input or picture-drawing input applied to the touch screen as characters or images. In addition, the power supply 190 receives external power and internal power and provides power required for the operations of the components of the mobile terminal under the control of the controller 180.
  • Further, various embodiments of the present invention can be implemented in a computer or similar device readable recording medium using software, hardware or a combination thereof, for example. According to a hardware implementation, the embodiments of the present invention can be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electrical units for executing functions. The embodiments can also be implemented by the controller 180.
  • According to a software implementation, embodiments such as procedures or functions can be implemented with a separate software module executing at least one function or operation. Software codes can be implemented according to a software application written in an appropriate software language. Furthermore, the software codes can be stored in the memory 160 and executed by the controller 180.
  • Next, FIG. 2A is a front perspective view of a mobile terminal or a handheld terminal 100 according to an embodiment of the present invention. As shown, the handheld terminal 100 has a bar type terminal body. However, the present invention is not limited to a bar type terminal and can be applied to terminals of various types including a slide type, folder type, swing type and swivel type terminals having at least two bodies that are relatively movably combined.
  • In addition, the terminal body includes a case (a casing, a housing, a cover, etc.) forming the exterior of the terminal 100. In the present embodiment, the case is divided into a front case 101 and a rear case 102. Various electronic components are also arranged in the space formed between the front case 101 and the rear case 102. At least one middle case can be additionally arranged between the front case 101 and the rear case 102. The cases can also be formed of plastics through injection molding or made of a metal material such as stainless steel (STS) or titanium (Ti).
• In addition, the display unit 151, the audio output unit 152, the camera 121, user input units 131 and 132 of the user input unit 130 (FIG. 1), the microphone 122 and the interface 170 are arranged in the terminal body, specifically, in the front case 101. Also, the display unit 151 occupies most of the main face of the front case 101. The audio output unit 152 and the camera 121 are arranged in a region in proximity to one of the two ends of the display unit 151, and the user input unit 131 and the microphone 122 are located in a region in proximity to the other end of the display unit 151. In addition, the user input unit 132 and the interface 170 are arranged on the sides of the front case 101 and the rear case 102.
• Further, the user input unit 130 is operated to receive commands for controlling the operation of the handheld terminal 100 and can include the plurality of operating units 131 and 132. The operating units 131 and 132 can be referred to as manipulating portions and can employ any tactile manner that allows a user to operate them while receiving tactile feedback. The operating units 131 and 132 can also receive various inputs. For example, the first operating unit 131 receives commands such as start, end and scroll, and the second operating unit 132 receives commands such as control of the volume of sound output from the audio output unit 152 or conversion of the display unit 151 to a touch recognition mode.
• Next, FIG. 2B is a rear perspective view of the handheld terminal shown in FIG. 2A according to an embodiment of the present invention. Referring to FIG. 2B, a camera 121′ is additionally attached to the rear side of the terminal body, that is, the rear case 102. The camera 121′ has a photographing direction opposite to that of the camera 121 shown in FIG. 2A and can have a pixel resolution different from that of the camera 121 shown in FIG. 2A. For example, it is preferable that the camera 121 has a lower resolution sufficient to capture an image of the face of a user and transmit the image to a receiving part for video telephony, while the camera 121′ has a higher resolution because it captures an image of a general object and does not immediately transmit the image in many instances. The cameras 121 and 121′ can also be attached to the terminal body such that they can be rotated or popped up.
• A flash bulb 123 and a mirror 124 are additionally arranged in proximity to the camera 121′. The flash bulb 123 lights an object when the camera 121′ takes a picture of the object, and the mirror 124 allows the user to see his or her face when self-photographing with the camera 121′. An audio output unit 152′ is additionally provided on the rear side of the terminal body. The audio output unit 152′ can thus achieve a stereo function together with the audio output unit 152 shown in FIG. 2A and be used for a speaker phone mode when the terminal is used for a telephone call. A broadcasting signal receiving antenna is also attached to the side of the terminal body in addition to an antenna for telephone calls. The antenna, constructing a part of the broadcasting receiving module 111 shown in FIG. 1, can be set in the terminal body such that it can be pulled out of the terminal body.
• Further, the power supply 190 for providing power to the handheld terminal 100 is set in the terminal body. The power supply 190 can be included in the terminal body or detachably attached to the terminal body. A touch pad 135 for sensing touch is also attached to the rear case 102. The touch pad 135 can be of a light transmission type, like the display unit 151. In this instance, if the display unit 151 outputs visual information through both of its sides, the visual information can be recognized through the touch pad 135. The information output through both sides of the display unit 151 can also be controlled by the touch pad 135. Otherwise, a display can be additionally attached to the touch pad 135 such that a touch screen can be arranged even in the rear case 102.
• The touch pad 135 also operates in connection with the display unit 151 of the front case 101. The touch pad 135 can be located in parallel with the display unit 151 behind the display unit 151, and can be identical to or smaller than the display unit 151 in size. Interoperations of the display unit 151 and the touch pad 135 will now be described with reference to FIGS. 3A and 3B. In more detail, FIGS. 3A and 3B are front views of the handheld terminal 100 for explaining an operating state of the handheld terminal according to an embodiment of the present invention. In addition, the display unit 151 can display various types of visual information in the form of characters, numerals, symbols, graphics or icons. To input information, at least one of the characters, numerals, symbols, graphics and icons is displayed in a predetermined arrangement in the form of a keypad. This keypad can be referred to as a ‘soft key’.
• Further, FIG. 3A shows that a touch applied to a soft key is input through the front side of the terminal body. The display unit 151 can be operated through its overall area. Otherwise, the display unit 151 can be divided into a plurality of regions and operated. In the latter instance, the display unit 151 can be constructed such that the plurality of regions interoperate. For example, an output window 151 a and an input window 151 b are respectively displayed in upper and lower parts of the display unit 151. The input window 151 b displays soft keys 151 c that represent numerals used to input numbers such as telephone numbers. When a soft key 151 c is touched, a numeral corresponding to the touched soft key is displayed on the output window 151 a. When the user operates the first operating unit 131, connection of a call corresponding to the telephone number displayed on the output window 151 a is attempted.
• Next, FIG. 3B shows that a touch applied to soft keys is input through the rear side of the terminal body. FIG. 3B also shows the terminal body in a landscape orientation, while FIG. 3A shows it in a portrait orientation. That is, the display unit 151 can be constructed such that an output image is converted according to the direction in which the terminal body is oriented. Further, FIG. 3B shows the operation of the handheld terminal in a text input mode. As shown, the display unit 151 displays an output window 135 a and an input window 135 b. A plurality of soft keys 135 c that indicate at least one of characters, symbols and numerals are arranged in the input window 135 b. The soft keys 135 c can also be arranged in the form of QWERTY keys.
  • When the soft keys 135 c are touched through the touch pad 135, characters, numerals and symbols corresponding to the touched soft keys 135 c are displayed on the output window 135 a. Touch input through the touch pad 135 can prevent the soft keys 135 c from being covered with the user's fingers when the soft keys 135 c are touched as compared to touch input through the display unit 151. When the display unit 151 and the touch pad 135 are transparent, fingers located behind the terminal body can be seen by the user, and thus touch input can be performed more correctly.
• In addition, the display unit 151 or the touch pad 135 can be constructed such that it receives touch input in a scroll manner. That is, the user can scroll the display unit 151 or the touch pad 135 to move an object displayed on the display unit 151, for example, a cursor or a pointer located on an icon. Furthermore, when the user moves his or her finger on the display unit 151 or the touch pad 135, the finger's moving path can be visually displayed on the display unit 151. This is useful for editing an image displayed on the display unit 151. Also, when the display unit 151 (touch screen) and the touch pad 135 are simultaneously touched within a predetermined period of time, a specific function of the terminal can be executed. This can occur, for example, when the user clamps the terminal body with a thumb and an index finger. The specific function can include activation or inactivation of the display unit 151 or the touch pad 135, for example.
  • The proximity sensor 141 described with reference to FIG. 1 will now be explained in more detail with reference to FIG. 4. That is, FIG. 4 is a conceptual view for explaining a proximity depth of the proximity sensor 141. As shown in FIG. 4, when a pointer such as a user's finger approaches the touch screen, the proximity sensor 141 located inside or near the touch screen senses the approach and outputs a proximity signal. The proximity sensor 141 can be constructed such that it outputs a proximity signal according to the distance between the pointer approaching the touch screen and the touch screen (referred to as “proximity depth”). The distance in which the proximity signal is output when the pointer approaches the touch screen is referred to as a detection distance. The proximity depth can be known by using a plurality of proximity sensors having different detection distances and comparing proximity signals respectively output from the proximity sensors.
• Further, FIG. 4 shows a section of the touch screen in which proximity sensors capable of sensing three proximity depths are arranged. Proximity sensors capable of sensing fewer than three or four or more proximity depths can also be arranged in the touch screen. Specifically, when the pointer completely comes into contact with the touch screen (D0), it is recognized as contact touch. When the pointer is located within a distance D1 from the touch screen, it is recognized as a proximity touch of a first proximity depth, and when the pointer is located in a range between the distance D1 and a distance D2 from the touch screen, it is recognized as a proximity touch of a second proximity depth. Further, when the pointer is located in a range between the distance D2 and a distance D3 from the touch screen, it is recognized as a proximity touch of a third proximity depth, and when the pointer is located at a distance longer than D3 from the touch screen, it is recognized as a cancellation of the proximity touch. Accordingly, the controller 180 can recognize the proximity touch as various input signals according to the proximity distance and proximity position of the pointer with respect to the touch screen and perform various operation controls according to the input signals.
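• To make the depth thresholds concrete, the following is a minimal sketch of the distance-to-depth mapping described above. The threshold values, the function name and the returned labels are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the proximity-depth classification described for FIG. 4.
# The threshold values (in millimeters) are assumed for illustration.

D1, D2, D3 = 10.0, 20.0, 30.0  # assumed detection distances

def classify_proximity(distance_mm: float) -> str:
    """Map a pointer-to-screen distance to a touch recognition state."""
    if distance_mm <= 0.0:          # D0: pointer touches the screen
        return "contact_touch"
    elif distance_mm <= D1:
        return "proximity_touch_depth_1"
    elif distance_mm <= D2:
        return "proximity_touch_depth_2"
    elif distance_mm <= D3:
        return "proximity_touch_depth_3"
    else:                           # beyond D3: proximity touch is cancelled
        return "proximity_touch_cancelled"

print(classify_proximity(15.0))  # -> proximity_touch_depth_2
```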
  • In the following description, a control method applicable to the above-configured mobile terminal 100 is explained with respect to various embodiments. However, the following embodiments can be implemented independently or through combinations thereof. In addition, in the following description, it is assumed that the display 151 includes a touch screen.
  • The mobile terminal according to the present invention is configured such that an algorithm for voice recognition and an algorithm for Speech To Text (STT) are stored in the memory 160. Further, the voice recognition function and the STT function cooperate together so as to convert a user's voice into a text format. The converted text can also be output on an execution screen of the terminal. Thus, the user can perform functions such as generating text for text messages or mails, etc. by speaking into the terminal. The controller 180 can also activate the voice recognition function and automatically drive the STT function.
  • Next, FIG. 5 is a flowchart illustrating a menu voice control method for a mobile terminal according to an embodiment of the present invention. As shown in FIG. 5, the controller 180 determines if the voice recognition function has been activated (S101). Further, the voice recognition function may be activated by the user selecting hardware buttons on the mobile terminal, or soft touch buttons displayed on the display 151. The user may also activate the voice recognition function by manipulating specific menus displayed on the display 151, by generating a specific sound or sound effects, by short or long-range wireless signals, or by the user's body information such as hand gesture or body gesture.
• In more detail, the specific sound or sound effects may include impact sounds having a level more than a specific level. Further, the specific sound or sound effects may be detected using a sound level detecting algorithm. In addition, the sound level detecting algorithm is preferably simpler than a voice recognition algorithm, and thus consumes fewer resources of the mobile terminal. Also, the sound level detecting algorithm (or circuit) may be implemented separately from the voice recognition algorithm or circuit, or may be implemented so as to use some functions of the voice recognition algorithm. In addition, the wireless signals may be received through the wireless communication unit 110, and the user's hand or body gestures may be received through the sensing unit 140. Thus, in an embodiment of the present invention, the wireless communication unit 110, the user input unit 130, and the sensing unit 140 may be referred to as a signal input unit. Further, the voice recognition function may also be terminated in a similar manner.
• Having the user physically activate the voice recognition function is particularly advantageous because the user is more aware that he or she is about to use voice commands to control the terminal. That is, because the user has to first perform a physical manipulation of the terminal, he or she intuitively recognizes that a voice command or instruction is about to be input, and therefore speaks more clearly or slowly to activate a particular function. Thus, because the user speaks more clearly or more slowly, for example, the probability of accurately recognizing the voice instruction increases. That is, in an embodiment of the present invention, the activation of the voice recognition function is performed by a physical manipulation of a button on the terminal rather than by speaking into the terminal.
  • Further, the controller 180 may start or terminate activation of the voice recognition function based on how many times the user touches a particular button or portion of the touch screen, how long the user touches a particular button or portion of the touch screen, etc. The user can also set how the controller 180 is to activate the voice recognition function using an appropriate menu option provided by the present invention. For example, the user can select a menu option on the terminal that includes 1) set activation of voice recognition based on X number of times the voice activation button is selected, 2) set activation of voice recognition based on X amount of time the voice activation button is selected, 3) set activation of voice recognition when the buttons X and Y are selected, etc. The user can then enter the values of X and Y in order to variably set how the controller 180 determines the voice activation function is activated. Thus, according to an embodiment of the present invention, the user is actively engaged with the voice activation function of their own mobile terminal, which increases the probability that the controller 180 will determine the correct function corresponding to the user's voice instruction, and which allows the user to tailor the voice activation function according to his or her needs.
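• As a rough illustration of these user-settable activation rules, the sketch below checks a chosen rule (press count, hold time, or button combination) against recent input. The configuration keys, the thresholds X and Y, and the event fields are all assumptions made for this example.

```python
# Illustrative sketch of user-configurable voice activation rules; the rule
# names, thresholds and event fields are assumptions, not the patent's API.

activation_config = {
    "mode": "press_count",        # or "hold_time" or "button_combo"
    "press_count_X": 2,           # activate after X presses of the voice button
    "hold_time_X": 1.5,           # or after the button is held X seconds
    "combo_buttons": {"X", "Y"},  # or when buttons X and Y are both pressed
}

def should_activate(config, presses, held_seconds, pressed_buttons):
    if config["mode"] == "press_count":
        return presses >= config["press_count_X"]
    if config["mode"] == "hold_time":
        return held_seconds >= config["hold_time_X"]
    if config["mode"] == "button_combo":
        return config["combo_buttons"] <= pressed_buttons  # subset check
    return False

print(should_activate(activation_config, presses=2, held_seconds=0.0,
                      pressed_buttons=set()))  # -> True
```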
  • The controller 180 may also maintain the activated state of the voice recognition function while the designated button(s) are touched or selected, and stop the voice recognition function when the designated button(s) are released. Alternatively, the controller 180 can maintain the activation of the voice recognition function for a predetermined time period after the designated button(s) are touched or selected, and stop or terminate the voice recognition function when the predetermined time period ends. In yet another embodiment, the controller 180 can store received voice instructions in the memory 160 while the voice recognition function is maintained in the activated state.
• In addition, as shown in FIG. 5, a domain of the database used as a reference for recognizing the meaning of the voice command is specified to information relating to specific functions or menus on the terminal (S102). For instance, the specified domain of the database may be information relating to menus currently displayed on the display 151, or information relating to sub-menus of one of the displayed menus. Further, because the domain of the database is specified, the recognition rate for the input voice command is improved. Examples of domains include an e-mail domain, a received calls domain, a multimedia domain, etc.
• Also, the information relating to sub-menus may be configured as data in a database. For example, the information may be configured in the form of keywords, and a plurality of keywords may correspond to one function or menu. In addition, the database can include a plurality of databases according to the features of the information, and may be stored in the memory 160. Further, the information in the database(s) may be advantageously updated or renewed through a learning process. Each domain of the respective databases may also be specified into a domain relating to the functions or menus currently being output, so as to enhance the recognition rate for a voice command. The domain may also change as menu steps continue to progress.
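• A minimal sketch of such a domain-specified database follows: each domain maps menu names to keyword lists, and only the domain for the currently displayed menus is consulted during recognition. The domain names, menus and keywords here are invented for illustration.

```python
# Hypothetical layout of domain-specified recognition data: each domain maps
# menu names to keyword lists, so only the active domain is consulted.

domains = {
    "e-mail":     {"compose mail": ["send", "mail", "write"],
                   "inbox":        ["inbox", "received", "mail"]},
    "multimedia": {"camera":       ["camera", "photo", "picture"],
                   "broadcasting": ["broadcast", "tv"]},
}

def candidate_menus(active_domain: str) -> dict:
    """Restrict recognition to the domain of the currently displayed menus."""
    return domains.get(active_domain, {})

print(list(candidate_menus("multimedia")))  # -> ['camera', 'broadcasting']
```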
  • Once the voice recognition function is activated (Yes in S101) and the domain has been specified (S102), the controller 180 determines if the user has input a voice command (S103). When the controller 180 determines the user has input the voice command (Yes in S103), the controller 180 analyzes a context and content of a voice command or instruction input through the microphone 122 based on a specific database, thereby judging a meaning of the voice command (S104).
  • Further, the controller 180 can determine the meaning of the voice instruction or command based on a language model and an acoustic model of the accessed domain. In more detail, the language model relates to the words themselves and the acoustic model corresponds to the way the words are spoken (e.g., frequency components of the spoken words or phrases). Using the language and acoustic models together with a specific domain and a state of the mobile terminal 100, the controller 180 can effectively determine the meaning of the input voice instructions or command.
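• The following sketch illustrates, under stated assumptions, how an acoustic score and a language score over a specified domain might be combined into one confidence value; the placeholder scoring functions and the equal weighting are assumptions, not the patent's method.

```python
# A minimal sketch of scoring a candidate phrase with both models; the scoring
# functions and the weighting are assumptions made for illustration only.

def acoustic_score(audio_features, phrase):
    # Placeholder: real systems compare frequency components of the speech.
    return 0.7

def language_score(phrase, domain_vocab):
    # How plausible the words are within the specified domain vocabulary.
    words = phrase.split()
    hits = sum(1 for w in words if w in domain_vocab)
    return hits / len(words)

def combined_score(audio_features, phrase, domain_vocab, weight=0.5):
    return (weight * acoustic_score(audio_features, phrase)
            + (1 - weight) * language_score(phrase, domain_vocab))

vocab = {"send", "text", "message"}
print(combined_score(None, "send text message", vocab))  # -> 0.85
```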
• Further, the controller 180 may start the process for judging the meaning of the input voice command as soon as the user releases the activation of the voice recognition function, having stored the input voice command in the memory 160, or may perform voice recognition simultaneously while the voice command is being input. In addition, if the voice command has not been fully input (No in S103), the controller 180 can still perform other functions. For example, if the user performs another action by touching a menu option, etc. or presses a button on the terminal (Yes in S109), the controller 180 performs the corresponding selected function (S110).
  • Further, after the controller 180 determines the meaning of the input voice command in step S104, the controller 180 outputs a result value of the meaning (S105). That is, the result value may include control signals for executing menus relating to functions or services corresponding to the determined meaning, for controlling specific components of the mobile terminal, etc. The result value may also include data for displaying information relating to the recognized voice command.
• The controller 180 may also request that the user confirm the output result value is accurate (S106). For instance, when the voice command has a low recognition rate or is determined to have a plurality of meanings, the controller 180 can output a plurality of menus relating to the respective meanings, and then execute a menu that is selected by the user (S107). Also, the controller 180 may ask the user whether to execute a specific menu having a high recognition rate, and then execute or display a corresponding function or menu according to the user's selection or response.
  • In addition, the controller 180 can also output a voice message asking the user to select a particular menu or option such as “Do you want to execute a message composing function? Reply with Yes or No.” Then, the controller 180 executes or does not execute a function corresponding to the particular menu or option based on the user's response. If the user does not respond in a particular time period (e.g., five seconds), the controller 180 can also immediately execute the particular menu or option. Thus, if there is no response from the user, the controller 180 may automatically execute the function or menu by judging the non-response as a positive answer. That is, the error processing step may be performed (S108) by again receiving input of a voice command, or may be performed by displaying a plurality of menus having a recognition rate more than a certain level or a plurality of menus that may be judged to have similar meanings. The user can then select one of the plurality of menus. Also, when the number of functions or menus having a recognition rate more than a certain level is less than a preset number (e.g., two), the controller 180 can automatically execute the corresponding function or menu.
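• The confirmation and error-processing steps (S106 to S108) can be pictured with the following sketch, in which silence within the timeout counts as a positive answer and a short candidate list triggers automatic execution. Every helper name here is hypothetical.

```python
# Illustrative sketch of the confirmation and error-processing steps
# (S106 to S108); all helper names are hypothetical stand-ins.

def execute(menu):
    print("executing", menu["name"])

def show_menu_list(candidates):
    print("choose one of:", [c["name"] for c in candidates])

def confirm_and_execute(candidates, get_user_reply, timeout_s=5.0):
    if len(candidates) < 2:               # fewer than the preset number
        return execute(candidates[0])     # -> execute automatically
    best = max(candidates, key=lambda c: c["rate"])
    reply = get_user_reply(f"Do you want to execute {best['name']}? "
                           "Reply with Yes or No.", timeout_s)
    if reply in (None, "yes"):            # no response counts as "yes"
        return execute(best)
    return show_menu_list(candidates)     # S108: let the user pick

# Silence within the timeout -> the highest-rate menu is executed:
confirm_and_execute([{"name": "send text", "rate": 0.90},
                     {"name": "send photo", "rate": 0.82}],
                    lambda prompt, timeout: None)
```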
• Next, FIG. 6A is an overview showing a method for activating a voice recognition function for a mobile terminal according to an embodiment of the present invention. As shown in the display screen 410, the user can activate the voice recognition function by touching a soft button 411. The user can also terminate the voice recognition function by releasing the soft button 411. In more detail, the user can activate the voice recognition function by touching the soft button 411 and continuously touching the soft button 411 or a hard button 412 until the voice instruction has been completed. That is, the user can release the soft button 411 or the hard button 412 when the voice instruction has been completed. Thus, the controller 180 is made aware of when the voice instruction is to be input and when the voice instruction has been completed. As discussed above, because the user is directly involved in this determination, the accuracy of the interpretation of the input voice command is increased.
  • The controller 180 can also be configured to recognize the start of the voice activation feature when the user first touches the soft button 411, and then recognize the voice instruction has been completed when the user touches the soft button 411 twice, for example. Other selection methods are also possible. Further, as shown in the display screen 410 in FIG. 6A, rather than using the soft button 411, the voice activation and de-activation can be performed by manipulating the hard button 412 on the terminal.
  • In addition, the soft button 411 shown in the display screen 410 can be a single soft button that the user presses or releases to activate/deactivate the voice recognition function or may be a menu button that when selected produces a menu list such as “1. Start voice activation, and 2. Stop voice activation”. The soft button 411 can also be displayed during a standby state, for example. In another example, and as shown in the display screen 420, the user can also activate and deactivate the voice recognition function by touching an arbitrary position of the screen. The display screen 430 in FIG. 6A illustrates yet another example in which the user activates and deactivates the voice recognition function by producing a specific sound or sound effects that is/are greater than a specific level. For example, the user may clap their hands together to produce such an impact sound.
• Thus, according to an embodiment of the present invention, the voice recognition function may be implemented in two modes. For example, the voice recognition function may be implemented in a first mode for detecting a particular sound or sound effects more than a certain level, and in a second mode for recognizing a voice command and determining a meaning of the voice command. If the sound or sound effects is/are more than a certain level in the first mode, the second mode is activated to thereby recognize the voice command.
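• A minimal sketch of this two-mode gating follows: a cheap, continuously running level check (first mode) only hands audio to the resource-heavy recognizer (second mode) when the level threshold is exceeded. The threshold value and all names are assumptions.

```python
# Sketch of the two-mode scheme: a cheap sound-level check (first mode)
# gates the resource-heavy recognizer (second mode). Names and the
# threshold value are assumed.

SOUND_LEVEL_THRESHOLD = 0.6   # assumed normalized level for an impact sound

def sound_level(samples):
    return max(abs(s) for s in samples)   # simple peak-level detector

def process_audio(samples, recognize_command):
    # First mode: only this level comparison runs continuously.
    if sound_level(samples) < SOUND_LEVEL_THRESHOLD:
        return None
    # Second mode: full voice recognition, activated only on demand.
    return recognize_command(samples)

print(process_audio([0.1, 0.8, 0.2], lambda s: "voice command recognized"))
```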
  • The display screen 440 in FIG. 6A illustrates still another method of the user activating and deactivating the voice recognition function. In this example, the controller 180 is configured to interpret body movements of the user to start and stop the voice activation function. For example, and as shown in the display screen 440, the controller 180 may be configured to interpret the user moving his hand toward the display as an instruction to activate the voice recognition function, and the user moving his hand away from the display as an instruction to terminate the voice activation function. Short or long-range wireless signals may also be used to start and stop the voice recognition function.
• Thus, according to an embodiment of the present invention, because the voice activation function is explicitly started and stopped, the voice recognition function is not continuously executed. That is, when the voice recognition function is continuously maintained in the activated state, the amount of resources consumed on the mobile terminal increases compared to this embodiment of the present invention. Further, as discussed above with respect to FIG. 5, when the voice recognition function is activated, the controller 180 specifies the domain of a specific database that is used as a reference for voice command recognition into a domain relating to a menu list on the display 151. Then, if a specific menu is selected or executed from the menu list, the domain of the database may be specified into information relating to the selected menu or sub-menus of the specific menu.
• In addition, when the specific menu is selected or executed through a voice command or touch input, the controller 180 may output help information relating to sub-menus of the specific menu in the form of a voice message, pop-up windows or balloons. For example, as shown in FIG. 6B, when the user selects the ‘multimedia menu’ via a touching or voice operation, the controller 180 displays information relating to the sub-menus (e.g., broadcasting, camera, text viewer, game, etc.) of the ‘multimedia menu’ as balloon-shaped help information 441. Alternatively, the controller 180 can output a voice signal 442 including the help information. The user can then select one of the displayed help options using a voice command or by a touching operation.
• FIG. 6C illustrates an embodiment of a user selecting a menu item using his or her body movements (in this example, the user's hand gesture). In more detail, as the user moves his or her finger closer to the menu item 443, the controller 180 displays the sub-menus 444 related to the menu 443. The controller 180 can recognize the user's body movement information via the sensing unit 140, for example. In addition, the help information can be displayed with a transparency or brightness controlled according to the user's distance. That is, as the user's hand gets closer, the displayed items can be further highlighted.
  • As discussed above, the controller 180 can be configured to determine the starting and stopping of the voice recognition function based on a variety of different methods. For example, the user can select/manipulate soft or hard buttons, touch an arbitrary position on the touch screen, etc. The controller 180 can also maintain the activation of the voice recognition function for a predetermined amount of time, and then automatically end the activation at the end of the predetermined amount of time. Also, the controller 180 may maintain the activation only while a specific button or touch operation is performed, and then automatically end the activation when the input is released. The controller 180 can also end the activation process when the voice command is no longer input for a certain amount of time.
• Next, FIG. 7A is a flowchart showing a method for recognizing a voice command in a mobile terminal according to an embodiment of the present invention. Referring to FIG. 7A, when the voice recognition function is activated, the controller 180 specifies a domain of a database that can be used as a reference for voice command recognition into a domain relating to a menu displayed on the display 151, sub-menus of the menu, or a domain relating to a currently-executed function or menu (S201). The user then inputs the voice command (S202) using either the precise menu name or natural language (spoken English, for example). The controller 180 then stores the input voice command in the memory 160 (S203). Further, when the voice command is input under a specified domain, the controller 180 analyzes a context and content of the voice command based on the specified domain by using a voice recognition algorithm. Also, the voice command may be converted into text-type information for analysis (S204), and then stored in a specific database of the memory 160. However, the step of converting the voice command into text-type information can be omitted.
• Then, to analyze the context and content of the voice command, the controller 180 detects a specific word or keyword of the voice command (S205). Based on the detected words or keywords, the controller 180 analyzes the context and content of the voice command and determines or judges a meaning of the voice command by referring to information stored in the specific database (S206). In addition, as discussed above, the database used as a reference includes a specified domain, and functions or menus corresponding to the meaning of the voice command judged based on the database are executed (S207). For example, if it is assumed that text is input using an STT function after executing the text message writing function, the priorities of the information used for voice command recognition may be set to commands related to modifying text, or to commands related to searching for another party to receive the text message or to transmitting such a message. Also, because the database for voice recognition is specified to information relating to a currently-executed function or menu, the recognition rate and the speed of recognizing the voice command are improved, and the amount of resources used on the terminal is reduced. Further, the recognition rate indicates a degree of matching with a name preset for a specific menu.
• The recognition rate for an input voice command may also be judged by the number of pieces of information in the voice command relating to specific functions or menus. Therefore, the recognition rate for the input voice command is improved when the voice command includes information that precisely matches a specific function or menu (e.g., a menu name).
  • In more detail, FIG. 7B is an overview showing a method for recognizing a voice command of a mobile terminal according to an embodiment of the present invention. As shown in FIG. 7B, the user inputs a voice command as a natural language composed of six words “I want to send text message.” In this example, the recognition rate can be judged based on the number of meaningful words (e.g., send, text, message) relating to a specific menu (e.g., text message). In addition, the controller 180 can determine whether the words included in the voice command are meaningful words relating to a specific function or menu based on the information stored in the database. For instance, meaningless words included in the natural language voice command (e.g., I want to send text message) that are irrelevant to the specific menu may be the subject (I) or the preposition (to).
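• As a rough sketch of this scoring idea, the function below counts how many meaningful (non-stopword) words of a spoken sentence match a menu's keywords; the stopword list and the 80% threshold follow the examples in the text, while the function itself is an illustrative assumption.

```python
# Sketch of judging a recognition rate from the meaningful words in a natural
# language command; everything but the example sentence and the 80% level
# from the text is assumed.

STOPWORDS = {"i", "want", "to", "a", "the"}  # assumed meaningless words

def recognition_rate(command: str, menu_keywords: set) -> float:
    words = [w for w in command.lower().split() if w not in STOPWORDS]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in menu_keywords)
    return hits / len(words)

rate = recognition_rate("I want to send text message",
                        {"send", "text", "message"})
print(f"{rate:.0%}")   # -> 100%
print(rate >= 0.80)    # judged precise above the 80% level -> True
```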
• Also, natural language is language commonly used by people, and has a concept contrary to that of an artificial language. Further, the natural language may be processed by using a natural language processing algorithm. The natural language may or may not include the precise name of a specific menu, which sometimes makes it difficult to precisely recognize a voice command. Therefore, according to an embodiment of the present invention, when a voice command has a recognition rate more than a certain level (e.g., 80%), the controller 180 judges the recognition to be precise. Further, when the controller 180 judges a plurality of menus to have similar meanings, the controller 180 displays the plurality of menus and the user can select one of the displayed menus to have its functions executed. In addition, a menu having a relatively higher recognition rate may be displayed first or displayed distinctively compared to the other menus.
  • For example, FIG. 8 is an overview showing a method for displaying menus for a voice recognition rate of a mobile terminal according to an embodiment of the present invention. As shown in FIG. 8, a menu icon having a higher recognition rate is displayed at a central portion of the display screen 510, or may be displayed with a larger size or a darker color as shown in the display screen 520. The menu icon having the higher recognition rate can also be displayed first and then followed in order or sequential manner by lower recognition rate menus. Further, the controller 180 can distinctively display the plurality of menus by changing at least one of the size, position, color, brightness of the menus or by highlighting in the order of a higher recognition rate. The transparency of the menus may also be appropriately changed or controlled.
• In addition, as shown in the lower portion of FIG. 8, a menu having a higher selection rate by a user may be updated or set to have a higher recognition rate. That is, the controller 180 stores a history of the user's selections (S231) and performs a learning process (S232) to thereby update the recognition rate for a menu option that is selected by the user more often than other menu options (S233). Thus, the number of times a frequently used menu is selected by a user may be applied to the recognition rate of the menu. Therefore, a voice command input in the same or similar manner in pronunciation or content may have a different recognition rate according to how many times a user selects a particular menu. Further, the controller 180 may also store the time at which the user performs particular functions. For example, a user may check emails or missed messages every time they wake up on Mondays through Fridays. This time information may also be used to improve the recognition rate. The state of the terminal (e.g., standby mode, etc.) may also be used to improve the recognition rate. For example, the user may check emails or missed messages when first turning on their mobile terminal, when the terminal is opened from a closed position, etc.
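• The learning steps S231 to S233 might look like the following sketch, in which a per-menu boost is derived from how often, and at what hour, the user has selected each menu; the boost formula and all names are assumptions made for illustration.

```python
# Illustrative sketch of the learning steps S231-S233: selection history is
# stored and a per-menu boost is derived from how often (and when) the user
# picks each menu. The boost formula is an assumption.

from collections import Counter

selection_history = []  # list of (menu_name, hour_of_day) tuples

def record_selection(menu, hour):          # S231: store the selection
    selection_history.append((menu, hour))

def learned_boost(menu, current_hour):     # S232/S233: derive a rate boost
    counts = Counter(m for m, _ in selection_history)
    frequency = counts[menu] / max(1, len(selection_history))
    same_hour = sum(1 for m, h in selection_history
                    if m == menu and h == current_hour)
    return frequency + 0.05 * same_hour    # frequent and habitual -> higher

record_selection("check email", 7)         # e.g., every weekday morning
record_selection("check email", 7)
print(learned_boost("check email", 7))     # boost added to the recognition rate
```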
  • Next, FIG. 9 is an overview showing a method for recognizing a voice command of a mobile terminal according to another embodiment of the present invention. As shown in FIG. 9, the user activates the voice recognition function, and inputs the voice command “I want to send text message.” The controller 180 then specifies a domain of a database for voice command recognition into a domain relating to the displayed sub-menus. The controller 180 then interprets the voice command (S241) and in this example, displays a plurality of menus that have a probability greater than a particular value (e.g., 80%) (S242). As shown in the display screen 610 in FIG. 9, the controller displays four multimedia menus.
  • The controller 180 also distinctively displays a menu having the highest probability (e.g., specific menu option 621 “Send Text” in this example). The user can then select any one of the displayed menus to execute a function corresponding to the selected menu. In the example shown in FIG. 9, the user selects the Send Text menu option 621 and the controller 180 displays sub menus related to the selected Send Text menu option 621 as shown in the display screen 620. Also, as shown in step (S242) in the lower portion of FIG. 9, the controller 180 can also immediately execute a function when only a single menu is determined to be higher than the predetermined probability rate. That is, the controller 180 displays the information related to the text sending as shown in the display screen 620 immediately without the user having to select the Send Text menu option 621 when the Send Text menu option 621 is determined to be the only menu that has a higher recognition rate or probability than a predetermined threshold.
• Further, as discussed above with respect to FIG. 6B, when a specific menu is selected or executed through a voice command or touch input according to an operation state or mode (e.g., a mode for indicating a voice recognition function), the controller 180 can also output balloon-shaped help information related to the sub-menus to the user in a voice or text format. In addition, the user can set the operation mode for outputting the help information using appropriate menu options provided in environment setting menus. Accordingly, a user can operate the terminal of the present invention without needing a high level of skill. That is, many older people may not be experienced in operating the plurality of different menus provided with the terminal. However, with the terminal of the present invention, a user who is generally not familiar with the intricacies of the user interfaces provided with the terminal can easily operate the mobile terminal.
  • In addition, when the controller 180 recognizes the voice command to have a plurality of meanings (i.e., when a natural language voice command (e.g., I want to send text message) does not include a precise menu name such as when a menu is included in a ‘send message’ category but does not have a precise name among ‘send photo’, ‘send mail’, and ‘outbox’), the controller 180 displays a plurality of menus having a recognition rate more than a certain value (e.g. 80%).
• Next, FIG. 10 is an overview showing a plurality of databases used by the controller 180 for recognizing a voice command of a mobile terminal according to an embodiment of the present invention. In this embodiment, the databases store information that the controller 180 uses to judge the meaning of a voice command, and may be any number of databases according to information features. Further, the respective databases configured according to information features may be updated through a continuous learning process under control of the controller 180. For example, the learning process attempts to match a user's voice with a corresponding word. For example, when the word “waiting” pronounced by a user is misunderstood as the word “eighteen”, the user corrects the word “eighteen” into “waiting”. Accordingly, the same pronunciation subsequently input by the user is recognized as “waiting”.
• As shown in FIG. 10, the respective databases according to information features include a first database 161, a second database 162, a third database 163, and a fourth database 164. In this embodiment, the first database 161 stores voice information for recognizing a voice input through the microphone in units of phonemes, syllables, or morphemes. The second database 162 stores information (e.g., grammar, pronunciation precision, sentence structure, etc.) for judging the entire meaning of a voice command based on the recognized voice information. The third database 163 stores information relating to menus for functions or services of the mobile terminal, and the fourth database 164 stores a message or voice information to be output from the mobile terminal so as to receive a user's confirmation about the judged meaning of the voice command.
• In addition, the third database 163 may be specified into information relating to menus of a specific category according to a domain preset for voice command recognition. Also, the respective databases may store sound (pronunciation) information, and phonemes, syllables, morphemes, words, keywords, or sentences corresponding to the pronunciation information. Accordingly, the controller 180 can determine or judge the meaning of a voice command by using at least one of the plurality of databases 161 to 164, and execute menus relating to functions or services corresponding to the judged meaning of the voice command. Further, the present invention can display an operation state or mode having the voice command recognition function or STT function applied thereto by using a specific shape of indicator or icon. Then, upon the output of the indicator or icon, the user can be notified through a specific sound or voice.
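• To make the division of labor among the four databases concrete, here is a minimal sketch representing databases 161 to 164 as plain lookup tables chained together; the sample entries and the chaining function are invented for illustration and are not the patent's data layout.

```python
# Minimal sketch of the four databases of FIG. 10 as plain lookup tables; the
# sample entries are invented to illustrate the roles described in the text.

db1_voice_units = {"send": ["s-eh-n-d"], "text": ["t-eh-k-s-t"]}       # 161
db2_meaning_info = {"grammar": "...", "sentence_structure": "..."}     # 162
db3_menu_info = {"send text": "menu:message/compose"}                  # 163
db4_confirm_msgs = {"send text":                                       # 164
                    "Do you want to execute message composing? "
                    "Reply with Yes or No."}

def judge_and_confirm(recognized_words):
    """Chain the databases: recognized units -> judged menu -> confirmation."""
    # 161 recognizes the spoken units; 162 would supply grammar/structure
    # information for judging the whole meaning (not exercised in this sketch).
    phrase = " ".join(w for w in recognized_words if w in db1_voice_units)
    menu = db3_menu_info.get(phrase)        # 163: map the meaning to a menu
    prompt = db4_confirm_msgs.get(phrase)   # 164: confirmation message
    return menu, prompt

print(judge_and_confirm(["send", "text"]))
```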
• Next, a method for controlling a mobile terminal according to an embodiment of the present invention will be explained. The discussed embodiments may be used independently or in combination with each other, and/or in combination with the user interface (UI). In addition, the mobile terminal according to an embodiment of the present invention includes a web browser function and is configured to access the wireless Internet. When the web browser is executed, the controller 180 displays a default web page (hereinafter referred to as a ‘home page’) that has been previously set in an environment setting option. Then, the user can open either a web page whose address is directly input in an address window of the web browser, or a web page whose address is registered as a bookmark. When the web page corresponds to a separate popup window, the popup window is displayed on an upper layer of the main web page.
• For example, FIG. 11 is an overview showing a web browser 700 of a mobile terminal according to an embodiment of the present invention. As shown, the web browser 700 includes an address input window 710 for inputting an address of a web page, and a plurality of function button regions 720 used to perform a web surfing operation. Further, the function button regions 720 include a back button 721 for displaying a web page (e.g., a first web page) opened before the currently opened web page (e.g., a second web page), and a forward button 722 for displaying a web page (e.g., a third web page) opened after the currently opened web page. A home button, favorites button, refresh button, etc. may also be included in this region.
• In addition, a web page generally has a resolution of at least 800 pixels in a horizontal direction. Therefore, the mobile terminal includes a display module also having a resolution of 800 pixels in the horizontal direction so as to provide full browsing capabilities. However, the display module of the mobile terminal includes at most 450 pixels in the vertical direction, which is less than that of a general monitor. Therefore, to view the information on a mobile terminal, the user must often scroll up or down to view information beyond the 450 pixels displayed in the vertical direction. In addition, one problem that may occur while performing a full web browsing function is that the screen size of the terminal is small when compared with its resolution. Therefore, the web page and corresponding information are displayed with a small font and are difficult to read and select. When the display is a touch screen, the user can touch a particular link or item to obtain more information about the selected link or item, but because the information is displayed with a small size, the user often touches the wrong link or item.
  • For example, if the user is viewing a main webpage about football, the user can view multiple links about different football news (e.g., different teams, live scores, etc.). The user can then touch a particular link to view more information about the selected link. However, because the display size of the terminal is so small, the links and other webpage information are condensed and displayed very close together. Thus, the user often inadvertently touches the wrong link. This is particularly disadvantageous because the wrong additional information is accessed, which can take some time in a poor wireless environment, and if the user tries pressing the back page button, the main web page tends to freeze and the user must completely restart the web access function.
• In addition, when the weather is cold, the user often tries to touch a web link or other related web item while wearing gloves. However, because the touch screen is limited to recognizing only the user's finger, the user must take off their gloves to operate the touch screen terminal. This is particularly disadvantageous in cities where the weather is quite cold in the winter. The touch screen may also be configured to recognize a touch from a stylus or other related touch pen. However, it is inconvenient for the user to retract the stylus and then touch an item. The stylus is often misplaced or lost, resulting in even more inconvenience for the user. Further, the user often wants to search for information using the web browsing function on the mobile terminal. However, because the screen is so small and because the keypad used for inputting the search information is also small, it is very difficult for the user to input search commands. The present invention solves the above problems by providing a method for facilitating an information search process in a web browsing mode not only through a touch input, but also through a voice command. In more detail, FIG. 12 is a flowchart showing a method for searching information through a voice command in a web browsing mode according to an embodiment of the present invention.
  • In addition, as discussed above, the mobile terminal of the present invention may access a web page through the wireless Internet. That is, as shown in FIG. 11, the controller 180 can access the wireless Internet by using the wireless communication unit 110, and display the web page on a preset region (web page display region) 730 of a web browser (S301 in FIG. 12). Further, when the web browser is executed, the controller 180 can automatically activate a voice recognition function and an STT function. The controller 180 also constructs objects of the displayed web page (e.g., text, images, windows, etc.) as a database. In more detail, the database may include a plurality of databases according to the types of web pages, and may be stored in the memory 160. The objects of the database may be specified into objects displayed on a screen. Also, when the web page is enlarged or reduced in size according to a user's instruction, the controller 180 can appropriately reconfigure the database. Therefore, the controller 180 can recognize user voice commands based on the information of the objects of the database.
  • Accordingly, as shown in FIG. 12, when the user inputs a voice command in the web browsing mode (Yes in S302), the controller 180 judges or determines the meaning of the voice command (S303). That is, the controller 180 converts the voice command into text using the STT function, and judges the meaning of the voice command based on the converted text. For example, the controller 180 can refer to the database constructed with the object information of the web page in order to determine the meaning of the voice command. In addition, the user can input the voice command in the form of the names (titles) of objects, or of phrases or sentences including those names. The user can also enlarge a portion of the web page before issuing the voice command so that the controller 180 can more easily judge the meaning of the voice command, improve the recognition rate for the voice command, more narrowly specify the scope of objects to be recognized from the voice input, and so on.
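Continuing the sketch above (and reusing its assumed `PageObject` type), the converted text might be matched against the object database roughly as follows. The exact-match-then-substring strategy is an assumption for illustration, not the patented algorithm.

```typescript
// Illustrative matching of an STT transcript against the object database.
function matchVoiceCommand(utterance: string, db: PageObject[]): PageObject | null {
  const spoken = utterance.toLowerCase().trim();
  // Exact name match first (the user spoke the object's title)...
  let best = db.find((o) => o.name.toLowerCase() === spoken);
  if (best) return best;
  // ...then fall back to a phrase or sentence containing the object's name.
  best = db.find((o) => spoken.includes(o.name.toLowerCase()));
  return best ?? null;
}
```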
  • Then, as shown in FIG. 12, if the controller 180 determines that the input voice command has a meaning relating to an information search operation in the web browsing mode (Yes in S304), the controller 180 enters an information search mode (S305). Here, the information search mode refers to a mode for searching for information related to a search word entered into the search word input window of a web page having an information search function. The information search mode may also refer to a mode for searching the contents of the currently displayed web page.
  • After entering the information search mode, the controller 180 may display information about the entered state. Accordingly, the user can recognize that the mobile terminal has entered the information search mode, and can then input search information. Further, the user can input the search information in the form of words, phrases, or sentences, either by manually typing the information or by using voice commands. When the user inputs the search information via a voice command (Yes in S306), the controller 180 converts the voice command into text using the STT function (S307). The controller 180 can also automatically display the converted text in the search word input window (S308).
  • In addition, the controller 180 may output a guide message relating to an operation state of the mobile terminal. For instance, the guide message may be a message indicating that the mobile terminal has entered the information search mode, a message indicating that a search word can be input, or a message confirming whether an input search word is correct. Then, when the user inputs a search instruction command (Yes in S309), the controller 180 performs the information search operation (S310). Also, the search operation itself is generally not performed by the mobile terminal; rather, upon an instruction to search, the controller 180 sends the search word and a search instruction to a web server, and receives the results for the search word from the web server. The controller 180 then displays the results (S311). Further, when the searched information is displayed in the form of a web page via the controller 180, the user can select any one of the searched objects displayed on the web page in a voice command manner or in a key or touch input manner. Accordingly, the detailed contents of the information can be displayed.
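As a hedged illustration of this delegation, the search word could be forwarded to a web server along the following lines; the endpoint URL and the response shape are hypothetical.

```typescript
// Illustrative sketch: delegate the search to a web server and return results.
async function performRemoteSearch(searchWord: string): Promise<string[]> {
  // The URL and JSON response format are assumptions for illustration only.
  const url = 'https://example.com/search?q=' + encodeURIComponent(searchWord);
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Search failed: ${response.status}`);
  const results: string[] = await response.json();
  return results; // e.g., titles of matching pages to render as a result page
}
```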
  • Next, a method for searching information in the web browsing mode will be explained in more detail. In particular, FIG. 13 is an overview showing a method for setting a database of objects displayed on a web page according to an embodiment of the present invention. As discussed above, and with reference to FIG. 13, the controller 180 can construct objects of a web page (e.g., text, images, windows, etc.) as a database 165 for recognition of a voice command in a web browsing mode. That is, a voice command that can be input in a web browsing mode may be a command for selecting a specific object of a web page and displaying information linked to the selected object, a command for inputting a search word on a search window of a web page and searching relevant information, or a command for searching contents of a currently displayed web page.
  • In the web browsing mode, the database constructed from the objects of the web page is consulted to recognize an object input in a voice command manner, thereby improving the recognition rate and recognition speed for a voice command. In addition, to construct the database, the controller 180 can refer to the source of the web page. For instance, when the web page has a 'HYPERTEXT MARKUP LANGUAGE (HTML)' source, the objects (e.g., texts, images, windows) and the information linked to the objects (e.g., the address of another web page) can be analyzed based on that source. Also, the objects in the database may be limited to the objects currently displayed on the screen. Accordingly, when the web page is enlarged or reduced according to a user's instruction, the controller 180 can reconfigure the database. The database may also include two or more databases according to the particular web page, and may be stored in the memory 160.
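A sketch of limiting the database to on-screen objects, under the same assumptions as the earlier example; detecting a zoom change via the `resize` event is a simplification for illustration.

```typescript
// Illustrative sketch: keep only objects visible in the current viewport.
function visibleObjects(db: PageObject[]): PageObject[] {
  return db.filter((o) => {
    const r = o.element.getBoundingClientRect();
    return r.bottom > 0 && r.top < window.innerHeight &&
           r.right > 0 && r.left < window.innerWidth;
  });
}

// Rebuild the recognizable set whenever the page is zoomed or resized.
let recognizable = visibleObjects(buildObjectDatabase(document));
window.addEventListener('resize', () => {
  recognizable = visibleObjects(buildObjectDatabase(document));
});
```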
  • Next, FIG. 14 is an overview showing a method for entering an information search mode from a web browsing mode in a mobile terminal according to an embodiment of the present invention. Also, in this embodiment, the displayed web page includes a search word input window 741 that the user can use to search for information. In addition, entering the information search mode means either selecting the search word input window 741 or searching for desired information within the contents of the web page currently displayed on the screen. These two different types of search modes will now be explained. First, the search word input window 741 may be selected among the objects of the web page in a hardware or soft key input manner or in a touch input manner. Alternatively, the search word input window 741 may be selected among the objects of the web page when the user inputs a voice command indicating 'search', as shown by the reference numeral 742 in FIG. 14. Alternatively, the search word input window may be automatically activated or selected upon accessing the web page. The mobile terminal may thus enter the information search mode through these various input manners.
  • Next, FIGS. 15A-15C are overviews showing a method for indicating that a mobile terminal has entered an information search mode according to an embodiment of the present invention. In particular, when the mobile terminal has entered the information search mode via a voice command or through other input methods, the controller 180 may inform the user about the entered state, i.e., that the search word input window 741 of the web page has been selected. For example, as shown in FIG. 15A, the controller 180 displays an indicator 751 having a specific shape on the search word input window 741 so as to inform the user that the mobile terminal has entered the information search mode. The indicator 751 may be implemented as a still image or as moving images (e.g., flickering effects). The indicator 751 can also have various shapes or sizes.
  • Referring to FIG. 15B, the controller 180 outputs a guide message using voice or text to inform the user that the mobile terminal has entered the information search mode. In particular, in FIG. 15B, the controller 180 outputs the message 'You have entered information search mode' as indicated by the reference number 752, or the message 'Please input search word' as indicated by the reference number 753. A guide message using text instead of voice may also be displayed in the form of a balloon message. That is, as shown in FIG. 15C, the controller 180 can display a text message 754 in the search word input window 741 to indicate that the search mode has been entered. In addition, the controller 180 can also advantageously display the search word input window 741 with an enlarged size or a changed color. For instance, when the search word input window 741 has the same color as the background before the mobile terminal enters the information search mode, the controller 180 can change the color of the search word input window 741 to red after the mobile terminal enters the information search mode. Thus, because the controller 180 distinctively displays the search window 741, the user can quickly see that the search mode has been successfully entered.
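For illustration, the enlarged-size and changed-color indication might look like the following in a browser context; the element id `#search-word` is a placeholder assumption.

```typescript
// Illustrative sketch of the search-mode indication of FIGS. 15A-15C.
function indicateSearchModeEntered(inputWindow: HTMLInputElement): void {
  inputWindow.style.backgroundColor = 'red';             // changed color
  inputWindow.style.transform = 'scale(1.2)';            // enlarged size
  inputWindow.placeholder = 'Please input search word';  // balloon-style hint
}

const searchWindow = document.querySelector<HTMLInputElement>('#search-word');
if (searchWindow) indicateSearchModeEntered(searchWindow);
```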
  • Next, FIGS. 16A and 16B are overviews showing a method for inputting a search word in an information search mode according to an embodiment of the present invention. For example, as shown in FIGS. 16A and 16B, the user has recognized that the mobile terminal has entered the information search mode via one of the methods shown in FIGS. 15A-15C, and may input a search word into the search word input window 741 using a voice command. The controller 180 then converts the input voice command into text using the STT function. In addition, the converted text is handled differently from a general voice command. That is, the meaning of the input is not judged against the database; instead, the input is simply transcribed, as in a dictation operation, until the search word has been completely input.
  • Further, the controller 180 can also display the converted search word in the input window 741 so the user can verify that the search word is correct. The search word may also be displayed on any display region other than the search word input window 741. However, for convenience, the following description assumes the input search word is displayed in the search word input window 741. Thus, with reference to FIG. 16A, the user can input one word (e.g., 'movie') 761 or a phrase or sentence composed of two or more words (e.g., 'recent popular movie') 762. Then, when consecutive words are input with a pause longer than one specific time (a first specific time), the controller 180 inserts an empty space between the words converted to text, thereby completing a phrase or sentence. However, when no search word is input for more than another specific time (a second specific time), the controller 180 can determine that the search word has been completely input.
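A minimal sketch of this two-timeout logic follows; the concrete durations are assumed values, since the disclosure only specifies a 'first specific time' and a longer 'second specific time'.

```typescript
// Illustrative sketch of pause-based dictation segmentation.
const FIRST_MS = 1000;   // pause longer than this => insert a space
const SECOND_MS = 3000;  // silence longer than this => input is complete

class DictationBuffer {
  private text = '';
  private lastWordAt = 0;
  private doneTimer?: ReturnType<typeof setTimeout>;

  constructor(private onComplete: (searchWord: string) => void) {}

  addWord(word: string): void {
    const now = Date.now();
    // Insert a space when the pause since the last word exceeded FIRST_MS.
    if (this.text && now - this.lastWordAt > FIRST_MS) this.text += ' ';
    this.text += word;
    this.lastWordAt = now;
    // Restart the completion timer on every word; when it fires, the
    // search word is treated as completely input.
    if (this.doneTimer) clearTimeout(this.doneTimer);
    this.doneTimer = setTimeout(() => this.onComplete(this.text), SECOND_MS);
  }
}
```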
  • In addition, with reference to FIG. 16B, the search word may be input with a Boolean operator (e.g., AND, OR). When the search input (e.g., 'movie and theater') includes a Boolean operator (e.g., AND), the Boolean operator is converted into English, unlike the other search words. That is, the search words (e.g., movie, theater) are converted into the text language of each country, whereas the Boolean operator is converted only into English (e.g., AND) 765. Accordingly, the converted word can function as a Boolean operator rather than as an ordinary search term. For instance, while converting a search word input in Korean into Korean text, the controller 180 judges whether a Boolean operator has been input. If a Boolean operator has been input, the controller 180 converts the Boolean operator into English.
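One possible realization of this operator handling, with assumed Korean operator words, is sketched below.

```typescript
// Illustrative sketch: convert spoken Boolean operators to English while
// leaving other words in the local language. The Korean operator words
// listed here are assumptions for illustration.
const BOOLEAN_OPERATORS: Record<string, string> = {
  'and': 'AND', 'or': 'OR',
  '그리고': 'AND', '또는': 'OR', // assumed Korean equivalents
};

function normalizeSearchWords(words: string[]): string[] {
  return words.map((w) => BOOLEAN_OPERATORS[w.toLowerCase()] ?? w);
}

// Example: ['movie', 'and', 'theater'] => ['movie', 'AND', 'theater']
console.log(normalizeSearchWords(['movie', 'and', 'theater']));
```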
  • Next, FIG. 17 is an overview showing a method for displaying a search word input in an information search mode according to an embodiment of the present invention. When a search word is input in a voice command manner or through other methods as described above, the controller 180 inputs the search word onto the search word input window 741. That is, the controller 180 displays the search word on the search word input window 741. Accordingly, the user can check whether or not the search word has been precisely input. Further, the controller 180 can display the search word that has the highest recognition precision among the search words recognized from the voice input. However, when there are two or more search words (hereinafter, referred to as 'candidate search words') whose recognition scores fall within a small error range of one another, the controller 180 can display the candidate search words (e.g., candidate search word 1, candidate search word 2, etc.) 772, as shown in the top portion of FIG. 17. In addition, the candidate search words may have priorities determined according to recognition precision, and may be displayed in the determined order. The candidate search words may also be numbered according to their priorities.
  • Accordingly, as shown in the middle portion of FIG. 17, the user can select the candidate search word 774 or the number 773 of the candidate search word in a voice command manner. Alternatively, the user can select the numbers 773 in a key input manner. The user can also select one of the candidate search words in a direct touch manner. Once the search word has been completely input, the controller 180 displays the selected candidate search word 777 in the search word input window, as shown in the lower portion of FIG. 17. The controller 180 can then indicate that the search word has been completely input by outputting a guide message using text or voice. For example, as shown in the lower portion of FIG. 17, the controller 180 can output a message 775 indicating that the search word has been completely input, such as 'You have input search word', and output a corresponding message 776 inquiring whether or not to perform a search operation, such as 'Do you want to perform search operation?'. Further, the guide messages using text may be displayed in the form of balloon messages.
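A sketch of ranking candidate search words by recognition precision and resolving a numbered selection might look as follows; the `precision` score and the error range are assumptions for illustration.

```typescript
// Illustrative sketch of candidate search word ranking and selection.
interface Candidate { text: string; precision: number } // precision in [0, 1]

function rankCandidates(cands: Candidate[], errorRange = 0.05): Candidate[] {
  if (cands.length === 0) return [];
  const sorted = [...cands].sort((a, b) => b.precision - a.precision);
  const best = sorted[0].precision;
  // Keep only candidates whose scores fall within the error range of the best.
  return sorted.filter((c) => best - c.precision <= errorRange);
}

// Candidates are numbered from 1 in display order, per FIG. 17.
function selectByNumber(cands: Candidate[], n: number): Candidate | undefined {
  return cands[n - 1];
}
```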
  • Next, FIGS. 18A and 18B are overviews showing a method for indicating an information search according to an embodiment of the present invention. As discussed above with respect to FIG. 17, once the search word has been completely input, the user may input a command instructing a search operation. The command may be input in a voice command manner or in a hardware or soft key input manner. For example, the user can request a search operation be performed in a voice command manner by responding (Yes or No) to the guide message 776 asking the user if they want to perform a search operation as shown in FIG. 17.
  • Alternatively, as shown in FIG. 18A, the user may input a preset word or command 'OK' together with a search word (e.g., 'mobile terminal'), as indicated by the reference number 781. For instance, when the user inputs the words 'mobile terminal' and 'OK' within a specific time in a voice command manner, both the search word input and the search operation are performed by the controller 180. Alternatively, the controller 180 may instruct that the search operation be performed after a preset time elapses following input of the search word. In still another example, the controller 180 can perform the search operation based on a preset voice command 'Search', as identified by the reference numeral 783 shown in FIG. 18B.
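These three triggers (a preset word spoken with the search word, a preset timeout, or an explicit 'Search' command) could be combined roughly as sketched below; the trigger words and timeout value are assumptions, and a fuller implementation would also cancel the pending timeout when further input arrives.

```typescript
// Illustrative sketch of search trigger handling for FIGS. 18A and 18B.
const TRIGGER_WORDS = new Set(['ok', 'search']);
const AUTO_SEARCH_MS = 2000; // assumed "preset time" after input completes

function handleUtterance(words: string[], startSearch: (q: string) => void): void {
  // Strip trigger words so they are not treated as part of the search word.
  const query = words.filter((w) => !TRIGGER_WORDS.has(w.toLowerCase()));
  const triggered = words.some((w) => TRIGGER_WORDS.has(w.toLowerCase()));
  if (triggered) {
    startSearch(query.join(' '));  // e.g., spoken "mobile terminal OK"
  } else {
    // No trigger word: search automatically after the preset time.
    setTimeout(() => startSearch(query.join(' ')), AUTO_SEARCH_MS);
  }
}
```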
  • Then, upon receiving the command instructing that the search operation be performed, the controller 180 can output a guide message 782 using text or voice (e.g., 'Search will start' or 'Search is being performed.'), or output an indicator having the same meaning. Accordingly, the user can recognize the current state of the mobile terminal and determine that the search is being performed. Also, as discussed above, the search operation is generally not performed by the mobile terminal itself; rather, the controller 180 sends the search word and a search instruction to a web server, receives the results for the search word from the web server, and then displays the results. Accordingly, the user may select any desired object from the displayed search results. Then, a web page linked to the object is displayed, thereby allowing the user to view the details of his or her desired information.
  • Next, a method for searching for information within the displayed web page will be explained. That is, as discussed above, while implementing a full browsing capability, the mobile terminal displays text at a very small size due to the small size of its display in spite of the display's high resolution. Thus, it is difficult for the user to view the text. FIG. 19 is a flowchart showing a method for searching for a user's desired information in a web page according to an embodiment of the present invention. In addition, details of operations that have been previously described will be omitted.
  • As shown in FIG. 19, once a web page is displayed (S401), the controller 180 constructs the objects of the displayed web page as a database (S402). For example, the displayed web page can be represented as HTML, and the information contained in the HTML can then be used to create the data objects in the database. Then, once a voice command is input (Yes in S403), the controller 180 searches the database for objects corresponding to the voice command (S404). For example, the controller 180 can search the HTML constituting the web page for objects or text that include the converted voice command. Further, the voice command here is assumed to be a command instructing that the contents of the web page be searched. To improve the recognition rate of the voice command, the controller 180 may limit the range of information that can be recognized from a voice input to the objects displayed on the web page. The controller 180 can also limit that range to the objects displayed on the current screen among all the objects of the web page. Then, once the objects corresponding to the voice command have been found, the controller 180 displays the results of the search (S405). Also, the search results may be the objects themselves or phrases or sentences including the objects.
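Step S404 could be sketched as follows, reusing the assumed `PageObject` database from the earlier examples; matching on any term of the query reflects the example described for FIG. 20B below.

```typescript
// Illustrative sketch of S404: find database entries whose names contain
// any term of the converted voice command.
function searchPageObjects(query: string, db: PageObject[]): PageObject[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return db.filter((o) => {
    const name = o.name.toLowerCase();
    return terms.some((t) => name.includes(t));
  });
}
```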
  • Further, the search results may be displayed distinctively from the other information displayed on the web page. For example, in FIG. 20B, the user has requested that the currently displayed web page be searched for the phrase 'News Selection'. The controller 180 then converts the input voice command into text, and searches the database including the objects representing the displayed web page for objects that include any of the terms 'News' or 'Selection'. As shown in FIG. 20B, the controller 180 distinctively displays the found results 791 and 792 at their corresponding positions on the web page with changed object features, such that the user can quickly and easily see the objects that were found during the search of the currently displayed web page. For instance, the controller 180 can display the search results with an enlarged size, color changes, a background color display, transparency changes, font changes, or underlines for highlighting effects, etc. The search results may also be displayed through various emphasizing methods other than the above methods.
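A hedged sketch of such distinctive display, applying a few of the enumerated feature changes to the found objects; the particular style values are illustrative assumptions.

```typescript
// Illustrative sketch of distinctively displaying found objects (FIG. 20B).
function highlightFoundObjects(found: PageObject[]): void {
  found.forEach((o) => {
    const el = o.element as HTMLElement;
    el.style.fontSize = '150%';            // enlarged size
    el.style.backgroundColor = 'yellow';   // background color display
    el.style.textDecoration = 'underline'; // underline for highlighting
  });
}
```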
  • In addition, as shown in FIG. 20A, the search results may be displayed in a specific display region 790. Here, the specific display region 790 may be displayed on a screen divided into a plurality of parts, or overlaid on the web page. The search results may also be displayed in an 'On Screen Display' (OSD) manner. Further, the search results may be numbered. Then, as shown in FIG. 20A, the user can select one of the search results, e.g., by selecting its number in a voice command or key input manner, or in a direct touch manner (S406 in FIG. 19). Accordingly, the information linked to the selected object can be displayed (S408). Also, once the user inputs a command to cancel the search, the displayed state of the search results may be released. In addition, when a selection command is not input for a preset time after the search results are displayed, the displayed state of the search results may be automatically released. Alternatively, once the user selects one of the search results, the displayed state of the remaining search results may be automatically released (S407). In addition, FIGS. 20A and 20B also show a method for selecting specific information among the information obtained as a result of an information search according to an embodiment of the present invention.
  • As discussed above, once the user instructs a search relating to a specific object of the web page, the controller 180 searches the database for the specific object or for phrases including the object, and displays the results of the search. Then, when a command to select one of the search results is input, the corresponding search result is displayed, and the information (web page) linked to the search result is automatically displayed. As the selected search result is displayed, the user can check whether or not the desired search result has been selected. When a specific time elapses after the search result is selected, the information linked to the search result may be displayed. Here, the selected object may be displayed in a highlighted state by overlaying an indicator having a specific shape, or by changing a color, a size, or a thickness. In addition, within a preset time after the selected search result is displayed, the user may input a command to cancel the selection. Upon input of the command, the displayed state of the web page linked to the selected search result may be canceled.
  • In the mobile terminal according to one or more embodiments of the present invention, an information search can be performed through a voice command in a web browsing mode, thereby enhancing the user's convenience. Furthermore, the information search can be easily performed even in a web browsing mode of a mobile terminal having a small screen by using both a touch input method and a voice command input method. In addition, the user can advantageously search for items or objects on the currently displayed web page. The items or objects can be plain text when the web site includes text information, or can be links to other web sites. Thus, the user can enter the term 'people', for example, and the controller 180 will distinctively display all items, text, objects, etc. on the web page that include the term 'people'. Therefore, the user does not have to visually search the website for desired information, which is often tedious and cumbersome.
  • In addition, the controller 180 can first enter an information search mode for searching the currently displayed web page before the user speaks the voice information to be used for searching the displayed web page. For example, the information search mode can be entered based on a voice command (e.g., 'enter search mode'), a key input (e.g., a separate hard key on the mobile terminal), or a direct touching of a predetermined portion of the displayed web page. Thus, the user can selectively determine when the search mode is entered, so that the search mode is not inadvertently entered when the user is speaking and does not want the search mode to be entered. The search mode can also be automatically entered as soon as the web page is displayed.
  • The foregoing embodiments and advantages are merely exemplary and are not to be construed as limiting the present disclosure. The present teachings can be readily applied to other types of apparatuses. This description is intended to be illustrative, and not to limit the scope of the claims. Many alternatives, modifications, and variations will be apparent to those skilled in the art. The features, structures, methods, and other characteristics of the exemplary embodiments described herein may be combined in various ways to obtain additional and/or alternative exemplary embodiments.
  • As the present features may be embodied in several forms without departing from the characteristics thereof, it should also be understood that the above-described embodiments are not limited by any of the details of the foregoing description, unless otherwise specified, but rather should be construed broadly within its scope as defined in the appended claims, and therefore all changes and modifications that fall within the metes and bounds of the claims, or equivalents of such metes and bounds are therefore intended to be embraced by the appended claims.

Claims (20)

1. A mobile terminal, comprising:
a wireless communication unit configured to access a web page;
a display unit configured to display the accessed web page;
a receiving unit configured to receive input voice information; and
a controller configured to convert the input voice information into text information, to search the displayed web page for objects that include the converted text information, and to control the display unit to distinctively display found objects that include the converted text information from other information displayed on the web page.
2. The mobile terminal of claim 1, wherein the controller is further configured to distinctively display the found objects by at least one of enlarging a size of the found objects, changing a color of the found objects, changing a background color display of the found objects, changing a transparency of the found objects, underlining the found objects, changing a font of the found objects and highlighting the found objects.
3. The mobile terminal of claim 1, wherein the controller is further configured to distinctively display the found objects by grouping the found objects together.
4. The mobile terminal of claim 3, wherein the controller is further configured to display the grouped found objects in a predetermined display region of the display unit.
5. The mobile terminal of claim 1, wherein the controller is further configured to distinctively display the found objects by displaying the found objects in an overlaying manner on the displayed web page or in a separate popup window.
6. The mobile terminal of claim 1, wherein when a corresponding found object includes a link to a separate web page, the controller is further configured to automatically access the separate web page when the corresponding found object is selected with a voice command, a key input, or in a direct touching of the corresponding found object.
7. The mobile terminal of claim 1, wherein the controller is further configured to stop distinctively displaying the found objects after a predetermined amount of time or when one of the found objects is selected.
8. The mobile terminal of claim 1, wherein the controller is further configured to construct objects of the displayed web page as a database that can be searched.
9. The mobile terminal of claim 1, wherein the controller is further configured to enter an information search mode for searching the currently displayed web page before the receiving unit receives the voice information.
10. The mobile terminal of claim 9, wherein the information search mode is entered based on a voice command, a key input, or in a direct touching of a predetermined portion of the displayed web page.
11. A method of controlling a mobile terminal, the method comprising:
displaying an accessed web page on a display of the mobile terminal;
receiving input voice information;
converting the input voice information into text information;
searching the displayed web page for objects that include the converted text information; and
distinctively displaying found objects that include the converted text information from other information displayed on the web page.
12. The method of claim 11, wherein the distinctively displaying step distinctively displays the found objects by at least one of enlarging a size of the found objects, changing a color of the found objects, changing a background color display of the found objects, changing a transparency of the found objects, underlining the found objects, changing a font of the found objects and highlighting the found objects.
13. The method of claim 11, wherein the distinctively displaying step distinctively displays the found objects by grouping the found objects together.
14. The method of claim 13, wherein the distinctively displaying step distinctively displays the grouped found objects in a predetermined display region of the display unit.
15. The method of claim 11, wherein the distinctively displaying step distinctively displays the found objects by displaying the found objects in an overlaying manner on the displayed web page or in a separate popup window.
16. The method of claim 11, wherein when a corresponding found object includes a link to a separate web page, the method further comprises automatically accessing the separate web page when the corresponding found object is selected with a voice command, a key input, or in a direct touching of the corresponding found object.
17. The method of claim 11, further comprising:
stopping distinctively displaying the found objects after a predetermined amount of time or when one of the found objects is selected.
18. The method of claim 11, further comprising:
constructing objects of the displayed web page as a database that can be searched.
19. The method of claim 11, further comprising:
entering an information search mode for searching the currently displayed web page before the receiving step receives the voice information.
20. The method of claim 19, wherein the information search mode is entered based on a voice command, a key input, or in a direct touching of a predetermined portion of the displayed web page.
US12/433,133 2008-10-29 2009-04-30 Mobile terminal and control method thereof Active 2029-10-26 US9129011B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2008-0106736 2008-10-29
KR1020080106736A KR101545582B1 (en) 2008-10-29 2008-10-29 Terminal and method for controlling the same

Publications (2)

Publication Number Publication Date
US20100105364A1 (en) 2010-04-29
US9129011B2 US9129011B2 (en) 2015-09-08

Family

ID=41557994

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/433,133 Active 2029-10-26 US9129011B2 (en) 2008-10-29 2009-04-30 Mobile terminal and control method thereof

Country Status (4)

Country Link
US (1) US9129011B2 (en)
EP (1) EP2182452B1 (en)
KR (1) KR101545582B1 (en)
CN (1) CN101729656B (en)


Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8952895B2 (en) * 2011-06-03 2015-02-10 Apple Inc. Motion-based device operations
KR101677630B1 (en) * 2010-06-14 2016-11-18 엘지전자 주식회사 Apparatus for displaying information by using voice recognition
CN102637061A (en) * 2011-02-10 2012-08-15 鸿富锦精密工业(深圳)有限公司 Electronic device and method for inputting information to electronic device
CN102867499A (en) * 2011-07-05 2013-01-09 中兴通讯股份有限公司 Method and device for adjusting font of mobile terminal user interface
US20130053007A1 (en) * 2011-08-24 2013-02-28 Microsoft Corporation Gesture-based input mode selection for mobile devices
KR20130125067A (en) * 2012-05-08 2013-11-18 삼성전자주식회사 Electronic apparatus and method for controlling electronic apparatus thereof
KR101944414B1 (en) * 2012-06-04 2019-01-31 삼성전자주식회사 Method for providing voice recognition service and an electronic device thereof
CN103179261B (en) * 2012-07-18 2015-09-09 深圳市金立通信设备有限公司 The system and method for controlling mobile phone through speech is realized based on ambient temperature
CN102929479A (en) * 2012-09-27 2013-02-13 东莞宇龙通信科技有限公司 Display method for application icons and communication terminal
CN103809948A (en) * 2012-11-12 2014-05-21 三亚中兴软件有限责任公司 Mobile application edit-box application method and device based on event monitoring
CN103942230B (en) * 2013-01-21 2017-03-29 上海智臻智能网络科技股份有限公司 A kind of methods, devices and systems for carrying out voice web page navigation
KR20140143556A (en) * 2013-06-07 2014-12-17 삼성전자주식회사 Portable terminal and method for user interface in the portable terminal
KR102209519B1 (en) * 2014-01-27 2021-01-29 삼성전자주식회사 Display apparatus for performing a voice control and method therefor
KR102223728B1 (en) * 2014-06-20 2021-03-05 엘지전자 주식회사 Mobile terminal and method for controlling the same
US9875322B2 (en) * 2014-07-31 2018-01-23 Google Llc Saving and retrieving locations of objects
US9472196B1 (en) * 2015-04-22 2016-10-18 Google Inc. Developer voice actions system
CN108344988B (en) * 2016-08-30 2022-05-10 李言飞 Distance measurement method, device and system
CN107609018B (en) * 2017-08-04 2021-09-17 百度在线网络技术(北京)有限公司 Search result presenting method and device and terminal equipment
CN107609174A (en) * 2017-09-27 2018-01-19 珠海市魅族科技有限公司 A kind of method and device of content retrieval, terminal and readable storage medium storing program for executing
CN112367430B (en) * 2017-11-02 2023-04-14 单正建 APP touch method, instant message APP and electronic device
KR102480570B1 (en) * 2017-11-10 2022-12-23 삼성전자주식회사 Display apparatus and the control method thereof
CN107832036B (en) * 2017-11-22 2022-01-18 北京小米移动软件有限公司 Voice control method, device and computer readable storage medium
CN108959511A (en) * 2018-06-27 2018-12-07 北京小度信息科技有限公司 Voice-based information search method, device, equipment and computer storage medium
CN112185370A (en) * 2019-07-05 2021-01-05 百度在线网络技术(北京)有限公司 Voice interaction method, device, equipment and computer storage medium
KR102593866B1 (en) * 2021-12-02 2023-10-26 한국과학기술원 METHOD AND DEVICE FOR Task-oriented Sounding Guide with Object Detection to Guide Visually Impaired People During Smart Device Usage


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU3104599A (en) 1998-03-20 1999-10-11 Inroad, Inc. Voice controlled web browser
US6615176B2 (en) 1999-07-13 2003-09-02 International Business Machines Corporation Speech enabling labeless controls in an existing graphical user interface
KR20000036801A (en) 2000-03-29 2000-07-05 정현태 Method of emboding sound recognize on the web browser
US6791529B2 (en) * 2001-12-13 2004-09-14 Koninklijke Philips Electronics N.V. UI with graphics-assisted voice control system
KR20030088087A (en) 2002-05-11 2003-11-17 이경목 One Click Internet Key Word Searching Method with a Moving Search Key Word Window and Multi Search Engine Icons

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4829576A (en) * 1986-10-21 1989-05-09 Dragon Systems, Inc. Voice recognition system
US5819226A (en) * 1992-09-08 1998-10-06 Hnc Software Inc. Fraud detection using predictive modeling
US5819220A (en) * 1996-09-30 1998-10-06 Hewlett-Packard Company Web triggered word set boosting for speech interfaces to the world wide web
US6724403B1 (en) * 1999-10-29 2004-04-20 Surfcast, Inc. System and method for simultaneous display of multiple information sources
US20040267739A1 (en) * 2000-02-24 2004-12-30 Dowling Eric Morgan Web browser with multilevel functions
US20020010566A1 (en) * 2000-04-11 2002-01-24 Chester Thomas Lee Methods for modeling, predicting, and optimizing high performance liquid chromatography parameters
US20020010586A1 (en) * 2000-04-27 2002-01-24 Fumiaki Ito Voice browser apparatus and voice browsing method
US7366668B1 (en) * 2001-02-07 2008-04-29 Google Inc. Voice interface for a search engine
US20030125958A1 (en) * 2001-06-19 2003-07-03 Ahmet Alpdemir Voice-interactive marketplace providing time and money saving benefits and real-time promotion publishing and feedback
US20030156130A1 (en) * 2002-02-15 2003-08-21 Frankie James Voice-controlled user interfaces
US7039635B1 (en) * 2002-06-11 2006-05-02 Microsoft Corporation Dynamically updated quick searches and strategies
US7346613B2 (en) * 2004-01-26 2008-03-18 Microsoft Corporation System and method for a unified and blended search
US20050177359A1 (en) * 2004-02-09 2005-08-11 Yuan-Chia Lu [video device with voice-assisted system ]
US20060041848A1 (en) * 2004-08-23 2006-02-23 Luigi Lira Overlaid display of messages in the user interface of instant messaging and other digital communication services
US7672931B2 (en) * 2005-06-30 2010-03-02 Microsoft Corporation Searching for content using voice search queries
US20070088557A1 (en) * 2005-10-17 2007-04-19 Microsoft Corporation Raising the visibility of a voice-activated user interface
US20090043777A1 (en) * 2006-03-01 2009-02-12 Eran Shmuel Wyler Methods and apparatus for enabling use of web content on various types of devices
US20110113062A1 (en) * 2006-03-31 2011-05-12 Visto Corporation System and method for searching disparate datastores via a remote device
US20080153465A1 (en) * 2006-12-26 2008-06-26 Voice Signal Technologies, Inc. Voice search-enabled mobile device
US20080288252A1 (en) * 2007-03-07 2008-11-20 Cerra Joseph P Speech recognition of speech recorded by a mobile communication facility
US20090138439A1 (en) * 2007-11-27 2009-05-28 Helio, Llc. Systems and methods for location based Internet search
US20090228281A1 (en) * 2008-03-07 2009-09-10 Google Inc. Voice Recognition Grammar Selection Based on Context
US20090234811A1 (en) * 2008-03-17 2009-09-17 Microsoft Corporation Combined web browsing and searching
US20100069123A1 (en) * 2008-09-16 2010-03-18 Yellowpages.Com Llc Systems and Methods for Voice Based Search
US20100100383A1 (en) * 2008-10-17 2010-04-22 Aibelive Co., Ltd. System and method for searching webpage with voice control

Cited By (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9009053B2 (en) 2008-11-10 2015-04-14 Google Inc. Multisensory speech detection
US10714120B2 (en) 2008-11-10 2020-07-14 Google Llc Multisensory speech detection
US20100121636A1 (en) * 2008-11-10 2010-05-13 Google Inc. Multisensory Speech Detection
US10026419B2 (en) 2008-11-10 2018-07-17 Google Llc Multisensory speech detection
US10020009B1 (en) 2008-11-10 2018-07-10 Google Llc Multisensory speech detection
US9570094B2 (en) 2008-11-10 2017-02-14 Google Inc. Multisensory speech detection
US8862474B2 (en) 2008-11-10 2014-10-14 Google Inc. Multisensory speech detection
US10720176B2 (en) 2008-11-10 2020-07-21 Google Llc Multisensory speech detection
US20110037710A1 (en) * 2009-08-11 2011-02-17 Lg Electronics Inc. Mobile terminal and controlling method thereof
US8823654B2 (en) * 2009-08-11 2014-09-02 Lg Electronics Inc. Mobile terminal and controlling method thereof
US20110045803A1 (en) * 2009-08-19 2011-02-24 Samsung Electronics Co., Ltd. Method of informing occurrence of a missed event and mobile terminal using the same
US20110074693A1 (en) * 2009-09-25 2011-03-31 Paul Ranford Method of processing touch commands and voice commands in parallel in an electronic device supporting speech recognition
US8294683B2 (en) * 2009-09-25 2012-10-23 Mitac International Corp. Method of processing touch commands and voice commands in parallel in an electronic device supporting speech recognition
US20110119572A1 (en) * 2009-11-17 2011-05-19 Lg Electronics Inc. Mobile terminal
US8473297B2 (en) * 2009-11-17 2013-06-25 Lg Electronics Inc. Mobile terminal
US8208959B2 (en) * 2009-12-24 2012-06-26 Otos Wing Co., Ltd. Anti-blinding device having wireless communication function
US20110159918A1 (en) * 2009-12-24 2011-06-30 Otos Wing Co., Ltd. Anti-blinding device having wireless communication function
US8510100B2 (en) * 2010-05-20 2013-08-13 Xerox Corporation Human readable sentences to represent complex color changes
US20110288854A1 (en) * 2010-05-20 2011-11-24 Xerox Corporation Human readable sentences to represent complex color changes
US8918121B2 (en) 2010-08-06 2014-12-23 Google Inc. Method, apparatus, and system for automatically monitoring for voice input based on context
US9105269B2 (en) 2010-08-06 2015-08-11 Google Inc. Method, apparatus, and system for automatically monitoring for voice input based on context
US9251793B2 (en) 2010-08-06 2016-02-02 Google Inc. Method, apparatus, and system for automatically monitoring for voice input based on context
EP2601650A1 (en) * 2010-08-06 2013-06-12 Google, Inc. Automatically monitoring for voice input based on context
EP2601650A4 (en) * 2010-08-06 2014-07-16 Google Inc Automatically monitoring for voice input based on context
US20120102436A1 (en) * 2010-10-21 2012-04-26 Nokia Corporation Apparatus and method for user input for controlling displayed information
US9043732B2 (en) * 2010-10-21 2015-05-26 Nokia Corporation Apparatus and method for user input for controlling displayed information
US20120296646A1 (en) * 2011-05-17 2012-11-22 Microsoft Corporation Multi-mode text input
US9865262B2 (en) 2011-05-17 2018-01-09 Microsoft Technology Licensing, Llc Multi-mode text input
US9263045B2 (en) * 2011-05-17 2016-02-16 Microsoft Technology Licensing, Llc Multi-mode text input
US10582033B2 (en) 2011-06-09 2020-03-03 Samsung Electronics Co., Ltd. Method of providing information and mobile telecommunication terminal thereof
US9794613B2 (en) * 2011-07-19 2017-10-17 Lg Electronics Inc. Electronic device and method for controlling the same
US20130024197A1 (en) * 2011-07-19 2013-01-24 Lg Electronics Inc. Electronic device and method for controlling the same
US9866891B2 (en) 2011-07-19 2018-01-09 Lg Electronics Inc. Electronic device and method for controlling the same
US10009645B2 (en) 2011-07-19 2018-06-26 Lg Electronics Inc. Electronic device and method for controlling the same
US10386457B2 (en) * 2011-08-05 2019-08-20 TrackThings LLC Apparatus and method to automatically set a master-slave monitoring system
US20130033363A1 (en) * 2011-08-05 2013-02-07 TrackDSound LLC Apparatus and Method to Automatically Set a Master-Slave Monitoring System
US10107893B2 (en) * 2011-08-05 2018-10-23 TrackThings LLC Apparatus and method to automatically set a master-slave monitoring system
US20140287724A1 (en) * 2011-10-25 2014-09-25 Kyocera Corporation Mobile terminal and lock control method
US8863202B2 (en) * 2011-11-11 2014-10-14 Sony Corporation System and method for voice driven cross service search using second display
US20130125168A1 (en) * 2011-11-11 2013-05-16 Sony Network Entertainment International Llc System and method for voice driven cross service search using second display
US11763812B2 (en) * 2012-01-09 2023-09-19 Samsung Electronics Co., Ltd. Image display apparatus and method of controlling the same
US20140359523A1 (en) * 2012-02-13 2014-12-04 Lg Electronics Inc. Method for providing user interface on terminal
US9557903B2 (en) * 2012-02-13 2017-01-31 Lg Electronics Inc. Method for providing user interface on terminal
EP2629291A1 (en) * 2012-02-15 2013-08-21 Research In Motion Limited Method for quick scroll search using speech recognition
US8788273B2 (en) 2012-02-15 2014-07-22 Robbie Donald EDGAR Method for quick scroll search using speech recognition
US9691381B2 (en) * 2012-02-21 2017-06-27 Mediatek Inc. Voice command recognition method and related electronic device and computer-readable medium
US20130218573A1 (en) * 2012-02-21 2013-08-22 Yiou-Wen Cheng Voice command recognition method and related electronic device and computer-readable medium
CN103294368A (en) * 2012-03-05 2013-09-11 腾讯科技(深圳)有限公司 Browse information processing method, browse and mobile terminal
US20150262578A1 (en) * 2012-10-05 2015-09-17 Kyocera Corporation Electronic device, control method, and control program
US9734829B2 (en) * 2012-10-05 2017-08-15 Kyocera Corporation Electronic device, control method, and control program
US8543397B1 (en) * 2012-10-11 2013-09-24 Google Inc. Mobile device voice activation
GB2507002B (en) * 2012-10-11 2015-10-14 Google Inc Mobile device voice activation
GB2507002A (en) * 2012-10-11 2014-04-16 Google Inc A method in which candidate search phrase/(s) is displayed in response to receiving a spoken phrase and a sliding action from the phrase/(s) to an icon is p
EP2731028A3 (en) * 2012-11-13 2016-08-24 LG Electronics, Inc. Mobile terminal and control method thereof
US20140136213A1 (en) * 2012-11-13 2014-05-15 Lg Electronics Inc. Mobile terminal and control method thereof
US20190362705A1 (en) * 2012-12-10 2019-11-28 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US10395639B2 (en) * 2012-12-10 2019-08-27 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US9940924B2 (en) * 2012-12-10 2018-04-10 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US11410640B2 (en) * 2012-12-10 2022-08-09 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US20220383852A1 (en) * 2012-12-10 2022-12-01 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US10832655B2 (en) * 2012-12-10 2020-11-10 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US20140163976A1 (en) * 2012-12-10 2014-06-12 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US20180182374A1 (en) * 2012-12-10 2018-06-28 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US11721320B2 (en) * 2012-12-10 2023-08-08 Samsung Electronics Co., Ltd. Method and user device for providing context awareness service using speech recognition
US20140180698A1 (en) * 2012-12-26 2014-06-26 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method and storage medium
CN103916534A (en) * 2012-12-31 2014-07-09 Lg电子株式会社 Mobile terminal
US20140189518A1 (en) * 2012-12-31 2014-07-03 Lg Electronics Inc. Mobile terminal
US20140304606A1 (en) * 2013-04-03 2014-10-09 Sony Corporation Information processing apparatus, information processing method and computer program
US20150026613A1 (en) * 2013-07-19 2015-01-22 Lg Electronics Inc. Mobile terminal and method of controlling the same
US9965166B2 (en) * 2013-07-19 2018-05-08 Lg Electronics Inc. Mobile terminal and method of controlling the same
US20220321965A1 (en) * 2013-11-12 2022-10-06 Samsung Electronics Co., Ltd. Voice recognition system, voice recognition server and control method of display apparatus for providing voice recognition function based on usage status
US20150194167A1 (en) * 2014-01-06 2015-07-09 Samsung Electronics Co., Ltd. Display apparatus which operates in response to voice commands and control method thereof
US20150212791A1 (en) * 2014-01-28 2015-07-30 Oracle International Corporation Voice recognition of commands extracted from user interface screen devices
US9858039B2 (en) * 2014-01-28 2018-01-02 Oracle International Corporation Voice recognition of commands extracted from user interface screen devices
US9547468B2 (en) * 2014-03-31 2017-01-17 Microsoft Technology Licensing, Llc Client-side personal voice web navigation
US20150277846A1 (en) * 2014-03-31 2015-10-01 Microsoft Corporation Client-side personal voice web navigation
US10770075B2 (en) 2014-04-21 2020-09-08 Qualcomm Incorporated Method and apparatus for activating application by speech input
US20150324197A1 (en) * 2014-05-07 2015-11-12 Giga-Byte Technology Co., Ltd. Input system of macro activation
US9805178B2 (en) * 2014-07-28 2017-10-31 Shi-Eun JUNG Portable terminal and method of setting and releasing use restriction therefor
US10282528B2 (en) * 2014-07-28 2019-05-07 Shi-Eun JUNG Portable terminal and method of setting and releasing use restriction therefor
US20160078865A1 (en) * 2014-09-16 2016-03-17 Lenovo (Beijing) Co., Ltd. Information Processing Method And Electronic Device
US10699712B2 (en) * 2014-09-16 2020-06-30 Lenovo (Beijing) Co., Ltd. Processing method and electronic device for determining logic boundaries between speech information using information input in a different collection manner
US10147421B2 (en) 2014-12-16 2018-12-04 Microsoft Technology Licensing, Llc Digital assistant voice input integration
US20170286061A1 (en) * 2014-12-25 2017-10-05 Kyocera Corporation Information processing terminal and information processing method
US20160358460A1 (en) * 2015-06-03 2016-12-08 Lg Electronics Inc. Terminal, network system and controlling method thereof
CN106254624A (en) * 2015-06-03 2016-12-21 LG Electronics Inc. Terminal, network system and control method thereof
US9799212B2 (en) * 2015-06-03 2017-10-24 Lg Electronics Inc. Terminal, network system and controlling method thereof
US9928837B2 (en) 2015-07-20 2018-03-27 Lg Electronics Inc. Mobile terminal and controlling method thereof
WO2017014374A1 (en) * 2015-07-20 2017-01-26 Lg Electronics Inc. Mobile terminal and controlling method thereof
US9711142B2 (en) 2015-07-20 2017-07-18 Lg Electronics Inc. Mobile terminal and controlling method thereof
US20180226077A1 (en) * 2015-08-05 2018-08-09 Lg Electronics Inc. Vehicle driving assist and vehicle having same
US9865261B2 (en) * 2015-10-01 2018-01-09 Yahoo Holdings, Inc. Webpage navigation utilizing audio commands
US11374782B2 (en) * 2015-12-23 2022-06-28 Samsung Electronics Co., Ltd. Method and apparatus for controlling electronic device
US11218592B2 (en) 2016-02-25 2022-01-04 Samsung Electronics Co., Ltd. Electronic apparatus for providing voice recognition control and operating method therefor
US11838445B2 (en) * 2016-02-25 2023-12-05 Samsung Electronics Co., Ltd. Electronic apparatus for providing voice recognition control and operating method therefor
US20220094787A1 (en) * 2016-02-25 2022-03-24 Samsung Electronics Co., Ltd. Electronic apparatus for providing voice recognition control and operating method therefor
US10542144B2 (en) 2016-02-25 2020-01-21 Samsung Electronics Co., Ltd. Electronic apparatus for providing voice recognition control and operating method therefor
US11176930B1 (en) * 2016-03-28 2021-11-16 Amazon Technologies, Inc. Storing audio commands for time-delayed execution
CN105901891A (en) * 2016-06-03 Haining Didi Luggage Intelligent Technology Co., Ltd. Self-energy box with air quality monitoring function
US20180136904A1 (en) * 2016-11-16 2018-05-17 Samsung Electronics Co., Ltd. Electronic device and method for controlling electronic device using speech recognition
US20200401426A1 (en) * 2017-05-12 2020-12-24 Samsung Electronics Co., Ltd. Display apparatus and controlling method thereof
US11726806B2 (en) * 2017-05-12 2023-08-15 Samsung Electronics Co., Ltd. Display apparatus and controlling method thereof
US10802851B2 (en) 2017-05-12 2020-10-13 Samsung Electronics Co., Ltd. Display apparatus and controlling method thereof
US11688386B2 (en) * 2017-09-01 2023-06-27 Georgetown University Wearable vibrotactile speech aid
US20230306960A1 (en) * 2017-09-01 2023-09-28 Georgetown University Wearable vibrotactile speech aid
CN107578776A (en) * 2017-09-25 2018-01-12 MIGU Culture Technology Co., Ltd. Voice interaction wake-up method and apparatus, and computer-readable storage medium
CN109410932A (en) * 2018-10-17 2019-03-01 Baidu Online Network Technology (Beijing) Co., Ltd. Voice operation method and apparatus based on an HTML5 web page

Also Published As

Publication number Publication date
EP2182452B1 (en) 2018-07-11
EP2182452A1 (en) 2010-05-05
CN101729656A (en) 2010-06-09
KR20100047719A (en) 2010-05-10
KR101545582B1 (en) 2015-08-19
CN101729656B (en) 2016-08-03
US9129011B2 (en) 2015-09-08

Similar Documents

Publication Title
US9129011B2 (en) Mobile terminal and control method thereof
KR100988397B1 (en) Mobile terminal and text correcting method in the same
US8498670B2 (en) Mobile terminal and text input method thereof
US8428654B2 (en) Mobile terminal and method for displaying menu thereof
US8423087B2 (en) Mobile terminal with touch screen and method of processing message using the same
US8627235B2 (en) Mobile terminal and corresponding method for assigning user-drawn input gestures to functions
KR101462932B1 (en) Mobile terminal and text correction method
US8958848B2 (en) Mobile terminal and menu control method thereof
EP2133870B1 (en) Mobile terminal and method for recognizing voice thereof
KR20090107364A (en) Mobile terminal and its menu control method
KR101537693B1 (en) Terminal and method for controlling the same
KR101502004B1 (en) Mobile terminal and method for recognition voice command thereof
KR20090115599A (en) Mobile terminal and its information processing method
KR20130080713A (en) Mobile terminal having function of voice recognition and method for providing search results thereof
KR101504212B1 (en) Terminal and method for controlling the same
KR101513635B1 (en) Terminal and method for controlling the same
KR101495183B1 (en) Terminal and method for controlling the same
KR101631939B1 (en) Mobile terminal and method for controlling the same
KR101521923B1 (en) Terminal and method for controlling the same
KR101513629B1 (en) Terminal and method for controlling the same
KR101521927B1 (en) Terminal and method for controlling the same

Legal Events

Date Code Title Description
AS Assignment

Owner name: LG ELECTRONICS INC., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YANG, SEUNG-JIN;REEL/FRAME:022673/0579

Effective date: 20090423

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8