US8963983B2 - Mobile terminal and method of controlling the same


Info

Publication number
US8963983B2
Authority
US
United States
Prior art keywords
application
mobile terminal
controller
voice
call
Prior art date
Legal status
Active, expires
Application number
US13/565,320
Other versions
US20130176377A1 (en)
Inventor
Jaeseok HO
Current Assignee
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date
Filing date
Publication date
Application filed by LG Electronics Inc
Assigned to LG Electronics Inc. (assignment of assignors interest; assignor: Jaeseok Ho)
Publication of US20130176377A1
Application granted
Publication of US8963983B2
Status: Active, expiration adjusted

Classifications

    • H04M 1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72451: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions, e.g. according to schedules using calendar applications
    • H04M 1/656: Recording arrangements for recording a message from the calling party, for recording conversations
    • H04M 2250/22: Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
    • H04M 2250/74: Details of telephonic subscriber devices with voice recognition means
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/16: Sound input; sound output

Definitions

  • the present invention relates to a mobile terminal and a method of controlling the same.
  • Terminals such as a personal computer (PC), a laptop computer, and a mobile terminal are formed to perform various functions, for example, a data and voice communication function, a function of photographing a picture or a moving picture through a camera, a function of storing a voice, a function of reproducing a music file through a speaker system, and a function of displaying an image or video.
  • Some terminals include an additional function that can execute a game, and some other terminals may be embodied as a multimedia device.
  • recent terminals enable a user to view video or a television program by receiving a broadcast or multicast signal.
  • terminals are classified into a mobile terminal and a stationary terminal according to mobility, and mobile terminals are again classified into a handheld terminal and a vehicle mount terminal according to whether a user can directly carry the terminal.
  • An aspect of the present invention is to provide a mobile terminal and a method of controlling the same that can control execution of an item related to voice call contents based on voice call contents acquired through speech recognition while performing a call.
  • Another aspect of the present invention is to provide a mobile terminal and a method of controlling the same that can more conveniently acquire desired data through a predetermined item while performing a call by recognizing a speaker's speech during the call and tagging the recognized speech to the predetermined item.
  • Another aspect of the present invention is to provide a mobile terminal and a method of controlling the same that can more conveniently acquire desired data by linking at least one item that can interlock with a speech recognition function to communication contents while performing a call.
  • Another aspect of the present invention is to provide a mobile terminal and a method of controlling the same that can more efficiently perform multitasking through speech recognition while performing a call.
  • a mobile terminal includes: a speech recognition unit; a mobile communication unit for performing at least one of a voice call and a video call; and a controller for recognizing voice call contents through the speech recognition unit according to a predetermined input while performing the call, for tagging the voice call contents to at least one item, and for controlling execution of the item according to the tagged voice call contents.
  • a mobile terminal includes: a mobile communication unit; a camera; a microphone; and a controller for acquiring a sound signal through the microphone while performing a video call with at least one external device through the mobile communication unit and the camera, for executing a voice search through a predetermined voice search application based on the acquired sound signal, and for storing a voice search result.
  • a method of controlling a mobile terminal includes: performing at least one of a voice call and a video call; recognizing voice call contents through a speech recognition unit when a predetermined input is received while performing the call; tagging the voice call contents to at least one item; and controlling execution of the item according to the tagged voice call contents.
  • a method of controlling a mobile terminal includes: performing a video call; recording voice call contents while performing the video call; executing a voice search application when a predetermined input is received; extracting a search word from the recorded voice call contents; executing a voice search in the voice search application based on the extracted search word; and storing a voice search result.
  • a mobile terminal and a method of controlling the same according to an embodiment of the present invention have the following effects.
  • execution of an item related to voice call contents can be controlled based on the voice call contents acquired through speech recognition while performing a call.
  • more efficient multitasking can be performed through speech recognition while performing a call.
  • FIG. 1 is a block diagram of a mobile terminal according to an embodiment of the present invention.
  • FIG. 2 a is a front perspective view of a mobile terminal or a handheld terminal according to an embodiment of the present invention.
  • FIG. 2 b is a rear perspective view of the handheld terminal shown in FIG. 2 a according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating an example of sharing an item execution result acquired through an embodiment shown in FIG. 3 with an external device.
  • FIGS. 5 a to 5 d are diagrams illustrating an embodiment shown in FIGS. 3 and 4 .
  • FIG. 6 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 7 a and 7 b are diagrams illustrating an embodiment shown in FIG. 6 .
  • FIG. 8 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 9 to 11 are diagrams illustrating an embodiment shown in FIG. 8 .
  • FIG. 12 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 13 a and 13 b , 14 a and 14 b , and 15 a to 15 d are diagrams illustrating an embodiment shown in FIG. 12 .
  • FIG. 16 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIG. 17 is a flowchart illustrating an embodiment shown in FIG. 16 .
  • FIGS. 18 a to 18 c are diagrams illustrating an embodiment shown in FIG. 17 .
  • FIG. 19 is a flowchart illustrating an embodiment shown in FIG. 16 .
  • FIGS. 20 a to 20 d are diagrams illustrating an embodiment shown in FIG. 19 .
  • FIG. 21 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 22 , 23 a and 23 b , and 24 a and 24 b are diagrams illustrating an embodiment shown in FIG. 21 .
  • FIG. 25 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIG. 26 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 27 a and 27 b are diagrams illustrating an embodiment shown in FIG. 26 .
  • the mobile terminal described in the specification can include a cellular phone, a smart phone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, and so on.
  • FIG. 1 is a block diagram of a mobile terminal 100 according to an embodiment of the present invention.
  • the mobile terminal 100 includes a wireless communication unit 110 , an A/V (Audio/Video) input unit 120 , a user input unit 130 , a sensing unit 140 , an output unit 150 , a memory 160 , an interface unit 170 , a controller 180 , and a power supply unit 190 , etc.
  • FIG. 1 shows the mobile terminal as having various components, but implementing all of the illustrated components is not a requirement. Greater or fewer components may alternatively be implemented.
  • the wireless communication unit 110 generally includes one or more components allowing radio communication between the mobile terminal 100 and a wireless communication system or a network in which the mobile terminal is located.
  • the wireless communication unit includes at least one of a broadcast receiving module 111 , a mobile communication module 112 , a wireless Internet module 113 , a short-range communication module 114 , and a location information module 115 .
  • the broadcast receiving module 111 receives broadcast signals and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel may include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits the same to a terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like. Also, the broadcast signal may further include a data broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may refer to information associated with a broadcast channel, a broadcast program or a broadcast service provider.
  • the broadcast associated information may also be provided via a mobile communication network and, in this instance, the broadcast associated information may be received by the mobile communication module 112 .
  • the broadcast signal may exist in various forms.
  • the broadcast signal may exist in the form of an electronic program guide (EPG) of the digital multimedia broadcasting (DMB) system, an electronic service guide (ESG) of the digital video broadcast-handheld (DVB-H) system, and the like.
  • the broadcast receiving module 111 may also be configured to receive signals broadcast by using various types of broadcast systems.
  • the broadcast receiving module 111 can receive a digital broadcast using a digital broadcast system such as the digital multimedia broadcasting-terrestrial (DMB-T) system, the digital multimedia broadcasting-satellite (DMB-S) system, the digital video broadcast-handheld (DVB-H) system, the data broadcasting system known as media forward link only (MediaFLO®), the integrated services digital broadcast-terrestrial (ISDB-T) system, etc.
  • the broadcast receiving module 111 can also be configured to be suitable for all broadcast systems that provide a broadcast signal as well as the above-mentioned digital broadcast systems.
  • the broadcast signals and/or broadcast-associated information received via the broadcast receiving module 111 may be stored in the memory 160 .
  • the mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station, an external terminal and a server.
  • radio signals may include a voice call signal, a video call signal or various types of data according to text and/or multimedia message transmission and/or reception.
  • the wireless Internet module 113 supports wireless Internet access for the mobile terminal and may be internally or externally coupled to the terminal.
  • the wireless Internet access technique implemented may include a WLAN (Wireless LAN) (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), or the like.
  • the short-range communication module 114 is a module for supporting short range communications.
  • Some examples of short-range communication technology include Bluetooth™, Radio Frequency IDentification (RFID), Infrared Data Association (IrDA), Ultra-WideBand (UWB), ZigBee™, and the like.
  • the location information module 115 is a module for checking or acquiring a location or position of the mobile terminal.
  • the location information module 115 may acquire location information by using a global navigation satellite system (GNSS).
  • the GNSS may include the United States' global positioning system (GPS), the European Union's Galileo positioning system, the Russian global orbiting navigational satellite system (GLONASS), COMPASS, a navigation system by the People's Republic of China, and the quasi-zenith satellite system (QZSS) by Japan.
  • the GPS module may calculate information related to the distance from one point (entity) to three or more satellites and information related to the time at which the distance information was measured, and apply trigonometry to the calculated distance information, thereby calculating three-dimensional location information according to latitude, longitude, and altitude for the one point (entity).
  • a method of acquiring location and time information by using three satellites and correcting an error of the calculated location and time information by using one more satellite may also be used.
  • the GPS module may also continuously calculate the current location in real time and calculate speed information by using the continuously calculated current location.
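As a rough illustration of the trilateration idea described above, the following Kotlin sketch solves the two-dimensional analogue: given three anchor points with measured distances, it subtracts the circle equations pairwise and solves the resulting linear system. All names are invented for the sketch; an actual GPS fix additionally solves for altitude and receiver clock bias.

```kotlin
import kotlin.math.sqrt

// 2-D trilateration sketch: three anchors (x, y) with measured ranges r.
// Subtracting the circle equations pairwise yields a linear 2x2 system.
data class Anchor(val x: Double, val y: Double, val r: Double)

fun trilaterate(a: Anchor, b: Anchor, c: Anchor): Pair<Double, Double> {
    // Linear system derived from (x - xi)^2 + (y - yi)^2 = ri^2
    val a1 = 2 * (b.x - a.x); val b1 = 2 * (b.y - a.y)
    val c1 = a.r * a.r - b.r * b.r - a.x * a.x + b.x * b.x - a.y * a.y + b.y * b.y
    val a2 = 2 * (c.x - b.x); val b2 = 2 * (c.y - b.y)
    val c2 = b.r * b.r - c.r * c.r - b.x * b.x + c.x * c.x - b.y * b.y + c.y * c.y
    val det = a1 * b2 - a2 * b1 // zero when the anchors are collinear
    require(det != 0.0) { "anchors must not be collinear" }
    return Pair((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
}

fun main() {
    // Receiver actually at (1, 1); ranges measured to three anchors.
    val p = trilaterate(
        Anchor(0.0, 0.0, sqrt(2.0)),
        Anchor(4.0, 0.0, sqrt(10.0)),
        Anchor(0.0, 4.0, sqrt(10.0)),
    )
    println(p) // (1.0, 1.0)
}
```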
  • the A/V input unit 120 is configured to receive an audio or video signal, and includes a camera 121 and a microphone 122 .
  • the camera 121 processes image data of still pictures or video obtained by an image capture device in a video capturing mode or an image capturing mode, and the processed image frames can then be displayed on a display unit 151 .
  • the image frames processed by the camera 121 may be stored in the memory 160 or transmitted via the wireless communication unit 110 .
  • Two or more cameras 121 may also be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sounds via a microphone in a phone call mode, a recording mode, a voice recognition mode, and the like, and can process such sounds into audio data.
  • the processed audio data may then be converted for output into a format transmittable to a mobile communication base station via the mobile communication module 112 for the phone call mode.
  • the microphone 122 may also implement various types of noise canceling (or suppression) algorithms to cancel or suppress noise or interference generated when receiving and transmitting audio signals.
  • the user input unit 130 can generate input data from commands entered by a user to control various operations of the mobile terminal.
  • the user input unit 130 may include a keypad, a dome switch, a touch pad (e.g., a touch sensitive member that detects changes in resistance, pressure, capacitance, etc. due to being contacted), a jog wheel, a jog switch, and the like.
  • the sensing unit 140 detects a current status of the mobile terminal 100 such as an opened or closed state of the mobile terminal 100 , a location of the mobile terminal 100 , the presence or absence of user contact with the mobile terminal 100 , the orientation of the mobile terminal 100 , an acceleration or deceleration movement and direction of the mobile terminal 100 , etc., and generates command or signals for controlling the operation of the mobile terminal 100 .
  • for example, when the mobile terminal 100 is a slide type phone, the sensing unit 140 may sense whether the slide phone is opened or closed.
  • the sensing unit 140 can detect whether or not the power supply unit 190 supplies power or whether or not the interface unit 170 is coupled with an external device.
  • the sensing unit 140 also includes a proximity sensor 141 .
  • the output unit 150 is configured to provide outputs in a visual, audible, and/or tactile manner.
  • the output unit 150 includes the display unit 151 , an audio output module 152 , an alarm unit 153 , a haptic module 154 , and the like.
  • the display unit 151 can display information processed in the mobile terminal 100 .
  • the display unit 151 can display a User Interface (UI) or a Graphic User Interface (GUI) associated with a call or other communication.
  • UI User Interface
  • GUI Graphic User Interface
  • the display unit 151 may also include at least one of a Liquid Crystal Display (LCD), a Thin Film Transistor-LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, or the like. Some of these displays may also be configured to be transparent or light-transmissive to allow for viewing of the exterior; these are called transparent displays.
  • An example transparent display is a TOLED (Transparent Organic Light Emitting Diode) display, or the like.
  • a rear structure of the display unit 151 may be also light-transmissive. Through such configuration, the user can view an object positioned at the rear side of the terminal body through the region occupied by the display unit 151 of the terminal body.
  • the mobile terminal 100 may include two or more display units according to its particular desired embodiment.
  • a plurality of display units may be separately or integrally disposed on one surface of the mobile terminal, or may be separately disposed on mutually different surfaces.
  • When the display unit 151 and a sensor for detecting a touch operation (referred to as a 'touch sensor', hereinafter) are overlaid in a layered manner to form a touch screen, the display unit 151 can function as both an input device and an output device.
  • the touch sensor may have a form of a touch film, a touch sheet, a touch pad, and the like.
  • the touch sensor may be configured to convert pressure applied to a particular portion of the display unit 151 or a change in the capacitance or the like generated at a particular portion of the display unit 151 into an electrical input signal.
  • the touch sensor may also be configured to detect the pressure when a touch is applied, as well as the touched position and area.
  • When there is a touch input with respect to the touch sensor, corresponding signals are transmitted to a touch controller, and the touch controller processes the signals and transmits corresponding data to the controller 180 . Accordingly, the controller 180 can recognize which portion of the display unit 151 has been touched.
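To make the signal path concrete, here is a minimal Kotlin sketch of the sensor-to-controller flow just described. RawTouchSample, TouchController, and the region mapping are invented for illustration; they are not the patent's types.

```kotlin
// Illustrative sensor-to-controller pipeline for the touch path above.
data class RawTouchSample(val x: Int, val y: Int, val pressure: Float)

// Stands in for the "touch controller": converts raw sensor signals
// into data the main controller (180) can act on.
class TouchController(private val onTouch: (region: String) -> Unit) {
    fun process(sample: RawTouchSample) {
        // Map the touched coordinates to a named display region.
        val region = if (sample.y < 100) "status-bar" else "content"
        onTouch(region) // hand the result to the main controller
    }
}

fun main() {
    val controller = TouchController { region ->
        println("controller 180: touch on $region")
    }
    controller.process(RawTouchSample(x = 42, y = 50, pressure = 0.7f))
}
```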
  • the proximity sensor 141 may be disposed within or near the touch screen.
  • the proximity sensor 141 is a sensor for detecting the presence or absence of an object relative to a certain detection surface or an object that exists nearby by using the force of electromagnetism or infrared rays without a physical contact.
  • the proximity sensor 141 has a considerably longer life span compared with a contact type sensor, and can be utilized for various purposes.
  • Examples of the proximity sensor 141 include a transmission type photoelectric sensor, a direct reflection type photoelectric sensor, a mirror-reflection type photo sensor, an RF oscillation type proximity sensor, a capacitance type proximity sensor, a magnetic proximity sensor, an infrared proximity sensor, and the like.
  • When the touch screen is the capacitance type, proximity of the pointer is detected by a change in the electric field according to the proximity of the pointer. In this instance, the touch screen (touch sensor) may be classified as a proximity sensor.
  • Recognition of the pointer positioned close to the touch screen without contact is referred to as a 'proximity touch', while recognition of actual contact of the pointer on the touch screen is referred to as a 'contact touch'.
  • a proximity touch and a proximity touch pattern (e.g., a proximity touch distance, a proximity touch speed, a proximity touch time, a proximity touch position, a proximity touch movement state, or the like) can be detected, and information corresponding to the detected proximity touch operation and the proximity touch pattern can be output to the touch screen.
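A toy classifier makes the proximity-touch/contact-touch distinction concrete. The event shape and the detection-range threshold below are assumptions made for this sketch only.

```kotlin
// Illustrative classification of proximity touch vs. contact touch.
data class PointerEvent(val distanceMm: Float, val timestampMs: Long)

sealed class Touch {
    data class Proximity(val distanceMm: Float) : Touch()
    object Contact : Touch()
}

// A pointer within range but not touching is a "proximity touch";
// distance zero is a "contact touch". The threshold is arbitrary here.
fun classify(e: PointerEvent, rangeMm: Float = 20f): Touch? = when {
    e.distanceMm <= 0f -> Touch.Contact
    e.distanceMm <= rangeMm -> Touch.Proximity(e.distanceMm)
    else -> null // out of detection range
}
```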
  • the audio output module 152 can convert and output as sound audio data received from the wireless communication unit 110 or stored in the memory 160 in a call signal reception mode, a call mode, a record mode, a voice recognition mode, a broadcast reception mode, and the like. Also, the audio output module 152 can provide audible outputs related to a particular function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may also include a speaker, a buzzer, or the like. In addition, the audio output module 152 may output a sound through an earphone jack.
  • the alarm unit 153 can output information about the occurrence of an event of the mobile terminal 100 . Typical events include call reception, message reception, key signal inputs, a touch input etc.
  • the alarm unit 153 can provide outputs in a different manner to inform about the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations.
  • the video signal or the audio signal may be also output through the display unit 151 or the audio output module 152 .
  • the haptic module 154 generates various tactile effects the user may feel.
  • One example of the tactile effects generated by the haptic module 154 is vibration.
  • the strength and pattern of the vibration generated by the haptic module 154 can also be controlled. For example, different vibrations may be combined to be output or sequentially output.
  • the haptic module 154 can generate various other tactile effects, such as an effect by stimulation (e.g., a pin arrangement vertically moving with respect to contacted skin, a spray force or suction force of air through a jet orifice or a suction opening, a contact on the skin, a contact of an electrode, or electrostatic force) and an effect by reproducing the sense of cold and warmth using an element that can absorb or generate heat.
  • the haptic module 154 may also be implemented to allow the user to feel a tactile effect through a muscle sensation of the user's fingers or arm, as well as transferring the tactile effect through direct contact. Two or more haptic modules 154 may be provided according to the configuration of the mobile terminal 100 .
  • the memory 160 can store software programs used for the processing and controlling operations performed by the controller 180 , or temporarily store data (e.g., a phonebook, messages, still images, video, etc.) that are input or output.
  • the memory 160 may store data regarding various patterns of vibrations and audio signals output when a touch is input to the touch screen.
  • the memory 160 may also include at least one type of storage medium including a flash memory, a hard disk, a multimedia card micro type, a card-type memory (e.g., SD or XD memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, and an optical disk.
  • the mobile terminal 100 may be operated in relation to a web storage device that performs the storage function of the memory 160 over the Internet.
  • the interface unit 170 serves as an interface with external devices connected with the mobile terminal 100 .
  • the interface unit 170 can receive data transmitted from an external device, receive power and transmit it to each element of the mobile terminal 100 , or transmit internal data of the mobile terminal 100 to an external device.
  • the interface unit 170 may include wired or wireless headset ports, external power supply ports, wired or wireless data ports, memory card ports, ports for connecting a device having an identification module, audio input/output (I/O) ports, video I/O ports, earphone ports, or the like.
  • the identification module may also be a chip that stores various types of information for authenticating the authority to use the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like.
  • the device having the identification module (referred to as ‘identifying device’, hereinafter) may take the form of a smart card. Accordingly, the identifying device can be connected with the mobile terminal 100 via a port.
  • When the mobile terminal 100 is connected with an external cradle, the interface unit 170 can also serve as a passage to allow power from the cradle to be supplied therethrough to the mobile terminal 100 or serve as a passage to allow various command signals input by the user from the cradle to be transferred to the mobile terminal therethrough.
  • Various command signals or power input from the cradle may operate as signals for recognizing that the mobile terminal is properly mounted on the cradle.
  • the controller 180 controls the general operations of the mobile terminal.
  • the controller 180 performs controlling and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 also includes a multimedia module 181 for reproducing multimedia data.
  • the multimedia module 181 may be configured within the controller 180 or may be configured to be separated from the controller 180 .
  • the controller 180 can also perform a pattern recognition processing to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images, respectively.
  • the controller 180 performs a control and processing related to a voice output.
  • the controller 180 may further include a speech recognition unit 182 for performing speech recognition from speech transferred from a speaker, a speech processor 183 for converting a sound signal to a text signal, a voice synthesis unit, a sound source direction search module, and a distance measurement unit for measuring a distance to a sound source.
  • the speech recognition unit 182 performs speech recognition of a sound signal input through the microphone 122 of the mobile terminal 100 and acquires at least one recognition candidate corresponding to the recognized speech. For example, the speech recognition unit 182 detects a speech segment from the input sound signal, performs sound analysis, recognizes the sound segment in a recognition unit, and recognizes the input sound signal. The speech recognition unit 182 acquires at least one recognition candidate corresponding to a recognized result of speech with reference to a recognition dictionary and a translation database stored at the memory 160 .
  • the speech processor 183 performs a processing of receiving a sound signal from a user through the microphone 122 , recognizing the sound signal, and converting the sound signal to a text signal.
  • the speech processor 183 converts a sound signal to a text (or a message) using a sound to text (STT) function.
  • the STT function is a function of converting the input sound signal to a text.
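As one possible realization of the STT function (not necessarily the patent's), the Android platform recognizer can convert an input sound signal to text, playing the role the specification assigns to the speech recognition unit 182 and speech processor 183:

```kotlin
// Hedged Android sketch: the platform recognizer as a "speech
// recognition unit" that converts a sound signal to text.
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

fun startStt(context: Context, onText: (String) -> Unit) {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle) {
            // First entry is the highest-confidence recognition candidate.
            results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()?.let(onText)
        }
        override fun onError(error: Int) { /* retry or report */ }
        // Remaining callbacks are not needed for this sketch.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
    }
    recognizer.startListening(intent)
}
```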
  • the controller 180 is connected to the speech recognition unit 182 .
  • the voice synthesis unit converts a text to speech using a text-to-speech (TTS) engine.
  • TTS technology converts text information or symbols into human speech.
  • TTS technology constructs a pronunciation database for all phonemes of a language, generates continuous speech by drawing on the pronunciation database, and synthesizes natural speech by adjusting the magnitude, length, and pitch of the speech; it may include natural language processing technology for this purpose.
  • TTS technology is commonly found in communication and computing devices such as CTI systems, PCs, PDAs, and mobile terminals, and in electronic products such as recorders, toys, and game players; it contributes to improved productivity in factories and is widely used in home automation systems for a more convenient daily life. Because TTS technology is well-known technology, a more detailed description will be omitted.
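In the same hedged spirit, a sketch of the voice synthesis unit using the Android TextToSpeech engine; this is one possible realization, not the implementation the patent describes:

```kotlin
// Hedged Android sketch of the voice synthesis unit: text to speech.
import android.content.Context
import android.speech.tts.TextToSpeech

class VoiceSynthesisUnit(context: Context) {
    private var ready = false
    private val tts = TextToSpeech(context) { status ->
        ready = (status == TextToSpeech.SUCCESS)
    }

    fun speak(text: String) {
        if (ready) {
            // QUEUE_FLUSH drops anything queued and speaks immediately.
            tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "utt-1")
        }
    }

    fun release() = tts.shutdown()
}
```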
  • the power supply unit 190 receives external power or internal power and supplies appropriate power required for operating respective elements and components under the control of the controller 180 .
  • various embodiments described herein may be implemented in a computer-readable or its similar medium using, for example, software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented by using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and electronic units designed to perform the functions described herein. In some cases, such embodiments may be implemented by the controller 180 itself.
  • the embodiments such as procedures or functions described herein may be implemented by separate software modules. Each software module may perform one or more functions or operations described herein.
  • Software codes can be implemented by a software application written in any suitable programming language. The software codes may be stored in the memory 160 and executed by the controller 180 .
  • FIG. 2 a is a front perspective view of a mobile terminal or a handheld terminal 100 according to an embodiment of the present invention.
  • the handheld terminal 100 has a bar type terminal body.
  • the present invention is not limited to a bar type terminal and can be applied to terminals of various types including slide type, folder type, swing type and swivel type terminals having at least two bodies that are relatively movably combined.
  • the terminal body includes a case (a casing, a housing, a cover, etc.) forming the exterior of the terminal 100 .
  • the case can be divided into a front case 101 and a rear case 102 .
  • Various electronic components are arranged in the space formed between the front case 101 and the rear case 102 .
  • At least one middle case can be additionally arranged between the front case 101 and the rear case 102 .
  • the cases can be formed of plastics through injection molding or made of a metal material such as stainless steel (STS) or titanium (Ti).
  • the display unit 151 , the audio output unit 152 , the camera 121 , the user input units 131 and 132 , the microphone 122 and the interface 170 can be arranged in the terminal body, specifically, in the front case 101 .
  • the display unit 151 occupies most of the main face of the front case 101 .
  • the audio output unit 152 and the camera 121 are arranged in a region in proximity to one of both ends of the display unit 151 and the user input unit 131 and the microphone 122 are located in a region in proximity to the other end of the display unit 151 .
  • the user input unit 132 and the interface 170 are arranged on the sides of the front case 101 and the rear case 102 .
  • the user input units 131 and 132 are operated to receive commands for controlling the operation of the handheld terminal 100 .
  • the operating units 131 and 132 can be referred to as manipulating portions and can employ any tactile manner in which a user operates them while having a tactile feeling.
  • First and second operating units 131 and 132 can receive various inputs.
  • the first operating unit 131 receives commands such as start, end and scroll and the second operating unit 132 receives commands such as control of the volume of sound output from the audio output unit 152 or conversion of the display unit 151 to a touch recognition mode.
  • FIG. 2 b is a rear perspective view of the handheld terminal shown in FIG. 2 a according to an embodiment of the present invention.
  • a camera 121 ′ can be additionally attached to the rear side of the terminal body, that is, the rear case 102 .
  • the camera 121 ′ has a photographing direction opposite to that of the camera 121 shown in FIG. 2 a and can have pixels different from those of the camera 121 shown in FIG. 2 a.
  • the camera 121 has a low pixel count such that it can capture an image of the face of a user and transmit the image to a receiving part during video telephony, while the camera 121 ′ has a high pixel count because it captures an image of a general object and does not immediately transmit the image in many cases.
  • the cameras 121 and 121 ′ can be attached to the terminal body such that they can be rotated or popped up.
  • a flash bulb 123 and a mirror 124 are additionally arranged in proximity to the camera 121 ′.
  • the flash bulb 123 lights an object when the camera 121 ′ takes a picture of the object.
  • the mirror 124 allows the user to look at his/her face when the user wants to take a picture of himself/herself using the camera 121 ′.
  • An audio output unit 152 ′ can be additionally provided on the rear side of the terminal body.
  • the audio output unit 152 ′ can achieve a stereo function with the audio output unit 152 shown in FIG. 2 a and be used for a speaker phone mode when the terminal is used for a telephone call.
  • a broadcasting signal receiving antenna can be additionally attached to the side of the terminal body in addition to an antenna for telephone calls.
  • the antenna constructing a part of the broadcast receiving module 111 shown in FIG. 1 can be set in the terminal body such that the antenna can be pulled out of the terminal body.
  • the power supply 190 for providing power to the handheld terminal 100 is set in the terminal body.
  • the power supply 190 can be included in the terminal body or detachably attached to the terminal body.
  • a touch pad 135 for sensing touch can be additionally attached to the rear case 102 .
  • the touch pad 135 can be of a light transmission type, like the display unit 151 .
  • if the display unit 151 outputs visual information through both of its sides, the visual information can be recognized through the touch pad 135 .
  • the information output through both sides of the display unit 151 can be controlled by the touch pad 135 .
  • a display is additionally attached to the touch pad 135 such that a touch screen can be arranged even in the rear case 102 .
  • the touch pad 135 operates in connection with the display unit 151 of the front case 101 .
  • the touch pad 135 can be located in parallel with the display unit 151 behind the display unit 151 .
  • the touch pad 135 can be identical to or smaller than the display unit 151 in size.
  • Hereinafter, it is assumed that the display unit 151 is the touch screen 151 .
  • the touch screen 151 can perform both an information display function and an information input function.
  • the present invention is not limited thereto.
  • a touch described in this document may include both a contact touch and a proximity touch.
  • FIG. 3 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating an example of sharing an item execution result acquired through an embodiment shown in FIG. 3 with an external device.
  • FIGS. 5 a to 5 d are diagrams illustrating an embodiment shown in FIGS. 3 and 4 .
  • a method of controlling a mobile terminal according to an embodiment of the present invention can be performed in the mobile terminal 100 described with reference to FIGS. 1 , 2 a , and 2 b .
  • a method of controlling a mobile terminal according to an embodiment of the present invention and operation of the mobile terminal 100 for performing the method will be described in detail with reference to necessary drawings.
  • the controller 180 performs at least one of a voice call and video call (S 100 ).
  • the controller 180 transmits and receives a voice call signal or a video call signal to and from an external device through the mobile communication module 112 and thus performs the voice call or video call.
  • the controller 180 recognizes voice call contents through a speech recognition unit while performing the call (S 110 ). For example, when a predetermined input is received while performing a voice call or video call, the controller 180 activates the microphone 122 and recognizes speech of a speaker. As described above, the controller 180 may separately include the speech recognition unit 182 or the speech processor 183 .
  • the controller 180 transmits an image of a user through the camera 121 to another party side (counterparty) and displays another party's image on the touch screen 151 . Therefore, while performing video call, the predetermined input may include a touch input to one area of the touch screen 151 in which another party's image is displayed. That is, while performing the video call, when a touch input to the touch screen 151 is received, the controller 180 controls to simultaneously perform a video call mode and a speech recognition mode.
  • When performing a voice call, the controller 180 enters a speech recognition mode that can recognize communication contents through an outside input, for example, a hard key input provided on a body of the mobile terminal 100 , as shown in FIGS. 2 a and 2 b.
  • the controller 180 recognizes voice call contents in the speech recognition mode and recognizes all speech or a specific voice command that a user speaks.
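The mode switch described above can be pictured as a small controller: a predetermined input received during a call turns recognition on, after which incoming audio is routed to the speech recognition unit. All types in this Kotlin sketch are invented for illustration.

```kotlin
// Illustrative controller logic: a predetermined input during a call
// turns on speech recognition alongside the ongoing call.
enum class CallType { VOICE, VIDEO }

class CallController(private val recognize: (audio: ByteArray) -> String) {
    var recognizing = false
        private set

    // E.g. a touch on the counterparty's image (video call) or a
    // hard-key press on the terminal body (voice call).
    fun onPredeterminedInput(call: CallType) {
        recognizing = true // call mode and recognition mode run together
    }

    fun onAudioFrame(frame: ByteArray): String? =
        if (recognizing) recognize(frame) else null // recognized contents
}
```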
  • the controller 180 tags the voice call contents to at least one item (S 120 ).
  • the item may include at least one application that can interlock with a voice search function.
  • the application that can interlock with a voice search function is an application that can recognize a speaker's voice command while executing an application and that can control operation according to the recognized voice command.
  • at least one application that can interlock with the voice search function may include at least one of web browser, phonebook, map, e-book, and calendar applications.
  • the controller 180 controls execution of the item according to the tagged voice call contents (S 130 ).
  • speech based on the voice call contents may be used for executing an intrinsic function of the specific item.
  • for example, when the item is a web browser, the controller 180 recognizes predetermined speech that a speaker speaks through the speech recognition unit, extracts a search word from the recognized speech, and performs a search based on the search word through the web browser.
  • when the item is a map application, a position corresponding to predetermined position information recognized while performing a call may be determined through the map application.
  • when the item is an electronic dictionary, a meaning of a predetermined word recognized while performing a call may be determined through the electronic dictionary.
  • that is, a speech recognition mode is activated while performing a call, and predetermined voice call contents recognized in the speech recognition mode may be applied to at least one application that can interlock with the voice search function.
  • the item is not limited to an application that can interlock with a voice search function.
  • the item may include at least one application that can execute based on input text information.
  • the item may include an application that is not interlocked with a voice search function, but that converts a sound signal recognized through a speech recognition unit to a text and that controls execution of an item based on the converted text.
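Putting S 120 and S 130 together, the sketch below tags recognized call contents to an item, then drives the item either through its own voice-search hook or, when the item does not interlock with voice search, through the STT-converted text. The Item interface and ItemTagger are illustrative names, not the patent's.

```kotlin
// Illustrative sketch of S120/S130: tag recognized call contents to an
// item, then drive the item's execution from the tagged contents.
interface Item {
    val supportsVoiceSearch: Boolean
    fun executeWithVoice(audio: ByteArray) // item's own voice-search hook
    fun executeWithText(query: String)     // text-driven execution
}

class ItemTagger(private val soundToText: (ByteArray) -> String) {
    // The voice call contents most recently tagged to each item.
    val tags = mutableMapOf<Item, String>()

    fun tagAndExecute(item: Item, speechAudio: ByteArray) {
        val text = soundToText(speechAudio)
        tags[item] = text // S120: tag the contents to the item
        if (item.supportsVoiceSearch) item.executeWithVoice(speechAudio)
        else item.executeWithText(text) // S130: execute from the tag
    }
}
```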
  • the mobile terminal 100 can share, with an external device, the result of controlling execution of a predetermined item through speech recognition while performing the call.
  • the controller 180 controls execution of a specific item through speech recognition while performing a call and then continues to perform the communication regardless of execution control of the specific item.
  • the controller 180 determines whether to store an item execution result according to the tagged voice call contents (S 160 ).
  • the predetermined touch input may be a long touch input to the touch screen 151 for displaying an item execution screen based on the voice call contents.
  • the controller 180 determines whether to share the item execution result with an external device (S 170 ). If the item execution result is shared with an external device, the controller 180 transmits the item execution result to the external device (S 180 ).
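The store-and-share decision (S 160 to S 180) reduces to a small branch. The share channels below are the ones a later embodiment names (e-mail, messenger, text message, SNS); everything else is illustrative.

```kotlin
// Illustrative S160 to S180 flow: store the item execution result,
// then optionally share it over a user-selected channel.
enum class ShareChannel { EMAIL, MESSENGER, TEXT_MESSAGE, SNS }

data class ExecutionResult(val itemName: String, val payload: String)

class ResultHandler(
    private val store: (ExecutionResult) -> Unit, // e.g. memory 160
    private val send: (ShareChannel, ExecutionResult) -> Unit,
) {
    fun handle(result: ExecutionResult, save: Boolean, shareVia: ShareChannel?) {
        if (save) store(result)            // S160: keep the result
        shareVia?.let { send(it, result) } // S170/S180: transmit externally
    }
}
```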
  • FIGS. 5 a and 5 b are diagrams illustrating examples of S 100 , S 110 , and S 120 of FIG. 3 .
  • the controller 180 displays another party's image VCI while performing a video call on the touch screen 151 .
  • the controller 180 enters a speech recognition mode that can recognize voice call contents according to video call.
  • a predetermined touch input may be a single touch input to the touch screen 151 .
  • the controller 180 displays an identifier for identifying a predetermined voice search engine on the touch screen 151 .
  • As the predetermined voice search engine, a GOOGLE search engine 10 may be used, and the controller 180 displays an identification display 12 that can represent a voice command input and a speech recognition mode on the touch screen 151 .
  • FIGS. 5 c to 5 d are diagrams illustrating an embodiment shown in FIG. 4 .
  • the controller 180 tags the recognized speech to a web page (e.g., Google search engine).
  • the controller 180 displays a search result 13 of the input speech “android” 21 , which is the voice call contents, within the web page on the touch screen 151 .
  • the controller 180 may display the search result 13 together with the other party's (counterparty's) image VCI of video call on the touch screen 151 .
  • the controller 180 stores the search result 13 in the memory 160 .
  • the controller 180 uses the recognized speech while performing the video call as a search word of a predetermined web page and shares a search result with an external device.
  • the controller 180 may transmit the search result 13 shown in FIG. 5 c through at least one application (e.g., an e-mail, a messenger, a text message, and an SNS).
  • the controller 180 displays a menu 30 for selecting the at least one application.
  • FIG. 6 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 7 a and 7 b are diagrams illustrating an embodiment shown in FIG. 6 .
  • a method of controlling a mobile terminal according to an embodiment of the present invention can be performed in the mobile terminal 100 described with reference to FIGS. 1 , 2 a and 2 b .
  • a method of controlling a mobile terminal according to an embodiment of the present invention and operation of the mobile terminal 100 for embodying the method will be described in detail with reference to necessary drawings.
  • The embodiment according to the present invention described later may be performed in combination with the embodiment described with reference to FIGS. 3 and 4 .
  • the controller 180 executes a video call through the camera 121 and the mobile communication module 112 (S 191 ).
  • the controller 180 displays another party's image VCI on the touch screen 151 (S 192 ) (see FIG. 7 a ).
  • While executing the video call, the mobile terminal 100 enters a speech recognition mode and displays the identifier 12 notifying of the speech recognition mode together with the other party's image VCI on the touch screen 151 (see FIG. 7 a ).
  • the controller 180 displays an application list 40 that can interlock with a voice search function on the touch screen 151 (S 194 ).
  • the controller 180 displays an application list 40 including a map application 41 , a web browser application 42 , and a phonebook application 43 on the touch screen 151 .
  • the controller 180 executes the selected application (S 195 ).
  • the controller 180 may execute the video call and the selected application in parallel in a multitask form instead of terminating the already ongoing video call.
  • the controller 180 receives an input of recognized voice call contents in the video call while executing the selected application (S 196 ). That is, in the foregoing embodiment, an example of recognizing communication contents through a speech recognition process while performing a call and applying the recognized communication contents to a predetermined application has been described.
  • FIG. 8 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 9 to 11 are diagrams illustrating an embodiment shown in FIG. 8 .
  • the control method may be performed by the control of the controller 180 .
  • An embodiment described hereinafter may be performed with reference to the foregoing embodiments.
  • the controller 180 executes one of a video call and a voice call (S 200 ).
  • the controller 180 selects a map application from the application list 40 that can interlock with the voice search function shown in FIG. 7 b (S 210 ).
  • the controller 180 interlocks a map application (MA) with a voice search function while executing an MA (S 220 ). Accordingly, the controller 180 displays an area 10 for a voice search on the touch screen 151 . Thereafter, the controller 180 receives an input of voice call contents or a voice command while executing communication and performs speech recognition (S 230 ). After speech recognition, the controller 180 extracts a search word for a voice search (S 240 ).
  • the search word may include a position related keyword included in the voice call contents or the voice command.
  • the controller 180 detects a position on a map corresponding to the search word (S 250 ). For example, referring to FIG. 9 , when the position related keyword is “Franklin” 11 , the controller 180 tags position information “Franklin” 11 to the map application MA, and the map application MA searches for the position information “Franklin” 11 on a map and displays a corresponding position SP on the map.
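A sketch of the S 230 to S 250 path: a position-related keyword is pulled out of the recognized call contents and resolved on a map. The keyword list and the map index below are stand-ins for a real geocoder or map search backend.

```kotlin
// Illustrative S230 to S250 path: extract a position keyword from the
// recognized call contents and resolve it on a map.
data class LatLng(val lat: Double, val lng: Double)

// Stand-in for a real geocoder / map search backend.
val mapIndex = mapOf(
    "franklin" to LatLng(40.099, -75.290),
)

fun extractPositionKeyword(transcript: String): String? =
    transcript.lowercase().split(Regex("\\W+"))
        .firstOrNull { it in mapIndex } // first word known to the map

fun searchOnMap(transcript: String): Pair<String, LatLng>? =
    extractPositionKeyword(transcript)?.let { it to mapIndex.getValue(it) }

fun main() {
    // E.g. voice call contents mentioning "Franklin" (FIG. 9).
    println(searchOnMap("let's meet near Franklin tomorrow"))
}
```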
  • the controller 180 stores a search result of a specific position while executing the map application.
  • the controller 180 displays the search result on the touch screen 151 .
  • the controller 180 maps a voice tag identifier to a map application and displays the voice tag identifier on the touch screen 151 (S 260 ). For example, when communication is being performed, referring to FIG. 10 , the controller 180 maps the voice tag identifier 12 , representing that predetermined communication contents are tagged, to a map application icon 41 and displays the voice tag identifier 12 together with the other party's image VCI on the touch screen 151 .
  • the controller 180 maps the voice tag identifier 12 to the map application icon 41 and displays the voice tag identifier 12 on the touch screen 151 .
  • An application with which recognized speech is to be interlocked while performing a call may be preset by a user.
  • an application to be interlocked while performing a video call may be previously executed before the video call is performed.
  • FIG. 12 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • the control method may be executed by the control of the controller 180 .
  • the controller 180 executes at least one application (S 300 ).
  • the application may be an application that can interlock with a voice search function, or an application that can receive a text converted from recognized speech and search for predetermined information.
  • While the at least one application is being executed, the controller 180 performs one of a video call and a voice call (S 310 ). When a predetermined touch input is received while performing the call, the controller 180 displays a presently executing application list on the touch screen 151 (S 320 ).
  • the controller 180 receives an input of selecting a specific application of the application list (S 330 ) and enters a speech recognition mode.
  • the controller 180 recognizes voice call contents or a specific voice command in the speech recognition mode (S 340 ).
  • the controller 180 extracts a search word for a voice search as a speech recognition result (S 350 ).
  • the controller 180 determines whether the selected application supports a voice search (S 360 ). If the selected application does not support a voice search, the controller 180 converts a recognized sound signal to a text and extracts a predetermined search word (S 370 ). If the selected application supports a voice search, the controller 180 interlocks the extracted search word with the selected application and executes a search operation (S 380 ).
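One simple way to realize the search-word extraction of S 350 and S 370 is a stopword filter over the recognized (or STT-converted) text; a production system would use proper keyword spotting. Every name below is illustrative.

```kotlin
// Illustrative S350/S370: pick a search word out of recognized text
// by dropping common stopwords and keeping the most frequent remainder.
val stopwords = setOf("the", "a", "an", "is", "to", "of", "and", "in", "it")

fun extractSearchWord(text: String): String? =
    text.lowercase()
        .split(Regex("\\W+"))
        .filter { it.isNotBlank() && it !in stopwords }
        .groupingBy { it }
        .eachCount()
        .maxByOrNull { it.value }
        ?.key

fun main() {
    // E.g. the "smart phone" dialogue of FIG. 13: "smart" wins here.
    println(extractSearchWord("a smart phone is a phone and it is smart"))
}
```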
  • FIGS. 13 a and 13 b , 14 a and 14 b , and 15 a to 15 d will each be described.
  • the controller 180 executes at least one application before performing a call and displays at least one presently executing application list on the touch screen 151 .
  • the presently executing application may include an electronic dictionary application 51 , an e-book application 52 , and a map application 53 . Thereafter, when the video call is connected, the controller 180 displays the other party's image VCI on the touch screen 151 .
  • the controller 180 while executing the video call, the controller 180 displays a presently executing application list 50 on the touch screen 151 .
  • the controller 180 tags voice call contents to the electronic dictionary application 51 . For example, when a “smart phone” is included in the voice call contents, the controller 180 tags the “smart phone” to the electronic dictionary application and searches for the tagged speech in the electronic dictionary using the tagged “smart phone” as a search word.
  • an application included in the presently executing application list 50 may be changed in real time according to the recognized voice call contents.
  • the controller 180 can add the dictionary application 51 in real time to the presently executing application list 50 .
  • the controller 180 can add the map application 53 in real time to the presently executing application list 50 .
  • the application changed in real time may be an application other than the most recently executed application.
  • FIGS. 14 a and 14 b illustrate a case where an already executing application is an e-book.
  • the controller 180 displays an execution screen of the e-book application on the touch screen 151 as shown in FIG. 14 b .
  • the controller 180 turns the page of the executing e-book to the next page and displays the next page as shown in FIG. 14 b.
  • FIGS. 15 a to 15 d illustrate a case where an already executing application is a calendar application.
  • when a calendar application 53 is selected from a presently executing application list 50 (see FIG. 15 a ) while performing a video call, the controller 180 displays a calendar application execution screen on the touch screen 151 , and a specific date SD may be selected (see FIG. 15 b ).
  • the controller 180 recognizes communication contents while executing a calendar application, and when a sound signal that can control execution of a calendar application is detected from the communication contents ( FIG. 15 c ), the controller 180 maps and stores the detected sound signal to the selected specific date SD ( FIG. 15 d ). The controller 180 maps the voice tag identifier 12 to the specific date SD and displays the voice tag identifier 12 .
  • a schedule can thus be simply set and registered to a calendar application by selecting the calendar application while performing a call and then selecting a specific date.
  • the controller 180 determines whether an application to register the schedule-related information exists among the presently executing applications, and if such an application exists, the controller 180 tags a sound signal including the schedule-related information to that application.
  • otherwise, the controller 180 automatically executes a calendar application and automatically tags the recognized speech.
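The decision in the two items above can be sketched as follows; the RunningApp type and its capability flag are assumptions made for illustration, not the patented data model.

```kotlin
// Sketch of the schedule-tagging decision: tag an already-running
// application that can register schedule-related information, or
// auto-execute a calendar application first. Hypothetical names.
data class RunningApp(val name: String, val canRegisterSchedule: Boolean)

fun tagScheduleInfo(running: MutableList<RunningApp>, scheduleSignal: String) {
    val target = running.firstOrNull { it.canRegisterSchedule }
        ?: RunningApp("calendar", canRegisterSchedule = true).also {
            running.add(it) // no suitable application executing: auto-launch calendar
        }
    println("tagged \"$scheduleSignal\" to ${target.name}")
}

fun main() {
    tagScheduleInfo(mutableListOf(RunningApp("map", false)), "dinner on the 14th")
}
```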
  • FIG. 16 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • the mobile terminal 100 executes communication with an external device through the mobile communication module 112 and/or the camera 121 (S 400 ).
  • the controller 180 records voice call contents included in communication contents by activating the microphone 122 while performing the call (S 410 ) and stores the voice call contents (S 420 ).
  • FIG. 17 is a flowchart illustrating an embodiment shown in FIG. 16 .
  • FIGS. 18 a to 18 c are diagrams illustrating an embodiment shown in FIG. 17 .
  • the control method may be executed under the control of the controller 180 .
  • the controller 180 displays a category to store voice call contents on the touch screen 151 (S 431 ).
  • the controller 180 stores recorded voice call contents (S 433 ).
  • the controller 180 displays a category 60 to store recorded voice call contents on the touch screen 151 .
  • the category 60 may include a name 61 , a phone number 62 , and an address 63 .
  • for example, a category of the name 61 may be selected, and the voice call contents may be stored according to the name category.
  • when an item of the name 61 is selected from the category 60 , the controller 180 maps the voice tag identifier 12 to the video call counterparty's entry in a name data list of a phonebook application and displays the voice tag identifier 12 . Further, referring to FIG. 18 c , when an item of the phone number 62 is selected from the category 60 , the controller 180 maps the voice tag identifier 12 to the counterparty's entry in the phone number data list of the phonebook application and displays the voice tag identifier 12 .
  • a user of the mobile terminal 100 can thereby easily determine, for each contact in a contact list, whether communication contents have been recorded and tagged. Further, when the voice tag identifier 12 is selected from the contact list, the controller 180 reproduces the recorded voice call contents.
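One way to model this contact-list tagging is sketched below; Contact and Recording are assumed types, not the disclosed phonebook structure.

```kotlin
// Illustrative sketch: attach a recording to a contact entry so a voice
// tag identifier can be shown in the contact list and replayed on
// selection. Contact and Recording are assumptions.
data class Recording(val file: String)
data class Contact(val name: String, val phone: String,
                   var voiceTag: Recording? = null)

fun tagRecording(contact: Contact, rec: Recording) {
    contact.voiceTag = rec // maps the voice tag identifier to the entry
}

fun onVoiceTagSelected(contact: Contact) {
    // Selecting the identifier from the contact list replays the contents
    contact.voiceTag?.let { println("playing ${it.file}") }
}

fun main() {
    val contact = Contact("Taehui Kim", "01099807129")
    tagRecording(contact, Recording("call_recording.amr"))
    onVoiceTagSelected(contact)
}
```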
  • recorded communication contents may be edited, and edited contents may be reused in execution of a specific item.
  • FIG. 19 is a flowchart illustrating an embodiment shown in FIG. 16 .
  • FIGS. 20 a to 20 d are diagrams illustrating an embodiment shown in FIG. 19 .
  • the control method may be executed under the control of the controller 180 .
  • the controller 180 converts the recorded voice call contents to text (S 441 ). Thereafter, the controller 180 receives a selection signal indicating a selection of a specific portion of the converted text (S 442 ).
  • a specific portion of the converted text indicates at least one word of a text formed of a plurality of words.
  • the controller 180 displays at least one application that can interlock with the selected text on the touch screen 151 (S 443 ).
  • at least one application that can interlock with the selected text is an application that can use the selected text when the application is executed.
  • an application that can interlock with the selected text may be a calendar application.
  • a method of selecting the specific portion can be performed through a touch input of dragging a corresponding text. Thereafter, when a specific application is selected (S 444 ), the controller 180 controls execution of the selected application based on the selected text (S 445 ).
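A compact sketch of the S 441 to S 445 sequence follows; the digit-based attribute test and the application names are assumptions for illustration.

```kotlin
// Sketch of S441-S445: convert the recording to text, take the user's
// selected portion, list applications that could use it, and execute
// the chosen one. The attribute test below is illustrative only.
fun interlockableApps(selection: String): List<String> =
    if (selection.any { it.isDigit() })
        listOf("contact list application 81", "message application 82")
    else
        listOf("memo application")

fun main() {
    val convertedText = "Taehui Kim, 01099807129"      // S441: converted text
    val selection = convertedText                      // S442: dragged/selected portion
    val appList = interlockableApps(selection)         // S443: interlockable applications
    val chosen = appList.first()                       // S444: user picks an application
    println("executing $chosen with \"$selection\"")   // S445: execute with the selection
}
```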
  • the controller 180 records voice call contents while performing a video call and displays, on the touch screen 151 , text 70 into which the recorded voice call contents are converted.
  • a specific portion, for example, a portion "Taehui Kim, 01099807129" 71 of the converted text 70 , may be selected.
  • the controller 180 displays at least one application list 80 that can interlock with the selected text 71 on the touch screen 151 .
  • the application list 80 may include information about applications 81 and 82 to store the selected text 71 and methods 83 and 84 of editing the selected text 71 .
  • the controller 180 may interlock the selected text 71 with the contact list application 81 .
  • the controller 180 displays an application list 90 according to an attribute of the selected portion on the touch screen 151 .
  • when the selected portion is information related to a date, a time, and a place, the controller 180 includes an application that can interlock with the selected portion, for example, a calendar application, in the application list 90 and displays the application list 90 on the touch screen 151 .
  • the application list 90 may include information about applications 91 , 92 , and 93 to store the selected text 72 and methods 94 and 95 of editing the selected text 72 .
  • the controller 180 controls execution of a calendar application based on the selected text 72 . That is, date, time, and place information corresponding to the selected text 72 may be registered to the calendar application.
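The attribute-driven construction of the application list 90 might look like the following sketch; the date/time pattern is an assumption, not the patented classifier.

```kotlin
// Hedged sketch: include a calendar application in the list when the
// selected portion carries date/time information. The regex is an
// invented stand-in for the attribute analysis.
val dateOrTime = Regex("""\d{1,2}:\d{2}|\bon the \d{1,2}(st|nd|rd|th)\b""")

fun buildAppList90(selectedText: String): List<String> =
    if (dateOrTime.containsMatchIn(selectedText))
        listOf("calendar application", "memo application") // date/time attribute
    else
        listOf("memo application")

fun main() {
    println(buildAppList90("meet at 15:00 near the station"))
}
```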
  • a specific sound signal may be tagged to and stored in a record file in which voice call contents are recorded.
  • FIG. 21 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 22 , 23 a and 23 b , and 24 a and 24 b are diagrams illustrating an embodiment shown in FIG. 21 .
  • the control method may be executed under the control of the controller 180 .
  • the controller 180 executes a voice search in a specific item based on tagged voice call contents (S 451 ).
  • the controller 180 maps at least one voice search result to a recorded file and displays the at least one voice search result on the touch screen 151 (S 452 ).
  • voice call contents may be provided as a voice record file 800 in a graph form representing frequency magnitude over communication time.
  • the controller 180 tags a voice search result 900 performed based on voice call contents to the voice record file 800 of a graph form and stores the voice search result 900 .
  • the voice search result 900 may be stored so as to be tagged to the time at which it was stored as the voice search was performed. Referring to FIG. 22 , voice call contents were recorded for 19 seconds after communication started; a first voice search result 901 stored at about 2 seconds, a second voice search result 902 stored at about 5 seconds, a third voice search result 903 stored at about 9 seconds, a fourth voice search result 904 stored at about 14 seconds, and a fifth voice search result 905 stored at about 17 seconds are tagged and stored.
  • each of the voice search results 901 , 902 , 903 , 904 , and 905 is tagged to a predetermined item while performing a call and corresponds to sound signals applied to execution of an item. Therefore, the voice search results are distinguished from a recorded file in which voice call contents with another party are simply recorded.
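The FIG. 22 layout can be pictured as a simple data structure, as in the sketch below; the types are assumptions for illustration.

```kotlin
// Sketch of the graph-form record file 800 with search results 901-905
// tagged to the times at which they were stored. Types are assumptions.
data class TaggedResult(val atSeconds: Int, val summary: String)
data class RecordFile(val lengthSeconds: Int, val results: List<TaggedResult>)

fun main() {
    val file800 = RecordFile(
        lengthSeconds = 19,
        results = listOf(
            TaggedResult(2, "connection sound (music A)"), // result 901
            TaggedResult(5, "voice search result"),        // result 902
            TaggedResult(9, "voice search result"),        // result 903
            TaggedResult(14, "voice search result"),       // result 904
            TaggedResult(17, "voice search result")        // result 905
        )
    )
    file800.results.forEach { println("${it.atSeconds}s: ${it.summary}") }
}
```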
  • a communication segment (e.g., 20 seconds) is classified into a communication call request segment and a segment in which communication is performed after the communication call is connected.
  • a sound recognized at the communication call request segment may be a communication connection sound (music A) of another party's terminal. Therefore, when an input of selecting the first voice search result 901 is received, the controller 180 changes a communication connection sound (e.g., ring tone) of the mobile terminal 100 to music A. For example, the controller 180 displays text 801 asking the user if they want to change the ringtone to music A.
  • in the segment in which communication is performed, voice call contents with the other party are generally recorded; when an input of selecting the third voice search result 903 is received, the controller 180 displays text 801 corresponding to the third voice search result on the touch screen 151 .
  • the controller 180 can also use the voice search result for execution of a related application. For example, voice call contents used for the voice search result 903 may be used for execution of a map application.
  • the controller 180 displays schedule information 803 in a message form on the touch screen 151 . Further, when a touch input of dragging a map application to the voice search result 903 is received, the controller 180 executes the map application, searches for a position corresponding to the voice search result on a map, and displays the position.
  • the controller 180 maps a map application icon MA to the third voice search result 903 and displays the icon MA.
  • the kind of item applied may be set differently according to an attribute of a sound signal output in each communication segment while performing a call.
  • FIG. 25 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • the controller 180 performs one of a voice call and video call through the mobile communication module 112 and/or the camera 121 (S 500 ).
  • the controller 180 records voice call contents (S 510 ), and determines an attribute of a sound signal output on a communication segment basis (S 520 ). Accordingly, the controller 180 differently sets an item to which a sound signal is tagged according to an attribute of the output sound signal (S 530 ).
  • a sound signal output on a communication segment basis includes a sound signal output at a communication call connection request segment and a sound signal output at a segment in which voice communication with another party is performed after a communication call is connected.
  • for example, a sound output at a communication call connection request segment may be a communication connection sound, and a communication voice may be a speaker's voice according to communication contents with the other party after the communication call is connected.
  • the controller 180 executes a predetermined application that can distinguish a communication connection sound and thus identifies the music used for the communication connection sound.
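The segment-dependent choice of tag target described in S 520 to S 530 can be sketched as follows; the enum and mapping are illustrative assumptions.

```kotlin
// Sketch of S520-S530: the item to which a sound signal is tagged
// differs by segment attribute. The enum and mapping are assumptions.
enum class Segment { CALL_REQUEST, CONNECTED }

fun tagTarget(segment: Segment): String = when (segment) {
    Segment.CALL_REQUEST -> "ringtone suggestion (connection sound)"   // e.g., music A
    Segment.CONNECTED -> "voice search / record file (communication voice)"
}

fun main() {
    for (segment in Segment.values()) println("$segment -> ${tagTarget(segment)}")
}
```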
  • the mobile terminal 100 may also tag video call contents to the video call counterparty's image and store the video call contents.
  • FIG. 26 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
  • FIGS. 27 a and 27 b are diagrams illustrating an embodiment shown in FIG. 26 .
  • the control method may be executed under the control of the controller 180 .
  • the method will be described with reference to FIGS. 26 , 27 a , and 27 b.
  • the controller 180 executes a video call (S 600 ), and displays another party's image VCI on the touch screen 151 (S 610 ).
  • the controller 180 also records video call contents while performing the video call.
  • the controller 180 captures the other party's image displayed on the touch screen 151 (S 620 ) and tags voice call contents to the captured image (S 630 ). Further, the controller 180 maps the voice tag identifier 12 to the captured image and displays the voice tag identifier 12 (S 640 ).
  • FIG. 27 a illustrates these features.
  • the controller 180 stores the captured image to which the voice tag identifier 12 is mapped in the memory 160 (S 650 ). Further, the controller 180 may share the captured image to which the voice tag identifier 12 is mapped with an external device.
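The S 600 to S 650 sequence is small enough to sketch directly; CapturedImage and the file names are hypothetical.

```kotlin
// Sketch of S600-S650: capture the counterparty's image, tag the voice
// call contents to it, map the identifier, then store and share.
// CapturedImage and the file names are hypothetical.
data class CapturedImage(
    val fileName: String,
    var voiceTag: String? = null,          // S630: tagged voice call contents
    var hasTagIdentifier: Boolean = false  // S640: voice tag identifier 12 mapped
)

fun main() {
    val image = CapturedImage("counterparty.jpg")   // S620: capture displayed image
    image.voiceTag = "video_call_recording.amr"     // S630: tag voice call contents
    image.hasTagIdentifier = true                   // S640: map the identifier
    println("stored ${image.fileName} (S650); may be shared with an external device")
}
```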
  • the above-described method of controlling a mobile terminal according to the present invention may be written as a program to be executed in a computer and provided in a computer readable recording medium.
  • the method of controlling the mobile terminal according to the present invention may be executed through software.
  • constituent means of the present invention are code segments that perform required tasks.
  • Programs or code segments may be stored in a processor readable medium or may be transmitted by a computer data signal combined with a carrier through a transmission medium or a communication network.
  • the computer readable recording medium may be any data storage device for storing data that can be read by a computer system.
  • the computer readable recording medium may include, for example, a ROM, a RAM, a CD-ROM, a DVD±ROM, a DVD-RAM, a magnetic tape, a floppy disk, a hard disk, and an optical data storage device.
  • the computer readable recording medium may also be distributed over computer systems connected through a network so that a computer readable code is stored and executed in a distributed manner.

Abstract

A mobile terminal including a speech recognition unit configured to recognize input speech; a mobile communication unit configured to perform a calling operation with at least one other terminal; and a controller configured to receive a predetermined input while performing the calling operation, to recognize voice call contents through the speech recognition unit based on the received predetermined input, to tag the recognized voice call contents to at least one application executed by the mobile terminal, and to execute the at least one application using the tagged voice call contents.

Description

CROSS-REFERENCE TO RELATED APPLICATION
Pursuant to 35 U.S.C. §119(a), this application claims the benefit of earlier filing date and right of priority to Korean Patent Application No. 10-2012-0002206, filed on Jan. 6, 2012, the contents of which are hereby incorporated by reference herein in its entirety.
FIELD OF THE INVENTION
The present invention relates to a mobile terminal and a method of controlling the same.
RELATED ART
Terminals such as a personal computer (PC), a laptop computer, and a mobile terminal are formed to perform various functions, for example, a data and voice communication function, a function of photographing a picture or a moving picture through a camera, a function of storing voice, a function of reproducing a music file through a speaker system, and a function of displaying an image or video. Some terminals include an additional function of executing a game, and some other terminals may be embodied as a multimedia device. Moreover, recent terminals enable viewing video or a television program by receiving a broadcast or multicast signal.
In general, terminals are classified into a mobile terminal and a stationary terminal according to mobility, and mobile terminals are again classified into a handheld terminal and a vehicle mount terminal according to whether a user can directly carry the terminal.
Efforts to support and expand the functions of a terminal have been continuously made, and such efforts include improvement of software or hardware as well as changes and improvements of the structural constituent elements that form a terminal.
SUMMARY OF THE INVENTION
An aspect of the present invention is to provide a mobile terminal and a method of controlling the same that can control execution of an item related to voice call contents, based on the voice call contents acquired through speech recognition while performing a call.
Another aspect of the present invention is to provide a mobile terminal and a method of controlling the same that can more conveniently acquire desired data through a predetermined item while performing a call, by recognizing a speaker's speech during the call and tagging the recognized speech to the predetermined item.
Another aspect of the present invention is to provide a mobile terminal and a method of controlling the same that can more conveniently acquire desired data by linking at least one item that can interlock with a speech recognition function to communication contents while performing a call.
Another aspect of the present invention is to provide a mobile terminal and a method of controlling the same that can more efficiently perform multitasking through speech recognition while performing a call.
In an aspect, a mobile terminal includes: a speech recognition unit; a mobile communication unit for performing at least one of voice call and video call; and a controller for recognizing voice call contents through the speech recognition unit according to a predetermined input while performing the call and for tagging the voice call contents to at least one item, and for controlling execution of the item according to the tagged voice call contents.
In another aspect, a mobile terminal includes: a mobile communication unit; a camera; a microphone; and a controller for acquiring a sound signal through the microphone while performing video call with at least one external device through the mobile communication unit and the camera, for executing a voice search through a predetermined voice search application based on the acquired sound signal, and for storing a voice search result.
In another aspect, a method of controlling a mobile terminal includes: performing at least one of voice call and video call; recognizing voice call contents through a speech recognition unit when a predetermined input is received while performing the call; tagging the voice call contents to at least one item; and controlling execution of the item according to the tagged voice call contents.
In another aspect, a method of controlling a mobile terminal includes: performing video call; recording voice call contents while performing the video call; executing a voice search application as a predetermined input is received; extracting a search word from the recorded voice call contents; executing a voice search in the voice search application based on the extracted search word; and storing a voice search result.
The detailed matters of the embodiments will be included in the detailed description and the drawings.
A mobile terminal and a method of controlling the same according to an embodiment of the present invention have the following effects.
According to an embodiment of the present invention, execution of an item related to voice call contents can be controlled based on the voice call contents acquired through speech recognition while performing a call.
Further, according to an embodiment of the present invention, by recognizing a speaker's speech while performing a call and tagging the recognized speech to a predetermined item, desired data can be more conveniently acquired through the item during the call.
Further, according to an embodiment of the present invention, desired data can be more conveniently acquired by linking at least one item that can interlock with a speech recognition function to communication contents while performing a call.
Further, according to an embodiment of the present invention, more efficient multitasking can be performed through speech recognition while performing a call.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are included to provide a further understanding of described embodiments of the present invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the present invention and together with the description serve to explain aspects and features of the present invention.
FIG. 1 is a block diagram of a mobile terminal according to an embodiment of the present invention.
FIG. 2 a is a front perspective view of a mobile terminal or a handheld terminal according to an embodiment of the present invention.
FIG. 2 b is a rear perspective view of the handheld terminal shown in FIG. 2 a according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating an example of sharing an item execution result acquired through an embodiment shown in FIG. 3 with an external device.
FIGS. 5 a to 5 d are diagrams illustrating an embodiment shown in FIGS. 3 and 4.
FIG. 6 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
FIGS. 7 a and 7 b are diagrams illustrating an embodiment shown in FIG. 6.
FIG. 8 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
FIGS. 9 to 11 are diagrams illustrating an embodiment shown in FIG. 8.
FIG. 12 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
FIGS. 13 a and 13 b, 14 a and 14 b, and 15 a to 15 d are diagrams illustrating an embodiment shown in FIG. 12.
FIG. 16 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
FIG. 17 is a flowchart illustrating an embodiment shown in FIG. 16.
FIGS. 18 a to 18 c are diagrams illustrating an embodiment shown in FIG. 17.
FIG. 19 is a flowchart illustrating an embodiment shown in FIG. 16.
FIGS. 20 a to 20 d are diagrams illustrating an embodiment shown in FIG. 19.
FIG. 21 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
FIGS. 22, 23 a and 23 b, and 24 a and 24 b are diagrams illustrating an embodiment shown in FIG. 21.
FIG. 25 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
FIG. 26 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
FIGS. 27 a and 27 b are diagrams illustrating an embodiment shown in FIG. 26.
DETAILED DESCRIPTION
The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. The invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the invention to those skilled in the art.
Hereinafter, a mobile terminal relating to the present invention will be described below in more detail with reference to the accompanying drawings. In the following description, suffixes “module” and “unit” are given to components of the mobile terminal in consideration of only facilitation of description and do not have meanings or functions discriminated from each other.
The mobile terminal described in the specification can include a cellular phone, a smart phone, a laptop computer, a digital broadcasting terminal, personal digital assistants (PDA), a portable multimedia player (PMP), a navigation system and so on.
FIG. 1 is a block diagram of a mobile terminal 100 according to an embodiment of the present invention. As shown, the mobile terminal 100 includes a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supply unit 190, etc. FIG. 1 shows the mobile terminal as having various components, but implementing all of the illustrated components is not a requirement. Greater or fewer components may alternatively be implemented.
In addition, the wireless communication unit 110 generally includes one or more components allowing radio communication between the mobile terminal 100 and a wireless communication system or a network in which the mobile terminal is located. For example, in FIG. 1, the wireless communication unit includes at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast associated information from an external broadcast management server via a broadcast channel. Further, the broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits the same to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like. Also, the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal.
In addition, the broadcast associated information may refer to information associated with a broadcast channel, a broadcast program or a broadcast service provider. The broadcast associated information may also be provided via a mobile communication network and, in this instance, the broadcast associated information may be received by the mobile communication module 112.
Further, the broadcast signal may exist in various forms. For example, the broadcast signal may exist in the form of an electronic program guide (EPG) of the digital multimedia broadcasting (DMB) system, an electronic service guide (ESG) of the digital video broadcast-handheld (DVB-H) system, and the like.
The broadcast receiving module 111 may also be configured to receive signals broadcast by using various types of broadcast systems. In particular, the broadcast receiving module 111 can receive a digital broadcast using a digital broadcast system such as the digital multimedia broadcasting-terrestrial (DMB-T) system, the digital multimedia broadcasting-satellite (DMB-S) system, the digital video broadcast-handheld (DVB-H) system, the data broadcasting system known as media forward link only (MediaFLO®), the integrated services digital broadcast-terrestrial (ISDB-T) system, etc.
The broadcast receiving module 111 can also be configured to be suitable for all broadcast systems that provide a broadcast signal as well as the above-mentioned digital broadcast systems. In addition, the broadcast signals and/or broadcast-associated information received via the broadcast receiving module 111 may be stored in the memory 160.
In addition, the mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station, an external terminal and a server. Such radio signals may include a voice call signal, a video call signal or various types of data according to text and/or multimedia message transmission and/or reception.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal and may be internally or externally coupled to the terminal. The wireless Internet access technique implemented may include a WLAN (Wireless LAN) (Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), or the like.
Further, the short-range communication module 114 is a module for supporting short range communications. Some examples of short-range communication technology include Bluetooth™, Radio Frequency IDentification (RFID), Infrared Data Association (IrDA), Ultra-WideBand (UWB), ZigBee™, and the like.
Also, the location information module 115 is a module for checking or acquiring a location or position of the mobile terminal. The location information module 115 may acquire location information by using a global navigation satellite system (GNSS). Here, the GNSS is a standard generic term for satellite navigation systems revolving around the earth and allowing certain types of radio navigation receivers to transmit reference signals determining their location on or in the vicinity of the surface of the earth. The GNSS may include the United States' global positioning system (GPS), the European Union's Galileo positioning system, the Russian global orbiting navigational satellite system (GLONASS), COMPASS, a compass navigation system, by the People's Republic of China, and the quasi-zenith satellite system (QZSS) by Japan.
An example of GNSS is a GPS (Global Positioning System) module. The GPS module may calculate information related to the distance from one point (entity) to three or more satellites and information related to time at which the distance information was measured, and applies trigonometry to the calculated distance, thereby calculating three-dimensional location information according to latitude, longitude, and altitude with respect to the one point (entity). In addition, a method of acquiring location and time information by using three satellites and correcting an error of the calculated location and time information by using another one satellite may be also used. The GPS module may also continuously calculate the current location in real time and also calculate speed information by using the continuously calculated current location.
With reference to FIG. 1, the A/V input unit 120 is configured to receive an audio or video signal, and includes a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capturing mode or an image capturing mode, and the processed image frames can then be displayed on a display unit 151.
Further, the image frames processed by the camera 121 may be stored in the memory 160 or transmitted via the wireless communication unit 110. Two or more cameras 121 may also be provided according to the configuration of the mobile terminal.
In addition, the microphone 122 can receive sounds via a microphone in a phone call mode, a recording mode, a voice recognition mode, and the like, and can process such sounds into audio data. The processed audio data may then be converted for output into a format transmittable to a mobile communication base station via the mobile communication module 112 for the phone call mode. The microphone 122 may also implement various types of noise canceling (or suppression) algorithms to cancel or suppress noise or interference generated when receiving and transmitting audio signals.
Also, the user input unit 130 can generate input data from commands entered by a user to control various operations of the mobile terminal. The user input unit 130 may include a keypad, a dome switch, a touch pad (e.g., a touch sensitive member that detects changes in resistance, pressure, capacitance, etc. due to being contacted), a jog wheel, a jog switch, and the like.
Further, the sensing unit 140 detects a current status of the mobile terminal 100 such as an opened or closed state of the mobile terminal 100, a location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100, the orientation of the mobile terminal 100, an acceleration or deceleration movement and direction of the mobile terminal 100, etc., and generates command or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide type mobile phone, the sensing unit 140 may sense whether the slide phone is opened or closed. In addition, the sensing unit 140 can detect whether or not the power supply unit 190 supplies power or whether or not the interface unit 170 is coupled with an external device. In FIG. 1, the sensing unit 140 also includes a proximity sensor 141.
In addition, the output unit 150 is configured to provide outputs in a visual, audible, and/or tactile manner. In the example in FIG. 1, the output unit 150 includes the display unit 151, an audio output module 152, an alarm unit 153, a haptic module 154, and the like. In more detail, the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a User Interface (UI) or a Graphic User Interface (GUI) associated with a call or other communication.
The display unit 151 may also include at least one of a Liquid Crystal Display (LCD), a Thin Film Transistor-LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, or the like. Some of these displays may also be configured to be transparent or light-transmissive to allow for viewing of the exterior; such displays are called transparent displays.
An example transparent display is a TOLED (Transparent Organic Light Emitting Diode) display, or the like. A rear structure of the display unit 151 may be also light-transmissive. Through such configuration, the user can view an object positioned at the rear side of the terminal body through the region occupied by the display unit 151 of the terminal body.
Further, the mobile terminal 100 may include two or more display units according to its particular desired embodiment. For example, a plurality of display units may be separately or integrally disposed on one surface of the mobile terminal, or may be separately disposed on mutually different surfaces.
Meanwhile, when the display unit 151 and a sensor (referred to as a 'touch sensor', hereinafter) for detecting a touch operation are overlaid in a layered manner to form a touch screen, the display unit 151 can function as both an input device and an output device. The touch sensor may have a form of a touch film, a touch sheet, a touch pad, and the like.
Further, the touch sensor may be configured to convert pressure applied to a particular portion of the display unit 151 or a change in the capacitance or the like generated at a particular portion of the display unit 151 into an electrical input signal. The touch sensor may also be configured to detect the pressure when a touch is applied, as well as the touched position and area.
When there is a touch input with respect to the touch sensor, corresponding signals are transmitted to a touch controller, and the touch controller processes the signals and transmits corresponding data to the controller 180. Accordingly, the controller 180 can recognize which portion of the display unit 151 has been touched.
With reference to FIG. 1, the proximity sensor 141 may be disposed within or near the touch screen. In more detail, the proximity sensor 141 is a sensor for detecting the presence or absence of an object relative to a certain detection surface or an object that exists nearby by using the force of electromagnetism or infrared rays without a physical contact. Thus, the proximity sensor 141 has a considerably longer life span compared with a contact type sensor, and can be utilized for various purposes.
Examples of the proximity sensor 141 include a transmission type photoelectric sensor, a direct reflection type photoelectric sensor, a mirror-reflection type photo sensor, an RF oscillation type proximity sensor, a capacitance type proximity sensor, a magnetic proximity sensor, an infrared proximity sensor, and the like. When the touch screen is the capacitance type, proximity of the pointer is detected by a change in electric field according to the proximity of the pointer. In this instance, the touch screen (touch sensor) may be classified as a proximity sensor.
In the following description, for the sake of brevity, recognition of the pointer positioned to be close to the touch screen will be called a ‘proximity touch’, while recognition of actual contacting of the pointer on the touch screen will be called a ‘contact touch’. Further, when the pointer is in the state of the proximity touch, it means that the pointer is positioned to correspond vertically to the touch screen.
By employing the proximity sensor 141, a proximity touch and a proximity touch pattern (e.g., a proximity touch distance, a proximity touch speed, a proximity touch time, a proximity touch position, a proximity touch movement state, or the like) can be detected, and information corresponding to the detected proximity touch operation and the proximity touch pattern can be output to the touch screen.
Further, the audio output module 152 can convert and output as sound audio data received from the wireless communication unit 110 or stored in the memory 160 in a call signal reception mode, a call mode, a record mode, a voice recognition mode, a broadcast reception mode, and the like. Also, the audio output module 152 can provide audible outputs related to a particular function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may also include a speaker, a buzzer, or the like. In addition, the audio output module 152 may output a sound through an earphone jack.
In addition, the alarm unit 153 can output information about the occurrence of an event of the mobile terminal 100. Typical events include call reception, message reception, key signal inputs, a touch input etc. In addition to audio or video outputs, the alarm unit 153 can provide outputs in a different manner to inform about the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations. The video signal or the audio signal may be also output through the display unit 151 or the audio output module 152.
In addition, the haptic module 154 generates various tactile effects the user may feel. One example of the tactile effects generated by the haptic module 154 is vibration. The strength and pattern of the haptic module 154 can also be controlled. For example, different vibrations may be combined to be output or sequentially output.
Besides vibration, the haptic module 154 can generate various other tactile effects such as an effect by stimulation such as a pin arrangement vertically moving with respect to a contact skin, a spray force or suction force of air through a jet orifice or a suction opening, a contact on the skin, a contact of an electrode, electrostatic force, etc., an effect by reproducing the sense of cold and warmth using an element that can absorb or generate heat.
The haptic module 154 may also be implemented to allow the user to feel a tactile effect through a muscle sensation such as fingers or arm of the user, as well as transferring the tactile effect through a direct contact. Two or more haptic modules 154 may be provided according to the configuration of the mobile terminal 100.
Further, the memory 160 can store software programs used for the processing and controlling operations performed by the controller 180, or temporarily store data (e.g., a phonebook, messages, still images, video, etc.) that are input or output. In addition, the memory 160 may store data regarding various patterns of vibrations and audio signals output when a touch is input to the touch screen.
The memory 160 may also include at least one type of storage medium including a flash memory, a hard disk, a multimedia card micro type, a card-type memory (e.g., SD or XD memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, and an optical disk. Also, the mobile terminal 100 may be operated in relation to a web storage device that performs the storage function of the memory 160 over the Internet.
Also, the interface unit 170 serves as an interface with external devices connected with the mobile terminal 100. For example, the interface unit 170 can receive data from an external device, receive power and deliver it to each element of the mobile terminal 100, or transmit internal data of the mobile terminal 100 to an external device. For example, the interface unit 170 may include wired or wireless headset ports, external power supply ports, wired or wireless data ports, memory card ports, ports for connecting a device having an identification module, audio input/output (I/O) ports, video I/O ports, earphone ports, or the like.
The identification module may also be a chip that stores various types of information for authenticating the authority of using the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, the device having the identification module (referred to as 'identifying device', hereinafter) may take the form of a smart card. Accordingly, the identifying device can be connected with the mobile terminal 100 via a port.
When the mobile terminal 100 is connected with an external cradle, the interface unit 170 can also serve as a passage to allow power from the cradle to be supplied therethrough to the mobile terminal 100 or serve as a passage to allow various command signals input by the user from the cradle to be transferred to the mobile terminal therethrough. Various command signals or power input from the cradle may operate as signals for recognizing that the mobile terminal is properly mounted on the cradle.
In addition, the controller 180 controls the general operations of the mobile terminal. For example, the controller 180 performs controlling and processing associated with voice calls, data communications, video calls, and the like. In the example in FIG. 1, the controller 180 also includes a multimedia module 181 for reproducing multimedia data. The multimedia module 181 may be configured within the controller 180 or may be configured to be separated from the controller 180. The controller 180 can also perform a pattern recognition processing to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images, respectively.
Further, the controller 180 performs a control and processing related to a voice output. The controller 180 may further include a speech recognition unit 182 for performing speech recognition from speech transferred from a speaker, a speech processor 183 for converting a sound signal to a text signal, a voice synthesis unit, a sound source direction search module, and a distance measurement unit for measuring a distance to a sound source.
The speech recognition unit 182 performs speech recognition of a sound signal input through the microphone 122 of the mobile terminal 100 and acquires at least one recognition candidate corresponding to the recognized speech. For example, the speech recognition unit 182 detects a speech segment from the input sound signal, performs sound analysis, recognizes the sound segment in a recognition unit, and recognizes the input sound signal. The speech recognition unit 182 acquires at least one recognition candidate corresponding to a recognized result of speech with reference to a recognition dictionary and a translation database stored at the memory 160.
The speech processor 183 performs a processing of receiving a sound signal from a user through the microphone 122, recognizing the sound signal, and converting the sound signal to a text signal. The speech processor 183 converts a sound signal to a text (or a message) using a sound-to-text (STT) function; here, the STT function converts an input sound signal to a text. In order to perform such an STT function, the controller 180 is connected to the speech recognition unit 182.
The voice synthesis unit converts a text to speech using a text-to-speech (TTS) engine. TTS technology converts text information or symbols into human speech. TTS technology constructs a pronunciation database for all phonemes of a language, generates continuous speech by referring to the pronunciation database, and synthesizes natural speech by adjusting the magnitude, length, and pitch of the speech; natural language processing technology may be included for this purpose. TTS technology appears widely in electronic communication fields, such as CTI, PCs, PDAs, and mobile terminals, and in electronic devices such as recorders, toys, and game players; it contributes to productivity improvement in factories and is widely used in home automation systems for a more convenient daily life. Because TTS technology is well-known technology, a more detailed description will be omitted.
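As a rough model only, the STT and TTS roles described above could be expressed as interfaces like these; the interfaces and the trivial implementation are assumptions, not the disclosed engines.

```kotlin
// Hedged sketch of the STT/TTS roles as interfaces. Real engines work
// with speech segments, recognition dictionaries, and phoneme
// databases; this stand-in only fixes the shape of the conversion.
interface SpeechToText {
    fun convert(soundSignal: ByteArray): String
}

interface TextToSpeech {
    fun synthesize(text: String): ByteArray
}

class TrivialStt : SpeechToText {
    // A real engine detects a speech segment, analyzes the sound, and
    // matches recognition candidates against a recognition dictionary.
    override fun convert(soundSignal: ByteArray): String = "recognized text"
}

fun main() {
    println(TrivialStt().convert(ByteArray(0)))
}
```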
Also, the power supply unit 190 receives external power or internal power and supplies appropriate power required for operating respective elements and components under the control of the controller 180. Further, various embodiments described herein may be implemented in a computer-readable or its similar medium using, for example, software, hardware, or any combination thereof.
For a hardware implementation, the embodiments described herein may be implemented by using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic units designed to perform the functions described herein. In some cases, such embodiments may be implemented by the controller 180 itself.
For a software implementation, the embodiments such as procedures or functions described herein may be implemented by separate software modules. Each software module may perform one or more functions or operations described herein. Software codes can be implemented by a software application written in any suitable programming language. The software codes may be stored in the memory 160 and executed by the controller 180.
FIG. 2 a is a front perspective view of a mobile terminal or a handheld terminal 100 according to an embodiment of the present invention.
The handheld terminal 100 has a bar type terminal body. However, the present invention is not limited to a bar type terminal and can be applied to terminals of various types including slide type, folder type, swing type and swivel type terminals having at least two bodies that are relatively movably combined.
The terminal body includes a case (a casing, a housing, a cover, etc.) forming the exterior of the terminal 100. In the present embodiment, the case can be divided into a front case 101 and a rear case 102. Various electronic components are arranged in the space formed between the front case 101 and the rear case 102. At least one middle case can be additionally arranged between the front case 101 and the rear case 102.
The cases can be formed of plastics through injection molding or made of a metal material such as stainless steel (STS) or titanium (Ti).
The display unit 151, the audio output unit 152, the camera 121, the user input unit 131 and 132, the microphone 122 and the interface 170 can be arranged in the terminal body, specifically, in the front case 101.
The display unit 151 occupies most of the main face of the front case 101. The audio output unit 152 and the camera 121 are arranged in a region in proximity to one of both ends of the display unit 151, and the user input unit 131 and the microphone 122 are located in a region in proximity to the other end of the display unit 151. The user input unit 132 and the interface 170 are arranged on the sides of the front case 101 and the rear case 102.
The user input unit 131 and 132 is operated to receive commands for controlling the operation of the handheld terminal 100 and can include a plurality of operating units 131 and 132. The operating units 131 and 132 can be referred to as manipulating portions and employ any tactile manner in which a user operates the operating units 131 and 132 while having tactile feeling.
First and second operating units 131 and 132 can receive various inputs. For example, the first operating unit 131 receives commands such as start, end and scroll and the second operating unit 132 receives commands such as control of the volume of sound output from the audio output unit 152 or conversion of the display unit 151 to a touch recognition mode.
FIG. 2 b is a rear perspective view of the handheld terminal shown in FIG. 2 a according to an embodiment of the present invention.
Referring to FIG. 2 b, a camera 121′ can be additionally attached to the rear side of the terminal body, that is, the rear case 102. The camera 121′ has a photographing direction opposite to that of the camera 121 shown in FIG. 2 a and can have pixels different from those of the camera 121 shown in FIG. 2 a.
For example, it is desirable that the camera 121 has a low pixel count such that it can capture an image of the face of a user and transmit the image to a receiving part during video telephony, while the camera 121′ has a high pixel count because it captures an image of a general object and does not immediately transmit the image in many cases. The cameras 121 and 121′ can be attached to the terminal body such that they can be rotated or popped up.
A flash bulb 123 and a mirror 124 are additionally arranged in proximity to the camera 121′. The flash bulb 123 lights an object when the camera 121′ takes a picture of the object. The mirror 124 is used for the user to look at his/her face when the user wants to photograph himself/herself using the camera 121′.
An audio output unit 152′ can be additionally provided on the rear side of the terminal body. The audio output unit 152′ can achieve a stereo function with the audio output unit 152 shown in FIG. 2 a and be used for a speaker phone mode when the terminal is used for a telephone call.
A broadcasting signal receiving antenna can be additionally attached to the side of the terminal body in addition to an antenna for telephone calls. The antenna constructing a part of the broadcasting receiving module 111 shown in FIG. 1 can be set in the terminal body such that the antenna can be pulled out of the terminal body.
The power supply 190 for providing power to the handheld terminal 100 is set in the terminal body. The power supply 190 can be included in the terminal body or detachably attached to the terminal body.
A touch pad 135 for sensing touch can be additionally attached to the rear case 102. The touch pad 135 can be of a light transmission type, like the display unit 151. In this case, if the display unit 151 outputs visual information through both sides thereof, the visual information can be recognized through the touch pad 135. The information output through both sides of the display unit 151 can be controlled by the touch pad 135. Otherwise, a display can be additionally attached to the touch pad 135 such that a touch screen can be arranged even in the rear case 102.
The touch pad 135 operates in connection with the display unit 151 of the front case 101. The touch pad 135 can be located in parallel with the display unit 151 behind the display unit 151. The touch pad 135 can be identical to or smaller than the display unit 151 in size.
Hereinafter, embodiments of the present invention will be described. In the present invention, for convenience of description, it is assumed that the display unit 151 is the touch screen 151. As described above, the touch screen 151 can perform both an information display function and an information input function. However, the present invention is not limited thereto. Further, a touch described in this document may include both a contact touch and a proximity touch.
FIG. 3 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention, and FIG. 4 is a flowchart illustrating an example of sharing an item execution result acquired through an embodiment shown in FIG. 3 with an external device. FIGS. 5 a to 5 d are diagrams illustrating an embodiment shown in FIGS. 3 and 4.
A method of controlling a mobile terminal according to an embodiment of the present invention can be performed in the mobile terminal 100 described with reference to FIGS. 1, 2 a, and 2 b. Hereinafter, a method of controlling a mobile terminal according to an embodiment of the present invention and operation of the mobile terminal 100 for performing the method will be described in detail with reference to necessary drawings.
Referring to FIG. 3, the controller 180 performs at least one of a voice call and video call (S100). The controller 180 transmits and receives a voice call signal or a video call signal to and from an external device through the mobile communication module 112 and thus performs the voice call or video call.
The controller 180 recognizes voice call contents through a speech recognition unit while performing the call (S110). For example, when a predetermined input is received while performing a voice call or video call, the controller 180 activates the microphone 122 and recognizes speech of a speaker. As described above, the controller 180 may separately include the speech recognition unit 182 or the speech processor 183.
When a video call is performed, the controller 180 transmits an image of a user through the camera 121 to another party side (counterparty) and displays another party's image on the touch screen 151. Therefore, while performing video call, the predetermined input may include a touch input to one area of the touch screen 151 in which another party's image is displayed. That is, while performing the video call, when a touch input to the touch screen 151 is received, the controller 180 controls to simultaneously perform a video call mode and a speech recognition mode.
When performing a voice call, the controller 180 enters a speech recognition mode that can recognize communication contents through an external input, for example, an input of a hard key provided on the body of the mobile terminal 100, as shown in FIGS. 2 a and 2 b.
The controller 180 recognizes voice call contents in the speech recognition mode and recognizes all speech that a user speaks or a specific voice command.
The controller 180 tags the voice call contents to at least one item (S120). Here, the item may include at least one application that can interlock with a voice search function. The application that can interlock with a voice search function is an application that can recognize a speaker's voice command while executing an application and that can control operation according to the recognized voice command. For example, at least one application that can interlock with the voice search function may include at least one of web browser, phonebook, map, e-book, and calendar applications.
Thereafter, when the voice call contents are tagged to a predetermined item, the controller 180 controls execution of the item according to the tagged voice call contents (S130). Thus, when the voice call contents are tagged to a specific item, speech based on the voice call contents may be used for executing an intrinsic function of the specific item.
For example, when the specific item is a web browser, while performing a video call through the mobile terminal 100, as the mobile terminal 100 enters a speech recognition mode, the controller 180 recognizes predetermined speech in which a speaker speaks through a speech recognition unit, extracts a search word from the recognized speech, and performs a search based on the search word through the web browser.
In another example, when the specific item is a map application, the position corresponding to position information recognized while performing the call may be determined through the map application. In still another example, when the specific item is an electronic dictionary, the meaning of a predetermined word recognized while performing the call may be determined through the electronic dictionary.
In an embodiment of the present invention, a speech recognition mode is activated while performing a call, and predetermined voice call contents recognized in the speech recognition mode may be applied to at least one application that can interlock with the voice search function.
The item is not limited to an application that can interlock with a voice search function. For example, the item may include at least one application that can execute based on input text information, that is, an application that is not interlocked with a voice search function but that converts a sound signal recognized through the speech recognition unit to text, with execution of the item then controlled based on the converted text.
The mobile terminal 100 according to an embodiment of the present invention can share, with an external device, the result of controlling execution of a predetermined item through speech recognition while performing the call.
Referring to FIG. 4, the controller 180 controls execution of a specific item through speech recognition while performing a call and then continues the call regardless of the execution control of the specific item.
Therefore, when a predetermined touch input to the touch screen 151 is received while performing the call (S150), the controller 180 determines whether to store the item execution result according to the tagged voice call contents (S160). The predetermined touch input may be a long touch input to the touch screen 151 on which the item execution screen based on the voice call contents is displayed.
If an input for storing the item execution result is received, the controller 180 determines whether to share the item execution result with an external device (S170). If the item execution result is to be shared, the controller 180 transmits it to the external device (S180).
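The S150 to S180 branch can be summarized, purely as a sketch, as a handler that stores the execution result on the long touch and optionally forwards it. The container type and the send callback below are invented for illustration; the disclosure does not prescribe a data layout.

```kotlin
// Invented container for an item execution result; the disclosure does
// not specify how the result is represented internally.
data class ExecutionResult(val itemName: String, val payload: String)

class ExecutionResultHandler(
    private val memory: MutableList<ExecutionResult> = mutableListOf()
) {
    // S150-S180: on the predetermined (long) touch input, decide whether to
    // store the result in memory and whether to share it with an external device.
    fun onLongTouch(
        result: ExecutionResult,
        storeIt: Boolean,
        shareIt: Boolean,
        send: (ExecutionResult) -> Unit   // e.g., e-mail, messenger, text message, SNS
    ) {
        if (storeIt) memory += result     // S160: store the item execution result
        if (shareIt) send(result)         // S170-S180: transmit to the external device
    }
}
```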
Next, FIGS. 5A and 5B are diagrams illustrating examples of S100, S110, and S120 of FIG. 3. Referring to FIG. 5A, the controller 180 displays the other party's image VCI on the touch screen 151 while performing a video call. Thereafter, when a predetermined touch input to the touch screen 151 is received, the controller 180 enters a speech recognition mode in which the voice call contents of the video call can be recognized. Here, the predetermined touch input may be a single touch input to the touch screen 151.
Referring to FIG. 5B, in the speech recognition mode, the controller 180 displays an identifier for identifying a predetermined voice search engine on the touch screen 151. For example, a GOOGLE search engine 10 may be used as the predetermined voice search engine, and the controller 180 displays an identification display 12 indicating that the terminal is in the speech recognition mode and that a voice command can be input on the touch screen 151.
FIGS. 5C and 5D are diagrams illustrating the embodiment shown in FIG. 4. Referring to FIG. 5C, in the speech recognition mode, when a speaker's speech, for example, “android” 21, is recognized, the controller 180 tags the recognized speech to a web page (e.g., the Google search engine).
The controller 180 displays a search result 13 for the input speech “android” 21, which is the voice call contents, within the web page on the touch screen 151. The controller 180 may display the search result 13 together with the other party's (counterparty's) image VCI of the video call on the touch screen 151. When a long touch input to the search result 13 is received, the controller 180 stores the search result 13 in the memory 160.
Referring to FIG. 5D, the controller 180 uses the speech recognized during the video call as a search word for a predetermined web page and shares the search result with an external device. For example, the controller 180 may transmit the search result 13 shown in FIG. 5C through at least one application (e.g., e-mail, messenger, text message, or SNS).
For this purpose, when a predetermined touch input to the touch screen 151 is received, the controller 180 displays a menu 30 for selecting the at least one application.
Hereinafter, exemplary embodiments of recognizing predetermined speech while performing a call and applying the recognized speech to the execution of at least one item will be described.
FIG. 6 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention. FIGS. 7A and 7B are diagrams illustrating the embodiment shown in FIG. 6.
A method of controlling a mobile terminal according to an embodiment of the present invention can be performed in the mobile terminal 100 described with reference to FIGS. 1, 2A, and 2B. Hereinafter, a method of controlling a mobile terminal according to an embodiment of the present invention, and the operation of the mobile terminal 100 for embodying the method, will be described in detail with reference to the necessary drawings. The embodiment described below may be performed together with the embodiment described with reference to FIGS. 3 and 4.
Referring to FIG. 6, the controller 180 executes a video call through the camera 121 and the mobile communication module 112 (S191). When the video call is executed, the controller 180 displays the other party's image VCI on the touch screen 151 (S192) (see FIG. 7A).
Further, while executing the video call, the mobile terminal 100 enters a speech recognition mode and displays the identifier 12 indicating the speech recognition mode together with the other party's image VCI on the touch screen 151 (see FIG. 7A).
When a predetermined touch input (e.g., a single touch input) to the touch screen 151 is received (S193), the controller 180 displays a list 40 of applications that can interlock with a voice search function on the touch screen 151 (S194). Referring to FIG. 7B, for example, the controller 180 displays an application list 40 including a map application 41, a web browser application 42, and a phonebook application 43 on the touch screen 151.
Thereafter, when a specific application is selected from the application list 40, the controller 180 executes the selected application (S195). Here, the controller 180 may execute the video call and the selected application in parallel in a multitasking form instead of terminating the video call already in progress.
The controller 180 receives the voice call contents recognized in the video call as input while executing the selected application (S196). That is, the foregoing embodiment describes an example of recognizing communication contents through a speech recognition process while performing a call and applying the recognized communication contents to a predetermined application.
Hereinafter, this application example will be described in detail. FIG. 8 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention, and FIGS. 9 to 11 are diagrams illustrating the embodiment shown in FIG. 8. The control method may be performed under the control of the controller 180. The embodiment described hereinafter may be performed together with the foregoing embodiments.
Referring to FIG. 8, the controller 180 executes one of a video call and a voice call (S200). The controller 180 selects a map application from the application list 40 of applications that can interlock with the voice search function, shown in FIG. 7B (S210).
Referring to FIG. 9, the controller 180 interlocks a map application (MA) with a voice search function while executing the MA (S220). Accordingly, the controller 180 displays an area 10 for a voice search on the touch screen 151. Thereafter, the controller 180 receives voice call contents or a voice command as input during the call and performs speech recognition (S230). After speech recognition, the controller 180 extracts a search word for a voice search (S240). Here, the search word may include a position-related keyword included in the voice call contents or the voice command.
The controller 180 detects a position on a map corresponding to the search word (S250). For example, referring to FIG. 9, when the position-related keyword is “Franklin” 11, the controller 180 tags the position information “Franklin” 11 to the map application MA, and the map application MA searches for “Franklin” on a map and displays the corresponding position SP.
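The position-tagging step (S240 to S250) amounts to scanning the recognized utterance for a position-related keyword and asking the map item to display it. A minimal sketch follows, assuming a hypothetical map interface and a known-places dictionary, neither of which is specified by the disclosure.

```kotlin
// Hypothetical map item; the MA of FIG. 9 is a full map application.
interface MapApplication { fun showPosition(name: String): Boolean } // true if found on the map

// S240: extract a position-related keyword from the voice call contents.
// S250: detect and display the corresponding position SP on the map.
fun tagPositionKeyword(utterance: String, knownPlaces: Set<String>, map: MapApplication): Boolean {
    val keyword = utterance
        .split(Regex("\\W+"))
        .firstOrNull { it.isNotEmpty() && it in knownPlaces } ?: return false
    return map.showPosition(keyword)   // e.g., keyword = "Franklin" in FIG. 9
}
```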
The controller 180 stores the search result for the specific position found while executing the map application. Either while the mobile terminal 100 is in a communication state or after communication through the mobile terminal 100 is terminated, the controller 180 displays the search result on the touch screen 151.
That is, the controller 180 maps a voice tag identifier to the map application and displays the voice tag identifier on the touch screen 151 (S260). For example, while communication is being performed, referring to FIG. 10, the controller 180 maps the voice tag identifier 12, which indicates that predetermined communication contents have been tagged, to a map application icon 41 and displays the voice tag identifier 12 together with the other party's image VCI on the touch screen 151.
In another example, when communication is terminated, referring to FIG. 11, the controller 180 maps the voice tag identifier 12 to the map application icon 41 and displays the voice tag identifier 12 on the touch screen 151.
The application with which recognized speech is to be interlocked during a call may be preset by a user. For example, an application to be interlocked during a video call may be executed before the video call is performed.
FIG. 12 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention. The control method may be executed under the control of the controller 180. Referring to FIG. 12, the controller 180 executes at least one application (S300). The application may be one that can interlock with a voice search function, or one that can read text converted from recognized speech and search for predetermined information.
While the at least one application is being executed, the controller 180 performs one of a video call and a voice call (S310). When a predetermined touch input is received while performing the call, the controller 180 displays a list of the presently executing applications on the touch screen 151 (S320).
The controller 180 receives an input selecting a specific application from the application list (S330) and enters a speech recognition mode. The controller 180 recognizes voice call contents or a specific voice command in the speech recognition mode (S340). The controller 180 extracts a search word for a voice search as a speech recognition result (S350).
The controller 180 determines whether the selected application supports a voice search (S360). If the selected application does not support a voice search, the controller 180 converts the recognized sound signal to text and extracts a predetermined search word (S370). If the selected application supports a voice search, the controller 180 interlocks the extracted search word with the selected application and executes a search operation (S380).
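The S360 to S380 branch is, in essence, a capability check: a voice-search-capable application receives the recognized speech directly, while any other application receives text converted from the sound signal first. A sketch under assumed types follows; the application interface and the two converter callbacks are not from the disclosure.

```kotlin
// Assumed application abstraction with a voice-search capability flag.
interface Application {
    val supportsVoiceSearch: Boolean
    fun search(searchWord: String)
}

// S360: check voice-search support; S370: convert the sound signal to text
// when unsupported; S380: interlock the extracted word and run the search.
fun dispatchSearch(
    app: Application,
    soundSignal: ByteArray,
    voiceSearchWord: (ByteArray) -> String,   // recognizer used on the voice-search path
    speechToText: (ByteArray) -> String       // text-conversion fallback
) {
    val searchWord =
        if (app.supportsVoiceSearch) voiceSearchWord(soundSignal)   // S380
        else speechToText(soundSignal)                              // S370
    app.search(searchWord)
}
```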
In order to describe the embodiment shown in FIG. 12 in detail, FIGS. 13A and 13B, 14A and 14B, and 15A to 15D will each be described. Referring to FIG. 13A, the controller 180 executes at least one application before performing a call and displays a list of the presently executing applications on the touch screen 151.
For example, the presently executing applications may include an electronic dictionary application 51, an e-book application 52, and a map application 53. Thereafter, when the video call is connected, the controller 180 displays the other party's image VCI on the touch screen 151.
Referring to FIG. 13B, while executing the video call, the controller 180 displays a presently executing application list 50 on the touch screen 151. When the electronic dictionary application 51 is selected from the application list 50, the controller 180 tags the voice call contents to the electronic dictionary application 51. For example, when “smart phone” is included in the voice call contents, the controller 180 tags “smart phone” to the electronic dictionary application and searches the electronic dictionary using the tagged “smart phone” as a search word.
Further, the applications included in the presently executing application list 50 may be changed in real time according to the recognized voice call contents. For example, even when the presently executing application list 50 displayed on the touch screen does not include the dictionary application 51, if the recognized voice call contents include “you can find the meaning of ‘vague’ by searching a dictionary”, the controller 180 can add the dictionary application 51 to the presently executing application list 50 in real time. Likewise, even when the presently executing application list 50 displayed on the touch screen does not include the map application 53, if the recognized voice call contents include “I can't find the location for the meeting”, the controller 180 can add the map application 53 to the presently executing application list 50 in real time.
The application changed in real time may also be an application other than the most recently executed application.
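This real-time list update can be reduced to keyword triggers over the recognized contents. The trigger phrases and application names in the following Kotlin sketch are illustrative assumptions; the disclosure gives only the dictionary and meeting-location examples.

```kotlin
// Add applications to the presently executing list in real time when the
// recognized voice call contents suggest they would be useful.
fun updateApplicationList(list: MutableList<String>, recognizedContents: String) {
    val lower = recognizedContents.lowercase()
    if ("dictionary" in lower && "dictionary" !in list) {
        list += "dictionary"                 // e.g., "...by searching a dictionary"
    }
    if (Regex("location|place|meeting").containsMatchIn(lower) && "map" !in list) {
        list += "map"                        // e.g., "I can't find the location for the meeting"
    }
}
```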
Next, FIGS. 14A and 14B illustrate a case where the already executing application is an e-book. Referring to FIG. 14A, when the e-book application 52 is selected from the presently executing application list 50 while performing a video call, the controller 180 displays an execution screen of the e-book application on the touch screen 151, as shown in FIG. 14B. Thereafter, when the communication contents “next page” are recognized, the controller 180 turns the page of the executing e-book and displays the next page, as shown in FIG. 14B.
Next, FIGS. 15A to 15D illustrate a case where the already executing application is a calendar application. When the calendar application 53 is selected from the presently executing application list 50 (see FIG. 15A) while performing a video call, the controller 180 displays a calendar application execution screen on the touch screen 151, and a specific date SD may be selected (see FIG. 15B).
Accordingly, the controller 180 recognizes communication contents while executing the calendar application, and when a sound signal that can control execution of the calendar application is detected in the communication contents (FIG. 15C), the controller 180 maps the detected sound signal to the selected specific date SD and stores it (FIG. 15D). The controller 180 also maps the voice tag identifier 12 to the specific date SD and displays the voice tag identifier 12.
Accordingly, according to a method of controlling a mobile terminal in accordance with an embodiment of the present invention, a schedule can be set simply, and registered to a calendar application, through a process of selecting the calendar application while performing a call and a process of selecting a specific date.
A process of selecting one application from a presently executing application list while performing a call has been described, but the present invention is not limited thereto. For example, when schedule-related information is recognized in the communication contents, the controller 180 determines whether an application in which the schedule-related information can be registered exists among the presently executing applications, and if such an application exists, the controller 180 tags a sound signal including the schedule-related information to that application.
If no application in which the schedule-related information can be registered exists among the presently executing applications, the controller 180 automatically executes a calendar application and automatically tags the recognized speech to it.
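This fallback can be expressed as a lookup with a launch-on-miss default. The following sketch uses hypothetical types (the schedule-capable abstraction and the launch callback are assumptions, not disclosed elements).

```kotlin
// Any presently executing application in which schedule-related
// information can be registered; a hypothetical abstraction.
interface ScheduleCapable { fun register(scheduleNote: String) }

// Prefer an already running schedule-capable application; otherwise
// automatically execute a calendar application and tag the speech to it.
fun tagScheduleInfo(
    running: List<ScheduleCapable>,
    launchCalendar: () -> ScheduleCapable,
    scheduleNote: String
) {
    val target = running.firstOrNull() ?: launchCalendar()
    target.register(scheduleNote)
}
```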
In the foregoing embodiments, an example has been described of tagging a recognized sound signal to a predetermined item (including at least one application that can interlock with a voice search function) through a speech recognition process while performing a call, and using the recognized sound signal in the execution of the predetermined item.
Hereinafter, embodiments of storing and editing an execution result of the item and sharing the result with an external device will be described. FIG. 16 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention.
Referring to FIG. 16, the mobile terminal 100 performs a call with an external device through the mobile communication module 112 and/or the camera 121 (S400). The controller 180 records the voice call contents included in the communication by activating the microphone 122 while performing the call (S410) and stores the voice call contents (S420).
Next, FIG. 17 is a flowchart illustrating an embodiment shown in FIG. 16, and FIGS. 18A to 18C are diagrams illustrating the embodiment shown in FIG. 17. The control method may be executed under the control of the controller 180.
Referring to FIG. 17, the controller 180 displays categories in which the voice call contents can be stored on the touch screen 151 (S431). When one category is selected from the displayed categories (S432), the controller 180 stores the recorded voice call contents accordingly (S433).
Referring to FIG. 18A, in a state in which video call contents are being recorded, when a predetermined input to the touch screen 151 is received, the controller 180 displays a category list 60 for storing the recorded voice call contents on the touch screen 151.
The category list 60 may include a name 61, a phone number 62, and an address 63. Hereinafter, an example of selecting the name category 61 and storing voice call contents under the name category will be described.
Referring to FIG. 18B, the controller 180 maps the voice tag identifier 12 to the video call counterparty's entry in the name data list of the phonebook application and displays the voice tag identifier 12. Further, referring to FIG. 18C, when the phone number item 62 is selected from the category list 60, the controller 180 maps the voice tag identifier 12 to the counterparty's entry in the phone number data list of the phonebook application and displays the voice tag identifier 12.
Accordingly, a user of the mobile terminal 100 can easily see, on a per-contact basis in the contact list, that communication contents have been recorded and tagged. Further, when the voice tag identifier 12 is selected in the contact list, the controller 180 reproduces the recorded voice call contents.
According to a method of controlling a mobile terminal in accordance with an embodiment of the present invention, recorded communication contents may be edited, and the edited contents may be reused in the execution of a specific item.
FIG. 19 is a flowchart illustrating an embodiment shown in FIG. 16, and FIGS. 20A to 20D are diagrams illustrating the embodiment shown in FIG. 19. The control method may be executed under the control of the controller 180.
Referring to FIG. 19, in order to edit the recorded voice call contents, the controller 180 converts the recorded voice call contents to text (S441). Thereafter, the controller 180 receives a selection signal indicating a selection of a specific portion of the converted text (S442). Here, the specific portion of the converted text indicates at least one word of a text formed of a plurality of words.
The controller 180 displays at least one application that can interlock with the selected text on the touch screen 151 (S443). Here, an application that can interlock with the selected text is one that can use the selected text when the application executes. For example, when the selected text is a date, an application that can interlock with the selected text may be a calendar application.
The specific portion can be selected through a touch input of dragging over the corresponding text. Thereafter, when a specific application is selected (S444), the controller 180 controls execution of the selected application based on the selected text (S445).
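Steps S442 and S443 hinge on classifying the selected portion of text by attribute (phone number, date or time, place) and listing the applications that can consume it. The patterns in this Kotlin sketch are rough illustrative heuristics, not the classifier used by the terminal.

```kotlin
// Map a selected text portion to the applications that can interlock with
// it, based on simple attribute tests (assumed heuristics).
fun interlockableApplications(selectedText: String): List<String> {
    val apps = mutableListOf<String>()
    if (Regex("\\b\\d{9,11}\\b").containsMatchIn(selectedText)) {
        apps += "contact list"               // e.g., "Taehui Kim, 01099807129"
    }
    val hasTime = Regex("\\b\\d{1,2}\\s*(AM|PM)\\b", RegexOption.IGNORE_CASE)
        .containsMatchIn(selectedText)
    val hasMonth = Regex(
        "\\b(January|February|March|April|May|June|July|August|September|October|November|December)\\b"
    ).containsMatchIn(selectedText)
    if (hasTime || hasMonth) {
        apps += "calendar"                   // e.g., "Seoul Station, 2 PM, March 19"
    }
    return apps
}
```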
In order to describe the embodiment shown in FIG. 19 in detail, a description will be given with reference to FIGS. 20A to 20D. Referring to FIG. 20A, the controller 180 records voice call contents while performing a video call and displays the text 70 into which the recorded voice call contents have been converted on the touch screen 151.
Referring to FIG. 20B, a specific portion, for example, the portion “Taehui Kim, 01099807129” 71 of the converted text 70, may be selected. Referring to FIG. 20C, the controller 180 displays a list 80 of at least one application that can interlock with the selected text 71 on the touch screen 151.
The application list 80 may include information about applications 81 and 82 in which the selected text 71 can be stored and methods 83 and 84 of editing the selected text 71. For example, when a contact list application 81 is selected from the application list 80, the controller 180 may interlock the selected text 71 with the contact list application 81.
Referring to FIG. 20D, when the selected portion of the converted text 70 is “Seoul Station, 2 PM, March 19” 72, the controller 180 displays an application list 90 according to the attribute of the selected portion on the touch screen 151.
Since the selected portion is information related to a date, a time, and a place, the controller 180 includes an application that can interlock with the selected portion, for example, a calendar application, in the application list 90 and displays the list on the touch screen 151.
The application list 90 may include information about applications 91, 92, and 93 in which the selected text 72 can be stored and methods 94 and 95 of editing the selected text 72. When the calendar application 91 is selected from the application list 90, the controller 180 controls execution of the calendar application based on the selected text 72. That is, the date, time, and place information corresponding to the selected text 72 may be registered to the calendar application.
According to a method of controlling a mobile terminal in accordance with an embodiment of the present invention, a specific sound signal may be tagged to, and stored with, a record file in which voice call contents are recorded.
FIG. 21 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention, and FIGS. 22, 23A and 23B, and 24A and 24B are diagrams illustrating the embodiment shown in FIG. 21. The control method may be executed under the control of the controller 180.
Referring to FIG. 21, the controller 180 executes a voice search in a specific item based on the tagged voice call contents (S451). The controller 180 maps at least one voice search result to the recorded file and displays the at least one voice search result on the touch screen 151 (S452).
As shown in FIG. 22, the recorded voice call contents may be provided as a voice record file 800 in the form of a graph representing frequency magnitude over the communication time. The controller 180 tags voice search results 900 performed based on the voice call contents to the voice record file 800 and stores them.
Each voice search result 900 may be stored tagged to the time at which the voice search was performed. Referring to FIG. 22, in the voice search results 900, voice call contents were recorded for 19 seconds after communication started, and a first voice search result 901 stored at about 2 seconds, a second voice search result 902 stored at about 5 seconds, a third voice search result 903 stored at about 9 seconds, a fourth voice search result 904 stored at about 14 seconds, and a fifth voice search result 905 stored at about 17 seconds are tagged and stored.
Here, each of the voice search results 901, 902, 903, 904, and 905 was tagged to a predetermined item while performing the call and corresponds to a sound signal applied to the execution of an item. The voice search results are therefore distinguished from a record file in which voice call contents with the other party are simply recorded.
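The structure of FIG. 22 is essentially a list of search results keyed by their offset into the recording. The sketch below follows the offsets of the figure, but the summary strings are invented placeholders and the nearest-match lookup is an assumption of this sketch.

```kotlin
import kotlin.math.abs

// A voice search result tagged to the time at which it was stored.
data class TaggedResult(val offsetSeconds: Int, val summary: String)

// Offsets per FIG. 22; summaries are illustrative placeholders only.
val voiceRecordTags = listOf(
    TaggedResult(2,  "first result 901"),
    TaggedResult(5,  "second result 902"),
    TaggedResult(9,  "third result 903: schedule registered in the calendar"),
    TaggedResult(14, "fourth result 904"),
    TaggedResult(17, "fifth result 905")
)

// Selecting a point on the graph resolves to the nearest tagged result.
fun resultNear(second: Int): TaggedResult? =
    voiceRecordTags.minByOrNull { abs(it.offsetSeconds - second) }
```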
Referring to FIG. 23A, the communication period (e.g., 20 seconds) is divided into a communication call request segment and a segment in which communication is performed after the communication call is connected. A sound recognized in the communication call request segment may be the communication connection sound (music A) of the other party's terminal. Therefore, when an input selecting the first voice search result 901 is received, the controller 180 can change the communication connection sound (e.g., ring tone) of the mobile terminal 100 to music A. For example, the controller 180 displays text 801 asking the user whether they want to change the ring tone to music A.
Referring to FIG. 23B, in the segment in which communication is performed, voice call contents with the other party are generally recorded, and when an input selecting the third voice search result 903 is received, the controller 180 displays text 801 corresponding to the third voice search result on the touch screen 151.
Here, the third voice search result 903 shows that a process of registering a predetermined schedule in a calendar with the contents “Exit of gate 9, Kangnam station, 7:30 PM, December 30” took place while performing the call, interlocking the communication contents with a specific item.
Referring to FIG. 24A, when an input of moving a specific application onto a voice search result is received, the controller 180 uses the voice search result to control execution of the specific application.
For example, referring to FIG. 24A, when a drag input moving a map application onto the third voice search result 903 is received, the voice call contents used for the voice search result 903 may be used for execution of the map application.
That is, when a user selects the voice search result 903, the controller 180 displays schedule information 803 in the form of a message on the touch screen 151. Further, when a touch input dragging a map application onto the voice search result 903 is received, the controller 180 executes the map application, searches for the position corresponding to the voice search result on a map, and displays the position.
Further, referring to FIG. 24B, the controller 180 maps a map application icon MA to the third voice search result 903 and displays the icon MA.
According to a method of controlling a mobile terminal in accordance with an embodiment of the present invention, the kind of item applied may be set differently according to an attribute of the sound signal output in each communication segment while performing a call.
Next, FIG. 25 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention. Referring to FIG. 25, the controller 180 performs one of a voice call and a video call through the mobile communication module 112 and/or the camera 121 (S500).
The controller 180 records the voice call contents (S510) and determines an attribute of the sound signal output in each communication segment (S520). Accordingly, the controller 180 sets the item to which a sound signal is tagged differently according to the attribute of the output sound signal (S530). Here, the sound signals output on a communication segment basis include a sound signal output in the communication call connection request segment and a sound signal output in the segment in which voice communication with the other party is performed after the communication call is connected.
As to the attribute of the sound signal output in each segment, the sound output in the communication call connection request segment may be a communication connection sound, while the voice after the communication call is connected may be a speaker's voice according to the communication contents with the other party.
Therefore, before a communication call is connected (e.g., in the communication call request segment of FIG. 23A), the controller 180 executes a predetermined application that can identify a communication connection sound and thus identifies the music used as the communication connection sound.
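Putting S520 and S530 together: the segment an output sound signal belongs to determines the kind of item it is tagged to. In the sketch below, the segment names follow the description, while the item choices are illustrative assumptions.

```kotlin
// The two communication segments distinguished in the description.
enum class CallSegment { CONNECTION_REQUEST, CONVERSATION }

// S520-S530: set the tag target differently by segment attribute.
fun itemForSegment(segment: CallSegment): String = when (segment) {
    // Before the call is connected, the signal is a connection sound,
    // so an application that can identify music is applied.
    CallSegment.CONNECTION_REQUEST -> "connection-sound (music) identifier"
    // After connection, the signal is the speakers' conversation,
    // so voice-search and speech-to-text items are applied.
    CallSegment.CONVERSATION -> "voice search and speech-to-text items"
}
```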
According to a method of controlling a mobile terminal in accordance with an embodiment of the present invention, the mobile terminal 100 may tag video call contents to the video call counterparty's image and store the video call contents.
FIG. 26 is a flowchart illustrating a method of controlling a mobile terminal according to an embodiment of the present invention, and FIGS. 27A and 27B are diagrams illustrating the embodiment shown in FIG. 26. The control method may be executed under the control of the controller 180. Hereinafter, the method will be described with reference to FIGS. 26, 27A, and 27B.
The controller 180 executes a video call (S600) and displays the other party's image VCI on the touch screen 151 (S610). The controller 180 also records the video call contents while performing the video call. The controller 180 captures the other party's image displayed on the touch screen 151 (S620) and tags the voice call contents to the captured image (S630). Further, the controller 180 maps the voice tag identifier 12 to the captured image and displays the voice tag identifier 12 (S640). FIG. 27A illustrates these features.
Referring to FIG. 27B, the controller 180 stores the captured image to which the voice tag identifier 12 is mapped in the memory 160 (S650). Further, the controller 180 may share the captured image to which the voice tag identifier 12 is mapped with an external device.
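The FIG. 26 flow reduces to pairing a captured frame with the recorded voice contents and marking the pair with the voice tag identifier. A sketch with invented container types follows; the disclosure does not fix a storage format.

```kotlin
// Invented containers; the disclosure does not specify a storage format.
data class VoiceTaggedCapture(
    val capturedImage: ByteArray,      // S620: the counterparty's image
    val voiceCallContents: ByteArray,  // S630: the recording tagged to it
    val hasVoiceTagIdentifier: Boolean = true  // S640: identifier 12 mapped
)

// S650: store the tagged capture; sharing would transmit the same object
// to an external device.
fun storeCapture(memory: MutableList<VoiceTaggedCapture>, capture: VoiceTaggedCapture) {
    memory += capture
}
```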
The above-described method of controlling a mobile terminal according to the present invention may be written as a program to be executed in a computer and provided in a computer readable recording medium.
The method of controlling the mobile terminal according to the present invention may be executed through software. When executed in software, the constituent means of the present invention are code segments that perform the required tasks. The programs or code segments may be stored in a processor readable medium or may be transmitted by a computer data signal combined with a carrier wave over a transmission medium or a communication network.
The computer readable recording medium may be any data storage device for storing data that can be read by a computer system. The computer readable recording medium may include, for example, a ROM, a RAM, a CD-ROM, a DVD±ROM, a DVD-RAM, a magnetic tape, a floppy disk, a hard disk, and an optical data storage device. The computer readable recording medium may also be distributed in a computer system connected to a network and thus a computer readable code may be stored and executed in a distributed manner.
The foregoing embodiments and features are merely exemplary in nature and are not to be construed as limiting the present invention. The disclosed embodiments and features may be readily applied to other types of apparatuses. The description of the foregoing embodiments is intended to be illustrative, and not to limit the scope of the claims. Many alternatives, modifications, and variations will be apparent to those skilled in the art.

Claims (20)

What is claimed is:
1. A mobile terminal, comprising:
a speech recognition unit configured to recognize input speech;
a mobile communication unit configured to perform a video call with at least one other terminal; and
a controller configured to:
receive a predetermined input while performing the video call,
recognize voice call contents through the speech recognition unit based on the received predetermined input,
display, while performing the video call, a list including at least one application that was executing before the video call was performed, wherein the at least one application is interlocked with a voice search function,
receive an input for selecting a specific application from the displayed list,
tag the recognized voice call contents to the selected application executed by the mobile terminal,
execute the selected application using the tagged voice call contents,
transmit an execution result of the selected application to at least one external device through a wireless communication unit of the mobile terminal, and
display a screen related to the video call after transmitting the execution result of the selected application.
2. The mobile terminal of claim 1, wherein the controller is further configured to execute a search operation in the selected application according to a search word corresponding to the voice call contents tagged to the selected application.
3. The mobile terminal of claim 2, wherein the at least one application includes any one or more of a web browser application, phonebook application, map application, e-book application, electronic dictionary application, and calendar application.
4. The mobile terminal of claim 2, wherein the controller is further configured to map a voice tagging identifier identifying that the recognized voice call contents are tagged to the at least one application, and to control a display unit to display the voice tagging identifier along with an indicator indicating the at least one application.
5. The mobile terminal of claim 1,
wherein the at least one application is changed in real time according to the recognized voice call contents.
6. The mobile terminal of claim 1, wherein the controller is further configured to store the voice call contents in a memory, to convert the stored voice call contents to text, to display the converted text, to receive a selection signal indicating a selection of a portion of the displayed text, to display a list of applications that can be interlocked with the selected portion of text, and to execute a selected application using the selected portion of text.
7. The mobile terminal of claim 1, wherein the controller is further configured to display at least one category to store the voice call contents, to receive a selection signal indicating a selection of one category, and to store the voice call contents according to the selected one category.
8. The mobile terminal of claim 1, wherein the video call includes at least a first call interval followed by a second call interval in which the first call interval includes an output sound signal, and
wherein the recognized voice call contents includes the output sound signal in the first call interval.
9. The mobile terminal of claim 1,
wherein the controller is further configured to capture a counterparty's image displayed on a display unit of the mobile terminal while the video call is performed and to tag the recognized voice call contents to the captured image.
10. The mobile terminal of claim 9, wherein the controller is further configured to map a voice tagging identifier to the captured image and to store the voice tagging identifier mapped to the captured image in a memory.
11. The mobile terminal of claim 1, wherein the controller is further configured to transmit an execution result of the at least one application to at least one external device through a wireless communication unit of the mobile terminal.
12. The mobile terminal of claim 1,
wherein the controller is configured to acquire a sound signal through a microphone while performing the video call with the at least one external device through the mobile communication unit, to execute a voice search through a predetermined voice search application based on the acquired sound signal, and to store a voice search result.
13. A method of controlling a mobile terminal, the method comprising:
performing, via a mobile communication unit of the mobile terminal, a video call with at least one other terminal;
receiving, via a controller of the mobile terminal, a predetermined input while performing the video call;
recognizing, via a speech recognition unit of the mobile terminal, input voice call contents based on the received predetermined input;
displaying, while performing the video call, via a display unit of the mobile terminal, a list including at least one application that was executing before the video call was performed, wherein the at least one application is interlocked with a voice search function;
receiving, via the controller, an input for selecting a specific application from the displayed list;
tagging, via the controller, the recognized voice call contents to the selected application executed by the mobile terminal;
executing, via the controller, the selected application using the tagged voice call contents;
transmitting an execution result of the selected application to at least one external device through a wireless communication unit of the mobile terminal; and
displaying a screen related to the video call after transmitting the execution result of the selected application.
14. The method of claim 13, further comprising:
executing, via the controller, a search operation in the selected application according to a search word corresponding to the voice call contents tagged to the selected application,
wherein the at least one application includes any one or more of a web browser application, phonebook application, map application, e-book application, electronic dictionary application, and calendar application.
15. The method of claim 13, further comprising:
mapping, via the controller, a voice tagging identifier identifying that the recognized voice call contents are tagged to the at least one application; and
displaying, via a display unit of the mobile terminal, the voice tagging identifier along with an indicator indicating the at least one application.
16. The method of claim 13, wherein the at least one application is changed in real time according to the recognized voice call contents.
17. The method of claim 13, further comprising:
storing, in a memory associated with the mobile terminal, the voice call contents;
converting, via the controller, the stored voice call contents to text;
displaying, via a display unit of the mobile terminal, the converted text;
receiving, via the controller, a selection signal indicating a selection of a portion of the displayed text;
displaying, via the display unit, a list of applications that can be interlocked with the selected portion of text; and
executing, via the controller, a selected application using the selected portion of text.
18. The method of claim 13, further comprising:
displaying, via the display unit, at least one category to store the voice call contents; and
receiving, via the controller, a selection signal indicating a selection of one category, and storing the voice call contents according to the selected one category.
19. The method of claim 13, wherein the video call includes at least a first call interval followed by a second call interval in which the first call interval includes an output sound signal, and
wherein the recognized voice call contents includes the output sound signal in the first call interval.
20. The method of claim 13,
further comprising:
capturing, via the controller, a counterparty's image displayed on a display unit of the mobile terminal while the video calling operation is performed, and tagging the recognized voice call contents to the captured image; and
mapping, via the controller, a voice tagging identifier to the captured image, and storing the voice tagging identifier mapped to the captured image in a memory.
US13/565,320 2012-01-06 2012-08-02 Mobile terminal and method of controlling the same Active 2033-06-01 US8963983B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020120002206A KR101912409B1 (en) 2012-01-06 2012-01-06 Mobile terminal and mothod for controling of the same
KR10-2012-0002206 2012-01-06

Publications (2)

Publication Number Publication Date
US20130176377A1 US20130176377A1 (en) 2013-07-11
US8963983B2 true US8963983B2 (en) 2015-02-24

Family

ID=46934365

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/565,320 Active 2033-06-01 US8963983B2 (en) 2012-01-06 2012-08-02 Mobile terminal and method of controlling the same

Country Status (3)

Country Link
US (1) US8963983B2 (en)
EP (1) EP2613507B1 (en)
KR (1) KR101912409B1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5772331B2 (en) * 2011-07-20 2015-09-02 カシオ計算機株式会社 Learning apparatus and program
US8947220B2 (en) * 2012-10-31 2015-02-03 GM Global Technology Operations LLC Speech recognition functionality in a vehicle through an extrinsic device
US20140189572A1 (en) * 2012-12-31 2014-07-03 Motorola Mobility Llc Ranking and Display of Results from Applications and Services with Integrated Feedback
US20140215401A1 (en) * 2013-01-29 2014-07-31 Lg Electronics Inc. Mobile terminal and control method thereof
US8994774B2 (en) * 2013-08-01 2015-03-31 Sony Corporation Providing information to user during video conference
KR102332675B1 (en) * 2013-09-02 2021-11-30 삼성전자 주식회사 Method and apparatus to sharing contents of electronic device
CN104423552B (en) * 2013-09-03 2017-11-03 联想(北京)有限公司 The method and electronic equipment of a kind of processing information
KR101584887B1 (en) * 2014-03-07 2016-01-22 주식회사 엘지유플러스 Method and system of supporting multitasking of speech recognition service in in communication device
US9696806B2 (en) * 2014-07-02 2017-07-04 Immersion Corporation Systems and methods for multi-output electrostatic haptic effects
KR101667109B1 (en) * 2014-09-12 2016-10-17 엘지전자 주식회사 Mobile device and method for controlling the same
KR102206387B1 (en) * 2014-12-08 2021-01-22 엘지전자 주식회사 Mobile terminal and control method for the mobile terminal
US20170193063A1 (en) * 2016-01-06 2017-07-06 Samsung Electronics Co., Ltd. Mobile device and method of acquiring and searching for information thereof
KR102542766B1 (en) * 2016-11-17 2023-06-14 엘지전자 주식회사 Display device and operating method thereof
US11074292B2 (en) * 2017-12-29 2021-07-27 Realwear, Inc. Voice tagging of video while recording

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100696439B1 (en) * 2002-07-02 2007-03-19 노키아 코포레이션 Method and communication device for handling data records by speech recognition
KR20100034856A (en) * 2008-09-25 2010-04-02 엘지전자 주식회사 Mobile terminal and method of providing search function using same
KR101642001B1 (en) * 2010-02-01 2016-07-22 엘지전자 주식회사 Mobile terminal and contents generating method for mobile terminal

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070179778A1 (en) * 2002-02-07 2007-08-02 Sap Ag Dynamic Grammar for Voice-Enabled Applications
US20110206198A1 (en) * 2004-07-14 2011-08-25 Nice Systems Ltd. Method, apparatus and system for capturing and analyzing interaction based content
US20070073718A1 (en) * 2005-09-14 2007-03-29 Jorey Ramer Mobile search service instant activation
US20110202874A1 (en) * 2005-09-14 2011-08-18 Jorey Ramer Mobile search service instant activation
US20070121814A1 (en) * 2005-11-30 2007-05-31 Mypeople, L.L.C. Speech recognition based computer telephony system
EP2114058A2 (en) 2008-04-30 2009-11-04 LG Electronics, Inc. Automatic content analyser for mobile phones
EP2146491A1 (en) 2008-07-14 2010-01-20 LG Electronics Inc. Mobile terminal and method for displaying menu thereof
US20100069123A1 (en) * 2008-09-16 2010-03-18 Yellowpages.Com Llc Systems and Methods for Voice Based Search
US20110069024A1 (en) * 2009-09-21 2011-03-24 Samsung Electronics Co., Ltd. Input method and input device of portable terminal
US20120035931A1 (en) * 2010-08-06 2012-02-09 Google Inc. Automatically Monitoring for Voice Input Based on Context
US20130095805A1 (en) * 2010-08-06 2013-04-18 Michael J. Lebeau Automatically Monitoring for Voice Input Based on Context
US20140052452A1 (en) * 2012-08-16 2014-02-20 Nuance Communications, Inc. User interface for entertainment systems
US20140153705A1 (en) * 2012-11-30 2014-06-05 At&T Intellectual Property I, Lp Apparatus and method for managing interactive television and voice communication services
US20140195244A1 (en) * 2013-01-07 2014-07-10 Samsung Electronics Co., Ltd. Display apparatus and method of controlling display apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9380264B1 (en) * 2015-02-16 2016-06-28 Siva Prasad Vakalapudi System and method for video communication
US11561763B2 (en) 2016-11-28 2023-01-24 Samsung Electronics Co., Ltd. Electronic device for processing multi-modal input, method for processing multi-modal input and server for processing multi-modal input

Also Published As

Publication number Publication date
EP2613507A2 (en) 2013-07-10
EP2613507A3 (en) 2013-07-31
US20130176377A1 (en) 2013-07-11
KR101912409B1 (en) 2018-10-26
KR20130081176A (en) 2013-07-16
EP2613507B1 (en) 2017-08-02

Similar Documents

Publication Publication Date Title
US8963983B2 (en) Mobile terminal and method of controlling the same
KR101466027B1 (en) Mobile terminal and its call contents management method
EP2755399B1 (en) Electronic device and control method thereof
KR101658087B1 (en) Mobile terminal and method for displaying data using augmented reality thereof
US10241743B2 (en) Mobile terminal for matching displayed text with recorded external audio and method of controlling the mobile terminal
US9304737B2 (en) Electronic device and method of controlling the same
KR20150086030A (en) Mobile terminal and controlling method thereof
US9247144B2 (en) Mobile terminal generating a user diary based on extracted information
US20130104032A1 (en) Mobile terminal and method of controlling the same
KR101626874B1 (en) Mobile terminal and method for transmitting contents thereof
US8180370B2 (en) Mobile terminal and method of display position on map thereof
KR101698096B1 (en) Method for searching information by using drawing and terminal thereof
US9565289B2 (en) Mobile terminal and method of controlling the same
KR101798968B1 (en) Mobiel Terminal And Mehtod For Controlling The Same
US9336242B2 (en) Mobile terminal and displaying method thereof
KR101669520B1 (en) Electronic device and control method thereof
KR20140133153A (en) Mobile terminal and method for controlling of the same
KR101781849B1 (en) Mobile terminal and method for controlling the same
KR20150085749A (en) Mobile terminal and controlling method thereof
US20150373184A1 (en) Mobile terminal and control method thereof
KR101604698B1 (en) Mobile terminal and method for controlling the same
CN110362760B (en) Method, device and medium for intelligently prompting search results
KR101979260B1 (en) Mobile terminal
KR101531194B1 (en) Method of controlling application interworking with map key and mobile terminal using the same
KR101676798B1 (en) Electronic Device, Method Of Transferring Message Of Eletronic Device, Method Of Providing User Interface Of Eletronic Device

Legal Events

Date Code Title Description
AS Assignment

Owner name: LG ELECTRONICS INC., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HO, JAESEOK;REEL/FRAME:028721/0870

Effective date: 20120420

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8