US20130263032A1 - Method and apparatus for fluid graphical user interface - Google Patents
- Publication number
- US20130263032A1 (application US13/910,753)
- Authority
- US
- United States
- Prior art keywords
- user interface
- objects
- selectable
- user
- graphical user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Definitions
- Service providers and device manufacturers are continually challenged to deliver value and convenience to consumers by, for example, providing compelling services and a vast array of media and products.
- Service providers can provide various user interface applications for use on user equipment that enhance the user's interface experience with the user equipment and utilization of the various products and services offered by the service provider.
- users can have difficulty utilizing such equipment and searching through the vast amounts of data and applications accessible on the user equipment.
- Currently available user interface applications have limitations and thus fail to provide the user with an interface that can allow for the user to fully appreciate and utilize the various products and services offered by the service provider.
- the modern user interface is an essential part of entertainment and media consumption, and thus it should also provide a playful and enjoyable experience.
- a method comprises causing, at least in part, display of selectable objects on a graphical user interface, where each of the selectable objects corresponds to data or an application accessible via the graphical user interface.
- the method further comprises causing, at least in part, display of the selectable objects in motion travelling across the graphical user interface based on a category of the selectable object or context dependent data, and allowing user selection and manipulation of the selectable objects displayed on the graphical user interface.
- an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: cause, at least in part, display of selectable objects on a graphical user interface, wherein each of the selectable objects corresponds to data or an application accessible via the graphical user interface; cause, at least in part, display of the selectable objects in motion travelling across the graphical user interface based on a category of the selectable object or context dependent data; and allow user selection and manipulation of the selectable objects displayed on the graphical user interface.
- a computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to at least perform the following steps: causing, at least in part, display of selectable objects on a graphical user interface, wherein each of the selectable objects corresponds to data or an application accessible via the graphical user interface; causing, at least in part, display of the selectable objects in motion travelling across the graphical user interface based on a category of the selectable object or context dependent data; and allowing user selection and manipulation of the selectable objects displayed on the graphical user interface.
- an apparatus comprises means for causing, at least in part, display of selectable objects on a graphical user interface, where each of the selectable objects corresponds to data or an application accessible via the graphical user interface.
- the apparatus further comprises means for causing, at least in part, display of the selectable objects in motion travelling across the graphical user interface based on a category of the selectable object or context dependent data, and means for allowing user selection and manipulation of the selectable objects displayed on the graphical user interface.
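The claimed method can be illustrated with a minimal sketch (this Python is not part of the patent; the class, the category-to-speed mapping, and all names are assumptions): each selectable object corresponds to data or an application, and its motion across the display depends on its category.

```python
from dataclasses import dataclass

# Hypothetical per-category flow speeds (pixels per frame); the patent does
# not specify how category affects motion, so this mapping is an assumption.
CATEGORY_SPEED = {"music": 2.0, "contacts": 1.0, "photos": 1.5}

@dataclass
class SelectableObject:
    """An on-screen object corresponding to data or an application."""
    label: str
    category: str
    x: float = 0.0  # horizontal position on the GUI

    def advance(self) -> None:
        """Move the object across the screen at its category's speed."""
        self.x += CATEGORY_SPEED.get(self.category, 1.0)

song = SelectableObject("Album A", "music")
song.advance()
print(song.x)  # 2.0
```

A real implementation would redraw each frame and handle user selection; the sketch only shows category-dependent motion.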
- FIG. 1 is a diagram of a system capable of providing a fluid graphical user interface, according to one embodiment
- FIG. 2 is a diagram of the components of user equipment including a user interface widget, according to one embodiment
- FIG. 3A is a flowchart of a process for providing a fluid graphical user interface, according to one embodiment
- FIG. 3B is a flowchart of a process for providing a fluid graphical user interface allowing display of categorized objects, according to one embodiment
- FIG. 3C is a flowchart of a process for providing a fluid graphical user interface allowing selection and manipulation of objects shown in the graphical user interface, according to one embodiment
- FIGS. 4A-4C are diagrams of graphical user interfaces depicting the processes of FIGS. 3A-3C, according to various embodiments;
- FIG. 5 is a diagram of a graphical user interface, according to various embodiments.
- FIGS. 6A-6C are diagrams of mobile devices displaying graphical user interfaces, according to various embodiments.
- FIG. 7 is a diagram of hardware that can be used to implement an embodiment of the invention.
- FIG. 8 is a diagram of a chip set that can be used to implement an embodiment of the invention.
- FIG. 9 is a diagram of a mobile terminal (e.g., handset) that can be used to implement an embodiment of the invention.
- FIG. 1 is a diagram of a system capable of providing a fluid graphical user interface, according to an embodiment.
- the system 100 comprises user equipment (UE) 101A . . . 101N and 103 having connectivity to a communication network 105.
- a service provider server 107 is also provided and connected to the communication network 105.
- UE 101A . . . UE 101N, UE 103, and the service provider 107 are each shown as including a user interface widget 109A . . .
- UE 101A could be provided as a mobile device having the user interface widget 109A, and such UE 101A could provide the user interface displays described herein without the need for any other user interface widget.
- the UE 101A can utilize the user interface widget 109A in order to provide such a display, or the user interface widget 103A or the user interface widget 111, or a combination thereof, depending on whether the widget is being run locally or remotely.
- UE 103 is shown as being connected to UE 101A by a dashed line, which can be any form of wireless or wired connection, such as, for example, when a mobile device is connected with a computer for syncing, etc.
- the communication network 105 of system 100 includes one or more networks such as a data network (not shown), a wireless network (not shown), a telephony network (not shown), a short range wireless network (not shown), a broadcast network (not shown), or any combination thereof.
- the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network.
- the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (WiFi), satellite, mobile ad-hoc network (MANET), wireless LAN (WLAN), Bluetooth® network, Ultra Wide Band (UWB) network, and the like.
- the UEs 101A . . . 101N and 103 can be any type of mobile terminal, fixed terminal, or portable terminal including a mobile handset, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, communication device, desktop computer, laptop computer, Personal Digital Assistants (PDAs), audio/video player, digital still/video camera, game device, analog/digital television broadcast receiver, analog/digital radio broadcast receiver, positioning device, electronic book device, or any combination thereof. It is also contemplated that the UEs 101A . . . 101N can support any type of interface to the user (such as "wearable" circuitry, etc.).
- a protocol includes a set of rules defining how the network nodes within the communication network 105 interact with each other based on information sent over the communication links.
- the protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information.
- the conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model.
- Each packet typically comprises (1) header information associated with a particular protocol, and (2) payload information that follows the header information and contains information that may be processed independently of that particular protocol.
- the packet includes (3) trailer information following the payload and indicating the end of the payload information.
- the header includes information such as the source of the packet, its destination, the length of the payload, and other properties used by the protocol.
- the data in the payload for the particular protocol includes a header and payload for a different protocol associated with a different, higher layer of the OSI Reference Model.
- the header for a particular protocol typically indicates a type for the next protocol contained in its payload.
- the higher layer protocol is said to be encapsulated in the lower layer protocol.
- the headers included in a packet traversing multiple heterogeneous networks, such as the Internet, typically include a physical (layer 1) header, a data-link (layer 2) header, an internetwork (layer 3) header, and a transport (layer 4) header, and various application headers (layer 5, layer 6, and layer 7) as defined by the OSI Reference Model.
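The layered encapsulation described above can be sketched in a few lines (an illustrative toy, not a real protocol stack; the header byte strings are invented): each lower layer wraps the higher layer's bytes with its own header.

```python
# Illustrative sketch of protocol encapsulation: each layer prepends its
# header to the bytes produced by the layer above it.

def encapsulate(payload: bytes, headers: list[bytes]) -> bytes:
    """Wrap an application payload with headers from lowest to highest layer.

    `headers` is ordered lowest layer first, so we apply them in reverse:
    the highest layer wraps the payload first, then each lower layer wraps
    the result, leaving the lowest-layer header outermost.
    """
    packet = payload
    for header in reversed(headers):
        packet = header + packet
    return packet

# Toy layer-2, layer-3, and layer-4 headers around an application payload.
packet = encapsulate(b"app-data", [b"L2|", b"L3|", b"L4|"])
print(packet)  # b'L2|L3|L4|app-data'
```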
- One or more embodiments described herein are related to multimodal user interface (UI) concepts and graphical UIs, and can act as a replacement for current UIs and can replace the entire UI framework.
- GUIs are intended to simplify navigation and make it easier to find things and manipulate them.
- the desktop metaphor used in personal computers is a common example of GUIs.
- For smaller screens, such as on mobile telephones, personal digital assistants (PDAs), digital media players, etc., the metaphors are slightly different, such as, for example, an idle screen or an application view arrangement. Even with these variations, they are based on the same basic principle as typical text-menu-based UIs, where a user has to actively navigate through various menus in order to find things, which means that the user has to know what he or she is after. However, if the user is not sure what they are looking for, then it is difficult for the user to find it in the various menus.
- a further limitation relates to how UIs are configured and adapted to user preferences.
- settings and configuration controls are in a different view or in a different mode of operation, and therefore, the user has to open separate settings dialogs, change settings, close the settings dialogs, and then the user can continue normal UI operations.
- such procedures distract the user and increase the difficulty in performing such settings changes, thereby reducing the effectiveness of the system. Therefore, an improved UI is desired.
- human beings are not always rational. They can act spontaneously based on associations or stimuli. In other words, a user can decide to do something when he or she sees something. For many decisions and actions, human beings need a stimulus or some triggering event. Very static surroundings have little means of providing such stimulus. For example, people may go to shops to spontaneously browse the shelves without knowing what they would like to buy. The same concept applies to computers and smart phones, where the user may simply want to browse through the various applications and data on the device without having a specific destination in mind.
- the user's interest may suddenly be triggered by some association, whereby the user connects a visual item on the shelf or on the device to some older memory, and based on this association the user decides to buy the product or open the data/application.
- traditional GUIs do not support the spontaneous human behavior described above. If a user is familiar with the system and its navigation structure and has some specific task in mind, then traditional GUIs are rather well suited for the task. However, if the user is unfamiliar with the system and navigation structure, then traditional UIs can be very difficult for the user to utilize to their fullest potential. Also, if the user just wants to kill some time or do a "window shopping" kind of activity, then such activities are not well supported by traditional UIs.
- the device may contain functions or data the user is not aware of, and thus cannot even find. Static graphical presentations do not trigger new associations.
- Embodiments described herein advantageously provide GUIs that add a randomness aspect and a type of "living" functionality, which feeds the user new "fuel" for associations in order to trigger new and even unexpected events.
- Modern mobile devices are typically relatively small, and therefore offer challenges for typical GUIs.
- the screen is typically relatively small and cannot hold very much information at a time. Limited screen space usually leads to difficult navigation through deep menus, which may also lead to loss of position and uncertainty about how to get back or how to find items in complex menu structures.
- Modern devices often use a desktop metaphor (e.g., windows) or home screens or idle screens (e.g., the S60 idle screen that runs on the Symbian OS (operating system)), in which there are typically a few icons or widgets, which the user can usually configure.
- modern mobile devices have many functions and can store a lot of data, so selecting only a few widgets for the screen is difficult, and the screen can fill up quickly.
- embodiments of the GUI described herein advantageously provide a new and unique way to present data and device functions (or applications) to the user, which takes into account an association process, by which the human brain processes inputs.
- the GUI presents data and applications as “objects” that are presented in a fluid manner as flowing across the display so as to provide the user with a novel manner in which to access and utilize the data and applications.
- in effect, data or applications navigate or flow to the user, so the user has to just wait, like a hunter, and act when he or she sees the target.
- the “objects” can be any piece of data (e.g., contact information, pictures, videos, movies, music, messages, files, calendar entries, web links, game data, electronic books, television channels and/or programs, radio broadcasting channels and/or programs, media streams, point of interest (POI) information, data (e.g., regarding various products for sale online, such data being used to identify the products during an online shopping search, etc.), etc.
- the GUI system can treat all objects regardless of their content in the same way, and high level manipulation of any object can be identical. Therefore, the GUI can present objects from different categories and abstraction layers, and those objects can be manipulated in the same manner regardless of their category or abstraction layer. For example, the user can create object groups, where contacts, links, applications, music, etc. can be within one group.
- a “source” is a category designation that is used to generate a flow of objects within that category on the GUI.
- the source can be a labeled or non-labeled area of the display screen, from which objects start flowing across the GUI.
- the source can be broadly defined to include all data and applications accessible by the GUI, or it can be more narrowly categorized by application (e.g., all applications, mapping applications, messaging applications, media playing applications, etc.), or by a data item (e.g., all applications and data that have some predefined relationship to a particular contact entry, such as all photos, messages, etc., related to that contact).
- there can be one or more different sources of objects provided on the GUI at any given time, and the user can manipulate the source(s) by activating or deactivating the source, by defining the location of the source on the GUI and the direction of flow of objects therefrom, and by defining the boundaries of the source (e.g., if the source is music, then the user could limit it to a certain genre, certain recording date(s), or certain artists, etc.).
- the sources are located on the edge of the screen and are labeled using a transparent bar. The user may, for example, activate a source on the left side of the GUI by making a left-to-right stroke motion across a touch screen, and, after that, objects that are associated with that source begin flowing across the GUI from left to right.
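The stroke-to-activation behavior above can be sketched as follows (the class, method names, and gesture encoding are illustrative assumptions, not defined by the patent):

```python
# Hypothetical mapping of a touch-screen stroke to source activation.

class Source:
    """A labeled category area on a screen edge that emits flowing objects."""
    def __init__(self, label: str) -> None:
        self.label = label
        self.active = False
        self.flow_direction = None

def handle_stroke(source: Source, direction: str) -> None:
    """A stroke across the touch screen activates the source; objects
    associated with it then flow across the GUI in the stroke's direction."""
    source.active = True
    source.flow_direction = direction

music = Source("music")
handle_stroke(music, "left-to-right")
print(music.active, music.flow_direction)  # True left-to-right
```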
- the user is provided with means to filter the stream of objects flowing onto the screen. All search methods are available for all content types, when applicable. If the user learns to search for a contact by using text-based search, he can apply this skill to any object that contains something corresponding to the search text string. In certain embodiments, there are no separate interfaces for searching for a contact, a message, or any other kind of content. Some search methods are better suited to searching for specific content; however, it is up to the user to decide the methods, and thus the system does not set predefined restrictions on the user. As an example, the S60 operating system provides text-based search for finding a contact.
- these high level object manipulation and search methods are the same for all the objects and contents.
- the system just provides a set of searching methods, and it is up to the user how he applies them to the objects available.
- any object can act as a source.
- the user can transform any flowing object into a source by activating the object.
- One embodiment of this activation is that the user drags the flowing object into a specific area of the screen.
- sources can be placed on the side of the screen. If the user drags some flowing object into that position, it will transform itself into a source and start producing content for the flow. That content can be anything that is somehow associated with that object.
- a group object, when acting as a source, will create a flow of objects that belong to that group, such as a group of contacts.
- the user can also drag an individual contact to act as a source. Then that contact, acting as a source, can flow out relevant contact-dependent data, such as friends of that contact or pictures relating to that contact.
- a source can also be interpreted as one form of any object, or as a state of an object. An object either flows across the screen, acting as itself, or the object acts as a source, presenting all associations relevant to that object.
- These source elements, or objects acting as a source, can also be stacked on the screen as a "stack of cards". If the user has put some object at the side of the screen to act as a source, then he can drag a new object on top of the old source, and that new object will start acting as a source. However, when the user drags the latest object away from the source area, the original object under the new one will activate itself again.
- the user can stack an unlimited number of objects into the source stack, and instead of taking objects out of the stack one by one, the user can also flip through the stack like flipping through a deck of cards. The top object visible on the stack is always active and produces content for the flow. This flipping of the source stack can be implemented on a touch screen by gestures or strokes mimicking the real flipping actions of a user's hand or finger.
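The "stack of cards" behavior can be sketched with a simple stack where only the topmost object acts as the active source (class and method names are illustrative assumptions):

```python
# Sketch of the source stack: dragging an object on top makes it the
# active source; removing or flipping re-activates another object.

class SourceStack:
    def __init__(self) -> None:
        self._stack: list[str] = []

    def drag_onto(self, obj: str) -> None:
        """Dragging a new object on top makes it the active source."""
        self._stack.append(obj)

    def drag_away(self) -> str:
        """Removing the top object re-activates the one beneath it."""
        return self._stack.pop()

    def flip(self) -> None:
        """Flip through the stack like a deck of cards: the top card
        moves to the bottom and the next one becomes active."""
        if self._stack:
            self._stack.insert(0, self._stack.pop())

    @property
    def active(self):
        """Only the top object of the stack produces content for the flow."""
        return self._stack[-1] if self._stack else None

stack = SourceStack()
stack.drag_onto("contacts")
stack.drag_onto("music")
print(stack.active)  # music
stack.flip()
print(stack.active)  # contacts
```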
- the GUI introduces a dynamic idle-screen type of interface, in which objects flow across the screen, and in which human interaction and/or context-related data (e.g., location of the device, time of day, etc.) can affect the flow of objects and/or the category definition of the source from which the objects flow. Objects will appear on the sides of the screen, flow across the screen, and then disappear off another side of the screen if the user does not access or manipulate them.
- the user has full control of the flow (e.g., speed, direction, content, size of objects, number of (moving and/or static) objects visible simultaneously at any given time, pattern of flow, etc.), so the user can speed it up, “kick” unwanted objects out, “pin” objects at a location on the GUI, move objects on the GUI, select an object and perform actions related to that object, etc.
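The user-controllable flow properties enumerated above can be gathered in a simple settings container (the field names and default values are assumptions, chosen only to mirror the list in the text):

```python
from dataclasses import dataclass

# Hypothetical container for the flow properties the user can control.

@dataclass
class FlowSettings:
    speed: float = 1.0               # multiplier applied to object motion
    direction: str = "left-to-right" # pattern/direction of the flow
    object_size: int = 48            # object size in pixels
    max_visible: int = 12            # moving and static objects shown at once

settings = FlowSettings()
settings.speed *= 2  # the user "speeds up" the flow
print(settings.speed)  # 2.0
```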
- the user can also control the type of objects flowing past his vision with some simple multimodal actions or gestures, such as strokes on a touch screen or speech, by activating and manipulating the sources on the edges of the GUI.
- the GUI therefore does not require view switching or deep menu structures as in traditional UIs, since the data and applications of the device are just flowing past the user, and the user can act when he or she sees something interesting.
- the user can adapt the flow's content, speed, and type on the fly.
- the GUI system can learn the user's habits and preferences and adapt thereto, since the user can easily enable or disable objects or change the flow properties of the GUI to fit the user's needs or mood. Based on learned/tracked habits (e.g., selections made, associations made, objects kicked, etc.) of a user, the system can provide suggested objects to the user, for example, by increasing the frequency and/or priority of certain objects that correlate to the learned/tracked habits of the user.
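One way to realize the habit-based suggestion described above is a simple frequency weighting: categories the user selects more often are given a higher weight in the flow. The weighting scheme below is purely an assumption; the patent does not specify one.

```python
from collections import Counter

# Minimal sketch of habit learning: track selections per category and
# boost the flow weight of frequently selected categories.

selections = Counter()

def record_selection(category: str) -> None:
    """Track a user selection of an object in the given category."""
    selections[category] += 1

def flow_weight(category: str) -> float:
    """Base weight 1.0, boosted by past selections in that category
    (the 0.5 boost factor is an arbitrary illustrative choice)."""
    return 1.0 + 0.5 * selections[category]

record_selection("music")
record_selection("music")
print(flow_weight("music"))   # 2.0
print(flow_weight("photos"))  # 1.0
```

A fuller system would also decay weights for "kicked" objects, so discarded categories appear less often.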
- the user can easily set some objects to be static in order to stop them from flowing, and can move them to a desired location on the GUI by dragging.
- the user can further lock a static object in place, which will disable dragging of the object, and thereby prevent accidental relocations.
- the user can also unlock and/or set the object in motion by “removing the pin” and the object will move away with the flow. Adding new static elements simply involves pinning down the objects from the flow with simple user gestures or other actions.
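The pin/lock/unpin lifecycle described in the preceding bullets can be sketched with two boolean states (the class and method names are illustrative assumptions):

```python
# Sketch of the object states: flowing -> pinned (static, draggable)
# -> locked (static, not draggable) -> back to flowing via unpin.

class FlowObject:
    def __init__(self, label: str) -> None:
        self.label = label
        self.pinned = False  # static (not flowing) when True
        self.locked = False  # dragging disabled when True

    def pin(self) -> None:
        """Stop the object from flowing; it can still be dragged."""
        self.pinned = True

    def lock(self) -> None:
        """Lock a static object in place to prevent accidental drags."""
        if self.pinned:
            self.locked = True

    def unpin(self) -> None:
        """'Remove the pin': the object rejoins the flow."""
        self.pinned = False
        self.locked = False

photo = FlowObject("holiday.jpg")
photo.pin()
photo.lock()
print(photo.pinned, photo.locked)  # True True
photo.unpin()
print(photo.pinned)  # False
```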
- the user has total control of how many static items are on the screen and what kind of data is flowing across the screen.
- the GUI can be in constant motion until the user stops it or limits the flow.
- the GUI can continuously provide new excitation to the user. Without any active action by the user, the GUI system can gradually present all the data and applications to the user. If something appears that is not interesting to the user, then the user can explicitly discard or remove it with some simple action, thus indicating to the GUI system that the object is not interesting or is not wanted. Because all visual objects (except those that are pinned) have a temporary visible life span, even objects that are uninteresting to the user will disappear and therefore do not create a constant nuisance.
- the GUI system can propose some intelligent guesses for objects that are displayed based on the user's past use of the GUI and objects that were previously selected by the user.
- the GUI system can include a flow of objects, like a waterfall or a stream, which flows past the user's field of vision.
- the user can manipulate that stream and slowly adapt its behavior to fit the user's personal needs.
- the user can pick any interesting item from the stream and operate on it.
- the GUI system can also include sources, which can be defined by the user so that the user can control categories of data and/or applications that are flowing in the stream. The user can shut down or open these flow sources as he or she sees fit. Since the GUI screen is used more actively, it can display more data and applications than a static UI, thus allowing for more effective use of the relatively small screens of mobile devices.
- Configuration of the flow is done in the same context as manipulation of objects, so there are no separate views for settings.
- Settings can be performed on the fly during normal flow of the GUI, thus making adjustments easier for the user.
- the GUI concept supports association of related events. It fits well with the basic way in which human memory works, as many human activities are triggered by a person associating two objects or events and acting based on that association. At first glance, the two objects may appear totally unrelated to an outside observer; however, they may trigger an association in the user's brain. Such associations might not be recognizable by any intelligent UI logic, and thus a UI might not be able to predict them; however, the GUI described herein facilitates such associations by providing the user with a dynamic and diverse display of objects that may trigger them, and allows the user to act on those associations.
- the GUI described herein provides several ways of harnessing and utilizing this association phenomenon.
- the GUI facilitates object associations.
- in object associations, a user sees two objects flowing in the GUI that are related to one another based on the user's experiences. For example, the user may see an object for a picture of a friend and an object for an album that reminds the user of that friend, and the user may want to group the picture and the album together based on this association. While there are no predefined system rules to predict such an association, since the association occurs in the user's mind, the GUI provides a flow of objects that can facilitate such associations being made by the user. When the user notices some relation between two objects, the user can start different activities based on that observation. For example, the user can group those objects together to make a link between them.
- the user can manipulate those objects together, or if the user later sees one of those objects alone on the GUI, then the user can quickly recover all the objects grouped/linked/associated to that object.
- the user may see some data and a contact entry simultaneously on the GUI, and decide to send that data to that contact.
- These associations can happen between any objects, and the system will not prevent the user from making “non-sensical” associations or groupings.
- Such associations are purely up to the whim of the user. For example, the user can connect a web link and a person, or a music album and a food recipe, if so desired. Also, objects with different abstraction levels can be combined.
- the GUI system just sees this process as a network of user-generated associations and does not care what the content of the associated objects is.
- the user can group together contacts from a contact list with pictures, music albums, applications, etc.
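Because the system ignores content type, the association network can be sketched as a plain undirected graph over opaque object identifiers (the string-prefix naming and the dict-of-sets representation are assumptions for illustration):

```python
# Sketch of the user-generated association network: groups may freely mix
# contacts, pictures, albums, links, etc.

associations: dict[str, set[str]] = {}

def associate(a: str, b: str) -> None:
    """Link two objects symmetrically, regardless of their type."""
    associations.setdefault(a, set()).add(b)
    associations.setdefault(b, set()).add(a)

associate("contact:Alice", "photo:beach.jpg")
associate("contact:Alice", "album:Summer Hits")
print(sorted(associations["contact:Alice"]))
# ['album:Summer Hits', 'photo:beach.jpg']
```

Seeing one object later, the user can recover everything linked to it with a single lookup, which matches the "recover all grouped objects" behavior described above.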
- the GUI system can intelligently propose some objects to the user and see whether the user sees some association between the proposed objects.
- the system can only try to help by creating some potential or probable stimulus.
- the invented system supports this behavior very well: system-proposed items just flow past the user's vision, and if a proposal was incorrect, the objects just flow away, no longer bothering the user.
- in contrast, static pop-up windows and icons start to irritate the user if the life span of those proposals is too long. There are no such problems in the invented system.
- the GUI also facilitates context associations.
- context associations: the user sees an object on the GUI and an association is triggered based on the user's context. For example, the user may see a contact on the GUI whom the user has not seen for a long time, then suddenly notice that this person lives nearby and decide to contact him or her. In another example, the user may be sitting in a restaurant and see a contact whom the user has promised to take to lunch.
- the GUI also facilitates source associations.
- source associations: the user associates certain objects with a specific source, which is located at a certain location on the GUI. Thus, the user will learn to assume that the source will produce certain kinds of objects. Also, sources need not be fixed, but rather can be adapted by the user to any associations that the user wants to define.
- the GUI described herein advantageously provides constant excitation and is partially deterministic, partially random, and user guided, which allows it to facilitate such associations.
- the GUI is a tool that provides means for allowing a user to make his or her own associations, and to adapt to the way the user's memory works.
- FIG. 2 is a diagram of the components of user equipment including a user interface widget, according to one embodiment.
- the user interface widget 109A includes a control logic 201 that controls the widget and the fluid graphical user interface (GUI), an object and source manager module 203, a database 205, a setup manager module 207, an object flow manager module 209, and a presentation module 211.
- the object and source manager module 203 can manage a list of the objects for the GUI and the defined sources, and store such information in the database 205 .
- the object and source manager module 203 can control the appearance of sources and objects based on user actions, and can determine different context information to influence the process.
- the setup manager module 207 can manage any user settings that are defined by the user for the GUI (e.g., size of objects, maximum number of objects that can be displayed at any given time, speed of flow of objects, etc.), and store such information in the database 205.
- the object flow manager module 209 can manage the flow of the objects based on inputs of the user, and store such information in the database 205 .
- the object flow manager module 209 can control the number of objects visible simultaneously to avoid overloading user cognition with too many moving objects, and can handle system configurations in light of user actions that are performed during operation of the GUI.
- the control logic 201 can also monitor various actions of the user, and control the operation of the GUI based on usage history (e.g., frequently used contacts, albums, applications, etc. can be given priority on the GUI by increasing the frequency by which they are presented on the GUI, or by displaying them on the GUI first, etc.).
- the presentation module 211 can communicate with a display of a user interface 213 of the UE 101 A to display the GUI. Additionally, the UE 101 A includes a communication module 215 that allows the UI widget 109 A to communicate with any remote device or server, if needed in order to present objects on the GUI, or to utilize data or applications associated with the objects. Also, the UE 101 A includes a database 217 that can be used to store data and applications.
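The module arrangement described above can be sketched as a set of cooperating classes around a shared database. This is a minimal, hypothetical illustration; the patent does not prescribe any particular implementation, and all class and method names here are assumptions.

```python
# Hypothetical sketch of the user interface widget of FIG. 2; all names
# and APIs are illustrative assumptions, not part of the patent.

class Database:
    """Shared store (cf. database 205) for objects, sources, and settings."""
    def __init__(self):
        self.objects = []    # managed by the object and source manager (203)
        self.sources = []    # user-defined categories/sources
        self.settings = {}   # managed by the setup manager (207)

class SetupManager:
    """Persists user settings such as object size and flow speed (cf. 207)."""
    def __init__(self, db):
        self.db = db

    def set(self, key, value):
        self.db.settings[key] = value

class ObjectFlowManager:
    """Limits how many objects are visible at once (cf. 209)."""
    def __init__(self, db, max_visible=10):
        self.db = db
        self.max_visible = max_visible

    def visible_objects(self):
        # Avoid overloading user cognition with too many moving objects.
        return self.db.objects[: self.max_visible]

class UserInterfaceWidget:
    """Control logic (cf. 201) wiring the modules around the shared database."""
    def __init__(self):
        self.db = Database()
        self.setup = SetupManager(self.db)
        self.flow = ObjectFlowManager(self.db)
```

The key design point carried over from the description is that all modules read and write one shared database rather than keeping private state.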
- FIG. 2 depicts the user interface widget 109 A provided in UE 101 A in order to provide a GUI for data and applications locally stored on the UE 101 A or accessible remotely from service provider 107 or another server or UE.
- the user interface widgets 103 A and 111 in UE 103 and server provider 107 can have the same components as user interface widget 109 A, and thus can perform similar functions.
- the user interface widget 111 can have the same components as user interface widget 103 A, and thus can provide, for example, a web-based GUI to any UE connected thereto via the communication network 105 .
- such user interface widgets (or one or more components thereof) can be provided at various devices/servers, which can then be used in conjunction with each other to provide the GUI functionalities described herein.
- FIG. 3A is a flowchart of a process 300 for providing a fluid graphical user interface, according to one embodiment.
- FIG. 3B is a flowchart of a process 320 for providing a fluid graphical user interface allowing display of categorized objects, according to one embodiment.
- FIG. 3C is a flowchart of a process 340 for providing a fluid graphical user interface allowing selection and manipulation of objects shown in the graphical user interface, according to one embodiment.
- the user interface widget (e.g., 103A, 109A . . . 109N, and/or 111) performs the processes 300, 320, and 340, and is implemented in, for instance, a chip set including a processor and a memory as shown in FIG. 8.
- FIGS. 4A-4C are diagrams of graphical user interfaces depicting the processes of FIGS. 3A-3C , according to various embodiments.
- step 301 of the process 300 for providing a fluid graphical user interface various selectable objects are caused to be displayed on a GUI, where the objects correspond to data or application(s) accessible via the GUI.
- step 303 the selectable objects are caused to be displayed in motion travelling across the GUI based on a category or “source” of the selectable object.
- a graphical user interface 400 is displayed that includes a screen area 401 in which various objects are displayed.
- the category or source is broadly defined as any data or application accessible via the GUI, and therefore objects representing all data and applications accessible via the GUI will be cycled across the display screen.
- each of the objects will, by default, be in motion moving from left to right, such that they appear on the left side of the display screen area 401 , move across the display screen area 401 , and disappear off the right side of the display screen area.
- FIG. 4A depicts objects that include message icons 403 A, 403 B, music icons 405 A, 405 B, a clock icon 407 that displays the current time, a calendar icon 409 , a picture icon 411 , a contact icon 413 , a store icon 415 , and a grouped icon 417 .
- in step 305 of process 300, user selection and manipulation of selectable objects displayed on the GUI is allowed. In the instance shown in FIG. 4A, the store icon 415 has been selected by the user (e.g., by touch screen or other button command, by voice command, etc.) and pinned at a desired location on the display screen area 401 such that it is no longer moving.
- the music icon 405 B has been selected by the user and fixed at a desired location, and the music associated with the music icon 405 B has been instructed to play via an audio output, as can be seen by the triangle symbol shown in the center of the music icon 405 B that indicates that the music icon is in a playback mode.
- the user could select other icons, which could have associated therewith an editor or viewer, which would be activated by such selection.
- the music and contact icons that make up the grouped icon 417 have been selected and linked together by the user.
- the remaining icons continue to flow in the left-to-right direction, as indicated by the directional arrows shown in FIG. 4A.
- the flow is paused while the user has some object selected, in order not to distract the user while he or she is performing an operation on that object. After the user operation is completed, the flow resumes.
- the directional arrows are merely shown to indicate the direction of flow in the static screenshot of FIG. 4A, and would not need to be displayed in a working GUI.
- selectable objects can, at various times, be either visible (e.g., while travelling across the GUI) or can be non-visible (e.g., after the selectable object has travelled out of the field of vision of the GUI).
- while FIG. 4A depicts the objects as generic icons, the objects can instead be displayed such that the content represented by the object is shown.
- the message icons 403 A, 403 B could display the sender of the message and a portion of the message
- the music icons 405 A, 405 B could display the album name, artist name, track name, album artwork, genre, etc.
- the calendar icon 409 could display a description of a meeting or date reminder, location, attendees, etc., or it could be a generic representation of the calendar application
- the picture icon 411 could display a thumbnail view of the picture, title of the picture, date/time stamp, etc., or it could be a generic representation of camera, photo album, photo editing applications
- a video icon could display a static or streaming thumbnail of the video, the video name, etc., or it could be a generic representation of video camera, video library, video editing applications
- the contact icon 413 could display a description of a contact entity, thumbnail picture of the contact entity, etc., or it could be a generic representation of the contact list application
- the store icon 415 could display an item for sale with information such as price, etc., or it could generically display the store logo and name, contact information, etc.
- the group icon 417 could display a name given to the group, etc.
- the objects can each travel at the same speed, or at different speeds, or each group or category can travel at different speeds with the objects in each group traveling at the same speed, or any combination thereof.
- the objects can travel in a straight line in any direction, or the objects can travel in a non-straight path in a consistent or random pattern, or any combination thereof.
- the objects can be shown at different fields of depth within the GUI, such that certain objects or groups are presented in front of others.
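As a rough illustration of these flow characteristics, the sketch below models each object with its own velocity and depth: unpinned objects advance each frame, and depth determines draw order. The coordinate conventions and all names are assumptions made for illustration.

```python
# Illustrative model of per-object flow (speed, direction, depth); the
# patent does not define these structures, so everything here is assumed.
from dataclasses import dataclass

@dataclass
class FlowObject:
    x: float
    y: float
    vx: float            # per-object (or per-category) horizontal speed
    vy: float            # a non-zero vy gives a non-straight path
    depth: int = 0       # larger depth is drawn in front of smaller depth
    pinned: bool = False

def step(objects, dt=1.0):
    """Advance every unpinned object along its own velocity."""
    for o in objects:
        if not o.pinned:
            o.x += o.vx * dt
            o.y += o.vy * dt

def draw_order(objects):
    """Paint in ascending depth so objects at greater depth appear in front."""
    return sorted(objects, key=lambda o: o.depth)
```

Giving each object its own velocity covers all the combinations mentioned above (uniform speed, per-object speed, or per-category speed) as special cases of how the velocities are assigned.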
- the characteristics of the flow of the objects across GUI 400 can be controlled by user input.
- the user can select the direction of flow, the speed of flow, the pattern of flow, the number of objects simultaneously shown in the display screen.
- Such user preference selections can be made using gestures, such as swiping motions across a touch screen (e.g., if the user prefers the flow to be from right-to-left then the user can swipe across the touch screen from right to left, etc.), or tilting the mobile device (e.g., where the angle and/or direction of tilt control the direction and speed of flow), etc., or using input commands, such as using buttons, touch screen selections, voice commands, etc.
- a toggle could be provided that enables and disables such inputs to control the user preferences, for example, so that a user can enable such inputs, then make adjustments by gestures/commands, and then disable such inputs, so that use of the mobile device does not make unwanted changes to such user preferences. Also, the user can access such selection options via an object representative of such options, and/or by accessing a selection options menu.
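The gesture-with-toggle idea above can be sketched as follows; the field names and the normalization of the swipe vector are assumptions, not details from the patent.

```python
class FlowPreferences:
    """Sketch of gesture-adjustable flow settings guarded by an enable
    toggle; all attribute names are illustrative assumptions."""
    def __init__(self):
        self.direction = (1.0, 0.0)   # default left-to-right flow
        self.speed = 1.0
        self.gestures_enabled = False

    def toggle_gestures(self):
        self.gestures_enabled = not self.gestures_enabled

    def apply_swipe(self, dx, dy):
        """A swipe sets the flow direction, but only while inputs are
        enabled, so normal use of the device cannot change the user
        preferences by accident."""
        if not self.gestures_enabled:
            return False
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0   # guard zero-length swipe
        self.direction = (dx / norm, dy / norm)
        return True
```

A swipe from right to left, for example, would flip the flow to right-to-left only after the user has explicitly enabled gesture adjustments.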
- step 321 of the process 320 for providing a fluid graphical user interface allowing display of categorized objects a user is allowed to select a first selectable object and move the first selectable object to an area on the GUI for use as a first category or “source.”
- the user selects contact icon 413 A, and drags the contact icon 413 A (as shown by dashed arrow) to an area along the upper edge of the display screen 401 . Therefore, the contact icon 413 A is used as a category or source of objects that flow therefrom. Any object can be used as a source for the GUI.
- a category bar is caused to be displayed in the area, as can be seen by the contacts bar 419 in FIGS.
- step 325 categorized selectable objects are caused to be displayed in motion travelling across the GUI that have a relation to the first category of the first selectable object.
- contact icons such as contact icons 413 B and 413 C, begin to flow from the contacts bar 419 from the upper side of the display screen 401 towards the lower side of the display screen 401 .
- the objects that do not fall within the contact grouping continue to flow in a left-to-right direction (unless they have been pinned).
- the contact icon 413 A could be representative of the overall contact list application, in which case all contact entries will flow from the contact bar 419 , or could be representative of a specific contact entry, in which case any objects that are related to the contact entry (e.g., pictures/videos of or from that person, objects grouped/linked to that entry, messages from that person, calendar entries related to that person, etc.) will flow from the contact bar 419 .
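Steps 321 through 325 might be modeled as below: promoting a dragged object to a source, then emitting only the objects that relate to that source's category. The dictionary shapes and function names are assumptions for illustration.

```python
# Hypothetical sketch of steps 321-325; the data shapes are assumed.

def make_source(obj, edge):
    """Steps 321/323: a dragged object (e.g., contact icon 413A) becomes
    a category bar anchored to a screen edge."""
    return {"category": obj["category"], "edge": edge}

def emit_from_source(source, all_objects):
    """Step 325: objects related to the source's category flow from its
    edge, while unrelated objects keep their default flow."""
    return [o for o in all_objects if o["category"] == source["category"]]
```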
- FIG. 3C is a flowchart of the process 340 for providing a fluid graphical user interface allowing selection and manipulation of objects shown in the graphical user interface, according to various embodiments.
- the GUI determines if a selectable object has been selected. If no selection has been made, then the process simply continues monitoring for user input until such a selection is made. If an object is selected, then the GUI determines the nature of the manipulation of the selected object as commanded by the user's input. For example, in step 343, the GUI determines whether the user has instructed the GUI to fix the object at a location on the GUI, for example by dragging the object to a location and fixing and/or locking the object at that location.
- in step 345, the GUI causes the display of the selected object at that fixed location on the GUI. If the GUI has not been instructed to fix the object, then, in step 347, the GUI determines whether the user has instructed the GUI to remove the object, for example by using a flicking motion on the touch screen to quickly remove the object from the display screen. If the GUI has been instructed to remove the object, then in step 349 the GUI causes the removal of the selected object from the GUI. If the GUI has not been instructed to remove the object, then, in step 351, the GUI determines whether the user has instructed the GUI to associate (or group or link) the selected object with another object, for example by dragging the selected object over the other object using the touch screen.
- in step 353, the GUI causes the display of the associated objects on the GUI. If the GUI has not been instructed to associate the object, then, in step 355, the GUI can cause the deselection of the selected object after a predetermined time period has elapsed.
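The branching in FIG. 3C can be summarized as a small dispatch, sketched below; the command strings stand in for the user's gestures and are assumptions.

```python
# A sketch of the decision flow of FIG. 3C (steps 341-355); the command
# names are assumed stand-ins for the user's actual input gestures.

def handle_manipulation(command):
    """Map a user command on a selected object to the GUI's response."""
    if command == "fix":          # step 343 -> 345: pin at a location
        return "display_at_fixed_location"
    if command == "remove":       # step 347 -> 349: flick off the screen
        return "remove_from_gui"
    if command == "associate":    # step 351 -> 353: drag over another object
        return "display_associated_objects"
    return "deselect_after_timeout"   # step 355: no manipulation given
```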
- FIG. 5 is a diagram of a graphical user interface 500 , according to various embodiments.
- the GUI 500 includes a display screen 501 , and various category or source bars provided around the edges of the screen.
- the GUI 500 includes a music bar 503 A, a messages bar 503 B, a contacts bar 503 C, an applications bar 503 D, a pictures bar 503 E, and undefined or inactive bars 503 F, 503 G, 503 H, and 503 I.
- the size and location of the various bars can be adjusted by the user, and if preferred the active and/or inactive bars can be visible (and opaque or transparent) or invisible (e.g., consistently invisible or invisible unless the user makes a gesture or command to temporarily make them visible).
- only one bar is active and visible at a time, indicating the latest user-selected category and producing a flow of objects in the direction the user has initiated. It is also possible to have a system-generated flow of objects, with no visible source bars. This flow can be triggered by some system event, such as time of day, location, or some other external event.
- FIG. 5 depicts objects that include a pinned clock icon 505 , music icons 507 A, 507 B, 507 C (which is pinned and in a play mode), message icons 509 A, 509 B, 509 C, a contact icon 511 , a grouped/link icon 513 , a computer file icon 515 , a calendar icon 517 , shopping or store icons 519 A (which is pinned), 519 B, a mapping icon 521 , and picture icons 523 A (which is pinned), 523 B.
- the objects can flow from the side having a category or source bar from which they are generated.
- music icons 507 A, 507 B, 507 C (prior to pinning), and even grouped icon 513 that contains a music icon, flow from the left side of the display screen 501 where the music bar 503 A is located.
- message icons 509 A, 509 B, 509 C flow from the left side at which the messages bar 503 B is located.
- the contact icon 511 flows from the upper side at which the contacts bar 503 C is located.
- Various application icons such as the computer file icon 515 (which can be representative of a word processor application, etc.), the calendar icon 517 , the shopping or store icons 519 A (prior to pinning), 519 B, and the mapping icon 521 flow from the upper side at which the applications bar 503 D is located.
- the picture icons 523 A (prior to pinning), 523 B flow from the upper side at which the pictures bar 503 E is located.
- the user could define bars 503 F and 503 G to a particular category or source and objects can flow from the right side of the display screen 501 towards the left side, and could define bars 503 H and 503 I to a particular category or source and objects can flow from the lower side of the display screen 501 towards the upper side.
- the user can adjust any of these default flow directions as desired on the fly using gestures and/or commands.
- the user may have selected some category that has only a few items matching that category, and when those items have moved into the visible screen, the flow may be stopped in order to make manipulation of the objects easier.
- FIGS. 6A-6C are diagrams of mobile devices displaying graphical user interfaces, according to various embodiments.
- FIG. 6A depicts a mobile device 600 that includes a display screen, such as a touch screen display, that is displaying a graphical user interface 601 .
- the mobile device 600 also includes various user input devices, such as buttons 603 .
- the GUI 601 includes a display screen 605 , and various category or source bars provided around the edges of the screen.
- the GUI 601 includes an applications bar 607 , a music bar 609 , and a web bar 611 .
- the GUI 601 also has various selectable objects, such as objects 613 and 615 , which are flowing across the display screen 605 .
- FIGS. 6B and 6C depict the mobile device 600 and GUI 601 thereof in a slightly different configuration, such that a contacts bar and a messages bar are provided as sources on a left side of the display screen thereof.
- the user (e.g., using the touch screen, voice commands, or other input) selects object 617, as can be seen by a selection box 619 that appears around object 617, and drags object 617 over object 621, which is then indicated as being selected by a selection box 623 that appears around object 621.
- the GUI 601 displays a text box 625 , as shown in FIG.
- objects can include metadata that defines certain characteristics of the object, such that when the object is selected, the system can use the metadata of the selected object to search for other similar or related objects; the objects found during that search can then flow closer to the selected object, so that the user is given the opportunity to build a group from these suggested objects.
- the user can then ignore the suggested objects, group the suggested objects, or kick out some of the suggested objects if the user does not want them to belong to the group.
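One way to realize the metadata-driven suggestions above is to score candidates by shared tags, as in this sketch; the tag-overlap scoring rule is an assumption, since the text only says that similar or related objects are found.

```python
# Illustrative metadata matching: objects sharing tags with the selected
# object are surfaced as grouping suggestions. The scoring is assumed.

def suggest_similar(selected, candidates, min_shared=1):
    """Rank candidate objects by how many metadata tags they share with
    the selected object, dropping candidates below the threshold."""
    sel = set(selected["tags"])
    scored = [(len(sel & set(c["tags"])), c) for c in candidates]
    return [c for score, c in sorted(scored, key=lambda t: -t[0])
            if score >= min_shared]
```

The returned list would then be fed into the flow so the suggestions drift closer to the selected object, leaving the user free to group them or kick them out.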
- there is no limit to the number of sources that can be defined on the display screen of the GUI, within the confines of the size and shape restrictions of the screen. For a typical smartphone screen, one to six sources can be a good estimate; however, additional sources can be defined if so desired by the user. It is also possible to stack the sources (e.g., like a stack of cards), which can be shuffled using some gestures or commands. The top-most visible source on the stack is the active source, which produces objects for the flow.
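Stacked sources could behave like this minimal sketch, where the top card is the active source; the mapping of a shuffle gesture to a rotation of the stack is an assumption.

```python
class SourceStack:
    """Sketch of sources stacked like cards; only the top-most source is
    active and produces objects for the flow. Names are assumptions."""
    def __init__(self, sources):
        self.sources = list(sources)   # index 0 is the top of the stack

    def active(self):
        return self.sources[0] if self.sources else None

    def shuffle_next(self):
        """A shuffle gesture moves the top source to the bottom,
        exposing the next source as the active one."""
        if self.sources:
            self.sources.append(self.sources.pop(0))
```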
- the basic operation of the GUI is very simple and straightforward, thus creating a natural and easy to grasp interface for the user.
- the user sees sources and objects, and can easily learn to manipulate, access, and control them in one uniform and simple interface.
- Pinned objects can remain stationary until the user releases them, moves them to another location, removes them from the display screen, etc. Pinned objects can be dragged freely to any suitable position on the display screen, and can be locked in position, if desired, in order to prevent any accidental movement from the pinned location.
- the GUI is customizable and allows a user to select and manipulate objects and sources using a plurality of interaction methods such as speech or touch. Rules for selected objects can be defined by a selecting action and/or function of metadata linked to objects.
- Two or more objects can be associated with one another, in order to create a link between them. Such associations can trigger some interaction between two or more linked objects. Also, two or more groups of linked objects can be associated with one another, thereby creating links between these groups of already associated objects. The groupings can allow the user to access and operate all the elements in the associated groups through single object in that group.
- the GUI presents data and applications that “flow” to the user, so that the user can simply wait like a hunter and select a target when he or she sees one.
- the GUI provides a very natural and relaxed way of accessing data.
- the user does not have to know exactly what he or she is looking for, and can access data and applications on an ad-hoc basis.
- the GUI may trigger some user actions almost accidentally; for example, the user may start some action just because he or she associates something moving across the display with the current context the user is living in. Thus, the system utilizes the user's intelligence and ability to associate things based on the user's context.
- the GUI provides tools to the user and does not try to pretend to be too intelligent, since machines cannot be intelligent in a wide enough sense to really predict irrational human behavior.
- the GUI supports spontaneous and irrational access to device functions. It will however adapt the appearance and order of the flow for the objects based on the frequency of use and other context data. So even though the flow may look random, it has some deterministic elements (e.g., the most frequently used contacts may flow onto the display screen first or more often than less frequently used contacts, etc.). Also, very infrequently used objects can also enter the display screen, even if the user has forgotten the object, thereby supporting discovery or rediscovery of objects.
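The mix of deterministic and random ordering described above can be illustrated with frequency-weighted sampling plus a small floor weight, so that forgotten objects still surface occasionally; the specific weighting scheme is an assumption.

```python
# Sketch of the partly deterministic, partly random flow order: frequently
# used objects tend to enter first, while a floor weight lets very rarely
# used objects still appear, supporting rediscovery. Weighting is assumed.
import random

def flow_order(objects, usage_counts, rediscovery_weight=1.0, seed=None):
    """Sample objects without replacement, weighted by usage frequency
    plus a rediscovery floor so no object's probability is ever zero."""
    rng = random.Random(seed)
    remaining = list(objects)
    order = []
    while remaining:
        weights = [usage_counts.get(o, 0) + rediscovery_weight
                   for o in remaining]
        pick = rng.choices(remaining, weights=weights, k=1)[0]
        remaining.remove(pick)
        order.append(pick)
    return order
```

With a large weight on one object it almost always flows in first, yet every object eventually appears, matching the "deterministic elements in a seemingly random flow" behavior.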
- the GUI is also ideal for learning to use a new device, because some hidden functionalities will navigate their way to the user via the GUI, not the other way around.
- the user can adapt the system to his or her liking during normal usage, and therefore separate settings and configurations menus are not necessary, but rather the settings and configurations can be changed by the user on the fly.
- a user can use gestures and/or speech to manipulate the flow, as well as the objects and sources.
- the GUI is dynamic and adaptive, and the user has full control thereof (e.g., if the user wants to maintain some fixed objects, etc.), such that the user can decide how much freedom the GUI allows in the manipulation of objects/sources.
- Using a dynamic flow of objects enables better handling of a large number of objects.
- the user can access different functions and tasks from a single interface, without the need to switch between different applications. Complex hierarchical menu systems and view switching can be avoided or at least reduced.
- the GUI can be a fun and enjoyable manner in which to utilize the data and applications of the device, and can always offer something to the user that might otherwise go unnoticed.
- the GUI is forgiving; for example, uninteresting objects simply flow away rather than remaining on the display screen without the user's permission.
- the GUI provides new stimulus to the user that allows the user to make new associations between various objects. Human associations can be very irrational, thus needing some partly random stimulus from the GUI, which is not offered by a purely static GUI.
- the GUI is also very suitable for advertising purposes, because advertisements can act like the rest of the fluid objects, with advertisement objects (e.g., such objects can be provided to the GUI from a remote server, for example from service/product providers that the user has utilized) flowing in and out of the display screen.
- the user can provide feedback on such advertisements, for example by actively voting on or rating them by kicking them out of the screen or accessing them. This co-operation and user control is a benefit for both user and advertiser.
- the processes described herein for providing a fluid graphical user interface may be advantageously implemented via software, hardware (e.g., general processor, Digital Signal Processing (DSP) chip, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Arrays (FPGAs), etc.), firmware or a combination thereof.
- FIG. 7 illustrates a computer system 700 upon which an embodiment of the invention may be implemented.
- although computer system 700 is depicted with respect to a particular device or equipment, it is contemplated that other devices or equipment (e.g., network elements, servers, etc.) within FIG. 7 can deploy the illustrated hardware and components of system 700.
- Computer system 700 is programmed (e.g., via computer program code or instructions) to provide a fluid graphical user interface as described herein and includes a communication mechanism such as a bus 710 for passing information between other internal and external components of the computer system 700 .
- Information is represented as a physical expression of a measurable phenomenon, typically electric voltages, but including, in other embodiments, such phenomena as magnetic, electromagnetic, pressure, chemical, biological, molecular, atomic, sub-atomic and quantum interactions.
- north and south magnetic fields, or a zero and non-zero electric voltage represent two states (0, 1) of a binary digit (bit).
- Other phenomena can represent digits of a higher base.
- a superposition of multiple simultaneous quantum states before measurement represents a quantum bit (qubit).
- a sequence of one or more digits constitutes digital data that is used to represent a number or code for a character.
- information called analog data is represented by a near continuum of measurable values within a particular range.
- Computer system 700, or a portion thereof, constitutes a means for performing one or more steps of providing a fluid graphical user interface.
- a bus 710 includes one or more parallel conductors of information so that information is transferred quickly among devices coupled to the bus 710 .
- One or more processors 702 for processing information are coupled with the bus 710 .
- a processor 702 performs a set of operations on information as specified by computer program code related to providing a fluid graphical user interface.
- the computer program code is a set of instructions or statements providing instructions for the operation of the processor and/or the computer system to perform specified functions.
- the code for example, may be written in a computer programming language that is compiled into a native instruction set of the processor.
- the code may also be written directly using the native instruction set (e.g., machine language).
- the set of operations include bringing information in from the bus 710 and placing information on the bus 710 .
- the set of operations also typically include comparing two or more units of information, shifting positions of units of information, and combining two or more units of information, such as by addition or multiplication or logical operations like OR, exclusive OR (XOR), and AND.
- Each operation of the set of operations that can be performed by the processor is represented to the processor by information called instructions, such as an operation code of one or more digits.
- a sequence of operations to be executed by the processor 702, such as a sequence of operation codes, constitutes processor instructions, also called computer system instructions or, simply, computer instructions.
- Processors may be implemented as mechanical, electrical, magnetic, optical, chemical or quantum components, among others, alone or in combination.
- Computer system 700 also includes a memory 704 coupled to bus 710 .
- the memory 704 such as a random access memory (RAM) or other dynamic storage device, stores information including processor instructions for providing a fluid graphical user interface. Dynamic memory allows information stored therein to be changed by the computer system 700 . RAM allows a unit of information stored at a location called a memory address to be stored and retrieved independently of information at neighboring addresses.
- the memory 704 is also used by the processor 702 to store temporary values during execution of processor instructions.
- the computer system 700 also includes a read only memory (ROM) 706 or other static storage device coupled to the bus 710 for storing static information, including instructions, that is not changed by the computer system 700 . Some memory is composed of volatile storage that loses the information stored thereon when power is lost.
- a non-volatile (persistent) storage device 708, such as a magnetic disk, optical disk or flash card, is also coupled to the bus 710 for storing information, including instructions, that persists even when the computer system 700 is turned off or otherwise loses power.
- Information including instructions for providing a fluid graphical user interface, is provided to the bus 710 for use by the processor from an external input device 712 , such as a keyboard containing alphanumeric keys operated by a human user, or a sensor.
- a sensor detects conditions in its vicinity and transforms those detections into physical expression compatible with the measurable phenomenon used to represent information in computer system 700 .
- Other external devices coupled to bus 710 used primarily for interacting with humans, include a display device 714 , such as a cathode ray tube (CRT) or a liquid crystal display (LCD), or plasma screen or printer for presenting text or images, and a pointing device 716 , such as a mouse or a trackball or cursor direction keys, or motion sensor, for controlling a position of a small cursor image presented on the display 714 and issuing commands associated with graphical elements presented on the display 714 .
- special purpose hardware, such as an application specific integrated circuit (ASIC) 720, is coupled to bus 710.
- the special purpose hardware is configured to perform, for special purposes, operations that processor 702 cannot perform quickly enough.
- Examples of application specific ICs include graphics accelerator cards for generating images for display 714, cryptographic boards for encrypting and decrypting messages sent over a network, speech recognition hardware, and interfaces to special external devices, such as robotic arms and medical scanning equipment, that repeatedly perform some complex sequence of operations that is more efficiently implemented in hardware.
- Computer system 700 also includes one or more instances of a communications interface 770 coupled to bus 710 .
- Communication interface 770 provides a one-way or two-way communication coupling to a variety of external devices that operate with their own processors, such as printers, scanners and external disks. In general the coupling is with a network link 778 that is connected to a local network 780 to which a variety of external devices with their own processors are connected.
- communication interface 770 may be a parallel port or a serial port or a universal serial bus (USB) port on a personal computer.
- communications interface 770 is an integrated services digital network (ISDN) card or a digital subscriber line (DSL) card or a telephone modem that provides an information communication connection to a corresponding type of telephone line.
- a communication interface 770 is a cable modem that converts signals on bus 710 into signals for a communication connection over a coaxial cable or into optical signals for a communication connection over a fiber optic cable.
- communications interface 770 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN, such as Ethernet. Wireless links may also be implemented.
- the communications interface 770 sends or receives or both sends and receives electrical, acoustic or electromagnetic signals, including infrared and optical signals, that carry information streams, such as digital data.
- the communications interface 770 includes a radio band electromagnetic transmitter and receiver called a radio transceiver.
- the communications interface 770 enables connection to the communication network 105 for providing a fluid graphical user interface to the UEs 101 A . . . 101 N or UE 103 .
- Non-transitory media, such as non-volatile media, include, for example, optical or magnetic disks, such as storage device 708.
- Volatile media include, for example, dynamic memory 704 .
- Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and carrier waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves.
- Signals include man-made transient variations in amplitude, frequency, phase, polarization or other physical properties transmitted through the transmission media.
- Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, CDRW, DVD, any other optical medium, punch cards, paper tape, optical mark sheets, any other physical medium with patterns of holes or other optically recognizable indicia, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read.
- the term computer-readable storage medium is used herein to refer to any computer-readable medium except transmission media.
- Logic encoded in one or more tangible media includes one or both of processor instructions on a computer-readable storage media and special purpose hardware, such as ASIC 720 .
- Network link 778 typically provides information communication using transmission media through one or more networks to other devices that use or process the information.
- network link 778 may provide a connection through local network 780 to a host computer 782 or to equipment 784 operated by an Internet Service Provider (ISP).
- ISP equipment 784 in turn provides data communication services through the public, world-wide packet-switching communication network of networks now commonly referred to as the Internet 790 .
- a computer called a server host 792 connected to the Internet hosts a process that provides a service in response to information received over the Internet.
- server host 792 hosts a process that provides information representing video data for presentation at display 714 . It is contemplated that the components of system 700 can be deployed in various configurations within other computer systems, e.g., host 782 and server 792 .
- At least some embodiments of the invention are related to the use of computer system 700 for implementing some or all of the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 700 in response to processor 702 executing one or more sequences of one or more processor instructions contained in memory 704 . Such instructions, also called computer instructions, software and program code, may be read into memory 704 from another computer-readable medium such as storage device 708 or network link 778 . Execution of the sequences of instructions contained in memory 704 causes processor 702 to perform one or more of the method steps described herein. In alternative embodiments, hardware, such as ASIC 720 , may be used in place of or in combination with software to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware and software, unless otherwise explicitly stated herein.
- the signals transmitted over network link 778 and other networks through communications interface 770 carry information to and from computer system 700 .
- Computer system 700 can send and receive information, including program code, through the networks 780 , 790 among others, through network link 778 and communications interface 770 .
- a server host 792 transmits program code for a particular application, requested by a message sent from computer 700 , through Internet 790 , ISP equipment 784 , local network 780 and communications interface 770 .
- the received code may be executed by processor 702 as it is received, or may be stored in memory 704 or in storage device 708 or other non-volatile storage for later execution, or both. In this manner, computer system 700 may obtain application program code in the form of signals on a carrier wave.
- instructions and data may initially be carried on a magnetic disk of a remote computer such as host 782 .
- the remote computer loads the instructions and data into its dynamic memory and sends the instructions and data over a telephone line using a modem.
- a modem local to the computer system 700 receives the instructions and data on a telephone line and uses an infra-red transmitter to convert the instructions and data to a signal on an infra-red carrier wave serving as the network link 778 .
- An infrared detector serving as communications interface 770 receives the instructions and data carried in the infrared signal and places information representing the instructions and data onto bus 710 .
- Bus 710 carries the information to memory 704 from which processor 702 retrieves and executes the instructions using some of the data sent with the instructions.
- the instructions and data received in memory 704 may optionally be stored on storage device 708 , either before or after execution by the processor 702 .
- FIG. 8 illustrates a chip set 800 upon which an embodiment of the invention may be implemented.
- Chip set 800 is programmed to provide a fluid graphical user interface as described herein and includes, for instance, the processor and memory components described with respect to FIG. 7 incorporated in one or more physical packages (e.g., chips).
- a physical package includes an arrangement of one or more materials, components, and/or wires on a structural assembly (e.g., a baseboard) to provide one or more characteristics such as physical strength, conservation of size, and/or limitation of electrical interaction.
- the chip set can be implemented in a single chip.
- Chip set 800 or a portion thereof, constitutes a means for performing one or more steps of providing a fluid graphical user interface.
- the chip set 800 includes a communication mechanism such as a bus 801 for passing information among the components of the chip set 800 .
- a processor 803 has connectivity to the bus 801 to execute instructions and process information stored in, for example, a memory 805 .
- the processor 803 may include one or more processing cores with each core configured to perform independently.
- a multi-core processor enables multiprocessing within a single physical package. Examples of a multi-core processor include two, four, eight, or greater numbers of processing cores.
- the processor 803 may include one or more microprocessors configured in tandem via the bus 801 to enable independent execution of instructions, pipelining, and multithreading.
- the processor 803 may also be accompanied with one or more specialized components to perform certain processing functions and tasks such as one or more digital signal processors (DSP) 807 , or one or more application-specific integrated circuits (ASIC) 809 .
- a DSP 807 typically is configured to process real-world signals (e.g., sound) in real time independently of the processor 803 .
- an ASIC 809 can be configured to perform specialized functions not easily performed by a general purpose processor.
- Other specialized components to aid in performing the inventive functions described herein include one or more field programmable gate arrays (FPGA) (not shown), one or more controllers (not shown), or one or more other special-purpose computer chips.
- the processor 803 and accompanying components have connectivity to the memory 805 via the bus 801 .
- the memory 805 includes both dynamic memory (e.g., RAM, magnetic disk, writable optical disk, etc.) and static memory (e.g., ROM, CD-ROM, etc.) for storing executable instructions that when executed perform the inventive steps described herein to provide a fluid graphical user interface.
- the memory 805 also stores the data associated with or generated by the execution of the inventive steps.
- FIG. 9 is a diagram of exemplary components of a mobile terminal (e.g., handset) for communications, which is capable of operating in the system of FIG. 1 , according to one embodiment.
- mobile terminal 901, or a portion thereof, constitutes a means for performing one or more steps of providing a fluid graphical user interface.
- a radio receiver is often defined in terms of front-end and back-end characteristics. The front-end of the receiver encompasses all of the Radio Frequency (RF) circuitry whereas the back-end encompasses all of the base-band processing circuitry.
- circuitry refers to both: (1) hardware-only implementations (such as implementations in only analog and/or digital circuitry), and (2) to combinations of circuitry and software (and/or firmware) (such as, if applicable to the particular context, to a combination of processor(s), including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions).
- This definition of “circuitry” applies to all uses of this term in this application, including in any claims.
- the term “circuitry” would also cover an implementation of merely a processor (or multiple processors) and its (or their) accompanying software/or firmware.
- the term “circuitry” would also cover if applicable to the particular context, for example, a baseband integrated circuit or applications processor integrated circuit in a mobile phone or a similar integrated circuit in a cellular network device or other network devices.
- Pertinent internal components of the telephone include a Main Control Unit (MCU) 903 , a Digital Signal Processor (DSP) 905 , and a receiver/transmitter unit including a microphone gain control unit and a speaker gain control unit.
- a main display unit 907 provides a display to the user in support of various applications and mobile terminal functions that perform or support the steps of providing a fluid graphical user interface.
- the display 907 includes display circuitry configured to display at least a portion of a user interface of the mobile terminal (e.g., mobile telephone). Additionally, the display 907 and display circuitry are configured to facilitate user control of at least some functions of the mobile terminal.
- An audio function circuitry 909 includes a microphone 911 and microphone amplifier that amplifies the speech signal output from the microphone 911 . The amplified speech signal output from the microphone 911 is fed to a coder/decoder (CODEC) 913 .
- a radio section 915 amplifies power and converts frequency in order to communicate with a base station, which is included in a mobile communication system, via antenna 917 .
- the power amplifier (PA) 919 and the transmitter/modulation circuitry are operationally responsive to the MCU 903 , with an output from the PA 919 coupled to the duplexer 921 or circulator or antenna switch, as known in the art.
- the PA 919 also couples to a battery interface and power control unit 920 .
- a user of mobile terminal 901 speaks into the microphone 911 and his or her voice along with any detected background noise is converted into an analog voltage.
- the analog voltage is then converted into a digital signal through the Analog to Digital Converter (ADC) 923 .
- the control unit 903 routes the digital signal into the DSP 905 for processing therein, such as speech encoding, channel encoding, encrypting, and interleaving.
- the processed voice signals are encoded, by units not separately shown, using a cellular transmission protocol such as enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (WiFi), satellite, and the like.
- the encoded signals are then routed to an equalizer 925 for compensation of any frequency-dependent impairments that occur during transmission through the air, such as phase and amplitude distortion.
- the modulator 927 combines the signal with a RF signal generated in the RF interface 929 .
- the modulator 927 generates a sine wave by way of frequency or phase modulation.
- an up-converter 931 combines the sine wave output from the modulator 927 with another sine wave generated by a synthesizer 933 to achieve the desired frequency of transmission.
- the signal is then sent through a PA 919 to increase the signal to an appropriate power level.
- the PA 919 acts as a variable gain amplifier whose gain is controlled by the DSP 905 from information received from a network base station.
- the signal is then filtered within the duplexer 921 and optionally sent to an antenna coupler 935 to match impedances to provide maximum power transfer. Finally, the signal is transmitted via antenna 917 to a local base station.
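The up-conversion step described above relies on the fact that multiplying two sinusoids produces components at the sum and difference of their frequencies. The following sketch is illustrative only; the frequency values are placeholders and do not appear in the embodiment:

```python
import math

# sin(a) * sin(b) = 0.5 * [cos(a - b) - cos(a + b)], so mixing a modulated
# carrier at f1 with a synthesizer sine at f2 yields energy at f1 +/- f2.
f1, f2 = 5.0, 3.0  # placeholder frequencies in Hz

def mixed(t):
    # Output of multiplying the modulator signal by the synthesizer signal.
    return math.sin(2 * math.pi * f1 * t) * math.sin(2 * math.pi * f2 * t)

def sum_and_difference(t):
    # Equivalent expression in terms of the sum and difference frequencies.
    return 0.5 * (math.cos(2 * math.pi * (f1 - f2) * t)
                  - math.cos(2 * math.pi * (f1 + f2) * t))

# The two expressions agree at every sample point.
for k in range(100):
    t = k / 100.0
    assert abs(mixed(t) - sum_and_difference(t)) < 1e-9
```

In a transmitter, a filter (here, the duplexer 921) then selects the desired sideband so that only the intended transmit frequency reaches the antenna.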
- An automatic gain control (AGC) can be supplied to control the gain of the final stages of the receiver.
- the signals may be forwarded from there to a remote telephone which may be another cellular telephone, other mobile phone or a land-line connected to a Public Switched Telephone Network (PSTN), or other telephony networks.
- Voice signals transmitted to the mobile terminal 901 are received via antenna 917 and immediately amplified by a low noise amplifier (LNA) 937 .
- a down-converter 939 lowers the carrier frequency while the demodulator 941 strips away the RF leaving only a digital bit stream.
- the signal then goes through the equalizer 925 and is processed by the DSP 905 .
- a Digital to Analog Converter (DAC) 943 converts the signal and the resulting output is transmitted to the user through the speaker 945, all under control of a Main Control Unit (MCU) 903, which can be implemented as a Central Processing Unit (CPU) (not shown).
- the MCU 903 receives various signals including input signals from the keyboard 947 .
- the keyboard 947 and/or the MCU 903 in combination with other user input components comprise user interface circuitry for managing user input.
- the MCU 903 runs user interface software to facilitate user control of at least some functions of the mobile terminal 901 to provide a fluid graphical user interface.
- the MCU 903 also delivers a display command and a switch command to the display 907 and to the speech output switching controller, respectively. Further, the MCU 903 exchanges information with the DSP 905 and can access an optionally incorporated SIM card 949 and a memory 951 .
- the MCU 903 executes various control functions required of the terminal.
- the DSP 905 may, depending upon the implementation, perform any of a variety of conventional digital processing functions on the voice signals. Additionally, DSP 905 determines the background noise level of the local environment from the signals detected by microphone 911 and sets the gain of microphone 911 to a level selected to compensate for the natural tendency of the user of the mobile terminal 901 .
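The gain-setting behavior described above can be sketched as a simple mapping from an estimated background-noise level to a microphone gain. The thresholds and gain factors below are invented for illustration and do not appear in the embodiment:

```python
def select_mic_gain(noise_level_db: float) -> float:
    """Map an estimated background-noise level (in dB) to a microphone
    gain factor, compensating for the tendency of a speaker to raise
    his or her voice in noisy surroundings.
    All numeric values are illustrative placeholders."""
    if noise_level_db < 40.0:   # quiet environment: nominal gain
        return 1.0
    if noise_level_db < 60.0:   # moderate noise: speaker talks louder
        return 0.8
    return 0.6                  # loud environment: reduce gain further

# Louder environments select a lower gain.
assert select_mic_gain(30.0) == 1.0
assert select_mic_gain(70.0) < select_mic_gain(50.0)
```

A real DSP implementation would update this estimate continuously from the microphone signal rather than from a single scalar level.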
- the CODEC 913 includes the ADC 923 and DAC 943 .
- the memory 951 stores various data including call incoming tone data and is capable of storing other data including music data received via, e.g., the global Internet.
- the software module could reside in RAM memory, flash memory, registers, or any other form of writable storage medium known in the art.
- the memory device 951 may be, but is not limited to, a single memory, CD, DVD, ROM, RAM, EEPROM, optical storage, or any other non-volatile storage medium capable of storing digital data.
- An optionally incorporated SIM card 949 carries, for instance, important information, such as the cellular phone number, the carrier supplying service, subscription details, and security information.
- the SIM card 949 serves primarily to identify the mobile terminal 901 on a radio network.
- the card 949 also contains a memory for storing a personal telephone number registry, text messages, and user specific mobile terminal settings.
Abstract
A method including causing, at least in part, display of selectable objects on a graphical user interface, where each of the selectable objects corresponds to data or an application accessible via the graphical user interface. The method further includes detecting a touch gesture on the graphical user interface and causing, at least in part, display of the selectable objects in motion travelling across the graphical user interface responsive to the detected touch gesture, where the selectable objects are displayed based on a category of the selectable object or context dependent data. The method further includes allowing user selection and manipulation of the selectable objects displayed on the graphical user interface.
Description
- This application is a continuation of U.S. application Ser. No. 12/651,071, filed Dec. 31, 2009, the contents of which are incorporated herein by reference in their entireties.
- Service providers and device manufacturers are continually challenged to deliver value and convenience to consumers by, for example, providing compelling services and a vast array of media and products. Service providers can provide various user interface applications for use on user equipment that enhance the user's interface experience with the user equipment and utilization of the various products and services offered by the service provider. For example, with the ever increasing capabilities of user equipment and the large amount of media content that is available today, users can have difficulty utilizing such equipment and searching through the vast amounts of data and applications accessible on the user equipment. Currently available user interface applications have limitations and thus fail to provide the user with an interface that allows the user to fully appreciate and utilize the various products and services offered by the service provider. In addition to being easy to use and simple, the modern user interface is an essential part of entertainment and media consumption, and thus it should also provide a playful and enjoyable experience. Strict effectiveness is not the only factor in measuring a good user interface. Combining an easy-to-use and effective user interface with playful and entertaining aspects is a challenging task, and there are no obvious and straightforward solutions. In order to provide an answer, a user interface designer has to take into account human behavioral factors.
- Therefore, there is a need for an approach for providing a fluid graphical user interface that combines a clean and simple interface with playful and entertaining factors.
- According to one embodiment, a method comprises causing, at least in part, display of selectable objects on a graphical user interface, where each of the selectable objects corresponds to data or an application accessible via the graphical user interface. The method further comprises causing, at least in part, display of the selectable objects in motion travelling across the graphical user interface based on a category of the selectable object or context dependent data, and allowing user selection and manipulation of the selectable objects displayed on the graphical user interface.
- According to another embodiment, an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: cause, at least in part, display of selectable objects on a graphical user interface, wherein each of the selectable objects corresponds to data or an application accessible via the graphical user interface; cause, at least in part, display of the selectable objects in motion travelling across the graphical user interface based on a category of the selectable object or context dependent data; and allow user selection and manipulation of the selectable objects displayed on the graphical user interface.
- According to another embodiment, a computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to at least perform the following steps: causing, at least in part, display of selectable objects on a graphical user interface, wherein each of the selectable objects corresponds to data or an application accessible via the graphical user interface; causing, at least in part, display of the selectable objects in motion travelling across the graphical user interface based on a category of the selectable object or context dependent data; and allowing user selection and manipulation of the selectable objects displayed on the graphical user interface.
- According to another embodiment, an apparatus comprises means for causing, at least in part, display of selectable objects on a graphical user interface, where each of the selectable objects corresponds to data or an application accessible via the graphical user interface. The apparatus further comprises means for causing, at least in part, display of the selectable objects in motion travelling across the graphical user interface based on a category of the selectable object or context dependent data, and means for allowing user selection and manipulation of the selectable objects displayed on the graphical user interface.
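The method of the embodiments above can be sketched in Python. The class and method names below are invented for illustration and are not part of the claimed apparatus; a real implementation would sit on top of a platform UI toolkit:

```python
from dataclasses import dataclass

@dataclass
class SelectableObject:
    name: str        # the data item or application the object represents
    category: str    # e.g. "music", "contacts", "photos"
    x: float = 0.0   # current position on the display
    y: float = 0.0

class FluidUI:
    """Sketch of a graphical user interface whose selectable objects
    travel across the display in response to a touch gesture, with the
    displayed set chosen by category (or, analogously, context data)."""

    def __init__(self, objects):
        self.objects = list(objects)

    def on_touch_gesture(self, category=None, dx=1.0, dy=0.0):
        # Select the objects to display based on category, then set
        # them in motion across the interface.
        shown = [o for o in self.objects
                 if category is None or o.category == category]
        for obj in shown:
            obj.x += dx
            obj.y += dy
        return shown  # these objects remain selectable by the user

ui = FluidUI([SelectableObject("Player", "music"),
              SelectableObject("Alice", "contacts")])
moving = ui.on_touch_gesture(category="music")
assert [o.name for o in moving] == ["Player"]
```

User selection and manipulation would then operate on the returned objects, e.g. by hit-testing their updated positions against subsequent touch events.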
- Still other aspects, features, and advantages of the invention are readily apparent from the following detailed description, simply by illustrating a number of particular embodiments and implementations, including the best mode contemplated for carrying out the invention. The invention is also capable of other and different embodiments, and its several details can be modified in various obvious respects, all without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
- The embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings:
-
FIG. 1 is a diagram of a system capable of providing a fluid graphical user interface, according to one embodiment; -
FIG. 2 is a diagram of the components of user equipment including a user interface widget, according to one embodiment; -
FIG. 3A is a flowchart of a process for providing a fluid graphical user interface, according to one embodiment; -
FIG. 3B is a flowchart of a process for providing a fluid graphical user interface allowing display of categorized objects, according to one embodiment; -
FIG. 3C is a flowchart of a process for providing a fluid graphical user interface allowing selection and manipulation of objects shown in the graphical user interface, according to one embodiment; -
FIGS. 4A-4C are diagrams of graphical user interfaces depicting the processes ofFIGS. 3A-3C , according to various embodiments; -
FIG. 5 is a diagram of a graphical user interface, according to various embodiments; -
FIGS. 6A-6C are diagrams of mobile devices displaying graphical user interfaces, according to various embodiments; -
FIG. 7 is a diagram of hardware that can be used to implement an embodiment of the invention; -
FIG. 8 is a diagram of a chip set that can be used to implement an embodiment of the invention; and -
FIG. 9 is a diagram of a mobile terminal (e.g., handset) that can be used to implement an embodiment of the invention.
- Examples of a method, apparatus, and computer program for providing a fluid graphical user interface are disclosed. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It is apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention.
- Although various embodiments are described with respect to their use on mobile devices such as cellular telephones, it is contemplated that the approach described herein may be used with any other type of user equipment and/or in conjunction with use on a server, such as a service provider server or any other type of server.
-
FIG. 1 is a diagram of a system capable of providing a fluid graphical user interface, according to an embodiment. As shown in FIG. 1, the system 100 comprises user equipment (UE) 101A . . . 101N and 103 having connectivity to a communication network 105. Also a service provider server 107 is provided that is also connected to communication network 105. In this figure, UE 101A . . . UE 101N, UE 103, and service provider 107 are each shown as including a user interface widget 109A . . . 109N, 103A, and 111, respectively; however, it is contemplated that such a widget need not be provided in each but rather it could alternatively be provided in one or any combination of more than one such apparatuses. By way of illustration and not limitation, UE 101A could be provided as a mobile device having user interface widget 109A, and such UE 101A could provide the user interface displays described herein without the need for any other user interface widget. Thus, if a user is utilizing the user interface display on UE 101A, the UE 101A can utilize the user interface widget 109A in order to provide such a display, or the user interface widget 103A or the user interface widget 111, or a combination thereof, depending on whether the widget is being run locally or remotely. Also, by way of illustration and not limitation, UE 103 is shown as being connected to UE 101A by a dashed line, which can be any form of wireless or wired connection, such as, for example, when a mobile device is connected with a computer for syncing, etc. - By way of example, the
communication network 105 of system 100 includes one or more networks such as a data network (not shown), a wireless network (not shown), a telephony network (not shown), short range wireless network (not shown), broadcast network (not shown) or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network. In addition, the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (WiFi), satellite, mobile ad-hoc network (MANET), wireless LAN (WLAN), Bluetooth® network, Ultra Wide Band (UWB) network, and the like. - The UEs 101A . . . 101N and 103 are any type of mobile terminal, fixed terminal, or portable terminal including a mobile handset, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, communication device, desktop computer, laptop computer, Personal Digital Assistants (PDAs), audio/video player, digital still/video camera, game device, analog/digital television broadcast receiver, analog/digital radio broadcast receiver, positioning device, electronic book device, or any combination thereof. It is also contemplated that the UEs 101A . . .
101N can support any type of interface to the user (such as “wearable” circuitry, etc.).
- By way of example, the UEs 101A . . . 101N, 103, and
service provider 107 can communicate with each other and other components of the communication network 105 using well known, new or still developing protocols. In this context, a protocol includes a set of rules defining how the network nodes within the communication network 105 interact with each other based on information sent over the communication links. The protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information. The conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model. - Communications between the network nodes are typically effected by exchanging discrete packets of data. Each packet typically comprises (1) header information associated with a particular protocol, and (2) payload information that follows the header information and contains information that may be processed independently of that particular protocol. In some protocols, the packet includes (3) trailer information following the payload and indicating the end of the payload information. The header includes information such as the source of the packet, its destination, the length of the payload, and other properties used by the protocol. Often, the data in the payload for the particular protocol includes a header and payload for a different protocol associated with a different, higher layer of the OSI Reference Model. The header for a particular protocol typically indicates a type for the next protocol contained in its payload. The higher layer protocol is said to be encapsulated in the lower layer protocol.
The headers included in a packet traversing multiple heterogeneous networks, such as the Internet, typically include a physical (layer 1) header, a data-link (layer 2) header, an internetwork (layer 3) header and a transport (layer 4) header, and various application headers (layer 5, layer 6 and layer 7) as defined by the OSI Reference Model.
- One or more embodiments described herein relate to multimodal user interface (UI) concepts and graphical UIs, and can act as a replacement for current UIs, up to and including the entire UI framework.
- Typically, GUIs are intended to simplify navigation and make it easier to find and manipulate things. The desktop metaphor used in personal computers, for example, is a common example of a GUI. For smaller screens, such as on mobile telephones, personal digital assistants (PDAs), digital media players, etc., the metaphors differ slightly, for example, an idle screen or an application view arrangement. Even with these variations, they are based on the same basic principle as typical text menu based UIs, where a user has to actively navigate through various menus in order to find things, which means that the user has to know what he or she is after. However, if users are not sure of what they are looking for, then it is difficult for them to find it in the various menus. If the user wants to customize the idle screen, then the user is forced to find the correct place in the settings to do so, and different aspects of the screen may even be controlled in multiple places. One problem for such UIs is the configuration and adaptation of the UI to user preferences. Typically, settings and configuration controls are in a different view or in a different mode of operation; therefore, the user has to open separate settings dialogs, change settings, and close the settings dialogs before normal UI operations can continue. Such procedures distract the user and increase the difficulty of performing settings changes, thereby reducing the effectiveness of the system. Therefore, an improved UI is desired.
- Also, human beings are not always rational. They can act spontaneously based on associations or stimuli. In other words, a user can decide to do something when he or she sees something. For many decisions and actions, human beings need a stimulus or some triggering event, and very static surroundings provide little such stimulus. People may therefore go to shops to spontaneously browse shelves, without knowing what they would like to buy. The same concept applies to computers and smart phones, where the user may simply want to browse through various applications and data on the device without having a specific destination in mind. When browsing in this manner, the user's interest may suddenly be triggered by some association, whereby the user connects a visual item on the shelf or on the device to some older memory, and based on this association the user decides to buy the product or open the data/application.
- Traditional UIs do not support the spontaneous human behavior described above. If a user is familiar with the system and its navigation structure and has some specific task in mind, then traditional GUIs are rather well suited for the task. However, if the user is unfamiliar with the system and navigation structure, then traditional UIs can be very difficult for the user to utilize to their fullest potential. Also, if the user just wants to kill some time or do a “window shopping” kind of activity, such activities are not well supported by traditional UIs. The device may contain functions or data the user is not aware of, and thus cannot even find. Static graphical presentations do not trigger new associations. Embodiments described herein advantageously provide GUIs that include a randomness aspect and provide a type of “living” functionality, which feeds the user new “fuel” for associations in order to trigger new and even unexpected events.
- Modern mobile devices are typically relatively small, and therefore pose challenges for typical GUIs. The screen is typically small and cannot hold very much information at a time. Limited screen space usually leads to difficult navigation through deep menus, which may also lead to loss of position and uncertainty about how to get back or how to find items in complex menu structures. Modern devices often use a desktop metaphor (e.g., windows), home screens or idle screens (e.g., the S60 idle screen that runs on the Symbian OS (operating system)), in which there are typically a few icons or widgets, which the user can usually configure. However, modern mobile devices have many functionalities and can store a lot of data, so selecting only a few widgets for the screen is difficult, and the screen can fill up quickly. Thus, it can be difficult to fit all the frequently used applications, links and contacts into a single small screen. One approach is to use several idle screens and provide means for switching the view easily. However, switching views makes navigation more difficult, and the user may get lost if the view is changed accidentally. View switching is a fast and sudden operation that is inconvenient for the user; a suddenly changing view may cause stress, and the user may lose the feeling of knowing where he or she is within the structure of the GUI and be left uncertain of the way back to a known location within the GUI. Also, those views can create a new hierarchical navigation layer, a “list of views,” so the user is back to navigating traditional UI structures. Adding those views just increases the layers the user has to navigate, thus making navigation even more complex.
- Accordingly, embodiments of the GUI described herein advantageously provide a new and unique way to present data and device functions (or applications) to the user, which takes into account the association process by which the human brain processes inputs. The GUI presents data and applications as “objects” that flow across the display in a fluid manner so as to provide the user with a novel manner in which to access and utilize the data and applications. With this GUI, data and applications navigate or flow to the user, so the user has only to wait like a hunter and strike when he or she sees the target.
- As used herein, the “objects” can be any piece of data (e.g., contact information, pictures, videos, movies, music, messages, files, calendar entries, web links, game data, electronic books, television channels and/or programs, radio broadcasting channels and/or programs, media streams, point of interest (POI) information, data regarding various products for sale online (such data being used to identify the products during an online shopping search), etc., or any combination thereof), any application (contact database, calendar, mapping function, clock, control panel or tools for user customizable settings, media player, games, web browser, camera, etc., or any combination thereof), or groups containing data and/or application objects, that are accessible by the user utilizing the GUI (e.g., stored locally on the device, and/or remotely stored and accessible using the device). The GUI system can treat all objects in the same way regardless of their content, and high level manipulation of any object can be identical. Therefore, the GUI can present objects from different categories and abstraction layers, and those objects can be manipulated in the same manner regardless of their category or abstraction layer. For example, the user can create object groups, where contacts, links, applications, music, etc. can be within one group. There are no artificial system-decided categories or boundaries to grouping; rather, such groupings are up to the user as to how he or she uses and associates different data and applications available in the system. Objects are content agnostic, and thus the user manipulates objects in the same way independent of the content. Only after selection of an object from the flow does the user have some object specific actions available.
However, for general object manipulation on the GUI (e.g., grouping of various objects, “pinning” or fixing the location of an object on the GUI, “kicking” or removing an object from the GUI, dragging or moving an object on the GUI), all the objects behave in the same way.
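- By way of illustration only, the content-agnostic object model described above can be sketched as follows. This is a hypothetical sketch, not the patent's implementation; the class name FlowObject and its fields and methods (pin, kick, link) are assumptions introduced for this example:

```python
from dataclasses import dataclass, field

# Illustrative sketch: every piece of data or application is wrapped in one
# content-agnostic type, so high-level manipulation (pinning, kicking,
# grouping) is identical for all content.
@dataclass
class FlowObject:
    name: str
    content: object                          # any payload: contact, song, app, ...
    pinned: bool = False                     # fixed in place on the GUI
    removed: bool = False                    # "kicked" out of the flow
    group: set = field(default_factory=set)  # user-made associations

    def pin(self):
        self.pinned = True

    def kick(self):
        self.removed = True

    def link(self, other):
        # Grouping is symmetric and ignores content type: a contact can be
        # linked to a music album, a web link to a recipe, and so on.
        self.group.add(other.name)
        other.group.add(self.name)

contact = FlowObject("Alice", {"type": "contact"})
album = FlowObject("Blue Album", {"type": "music"})
contact.link(album)            # the same call works for any two objects
album.pin()
print(sorted(contact.group))   # ['Blue Album']
```

Because the operations never inspect the content payload, a group can freely mix contacts, music, links, and applications, as the paragraph above describes.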
- As used herein, a “source” is a category designation that is used to generate a flow of objects within that category on the GUI. The source can be a labeled or non-labeled area of the display screen, from which objects start flowing across the GUI. The source can be broadly defined to include all data and applications accessible by the GUI, or it can be more narrowly categorized by application (e.g., all applications, mapping applications, messaging applications, media playing applications, etc.), by a data item (e.g., all applications and data that have some predefined relationship to a particular contact entry, such as all photos, messages, etc. from the contact entry), and/or by a data category (e.g., music, pictures, media recorded within a particular time period, etc.). There can be one or more different sources of objects provided on the GUI at any given time, and the user can manipulate the source(s) by activating or deactivating the source, by defining the location of the source on the GUI and the direction of flow of objects therefrom, and by defining the boundaries of the source (e.g., if the source is music, then the user could limit it to a certain genre, or to certain recording date(s), or to certain artists, etc.). In certain embodiments, the sources are located on the edge of the screen and are labeled using a transparent bar. The user may, for example, activate a source on the left side of the GUI by making a left-to-right stroke motion across a touch screen, and, after that, objects that are associated with that source begin flowing across the GUI from left to right.
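- The source concept described above can be sketched as a category predicate over the pool of accessible objects. This is an illustrative sketch only; the Source class, its fields, and the dictionary layout of the object pool are assumptions for this example:

```python
# Illustrative sketch: a "source" is modeled as a labeled predicate; when
# active, it emits only the matching objects, which then flow onto the GUI.
class Source:
    def __init__(self, label, predicate):
        self.label = label
        self.predicate = predicate
        self.active = False

    def flow(self, pool):
        # An inactive source emits nothing.
        if not self.active:
            return []
        return [obj for obj in pool if self.predicate(obj)]

pool = [
    {"name": "song.mp3", "kind": "music", "genre": "jazz"},
    {"name": "pic.jpg", "kind": "picture"},
    {"name": "blues.mp3", "kind": "music", "genre": "blues"},
]

music = Source("Music", lambda o: o["kind"] == "music")
music.active = True                            # e.g. via a left-to-right stroke
print([o["name"] for o in music.flow(pool)])   # ['song.mp3', 'blues.mp3']

# The user can narrow the boundaries of the source, e.g. to one genre:
jazz = Source("Jazz", lambda o: o.get("genre") == "jazz")
jazz.active = True
print([o["name"] for o in jazz.flow(pool)])    # ['song.mp3']
```

Narrowing a source (a genre, a date range, an artist) then amounts to swapping in a stricter predicate, while deactivating it silences the flow entirely.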
- To find a specific object from a source, the user is provided with means to filter the stream of objects flowing onto the screen. All search methods are available for all content types, when applicable. If the user learns to search for a contact by using a text based search, he can apply this skill to any object that contains something corresponding to the search text string. In certain embodiments, there are no separate interfaces for searching for a contact, a message, or any other kind of content. Some search methods are, of course, better suited to searching for specific content types; however, it is up to the user to decide the methods, and thus the system does not set predefined restrictions on the user. As an example, the S60 operating system provides a text based search for finding a contact. However, that search is available only for contacts, and other content types have different means and ways of searching. In various embodiments of the present invention, these high level object manipulation and search methods are the same for all objects and contents. The system simply provides a set of search methods, and it is up to the user how he applies them to the objects available.
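- The uniform text search described above can be sketched as one filter applied to every content type alike. This is a hypothetical sketch; the matches function and the dictionary layout of the objects are assumptions for illustration:

```python
# Illustrative sketch: one text-based filter is applied uniformly to every
# content type, so there is no separate search interface per category.
def matches(obj, query):
    q = query.lower()
    return any(q in str(v).lower() for v in obj.values())

pool = [
    {"kind": "contact", "name": "Anna Smith"},
    {"kind": "message", "text": "Lunch with Anna?"},
    {"kind": "music", "title": "Annabelle"},
    {"kind": "picture", "title": "Sunset"},
]

hits = [o for o in pool if matches(o, "anna")]
print(len(hits))  # 3: a contact, a message, and a song all match the same query
```

The same skill (typing a text query) thus carries over from contacts to messages to music, which is the point of the paragraph above.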
- In fact, any object can act as a source. The user can transform any flowing object into a source by activating the object. In one embodiment of this activation, the user drags the flowing object into a specific area of the screen. As mentioned above, sources can be placed on the side of the screen. If the user drags some flowing object into that position, it will transform itself into a source and start producing content to the flow. That content can be anything that is somehow associated with that object. For example, a group object, when acting as a source, will create a flow of objects that belong to that group, like a group of contacts. The user can also drag an individual contact to act as a source. That contact, acting as a source, can then flow out relevant contact dependent data, like friends of that contact, or pictures relating to that contact. In other words, when any object is acting as a source, it will show the user the associations existing for that object. Once the object is no longer needed as a source, the user can drag it out from the “source area” of the screen, and the object will then dive into the flow and start flowing as it did before it was dragged to act as a source. A source can thus also be interpreted as one form, or state, of any object. An object either flows across the screen, acting as itself, or the object acts as a source, presenting all associations relevant to that object.
- These source elements, or objects acting as a source, can also be stacked on the screen as a “stack of cards.” If the user has put some object on the side of the screen to act as a source, then he can drag a new object on top of the old source, and that new object will start acting as the source. However, when the user drags the latest object away from the source area, the original object under the new one will activate itself again. The user can stack an unlimited number of objects into the source stack, and instead of taking objects out of the stack one by one, the user can also flip through the stack like flipping through a deck of cards. The top object visible on the stack is always the active one and produces content to the flow. This flipping of the source stack can be implemented on the touch screen by gestures or strokes mimicking the real flipping actions of a user's hand or finger.
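- The “stack of cards” behavior described above can be sketched as a simple stack where only the topmost element produces content. The SourceStack class and its method names (push, pop, flip, active) are illustrative assumptions, not from the patent:

```python
# Illustrative sketch of the source area as a stack of cards: objects dragged
# onto the source position are stacked, and only the topmost one is active.
class SourceStack:
    def __init__(self):
        self._stack = []

    def push(self, source):
        # Dragging a new object on top of the old source.
        self._stack.append(source)

    def pop(self):
        # Dragging the top object away reactivates the one beneath it.
        return self._stack.pop() if self._stack else None

    def flip(self):
        # Flipping moves the top card to the bottom, like a deck of cards.
        if self._stack:
            self._stack.insert(0, self._stack.pop())

    def active(self):
        # Only the top object produces content to the flow.
        return self._stack[-1] if self._stack else None

stack = SourceStack()
stack.push("contacts")
stack.push("music")
print(stack.active())  # music
stack.flip()
print(stack.active())  # contacts
```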
- The GUI introduces a dynamic idle-screen type of interface, in which objects are flowing across the screen, and in which human interaction and/or context related data (e.g., location of the device, time of day, etc.) can affect the flow of objects and/or the category definition of the source from which the objects flow. Objects will appear on the sides of the screen, flow across the screen, and then disappear off another side of the screen if the user does not access or manipulate them. The user has full control of the flow (e.g., speed, direction, content, size of objects, number of (moving and/or static) objects visible simultaneously at any given time, pattern of flow, etc.), so the user can speed it up, “kick” unwanted objects out, “pin” objects at a location on the GUI, move objects on the GUI, select an object and perform actions related to that object, etc. The user can also control the type of objects flowing past his or her vision with some simple multimodal actions or gestures, such as strokes on a touch screen or speech, by activating and manipulating the sources on the edges of the GUI. The GUI therefore does not require view switching or deep menu structures as in traditional UIs, since the data and applications of the device simply flow past the user, and the user can act when he or she sees something interesting. The user can adapt the flow's content, speed, and type on the fly. The GUI system can learn the user's habits and preferences and adapt thereto, since the user can easily enable or disable objects or change the flow properties of the GUI to fit the user's needs or mood. Based on learned/tracked habits (e.g., selections made, associations made, objects kicked, etc.) of a user, the system can provide suggested objects to the user, for example, by increasing the frequency and/or priority of certain objects that correlate to the learned/tracked habits of the user.
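- The habit-based suggestion mechanism described above (increasing the frequency of objects that correlate to tracked habits) can be sketched as weighted random sampling. The FlowManager class and its fields are hypothetical names introduced for this example, not the patent's implementation:

```python
import random

# Illustrative sketch: objects the user has selected more often are emitted
# into the flow more frequently, without ever excluding the rest.
class FlowManager:
    def __init__(self, objects, speed=1.0, direction="left-to-right"):
        self.objects = list(objects)
        self.speed = speed              # user-adjustable flow speed
        self.direction = direction      # user-adjustable flow direction
        self.selections = {name: 0 for name in self.objects}

    def record_selection(self, name):
        # Track habits: every selection raises the object's future priority.
        self.selections[name] += 1

    def next_object(self, rng):
        # Weight = 1 + past selections, so frequently used objects surface
        # more often while seldom-used ones still appear occasionally.
        weights = [1 + self.selections[n] for n in self.objects]
        return rng.choices(self.objects, weights=weights, k=1)[0]

flow = FlowManager(["clock", "camera", "contacts"])
for _ in range(9):
    flow.record_selection("contacts")   # weight becomes 10 vs. 1 and 1

rng = random.Random(0)                  # seeded for a reproducible sample
sample = [flow.next_object(rng) for _ in range(1200)]
print(sample.count("contacts") > sample.count("clock"))  # True
```

The baseline weight of 1 reflects the text's point that uninteresting objects are never hidden outright; they merely appear less often than habitual ones.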
- The user can easily set some objects to be static in order to stop them from flowing, and can move them to a desired location on the GUI by dragging. The user can further lock a static object in place, which disables dragging of the object and thereby prevents accidental relocations. The user can also unlock an object and/or set it in motion by “removing the pin,” after which the object will move away with the flow. Adding new static elements simply involves pinning down objects from the flow with simple user gestures or other actions. The user has total control of how many static items are on the screen and what kind of data is flowing across the screen.
- In certain embodiments, there are no predefined fixed elements on the screen; rather, everything can be enabled or disabled on the fly, as desired and manipulated by the user. The GUI can be in constant motion until the user stops it or limits the flow. The GUI can continuously provide new excitation to the user. Without any active user action, the GUI system can gradually present all the data and applications to the user. If something appears that is not interesting to the user, then the user can explicitly discard or remove it with some simple action, thus indicating to the GUI system that the object is not interesting or not wanted. Because all visual objects (except those that are pinned) have a finite visible life span, even objects that are uninteresting to the user will disappear and therefore do not create a constant nuisance to the user. The GUI system can propose some intelligent guesses for objects that are displayed based on the user's past use of the GUI and objects that were previously selected by the user.
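- The pinning behavior and finite visible life span described above can be sketched as a single animation tick: unpinned objects drift with the flow and vanish once off-screen, while pinned objects stay put. The screen width, coordinates, and field names here are illustrative assumptions:

```python
# Illustrative sketch of one animation tick: unpinned objects advance with
# the flow and disappear once they travel off-screen, giving each object a
# finite visible life span; pinned objects stay where the user put them.
SCREEN_WIDTH = 100

def tick(objects, dx=5):
    survivors = []
    for obj in objects:
        if not obj["pinned"]:
            obj["x"] += dx                 # drift with the flow
        if obj["x"] <= SCREEN_WIDTH:       # off-screen objects vanish
            survivors.append(obj)
    return survivors

objs = [
    {"name": "pic", "x": 98, "pinned": False},
    {"name": "clock", "x": 50, "pinned": True},
]
objs = tick(objs)
print([o["name"] for o in objs])  # ['clock']: the picture flowed away
```

Because unwanted objects leave the screen on their own, the user never has to dismiss them explicitly, which is the "no constant nuisance" property described above.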
- In short, the GUI system can include a flow of objects, like a waterfall or a stream, which flows past the user's field of vision. The user can manipulate that stream and slowly adapt its behavior to fit the user's personal needs. The user can pick any interesting item from the stream and operate on it. The GUI system can also include sources, which can be defined by the user so that the user can control the categories of data and/or applications that are flowing in the stream. The user can shut down or open these flow sources as he or she sees fit. Since the GUI screen is used more actively, it can display more data and applications than a static UI, thus allowing for more effective use of the relatively small screens of mobile devices.
- Configuration of the flow is done in the same context as manipulation of objects, so there are no separate views for settings. Settings can be performed on the fly during normal flow of the GUI, thus making adjustments easier for the user.
- The GUI concept supports association of related events. It fits well with the basic way in which human memory works, as many human activities are triggered by a person associating two objects or events and acting based on that association. At first glance, the two objects may appear totally unrelated to an outside observer; however, they may trigger an association in a user's brain. Such associations might not be recognizable by any intelligent UI logic, and thus a UI might not be able to predict them; however, the GUI described herein facilitates such associations by providing the user with a dynamic and diverse display of objects that may trigger them, and allows the user to act on such associations. The GUI described herein provides several ways of harnessing and utilizing this association phenomenon.
- The GUI facilitates object associations. In object associations, a user sees two objects flowing in the GUI that are related to one another based on the user's experiences. For example, the user may see an object for a picture of a friend, and an object for an album that reminds the user of that friend, and the user may want to group the picture and the album together based on this association. While there are no predefined system rules to predict such an association, since it occurs in the user's mind, the GUI provides a flow of objects that can facilitate such associations being made by a user. When the user notices some relation between two objects, the user can start different activities based on that observation. For example, the user can group those objects together to make a link between them. Thus, the user can manipulate those objects together, or if the user later sees one of those objects alone on the GUI, then the user can quickly recover all the objects grouped/linked/associated with that object. In another example, the user may see some data and a contact entry simultaneously on the GUI, and decide to send that data to that contact. These associations can happen between any objects, and the system will not prevent the user from making “non-sensical” associations or groupings. Such associations are purely up to the whim of the user. For example, the user can connect a web link and a person, or a music album and a food recipe, if so desired. Also, objects with different abstraction levels can be combined. The GUI system just sees this process as a network of user generated associations and does not care what the content of the associated objects is. Thus, the user can group together contacts from a contact list with pictures, music albums, applications, etc. Certainly, the GUI system can intelligently propose some objects to the user and see whether the user finds some association between the proposed objects.
However, it is up to the user to make the association; the system can only try to help by creating some potential or probable stimulus. The system described herein supports this behavior very well: system-proposed items just flow past the user's vision, and if the proposal was incorrect, the objects simply flow away, no longer bothering the user. In a traditional system, static pop-up windows and icons start irritating the user if the life span of those proposals is too long. There are no such problems in the system described herein.
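- The network of user generated associations described above can be sketched as an undirected graph, where recovering every object grouped/linked to a given object is a simple traversal. The function names and the string labels for objects are illustrative assumptions:

```python
from collections import defaultdict

# Illustrative sketch: groupings form an undirected graph with no category
# rules, so a contact, an album, and a recipe can all be linked freely.
links = defaultdict(set)

def associate(a, b):
    links[a].add(b)
    links[b].add(a)

def related(obj):
    # Breadth-first walk over the association network to recover everything
    # grouped/linked to the given object, directly or transitively.
    seen, frontier = {obj}, [obj]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbor in links[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    nxt.append(neighbor)
        frontier = nxt
    seen.discard(obj)
    return seen

associate("photo:friend", "album:blue")   # objects of any type can be linked
associate("album:blue", "recipe:pasta")
print(sorted(related("photo:friend")))    # ['album:blue', 'recipe:pasta']
```

Seeing any one member of a group later in the flow is then enough to recover all of its associated objects, as the object-association paragraph above describes.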
- The GUI also facilitates context associations. In context associations, the user sees an object on the GUI and an association is triggered based on the user's context. For example, the user may see a contact on the GUI whom the user has not seen for a long time, suddenly notice that this person lives nearby, and decide to contact him or her. In another example, the user may be sitting in a restaurant and see a contact to whom the user has promised to offer lunch.
- The GUI also facilitates source associations. In source associations, the user associates certain objects with a specific source, which is located at a certain location of the GUI. Thus, the user will learn to expect that the source will produce certain kinds of objects. Also, sources need not be fixed, but rather can be adapted by the user, along with any associations that the user wants to define.
- In order to make these types of associations occur in a user's mind, the user needs some excitation to trigger this association process. Static home screens do not activate such association processes. Also, if the system is too deterministic, then it may never create some less frequently used associations. So the GUI described herein advantageously provides constant excitation and is partially deterministic, partially random, and user guided, which allows it to facilitate such associations. There are endless and even strange ways that human memory forms such associations, and the GUI described herein gives fuel for that process, rather than limiting it with too many artificial rules. The GUI is a tool that provides means for allowing a user to make his or her own associations, and to adapt to the way the user's memory works.
-
FIG. 2 is a diagram of the components of user equipment including a user interface widget, according to one embodiment. By way of example, the user interface widget 109A includes a control logic 201 that controls the widget and the fluid graphical user interface (GUI), an object and source manager module 203, a database 205, a setup manager module 207, an object flow manager module 209, and a presentation module 211. The object and source manager module 203 can manage a list of the objects for the GUI and the defined sources, and store such information in the database 205. The object and source manager module 203 can control the appearance of sources and objects based on user actions, and can determine different context information to influence the process. The setup manager module 207 can manage any user settings that are defined by the user for the GUI (e.g., size of objects, maximum number of objects that can be displayed at any given time, speed of flow of objects, etc.), and store such information in the database 205. The object flow manager module 209 can manage the flow of the objects based on inputs of the user, and store such information in the database 205. The object flow manager module 209 can control the number of objects visible simultaneously to avoid overloading user cognition with too many moving objects, and can handle system configurations in light of user actions that are performed during operation of the GUI. The control logic 201 can also monitor various actions of the user, and control the operation of the GUI based on usage history (e.g., frequently used contacts, albums, applications, etc. can be given priority on the GUI by increasing the frequency by which they are presented on the GUI, or by displaying them on the GUI first, etc.). - The
presentation module 211 can communicate with a display of a user interface 213 of the UE 101A to display the GUI. Additionally, the UE 101A includes a communication module 215 that allows the UI widget 109A to communicate with any remote device or server, if needed, in order to present objects on the GUI, or to utilize data or applications associated with the objects. Also, the UE 101A includes a database 217 that can be used to store data and applications. -
FIG. 2 depicts the user interface widget 109A provided in UE 101A in order to provide a GUI for data and applications locally stored on the UE 101A or accessible remotely from service provider 107 or another server or UE. Also, the user interface widgets 103A and 111 in UE 103 and service provider 107, respectively, can have the same components as user interface widget 109A, and thus can perform similar functions. The user interface widget 111 can have the same components as user interface widget 103A, and thus can provide, for example, a web-based GUI to any UE connected thereto via the communication network 105. Furthermore, such user interface widgets (or one or more components thereof) can be provided at various devices/servers, which can then be used in conjunction with each other to provide the GUI functionalities described herein. -
FIG. 3A is a flowchart of a process 300 for providing a fluid graphical user interface, according to one embodiment. FIG. 3B is a flowchart of a process 320 for providing a fluid graphical user interface allowing display of categorized objects, according to one embodiment. -
FIG. 3C is a flowchart of a process 340 for providing a fluid graphical user interface allowing selection and manipulation of objects shown in the graphical user interface, according to one embodiment. In one embodiment, the user interface widget (e.g., 103A, 109A . . . 109N, and/or 111) performs the processes and is implemented in, for instance, a chip set including a processor and a memory as shown in FIG. 8. FIGS. 4A-4C are diagrams of graphical user interfaces depicting the processes of FIGS. 3A-3C, according to various embodiments. - In
step 301 of the process 300 for providing a fluid graphical user interface, various selectable objects are caused to be displayed on a GUI, where the objects correspond to data or application(s) accessible via the GUI. In step 303, the selectable objects are caused to be displayed in motion travelling across the GUI based on a category or “source” of the selectable object. Thus, as shown in FIG. 4A, a graphical user interface 400 is displayed that includes a screen area 401 in which various objects are displayed. In this instance, the category or source is broadly defined as any data or application accessible via the GUI, and therefore objects representing all data and applications accessible via the GUI will be cycled across the display screen. In this embodiment, at the outset (i.e., without any user input) each of the objects will, by default, be in motion moving from left to right, such that they appear on the left side of the display screen area 401, move across the display screen area 401, and disappear off the right side of the display screen area. FIG. 4A depicts objects that include message icons 403A, 403B, music icons 405A, 405B, a clock icon 407 that displays the current time, a calendar icon 409, a picture icon 411, a contact icon 413, a store icon 415, and a grouped icon 417. In step 305 of process 300, user selection and manipulation of selectable objects displayed on the GUI is allowed. In the instance shown in FIG. 4A, for example, the store icon 415 has been selected by the user (e.g., by touch screen or other button command, by voice command, etc.) and pinned at a desired location on the display screen area 401 such that it is no longer moving. Also, the music icon 405B has been selected by the user and fixed at a desired location, and the music associated with the music icon 405B has been instructed to play via an audio output, as can be seen by the triangle symbol shown in the center of the music icon 405B that indicates that the music icon is in a playback mode.
Similarly, the user could select other icons, which could have an editor or viewer associated therewith, which would be activated by such selection. Also, the music and contact icons that make up the grouped icon 417 have been selected and linked together by the user. The remaining icons continue to flow in the left-to-right direction, as indicated by the directional arrows shown in FIG. 4A. In another embodiment, the flow is paused while the user has selected some object, in order not to distract the user while he is performing an operation on that object. After the user operation is completed, the flow resumes. The directional arrows are merely shown to indicate the direction of flow in the static screenshot of FIG. 4A, and would not need to be displayed in a working GUI. - It should be noted that the selectable objects, as described herein, can, at various times, be either visible (e.g., while travelling across the GUI) or non-visible (e.g., after the selectable object has travelled out of the field of vision of the GUI).
- While
FIG. 4A depicts the objects as generic icons, the objects can be displayed such that the content represented by the object is shown. For example, the message icons 403A, 403B could display the sender of the message and a portion of the message; the music icons 405A, 405B could display the album name, artist name, track name, album artwork, genre, etc. of the music, or could be a generic representation of the music library application; the calendar icon 409 could display a description of a meeting or date reminder, location, attendees, etc., or could be a generic representation of the calendar application; the picture icon 411 could display a thumbnail view of the picture, title of the picture, date/time stamp, etc., or could be a generic representation of camera, photo album, or photo editing applications; a video icon could display a static or streaming thumbnail of the video, the video name, etc., or could be a generic representation of video camera, video library, or video editing applications; the contact icon 413 could display a description of a contact entity, a thumbnail picture of the contact entity, etc., or could be a generic representation of the contact list application; the store icon 415 could display an item for sale with information such as price, etc., or could generically display the store logo and name, contact information, etc.; and the group icon 417 could display a name given to the group, etc. Also, the objects can each travel at the same speed, or at different speeds, or each group or category can travel at a different speed with the objects in each group travelling at the same speed, or any combination thereof. The objects can travel in a straight line in any direction, or the objects can travel in a non-straight path in a consistent or random pattern, or any combination thereof. The objects can be shown at different fields of depth within the GUI, such that certain objects or groups are presented in front of others.
- Also, as noted previously, the characteristics of the flow of the objects across
GUI 400 can be controlled by user input. For example, the user can select the direction of flow, the speed of flow, the pattern of flow, and the number of objects simultaneously shown on the display screen. Such user preference selections can be made using gestures, such as swiping motions across a touch screen (e.g., if the user prefers the flow to be from right-to-left then the user can swipe across the touch screen from right to left, etc.), or tilting the mobile device (e.g., where the angle and/or direction of tilt control the direction and speed of flow), etc., or using input commands, such as buttons, touch screen selections, voice commands, etc. A toggle could be provided that enables and disables such inputs to control the user preferences, for example, so that a user can enable such inputs, then make adjustments by gestures/commands, and then disable such inputs, so that use of the mobile device does not make unwanted changes to such user preferences. Also, the user can access such selection options via an object representative of such options, and/or by accessing a selection options menu. - In
step 321 of the process 320 for providing a fluid graphical user interface allowing display of categorized objects, a user is allowed to select a first selectable object and move the first selectable object to an area on the GUI for use as a first category or "source." Thus, as shown in FIG. 4B, the user selects contact icon 413A, and drags the contact icon 413A (as shown by the dashed arrow) to an area along the upper edge of the display screen 401. Therefore, the contact icon 413A is used as a category or source of objects that flow therefrom. Any object can be used as a source for the GUI. In step 323, a category bar is caused to be displayed in the area, as can be seen by the contacts bar 419 in FIGS. 4B and 4C, which can be transparent. The bar can be labeled by the GUI automatically, or can be labeled by the user, for example, by the GUI popping up a transparent keypad or keyboard on the display that allows the user to enter the label. In step 325, categorized selectable objects that have a relation to the first category of the first selectable object are caused to be displayed in motion travelling across the GUI. Thus, as shown in FIG. 4C, contact icons flow from the upper side of the display screen 401 towards the lower side of the display screen 401. In this embodiment, the objects that do not fall within the contact grouping continue to flow in a left-to-right direction (unless they have been pinned). Alternatively, it is also possible to clear the screen of the objects not falling within the category defined by the first selectable object. This may depend on the screen size; on a very small screen it may be beneficial to clear the screen, thus making more room for the objects belonging to the category defined by the latest user selection.
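The partitioning in step 325 — objects related to the source category flow from the new bar while the rest keep flowing or are cleared — can be sketched as a filter. The `category` and `links` keys are illustrative metadata fields, not from the patent:

```python
def categorized_flow(objects, source):
    """Split objects into those matching the dragged-out source's category
    and the remainder.

    Objects matching the source category (or explicitly linked to the source)
    would flow from the new category bar; the rest keep flowing, or are
    cleared on small screens.
    """
    matching, remaining = [], []
    for obj in objects:
        if obj.get("category") == source.get("category") or obj in source.get("links", []):
            matching.append(obj)    # will flow from the category bar
        else:
            remaining.append(obj)   # keeps flowing, or is cleared on small screens
    return matching, remaining
```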
The contact icon 413A could be representative of the overall contact list application, in which case all contact entries will flow from the contact bar 419, or could be representative of a specific contact entry, in which case any objects that are related to the contact entry (e.g., pictures/videos of or from that person, objects grouped/linked to that entry, messages from that person, calendar entries related to that person, etc.) will flow from the contact bar 419. -
FIG. 3C is a flowchart of the process 340 for providing a fluid graphical user interface allowing selection and manipulation of objects shown in the graphical user interface, according to various embodiments. In step 341, the GUI determines if a selectable object has been selected. If no selection has been made, then the process simply continues monitoring for user input until such a selection is made. If an object is selected, then the GUI determines the nature of the manipulation of the selected object as commanded by the user's input. For example, in step 343, the GUI determines whether the user has instructed the GUI to fix the object at a location on the GUI, for example by dragging the object to a location and fixing and/or locking the object at that location. If the GUI has been instructed to fix the object, then in step 345 the GUI causes the display of the selected object at that fixed location on the GUI. If the GUI has not been instructed to fix the object, then, in step 347, the GUI determines whether the user has instructed the GUI to remove the object, for example by using a flicking motion on the touch screen to quickly remove the object from the display screen. If the GUI has been instructed to remove the object, then in step 349 the GUI causes the removal of the selected object from the GUI. If the GUI has not been instructed to remove the object, then, in step 351, the GUI determines whether the user has instructed the GUI to associate (or group or link) the selected object with another object, for example by dragging the selected object over the other object using the touch screen. If the GUI has been instructed to associate the object with another object, then in step 353 the GUI causes the display of the associated objects on the GUI. If the GUI has not been instructed to associate the object, then, in step 355, the GUI can cause the deselection of the selected object after a predetermined time period has elapsed. -
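The decision chain of FIG. 3C can be sketched as a dispatch on the recognized input; `command` strings and the `gui` method names are illustrative assumptions:

```python
def handle_selection(gui, obj, command):
    """Dispatch the manipulation of a selected object per the FIG. 3C flow.

    `command` is an illustrative string from the input layer ("fix", "remove",
    "associate"); anything else leaves the object to time out and deselect.
    """
    if command == "fix":
        gui.fix(obj)                # steps 343/345: pin at a location
    elif command == "remove":
        gui.remove(obj)             # steps 347/349: flick off the screen
    elif command == "associate":
        gui.associate(obj)          # steps 351/353: group/link with another object
    else:
        gui.deselect(obj)           # step 355: deselect after a timeout
```

The checks run in the same order as steps 343, 347, and 351, with step 355's timeout deselection as the fall-through case.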
FIG. 5 is a diagram of a graphical user interface 500, according to various embodiments. The GUI 500 includes a display screen 501, and various category or source bars provided around the edges of the screen. For example, the GUI 500 includes a music bar 503A, a messages bar 503B, a contacts bar 503C, an applications bar 503D, a pictures bar 503E, and undefined or inactive bars. -
FIG. 5 depicts objects that include a pinned clock icon 505, music icons, message icons, a contact icon 511, a grouped/link icon 513, a computer file icon 515, a calendar icon 517, shopping or store icons 519A (which is pinned), 519B, a mapping icon 521, and picture icons 523A (which is pinned), 523B. The objects can flow from the side having the category or source bar from which they are generated. For example, the music icons, as well as the grouped/link icon 513 that contains a music icon, flow from the left side of the display screen 501 where the music bar 503A is located. Also, the message icons flow from the side at which the messages bar 503B is located, and the contact icon 511 flows from the upper side at which the contacts bar 503C is located. Various application icons, such as the computer file icon 515 (which can be representative of a word processor application, etc.), the calendar icon 517, the shopping or store icons 519A (prior to pinning), 519B, and the mapping icon 521 flow from the upper side at which the applications bar 503D is located. The picture icons 523A (prior to pinning), 523B flow from the upper side at which the pictures bar 503E is located. Similarly, the user could define bars to a particular category or source such that objects flow from the right side of the display screen 501 towards the left side, and could define bars 503H and 503I to a particular category or source such that objects flow from the lower side of the display screen 501 towards the upper side. Also, the user can adjust any of these default flow directions as desired on the fly using gestures and/or commands. Alternatively, it is also possible that, if all the objects that fit within the active selected categories and filter sets are already visible on the screen, the flow can automatically stop because no new objects are coming into the screen. In other words, the user may have selected some category that has only a few matching items, and when those items have moved into the visible screen, the flow may be stopped in order to make manipulation of the objects easier. As an extreme case, there may be just a single contact that fits into the selected categories.
In that case, when that contact has appeared on the screen, the flow is stopped to wait for user action on that contact. However, as soon as there are non-displayed objects fitting within the selected categories, the flow will be activated again. -
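The automatic stop-and-restart condition described above reduces to a single predicate: flow runs only while some matching object is not yet on screen. The field names are illustrative:

```python
def flow_active(matching_objects, visible_ids):
    """Flow continues only while some matching object is not yet on screen.

    When every object that fits the selected categories and filters is
    visible, the flow stops to ease manipulation; it restarts as soon as a
    non-displayed match exists.
    """
    return any(obj["id"] not in visible_ids for obj in matching_objects)
```

An animation loop would evaluate this each frame and skip movement of source-emitted objects whenever it returns False.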
FIGS. 6A-6C are diagrams of mobile devices displaying graphical user interfaces, according to various embodiments. -
FIG. 6A depicts a mobile device 600 that includes a display screen, such as a touch screen display, that is displaying a graphical user interface 601. The mobile device 600 also includes various user input devices, such as buttons 603. The GUI 601 includes a display screen 605, and various category or source bars provided around the edges of the screen. For example, the GUI 601 includes an applications bar 607, a music bar 609, and a web bar 611. The GUI 601 also has various selectable objects that flow across the display screen 605. -
FIGS. 6B and 6C depict the mobile device 600 and GUI 601 thereof in a slightly different configuration, such that a contacts bar and a messages bar are provided as sources on a left side of the display screen thereof. FIGS. 6B and 6C depict an embodiment in which two objects are grouped or linked together. Thus, in FIG. 6B, the user (e.g., using the touch screen, voice commands, or other input) selects object 617, as can be seen by a selection box 619 that appears around object 617, and drags object 617 over object 621, which is then indicated as being selected by a selection box 623 that appears around object 621. The GUI 601 then displays a text box 625, as shown in FIG. 6C, that asks the user whether the user wants to create a new group, and the user can answer either yes, using the thumbs up icon 627, or no, using the thumbs down icon 629. If the user decides to create a group, then the icons can be grouped and shown on the GUI 601 in much the same manner as the group icons described above. - Additionally, objects can include metadata that defines certain characteristics of the object, such that when an object is selected, the system can use the metadata of the selected object to search for other similar or related objects, and the objects found during that search can then flow closer to the selected object so that the user is given the opportunity to build a group from these suggested objects. Thus, with this "object flowing" approach, the user can ignore the suggested objects, group the suggested objects, or kick out any of the suggested objects that the user does not want to belong to the group.
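The metadata-driven suggestion search described above can be sketched as a tag-overlap query; representing metadata as a set of tags per object is an illustrative assumption:

```python
def suggest_related(selected, candidates, min_shared=1):
    """Find objects whose metadata overlaps the selected object's metadata.

    Suggested objects would then flow closer to the selection so the user can
    group them, ignore them, or kick them out.
    """
    suggestions = []
    for obj in candidates:
        shared = selected["tags"] & obj["tags"]     # set intersection of tags
        if obj is not selected and len(shared) >= min_shared:
            suggestions.append(obj)
    return suggestions
```

A richer implementation might weight suggestions by how many tags are shared, moving the strongest matches closest to the selected object.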
There is no limit to the number of sources that can be defined on the display screen of the GUI within the confines of the size and shape restrictions of the screen. For a typical smartphone screen, one to six sources can be a good estimate; however, additional sources can be defined if so desired by the user. It is also possible to stack the sources (e.g., like a stack of cards), which can be shuffled using gestures or commands. The top-most visible source on the stack is the active source, which produces objects for the flow.
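The card-stack arrangement of sources just described might look like the following sketch, where only the top-most source feeds the flow (names are illustrative):

```python
class SourceStack:
    """Sources stacked like cards; only the top-most source feeds the flow."""

    def __init__(self, sources):
        self.sources = list(sources)    # index 0 is the top of the stack

    def active(self):
        """The top-most visible source, or None if the stack is empty."""
        return self.sources[0] if self.sources else None

    def shuffle_next(self):
        """A gesture/command rotates the stack, exposing the next source."""
        if self.sources:
            self.sources.append(self.sources.pop(0))
```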
- As can be seen in
FIGS. 4A-4C, 5, and 6A-6C, the basic operation of the GUI is very simple and straightforward, thus creating a natural and easy-to-grasp interface for the user. The user sees sources and objects, and can easily learn to manipulate, access, and control them in one uniform and simple interface. - Pinned objects can remain stationary until a user releases them, moves them to another location, removes them from the display screen, etc. Pinned objects can be dragged freely to any suitable position on the display screen, and can be locked in position, if desired, in order to prevent any accidental movement from the pinned location.
- The GUI is customizable and allows a user to select and manipulate objects and sources using a plurality of interaction methods, such as speech or touch. Rules for selected objects can be defined by a selecting action and/or as a function of metadata linked to the objects.
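The customization by gestures described earlier — swipes setting flow direction, tilt setting speed, with a toggle to prevent accidental changes — might be sketched as follows (all names and units are illustrative assumptions):

```python
class FlowPreferences:
    """Holds user-adjustable flow settings; gesture input can be toggled on/off."""

    def __init__(self):
        self.direction = (-1, 0)    # default right-to-left flow
        self.speed = 5
        self.gestures_enabled = False

    def toggle_gestures(self):
        """Enable/disable gesture control so normal use cannot alter settings."""
        self.gestures_enabled = not self.gestures_enabled

    def on_swipe(self, dx, dy):
        """A swipe sets the flow direction; ignored while gestures are disabled."""
        if not self.gestures_enabled or (dx, dy) == (0, 0):
            return
        length = (dx * dx + dy * dy) ** 0.5
        self.direction = (dx / length, dy / length)  # flow follows the swipe

    def on_tilt(self, angle_deg, max_speed=20):
        """Tilt angle scales flow speed; ignored while gestures are disabled."""
        if self.gestures_enabled:
            self.speed = min(max_speed, abs(angle_deg) / 90 * max_speed)
```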
- Two or more objects can be associated with one another in order to create a link between them. Such associations can trigger some interaction between the two or more linked objects. Also, two or more groups of linked objects can be associated with one another, thereby creating links between these groups of already associated objects. The groupings can allow the user to access and operate all the elements in the associated groups through a single object in a group.
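One way to realize the linking described above — any member of a group gives access to the whole group, and linking two grouped objects merges their groups — is a simple shared-set structure; this is an illustrative sketch, not the patent's stated implementation:

```python
def merge_groups(groups, a, b):
    """Link objects a and b, merging their groups if either already has one.

    `groups` maps an object to the set of objects linked with it, so any
    member gives access to the whole group.
    """
    group = groups.get(a, {a}) | groups.get(b, {b})
    for member in group:
        groups[member] = group      # every member sees the full group
    return group
```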
- The GUI presents data and applications that "flow" to the user, so that the user can simply wait like a hunter and select a target when he or she sees it. Thus, the GUI provides a very natural and relaxed way of accessing data. Also, in such a configuration, the user does not have to know exactly what he or she is looking for, and can access data and applications on an ad-hoc basis. The GUI may trigger some user actions almost accidentally; for example, the user may start some action just because he or she associates something moving across the display with the current context the user is living in. So the system utilizes the user's intelligence and ability to associate things based on the user's context. The GUI provides tools to the user and does not pretend to be too intelligent, since machines cannot be intelligent in a wide enough sense to really predict irrational human behavior.
- The GUI supports spontaneous and irrational access to device functions. It will, however, adapt the appearance and order of the flow of the objects based on the frequency of use and other context data. So even though the flow may look random, it has some deterministic elements (e.g., the most frequently used contacts may flow onto the display screen first or more often than less frequently used contacts, etc.). Very infrequently used objects can also enter the display screen, even if the user has forgotten them, thereby supporting discovery or rediscovery of objects. The GUI is also ideal for learning to use a new device, because some hidden functionalities will navigate their way to the user via the GUI, not the other way around.
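The mix of frequency-driven ordering with occasional rediscovery of forgotten objects could be modeled as weighted sampling with a small uniform component; the weighting scheme here is one illustrative choice, not the patent's:

```python
import random

def next_flow_object(objects, usage_counts, rediscover_p=0.1, rng=random):
    """Pick the next object to enter the flow, weighted by frequency of use.

    Mostly the most-used objects appear first, but with probability
    `rediscover_p` a uniformly random object is chosen instead, so forgotten
    items can resurface.
    """
    if rng.random() < rediscover_p:
        return rng.choice(objects)              # rediscovery: any object at all
    weights = [usage_counts.get(o, 0) + 1 for o in objects]  # +1 keeps every object reachable
    return rng.choices(objects, weights=weights, k=1)[0]
```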
- The user can adapt the system to his or her liking during normal usage, and therefore separate settings and configuration menus are not necessary; rather, the settings and configurations can be changed by the user on the fly. For example, a user can use gestures and/or speech to manipulate the flow, as well as the objects and sources. The GUI is dynamic and adaptive, and the user has full control thereof (e.g., if the user wants to maintain some fixed objects, etc.), such that the user can decide how much freedom the GUI allows in the manipulation of objects/sources. Using a dynamic flow of objects enables better handling of a large number of objects. The user can access different functions and tasks from a single interface, without the need to switch between different applications. Complex hierarchical menu systems and view switching can be avoided or at least reduced. The GUI can be a fun and enjoyable manner in which to utilize the data and applications of the device, and can always offer something to the user that might otherwise go unnoticed. The GUI is forgiving; for example, uninteresting objects can simply flow away without remaining in the display screen without user permission. The GUI provides new stimulus to the user that allows the user to make new associations between various objects. Human associations can be very irrational, thus needing some partly random stimulus from the GUI, which is not offered by a purely static GUI. The GUI is also very suitable for advertising purposes, because advertisements can act like the rest of the fluid objects, by having advertisement objects (e.g., such objects can be provided to the GUI from a remote server, for example, from service/product providers that the user has utilized) flow in and out of the display screen. Also, the user can provide feedback on such advertisements, for example, by actively voting on or rating such advertisements by kicking them out of the screen or accessing them.
This co-operation and user control is a benefit for both user and advertiser.
- The processes described herein for providing a fluid graphical user interface may be advantageously implemented via software, hardware (e.g., a general processor, a Digital Signal Processing (DSP) chip, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Arrays (FPGAs), etc.), firmware, or a combination thereof. Such exemplary hardware for performing the described functions is detailed below.
-
FIG. 7 illustrates a computer system 700 upon which an embodiment of the invention may be implemented. Although computer system 700 is depicted with respect to a particular device or equipment, it is contemplated that other devices or equipment (e.g., network elements, servers, etc.) within FIG. 7 can deploy the illustrated hardware and components of system 700. Computer system 700 is programmed (e.g., via computer program code or instructions) to provide a fluid graphical user interface as described herein and includes a communication mechanism such as a bus 710 for passing information between other internal and external components of the computer system 700. Information (also called data) is represented as a physical expression of a measurable phenomenon, typically electric voltages, but including, in other embodiments, such phenomena as magnetic, electromagnetic, pressure, chemical, biological, molecular, atomic, sub-atomic and quantum interactions. For example, north and south magnetic fields, or a zero and non-zero electric voltage, represent two states (0, 1) of a binary digit (bit). Other phenomena can represent digits of a higher base. A superposition of multiple simultaneous quantum states before measurement represents a quantum bit (qubit). A sequence of one or more digits constitutes digital data that is used to represent a number or code for a character. In some embodiments, information called analog data is represented by a near continuum of measurable values within a particular range. Computer system 700, or a portion thereof, constitutes a means for performing one or more steps of providing a fluid graphical user interface. - A
bus 710 includes one or more parallel conductors of information so that information is transferred quickly among devices coupled to the bus 710. One or more processors 702 for processing information are coupled with the bus 710. - A
processor 702 performs a set of operations on information as specified by computer program code related to providing a fluid graphical user interface. The computer program code is a set of instructions or statements providing instructions for the operation of the processor and/or the computer system to perform specified functions. The code, for example, may be written in a computer programming language that is compiled into a native instruction set of the processor. The code may also be written directly using the native instruction set (e.g., machine language). The set of operations include bringing information in from the bus 710 and placing information on the bus 710. The set of operations also typically include comparing two or more units of information, shifting positions of units of information, and combining two or more units of information, such as by addition or multiplication or logical operations like OR, exclusive OR (XOR), and AND. Each operation of the set of operations that can be performed by the processor is represented to the processor by information called instructions, such as an operation code of one or more digits. A sequence of operations to be executed by the processor 702, such as a sequence of operation codes, constitutes processor instructions, also called computer system instructions or, simply, computer instructions. Processors may be implemented as mechanical, electrical, magnetic, optical, chemical or quantum components, among others, alone or in combination. -
Computer system 700 also includes a memory 704 coupled to bus 710. The memory 704, such as a random access memory (RAM) or other dynamic storage device, stores information including processor instructions for providing a fluid graphical user interface. Dynamic memory allows information stored therein to be changed by the computer system 700. RAM allows a unit of information stored at a location called a memory address to be stored and retrieved independently of information at neighboring addresses. The memory 704 is also used by the processor 702 to store temporary values during execution of processor instructions. The computer system 700 also includes a read only memory (ROM) 706 or other static storage device coupled to the bus 710 for storing static information, including instructions, that is not changed by the computer system 700. Some memory is composed of volatile storage that loses the information stored thereon when power is lost. Also coupled to bus 710 is a non-volatile (persistent) storage device 708, such as a magnetic disk, optical disk or flash card, for storing information, including instructions, that persists even when the computer system 700 is turned off or otherwise loses power. - Information, including instructions for providing a fluid graphical user interface, is provided to the
bus 710 for use by the processor from an external input device 712, such as a keyboard containing alphanumeric keys operated by a human user, or a sensor. A sensor detects conditions in its vicinity and transforms those detections into physical expression compatible with the measurable phenomenon used to represent information in computer system 700. Other external devices coupled to bus 710, used primarily for interacting with humans, include a display device 714, such as a cathode ray tube (CRT) or a liquid crystal display (LCD), or plasma screen or printer for presenting text or images, and a pointing device 716, such as a mouse or a trackball or cursor direction keys, or motion sensor, for controlling a position of a small cursor image presented on the display 714 and issuing commands associated with graphical elements presented on the display 714. In some embodiments, for example, in embodiments in which the computer system 700 performs all functions automatically without human input, one or more of external input device 712, display device 714 and pointing device 716 is omitted. - In the illustrated embodiment, special purpose hardware, such as an application specific integrated circuit (ASIC) 720, is coupled to
bus 710. The special purpose hardware is configured to perform operations not performed by processor 702 quickly enough for special purposes. Examples of application specific ICs include graphics accelerator cards for generating images for display 714, cryptographic boards for encrypting and decrypting messages sent over a network, speech recognition, and interfaces to special external devices, such as robotic arms and medical scanning equipment that repeatedly perform some complex sequence of operations that are more efficiently implemented in hardware. -
Computer system 700 also includes one or more instances of a communications interface 770 coupled to bus 710. Communication interface 770 provides a one-way or two-way communication coupling to a variety of external devices that operate with their own processors, such as printers, scanners and external disks. In general the coupling is with a network link 778 that is connected to a local network 780 to which a variety of external devices with their own processors are connected. For example, communication interface 770 may be a parallel port or a serial port or a universal serial bus (USB) port on a personal computer. In some embodiments, communications interface 770 is an integrated services digital network (ISDN) card or a digital subscriber line (DSL) card or a telephone modem that provides an information communication connection to a corresponding type of telephone line. In some embodiments, a communication interface 770 is a cable modem that converts signals on bus 710 into signals for a communication connection over a coaxial cable or into optical signals for a communication connection over a fiber optic cable. As another example, communications interface 770 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN, such as Ethernet. Wireless links may also be implemented. For wireless links, the communications interface 770 sends or receives or both sends and receives electrical, acoustic or electromagnetic signals, including infrared and optical signals, that carry information streams, such as digital data. For example, in wireless handheld devices, such as mobile telephones like cell phones, the communications interface 770 includes a radio band electromagnetic transmitter and receiver called a radio transceiver. In certain embodiments, the communications interface 770 enables connection to the communication network 105 for providing a fluid graphical user interface to the UEs 101A . . . 101N or UE 103.
- The term “computer-readable medium” as used herein refers to any medium that participates in providing information to
processor 702, including instructions for execution. Such a medium may take many forms, including, but not limited to, computer-readable storage medium (e.g., non-volatile media, volatile media), and transmission media. Non-transitory media, such as non-volatile media, include, for example, optical or magnetic disks, such as storage device 708. Volatile media include, for example, dynamic memory 704. Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and carrier waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves. Signals include man-made transient variations in amplitude, frequency, phase, polarization or other physical properties transmitted through the transmission media. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, CDRW, DVD, any other optical medium, punch cards, paper tape, optical mark sheets, any other physical medium with patterns of holes or other optically recognizable indicia, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read. The term computer-readable storage medium is used herein to refer to any computer-readable medium except transmission media. - Logic encoded in one or more tangible media includes one or both of processor instructions on a computer-readable storage media and special purpose hardware, such as
ASIC 720. - Network link 778 typically provides information communication using transmission media through one or more networks to other devices that use or process the information. For example,
network link 778 may provide a connection through local network 780 to a host computer 782 or to equipment 784 operated by an Internet Service Provider (ISP). ISP equipment 784 in turn provides data communication services through the public, world-wide packet-switching communication network of networks now commonly referred to as the Internet 790. - A computer called a
server host 792 connected to the Internet hosts a process that provides a service in response to information received over the Internet. For example, server host 792 hosts a process that provides information representing video data for presentation at display 714. It is contemplated that the components of system 700 can be deployed in various configurations within other computer systems, e.g., host 782 and server 792. - At least some embodiments of the invention are related to the use of
computer system 700 for implementing some or all of the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 700 in response to processor 702 executing one or more sequences of one or more processor instructions contained in memory 704. Such instructions, also called computer instructions, software and program code, may be read into memory 704 from another computer-readable medium such as storage device 708 or network link 778. Execution of the sequences of instructions contained in memory 704 causes processor 702 to perform one or more of the method steps described herein. In alternative embodiments, hardware, such as ASIC 720, may be used in place of or in combination with software to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware and software, unless otherwise explicitly stated herein. - The signals transmitted over
network link 778 and other networks through communications interface 770 carry information to and from computer system 700. Computer system 700 can send and receive information, including program code, through the networks, network link 778 and communications interface 770. In an example using the Internet 790, a server host 792 transmits program code for a particular application, requested by a message sent from computer 700, through Internet 790, ISP equipment 784, local network 780 and communications interface 770. The received code may be executed by processor 702 as it is received, or may be stored in memory 704 or in storage device 708 or other non-volatile storage for later execution, or both. In this manner, computer system 700 may obtain application program code in the form of signals on a carrier wave. - Various forms of computer readable media may be involved in carrying one or more sequences of instructions or data or both to
processor 702 for execution. For example, instructions and data may initially be carried on a magnetic disk of a remote computer such as host 782. The remote computer loads the instructions and data into its dynamic memory and sends the instructions and data over a telephone line using a modem. A modem local to the computer system 700 receives the instructions and data on a telephone line and uses an infra-red transmitter to convert the instructions and data to a signal on an infra-red carrier wave serving as the network link 778. An infrared detector serving as communications interface 770 receives the instructions and data carried in the infrared signal and places information representing the instructions and data onto bus 710. Bus 710 carries the information to memory 704 from which processor 702 retrieves and executes the instructions using some of the data sent with the instructions. The instructions and data received in memory 704 may optionally be stored on storage device 708, either before or after execution by the processor 702. -
FIG. 8 illustrates a chip set 800 upon which an embodiment of the invention may be implemented. Chip set 800 is programmed to provide a fluid graphical user interface as described herein and includes, for instance, the processor and memory components described with respect to FIG. 7 incorporated in one or more physical packages (e.g., chips). By way of example, a physical package includes an arrangement of one or more materials, components, and/or wires on a structural assembly (e.g., a baseboard) to provide one or more characteristics such as physical strength, conservation of size, and/or limitation of electrical interaction. It is contemplated that in certain embodiments the chip set can be implemented in a single chip. Chip set 800, or a portion thereof, constitutes a means for performing one or more steps of providing a fluid graphical user interface. - In one embodiment, the chip set 800 includes a communication mechanism such as a bus 801 for passing information among the components of the chip set 800. A
processor 803 has connectivity to the bus 801 to execute instructions and process information stored in, for example, a memory 805. The processor 803 may include one or more processing cores with each core configured to perform independently. A multi-core processor enables multiprocessing within a single physical package. Examples of a multi-core processor include two, four, eight, or greater numbers of processing cores. Alternatively or in addition, the processor 803 may include one or more microprocessors configured in tandem via the bus 801 to enable independent execution of instructions, pipelining, and multithreading. The processor 803 may also be accompanied with one or more specialized components to perform certain processing functions and tasks, such as one or more digital signal processors (DSP) 807, or one or more application-specific integrated circuits (ASIC) 809. A DSP 807 typically is configured to process real-world signals (e.g., sound) in real time independently of the processor 803. Similarly, an ASIC 809 can be configured to perform specialized functions not easily performed by a general purpose processor. Other specialized components to aid in performing the inventive functions described herein include one or more field programmable gate arrays (FPGA) (not shown), one or more controllers (not shown), or one or more other special-purpose computer chips. - The
processor 803 and accompanying components have connectivity to the memory 805 via the bus 801. The memory 805 includes both dynamic memory (e.g., RAM, magnetic disk, writable optical disk, etc.) and static memory (e.g., ROM, CD-ROM, etc.) for storing executable instructions that when executed perform the inventive steps described herein to provide a fluid graphical user interface. The memory 805 also stores the data associated with or generated by the execution of the inventive steps. -
FIG. 9 is a diagram of exemplary components of a mobile terminal (e.g., handset) for communications, which is capable of operating in the system of FIG. 1 , according to one embodiment. In some embodiments, mobile terminal 900, or a portion thereof, constitutes a means for performing one or more steps of providing a fluid graphical user interface. Generally, a radio receiver is often defined in terms of front-end and back-end characteristics. The front-end of the receiver encompasses all of the Radio Frequency (RF) circuitry whereas the back-end encompasses all of the base-band processing circuitry. As used in this application, the term “circuitry” refers to both: (1) hardware-only implementations (such as implementations in only analog and/or digital circuitry), and (2) combinations of circuitry and software (and/or firmware) (such as, if applicable to the particular context, a combination of processor(s), including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions). This definition of “circuitry” applies to all uses of this term in this application, including in any claims. As a further example, as used in this application and if applicable to the particular context, the term “circuitry” would also cover an implementation of merely a processor (or multiple processors) and its (or their) accompanying software and/or firmware. The term “circuitry” would also cover, if applicable to the particular context, for example, a baseband integrated circuit or applications processor integrated circuit in a mobile phone or a similar integrated circuit in a cellular network device or other network devices. - Pertinent internal components of the telephone include a Main Control Unit (MCU) 903, a Digital Signal Processor (DSP) 905, and a receiver/transmitter unit including a microphone gain control unit and a speaker gain control unit. A
main display unit 907 provides a display to the user in support of various applications and mobile terminal functions that perform or support the steps of providing a fluid graphical user interface. The display 907 includes display circuitry configured to display at least a portion of a user interface of the mobile terminal (e.g., mobile telephone). Additionally, the display 907 and display circuitry are configured to facilitate user control of at least some functions of the mobile terminal. An audio function circuitry 909 includes a microphone 911 and microphone amplifier that amplifies the speech signal output from the microphone 911. The amplified speech signal output from the microphone 911 is fed to a coder/decoder (CODEC) 913. - A
radio section 915 amplifies power and converts frequency in order to communicate with a base station, which is included in a mobile communication system, via antenna 917. The power amplifier (PA) 919 and the transmitter/modulation circuitry are operationally responsive to the MCU 903, with an output from the PA 919 coupled to the duplexer 921 or circulator or antenna switch, as known in the art. The PA 919 also couples to a battery interface and power control unit 920. - In use, a user of
mobile terminal 901 speaks into the microphone 911 and his or her voice along with any detected background noise is converted into an analog voltage. The analog voltage is then converted into a digital signal through the Analog to Digital Converter (ADC) 923. The control unit 903 routes the digital signal into the DSP 905 for processing therein, such as speech encoding, channel encoding, encrypting, and interleaving. In one embodiment, the processed voice signals are encoded, by units not separately shown, using a cellular transmission protocol such as global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (WiFi), satellite, and the like. - The encoded signals are then routed to an
equalizer 925 for compensation of any frequency-dependent impairments that occur during transmission through the air such as phase and amplitude distortion. After equalizing the bit stream, the modulator 927 combines the signal with an RF signal generated in the RF interface 929. The modulator 927 generates a sine wave by way of frequency or phase modulation. In order to prepare the signal for transmission, an up-converter 931 combines the sine wave output from the modulator 927 with another sine wave generated by a synthesizer 933 to achieve the desired frequency of transmission. The signal is then sent through a PA 919 to increase the signal to an appropriate power level. In practical systems, the PA 919 acts as a variable gain amplifier whose gain is controlled by the DSP 905 from information received from a network base station. The signal is then filtered within the duplexer 921 and optionally sent to an antenna coupler 935 to match impedances to provide maximum power transfer. Finally, the signal is transmitted via antenna 917 to a local base station. An automatic gain control (AGC) can be supplied to control the gain of the final stages of the receiver. The signals may be forwarded from there to a remote telephone which may be another cellular telephone, other mobile phone or a land-line connected to a Public Switched Telephone Network (PSTN), or other telephony networks. - Voice signals transmitted to the
mobile terminal 901 are received via antenna 917 and immediately amplified by a low noise amplifier (LNA) 937. A down-converter 939 lowers the carrier frequency while the demodulator 941 strips away the RF leaving only a digital bit stream. The signal then goes through the equalizer 925 and is processed by the DSP 905. A Digital to Analog Converter (DAC) 943 converts the signal and the resulting output is transmitted to the user through the speaker 945, all under control of a Main Control Unit (MCU) 903, which can be implemented as a Central Processing Unit (CPU) (not shown). - The
MCU 903 receives various signals including input signals from the keyboard 947. The keyboard 947 and/or the MCU 903 in combination with other user input components (e.g., the microphone 911) comprise a user interface circuitry for managing user input. The MCU 903 runs user interface software to facilitate user control of at least some functions of the mobile terminal 901 to provide a fluid graphical user interface. The MCU 903 also delivers a display command and a switch command to the display 907 and to the speech output switching controller, respectively. Further, the MCU 903 exchanges information with the DSP 905 and can access an optionally incorporated SIM card 949 and a memory 951. In addition, the MCU 903 executes various control functions required of the terminal. The DSP 905 may, depending upon the implementation, perform any of a variety of conventional digital processing functions on the voice signals. Additionally, DSP 905 determines the background noise level of the local environment from the signals detected by microphone 911 and sets the gain of microphone 911 to a level selected to compensate for the natural tendency of the user of the mobile terminal 901. - The CODEC 913 includes the
ADC 923 and DAC 943. The memory 951 stores various data including call incoming tone data and is capable of storing other data including music data received via, e.g., the global Internet. The software module could reside in RAM memory, flash memory, registers, or any other form of writable storage medium known in the art. The memory device 951 may be, but is not limited to, a single memory, CD, DVD, ROM, RAM, EEPROM, optical storage, or any other non-volatile storage medium capable of storing digital data. - An optionally incorporated
SIM card 949 carries, for instance, important information, such as the cellular phone number, the carrier supplying service, subscription details, and security information. The SIM card 949 serves primarily to identify the mobile terminal 901 on a radio network. The card 949 also contains a memory for storing a personal telephone number registry, text messages, and user specific mobile terminal settings. - While the invention has been described in connection with a number of embodiments and implementations, the invention is not so limited but covers various obvious modifications and equivalent arrangements, which fall within the purview of the appended claims. Although features of the invention are expressed in certain combinations among the claims, it is contemplated that these features can be arranged in any combination and order.
Claims (20)
1. A method comprising:
causing, at least in part, display of selectable objects on a graphical user interface, wherein each of the selectable objects corresponds to data or an application accessible via the graphical user interface;
detecting a touch gesture on the graphical user interface;
causing, at least in part, display of the selectable objects in motion travelling across the graphical user interface responsive to the detected touch gesture, wherein the selectable objects are displayed based on a category of the selectable object or context dependent data; and
allowing user selection and manipulation of the selectable objects displayed on the graphical user interface.
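The steps of claim 1 (and the swipe control of claims 2-3) can be sketched as a small simulation. This is a minimal illustration, not the patented implementation; the class and method names (`FluidUI`, `on_swipe`, `select_at`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SelectableObject:
    """An on-screen object tied to data or an application (claim 1)."""
    name: str
    x: float = 0.0
    y: float = 0.0
    vx: float = 0.0
    vy: float = 0.0

class FluidUI:
    """Hypothetical sketch: objects travel across the display in
    response to a detected touch gesture and remain selectable."""

    def __init__(self, objects):
        self.objects = list(objects)

    def on_swipe(self, dx, dy):
        # The gesture vector becomes the travel velocity of the
        # displayed objects (claims 2-3: direction and speed).
        for obj in self.objects:
            obj.vx, obj.vy = dx, dy

    def step(self, dt=1.0):
        # Advance every object along its current velocity.
        for obj in self.objects:
            obj.x += obj.vx * dt
            obj.y += obj.vy * dt

    def select_at(self, x, y, radius=1.0):
        # Return the moving object nearest the touch point, if any
        # is within the touch radius (user selection of claim 1).
        def dist(o):
            return ((o.x - x) ** 2 + (o.y - y) ** 2) ** 0.5
        obj = min(self.objects, key=dist, default=None)
        return obj if obj is not None and dist(obj) <= radius else None
```

A swipe of `(2, 0)` followed by one step moves every object two units to the right, where it can still be selected by a touch at its new position.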
2. A method of claim 1 , wherein the touch gesture is a swiping motion.
3. A method of claim 2 , wherein the swiping motion controls a direction, a speed, a pattern, a combination thereof, of the motion of the selectable objects.
4. A method of claim 1 , further comprising:
activating sources from which the selectable objects flow; and
allowing user manipulation to define a location of the sources, a direction of the flow, a boundary of the sources, a combination thereof,
wherein the sources correspond to an area of the graphical user interface.
5. A method of claim 4 , further comprising:
allowing user manipulation to transform one or more of the selectable objects into a source for content related to the transformed one or more selectable objects.
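The sources of claims 4-5 can be pictured as screen regions that emit objects in a user-defined direction, with any selected object convertible into a new source. A minimal sketch, with hypothetical names (`Source`, `emit`, `object_to_source`) and placeholder content in place of real applications:

```python
import itertools

class Source:
    """Hypothetical sketch of claims 4-5: an area of the interface
    from which selectable objects flow in a user-defined direction."""

    def __init__(self, x, y, direction=(1.0, 0.0)):
        self.x, self.y = x, y          # user-defined source location
        self.direction = direction     # user-defined direction of flow
        self._content = itertools.count()  # stand-in content stream

    def emit(self):
        # Produce the next object at the source's location, moving
        # in the source's flow direction.
        return {"id": next(self._content), "x": self.x, "y": self.y,
                "velocity": self.direction}

def object_to_source(obj):
    # Claim 5: transform a selected object into a source of related
    # content, anchored at that object's current position.
    return Source(obj["x"], obj["y"])
```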
6. A method of claim 1 , wherein an order of the display of the selectable objects is based, at least in part, on a frequency with which the selectable objects are selected.
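Claim 6's frequency-based ordering amounts to a selection counter per object. A minimal sketch under that reading (the class name `FrequencyOrder` is hypothetical):

```python
from collections import Counter

class FrequencyOrder:
    """Hypothetical sketch of claim 6: the more often an object is
    selected, the earlier it appears in the display order."""

    def __init__(self, names):
        self.counts = Counter({name: 0 for name in names})

    def record_selection(self, name):
        # Count each user selection of the object.
        self.counts[name] += 1

    def display_order(self):
        # Most frequently selected objects are displayed first.
        return [name for name, _ in self.counts.most_common()]
```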
7. A method of claim 1 ,
wherein the user selection and manipulation includes allowing a user to select a first selectable object of the selectable objects, and to fix the first selectable object at a location on the graphical user interface, and
wherein the user selection and manipulation further includes allowing the user:
to activate media playback of the first selectable object; or
to activate an editor or a viewer associated with the first selectable object.
8. A method of claim 1 ,
wherein the user selection and manipulation includes allowing a user to select a first selectable object of the selectable objects, and remove the first selectable object from being displayed on the graphical user interface, and
wherein the removed first selectable object is reduced in priority of display as compared to other selectable objects for subsequent display of the removed first selectable object.
9. A method of claim 1 , wherein the graphical user interface is provided on a mobile device, and wherein the selectable objects are caused to be displayed in motion by appearing at a first respective side of the graphical user interface, travelling across the graphical user interface, and then disappearing at a second respective side of the graphical user interface, said method further comprising:
allowing user setting of the motion of the selectable objects including speed of travel, direction of travel, and number of selectable objects that are caused to be displayed simultaneously.
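Claim 9's edge-to-edge travel, with user-set speed, direction, and simultaneous-object count, can be sketched as a one-dimensional conveyor: objects enter at one side, advance each frame, and are dropped once they pass the far side. The `Conveyor` class and its fields are hypothetical illustration, not the claimed implementation:

```python
class Conveyor:
    """Hypothetical sketch of claim 9: objects appear at one side of
    the screen, travel across, and disappear at the other side."""

    def __init__(self, width, speed=1.0, direction=1, max_visible=3):
        self.width = width
        self.speed = speed          # user-set speed of travel
        self.direction = direction  # +1 left-to-right, -1 right-to-left
        self.max_visible = max_visible  # user-set simultaneous count
        self.pending = []           # objects waiting to enter
        self.visible = []           # (name, x) pairs currently shown

    def step(self):
        # Move visible objects; drop any that have left the far side.
        moved = [(n, x + self.speed * self.direction) for n, x in self.visible]
        self.visible = [(n, x) for n, x in moved if 0 <= x <= self.width]
        # Admit pending objects at the entry edge, up to the limit.
        while self.pending and len(self.visible) < self.max_visible:
            entry = 0 if self.direction > 0 else self.width
            self.visible.append((self.pending.pop(0), entry))
```

With a width of 3 and a limit of two simultaneous objects, a third pending object only enters after the first two have travelled off the far edge.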
10. A method of claim 1 , wherein the manipulation of the selectable objects displayed on the graphical user interface is defined by action of the user selection and/or by a function of metadata linked to a respective selectable object.
11. An apparatus comprising:
at least one processor; and
at least one memory including computer program code,
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
cause, at least in part, display of selectable objects on a graphical user interface, wherein each of the selectable objects corresponds to data or an application accessible via the graphical user interface;
detect a touch gesture on the graphical user interface;
cause, at least in part, display of the selectable objects in motion travelling across the graphical user interface responsive to the detected touch gesture, wherein the selectable objects are displayed based on a category of the selectable object or context dependent data; and
allow user selection and manipulation of the selectable objects displayed on the graphical user interface.
12. An apparatus of claim 11 , wherein the touch gesture is a swiping motion.
13. An apparatus of claim 12 , wherein the swiping motion controls a direction, a speed, a pattern, a combination thereof, of the motion of the selectable objects.
14. An apparatus of claim 11 , wherein the apparatus is further caused to:
activate sources from which the selectable objects flow; and
allow user manipulation to define a location of the sources, a direction of the flow, a boundary of the sources, a combination thereof,
wherein the sources correspond to an area of the graphical user interface.
15. An apparatus of claim 14 , wherein the apparatus is further caused to:
allow user manipulation to transform one or more of the selectable objects into a source for content related to the transformed one or more selectable objects.
16. An apparatus of claim 15 , wherein an order of the display of the selectable objects is based, at least in part, on a frequency with which the selectable objects are selected.
17. An apparatus of claim 11 ,
wherein the user selection and manipulation includes allowing a user to select a first selectable object of the selectable objects, and to fix the first selectable object at a location on the graphical user interface, and
wherein the user selection and manipulation further includes allowing the user:
to activate media playback of the first selectable object; or
to activate an editor or a viewer associated with the first selectable object.
18. An apparatus of claim 11 ,
wherein the user selection and manipulation includes allowing a user to select a first selectable object of the selectable objects, and remove the first selectable object from being displayed on the graphical user interface, and
wherein the removed first selectable object is reduced in priority of display as compared to other selectable objects for subsequent display of the removed first selectable object.
19. An apparatus of claim 11 , wherein the graphical user interface is provided on a mobile device, and wherein the selectable objects are caused to be displayed in motion by appearing at a first respective side of the graphical user interface, travelling across the graphical user interface, and then disappearing at a second respective side of the graphical user interface, and the apparatus is further caused to:
allow user setting of the motion of the selectable objects including speed of travel, direction of travel, and number of selectable objects that are caused to be displayed simultaneously.
20. An apparatus of claim 11 , wherein the manipulation of the selectable objects displayed on the graphical user interface is defined by action of the user selection and/or by a function of metadata linked to a respective selectable object.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/910,753 US20130263032A1 (en) | 2009-12-31 | 2013-06-05 | Method and apparatus for fluid graphical user interface |
US16/052,335 US20180364894A1 (en) | 2009-12-31 | 2018-08-01 | Method and apparatus for fluid graphical user interface |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/651,071 US8479107B2 (en) | 2009-12-31 | 2009-12-31 | Method and apparatus for fluid graphical user interface |
US13/910,753 US20130263032A1 (en) | 2009-12-31 | 2013-06-05 | Method and apparatus for fluid graphical user interface |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/651,071 Continuation US8479107B2 (en) | 2009-12-31 | 2009-12-31 | Method and apparatus for fluid graphical user interface |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/052,335 Continuation US20180364894A1 (en) | 2009-12-31 | 2018-08-01 | Method and apparatus for fluid graphical user interface |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130263032A1 true US20130263032A1 (en) | 2013-10-03 |
Family
ID=44189020
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/651,071 Active 2031-10-26 US8479107B2 (en) | 2009-12-31 | 2009-12-31 | Method and apparatus for fluid graphical user interface |
US13/910,753 Abandoned US20130263032A1 (en) | 2009-12-31 | 2013-06-05 | Method and apparatus for fluid graphical user interface |
US16/052,335 Abandoned US20180364894A1 (en) | 2009-12-31 | 2018-08-01 | Method and apparatus for fluid graphical user interface |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/651,071 Active 2031-10-26 US8479107B2 (en) | 2009-12-31 | 2009-12-31 | Method and apparatus for fluid graphical user interface |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/052,335 Abandoned US20180364894A1 (en) | 2009-12-31 | 2018-08-01 | Method and apparatus for fluid graphical user interface |
Country Status (5)
Country | Link |
---|---|
US (3) | US8479107B2 (en) |
EP (1) | EP2519870B1 (en) |
CN (1) | CN102782629B (en) |
HK (1) | HK1177972A1 (en) |
WO (1) | WO2011080616A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120260288A1 (en) * | 2011-04-11 | 2012-10-11 | Sony Corporation | Information processing apparatus, information processing method, and program |
US9756549B2 (en) | 2014-03-14 | 2017-09-05 | goTenna Inc. | System and method for digital communication between computing devices |
US20190138115A1 (en) * | 2016-07-20 | 2019-05-09 | Hewlett-Packard Development Company, L.P. | Visibly opaque and near infrared transparent display border with underlying encoded pattern |
US10996840B1 (en) * | 2019-08-26 | 2021-05-04 | Juniper Networks, Inc. | Systems and methods for providing user-friendly access to relevant help documentation for software applications |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
USD772269S1 (en) | 2015-06-05 | 2016-11-22 | Apple Inc. | Display screen or portion thereof with graphical user interface |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
US11449218B2 (en) * | 2015-07-17 | 2022-09-20 | Thomson Reuters Enterprise Centre Gmbh | Systems and methods for data evaluation and classification |
US20170038960A1 (en) * | 2015-08-07 | 2017-02-09 | Your Voice Usa Corp. | Management of data in an electronic device |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
WO2017042985A1 (en) * | 2015-09-09 | 2017-03-16 | teamLab Inc. | Information provision device |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10140516B2 (en) | 2015-12-16 | 2018-11-27 | Samsung Electronics Co., Ltd. | Event-based image management using clustering |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179309B1 (en) | 2016-06-09 | 2018-04-23 | Apple Inc | Intelligent automated assistant in a home environment |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
DK201770439A1 (en) | 2017-05-11 | 2018-12-13 | Apple Inc. | Offline personal assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
DK201770429A1 (en) | 2017-05-12 | 2018-12-14 | Apple Inc. | Low-latency intelligent automated assistant |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
DK201770432A1 (en) | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants |
DK179560B1 (en) | 2017-05-16 | 2019-02-18 | Apple Inc. | Far-field extension for digital assistant services |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
US20180336275A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Intelligent automated assistant for media exploration |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10813169B2 (en) | 2018-03-22 | 2020-10-20 | GoTenna, Inc. | Mesh network deployment kit |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US11366571B2 (en) * | 2018-05-04 | 2022-06-21 | Dentma, LLC | Visualization components including sliding bars |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
WO2019217043A1 (en) * | 2018-05-08 | 2019-11-14 | Google Llc | Drag gesture animation |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
USD882615S1 (en) | 2018-09-06 | 2020-04-28 | Apple Inc. | Electronic device with animated graphical user interface |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
CN109874026B (en) * | 2019-03-05 | 2020-07-07 | NetEase (Hangzhou) Network Co., Ltd. | Data processing method and device, storage medium and electronic equipment |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | User activity shortcut suggestions |
DK201970510A1 (en) | 2019-05-31 | 2021-02-11 | Apple Inc | Voice identification in digital assistant systems |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11188980B1 (en) * | 2019-06-17 | 2021-11-30 | Wells Fargo Bank, N.A. | Display and control of building purchase cash flow |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
CN111447074B (en) * | 2020-03-22 | 2021-10-08 | Tencent Technology (Shenzhen) Co., Ltd. | Reminding method, device, equipment and medium in group session |
US11183193B1 (en) | 2020-05-11 | 2021-11-23 | Apple Inc. | Digital assistant hardware abstraction |
CN114116081B (en) * | 2020-08-10 | 2023-10-27 | Douyin Vision Co., Ltd. | Interactive dynamic fluid effect processing method and device and electronic equipment |
US11893400B1 (en) | 2022-08-26 | 2024-02-06 | Bank Of America Corporation | System and method for automated adjustment of software application function integrations of graphical user interface |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020180808A1 (en) * | 2001-05-30 | 2002-12-05 | Fujitsu Limited | Displaying plural linked information objects in virtual space in accordance with visual field |
US20070094620A1 (en) * | 2005-04-26 | 2007-04-26 | Lg Electronics Inc. | Mobile terminal providing graphic user interface and method of providing graphic user interface using the same |
US20070130545A1 (en) * | 2005-12-06 | 2007-06-07 | Arito Mochizuki | Information reproduction apparatus and information reproduction program |
US20070271524A1 (en) * | 2006-05-19 | 2007-11-22 | Fuji Xerox Co., Ltd. | Interactive techniques for organizing and retrieving thumbnails and notes on large displays |
US20090088204A1 (en) * | 2007-10-01 | 2009-04-02 | Apple Inc. | Movement-based interfaces for personal media device |
US7546545B2 (en) * | 2006-09-27 | 2009-06-09 | International Business Machines Corporation | Emphasizing drop destinations for a selected entity based upon prior drop destinations |
US20090228820A1 (en) * | 2008-03-07 | 2009-09-10 | Samsung Electronics Co. Ltd. | User interface method and apparatus for mobile terminal having touchscreen |
US20090262090A1 (en) * | 2006-10-23 | 2009-10-22 | Oh Eui Jin | Input device |
US20100023871A1 (en) * | 2008-07-25 | 2010-01-28 | Zumobi, Inc. | Methods and Systems Providing an Interactive Social Ticker |
US20100020026A1 (en) * | 2008-07-25 | 2010-01-28 | Microsoft Corporation | Touch Interaction with a Curved Display |
US20100079405A1 (en) * | 2008-09-30 | 2010-04-01 | Jeffrey Traer Bernstein | Touch Screen Device, Method, and Graphical User Interface for Moving On-Screen Objects Without Using a Cursor |
US20100088597A1 (en) * | 2008-10-02 | 2010-04-08 | Samsung Electronics Co., Ltd. | Method and apparatus for configuring idle screen of portable terminal |
US20100306650A1 (en) * | 2009-05-26 | 2010-12-02 | Pantech Co., Ltd. | User interface apparatus and method for user interface in touch device |
US8122356B2 (en) * | 2007-10-03 | 2012-02-21 | Eastman Kodak Company | Method for image animation using image value rules |
US8189880B2 (en) * | 2007-05-29 | 2012-05-29 | Microsoft Corporation | Interactive photo annotation based on face clustering |
US8296666B2 (en) * | 2004-11-30 | 2012-10-23 | Oculus Info. Inc. | System and method for interactive visual representation of information content and relationships using layout and gestures |
US8384662B2 (en) * | 2009-05-26 | 2013-02-26 | Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd. | Display device and icon display method therefor |
US9208174B1 (en) * | 2006-11-20 | 2015-12-08 | Disney Enterprises, Inc. | Non-language-based object search |
Family Cites Families (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2870911B2 (en) * | 1988-05-27 | 1999-03-17 | コダック・リミテッド | Document folder icons for displays in data processing systems |
US5806071A (en) * | 1995-08-21 | 1998-09-08 | Info America, Inc. | Process and system for configuring information for presentation at an interactive electronic device |
US20020130891A1 (en) * | 1999-12-08 | 2002-09-19 | Michael Singer | Text display with user-defined appearance and automatic scrolling |
EP1195673B1 (en) * | 2000-10-04 | 2007-05-09 | Siemens Aktiengesellschaft | Automotive multimedia system with animated display function |
US7308653B2 (en) * | 2001-01-20 | 2007-12-11 | Catherine Lin-Hendel | Automated scrolling of browser content and automated activation of browser links |
US7185290B2 (en) * | 2001-06-08 | 2007-02-27 | Microsoft Corporation | User interface for a system and process for providing dynamic communication access and information awareness in an interactive peripheral display |
US20070072666A1 (en) * | 2002-08-02 | 2007-03-29 | David Loewenstein | Multihand poker game |
US20040155909A1 (en) * | 2003-02-07 | 2004-08-12 | Sun Microsystems, Inc. | Scroll tray mechanism for cellular telephone |
PL377856A1 (en) * | 2003-04-15 | 2006-02-20 | Merck Patent Gmbh | Identification of n-alkylglycine trimers for induction of apoptosis |
US7343567B2 (en) * | 2003-04-25 | 2008-03-11 | Microsoft Corporation | System and method for providing dynamic user information in an interactive display |
US20060107213A1 (en) * | 2004-08-17 | 2006-05-18 | Sunil Kumar | Intelligent multimodal navigation techniques using motion of a mobile device sensed by a motion sensing device associated with the mobile device |
JP2006134288A (en) * | 2004-10-06 | 2006-05-25 | Sharp Corp | Interface and interface program executed by computer |
US7797645B2 (en) * | 2005-01-21 | 2010-09-14 | Microsoft Corporation | System and method for displaying full product functionality using minimal user interface footprint |
JP2006323672A (en) * | 2005-05-19 | 2006-11-30 | Sharp Corp | Interface |
KR100647958B1 (en) * | 2005-06-15 | 2006-11-23 | 엘지전자 주식회사 | Method and apparatus for providing home screen function in the mobile terminal |
TWI291640B (en) * | 2005-10-18 | 2007-12-21 | Benq Corp | Methods and portable electronic apparatuses for application execution |
JP5142510B2 (en) * | 2005-11-25 | 2013-02-13 | オセ−テクノロジーズ ビーブイ | Graphical user interface providing method and system |
US7880728B2 (en) * | 2006-06-29 | 2011-02-01 | Microsoft Corporation | Application switching via a touch screen interface |
US20080072174A1 (en) * | 2006-09-14 | 2008-03-20 | Corbett Kevin M | Apparatus, system and method for the aggregation of multiple data entry systems into a user interface |
JP2008157974A (en) * | 2006-12-20 | 2008-07-10 | Canon Inc | Display controller and control method of display controller |
US20080163119A1 (en) * | 2006-12-28 | 2008-07-03 | Samsung Electronics Co., Ltd. | Method for providing menu and multimedia device using the same |
US20080229255A1 (en) | 2007-03-15 | 2008-09-18 | Nokia Corporation | Apparatus, method and system for gesture detection |
US8073423B2 (en) * | 2007-05-25 | 2011-12-06 | At&T Mobility Ii Llc | Intelligent information control repository |
US7830396B2 (en) * | 2007-06-29 | 2010-11-09 | Nokia Corporation | Content and activity monitoring |
US20090079699A1 (en) * | 2007-09-24 | 2009-03-26 | Motorola, Inc. | Method and device for associating objects |
US20090122018A1 (en) * | 2007-11-12 | 2009-05-14 | Leonid Vymenets | User Interface for Touchscreen Device |
US8217906B2 (en) * | 2007-11-16 | 2012-07-10 | Sony Ericsson Mobile Communications Ab | User interface, apparatus, method, and computer program for viewing of content on a screen |
US20090174679A1 (en) * | 2008-01-04 | 2009-07-09 | Wayne Carl Westerman | Selective Rejection of Touch Contacts in an Edge Region of a Touch Surface |
US9250797B2 (en) * | 2008-09-30 | 2016-02-02 | Verizon Patent And Licensing Inc. | Touch gesture interface apparatuses, systems, and methods |
US20100095219A1 (en) * | 2008-10-15 | 2010-04-15 | Maciej Stachowiak | Selective history data structures |
US8458169B2 (en) * | 2009-09-25 | 2013-06-04 | Apple Inc. | Mini-form view for data records |
- 2009
  - 2009-12-31 US US12/651,071 patent/US8479107B2/en active Active
- 2010
  - 2010-11-24 EP EP10840670.3A patent/EP2519870B1/en active Active
  - 2010-11-24 CN CN201080064911.0A patent/CN102782629B/en active Active
  - 2010-11-24 WO PCT/IB2010/055391 patent/WO2011080616A1/en active Application Filing
- 2013
  - 2013-05-13 HK HK13105668.5A patent/HK1177972A1/en unknown
  - 2013-06-05 US US13/910,753 patent/US20130263032A1/en not_active Abandoned
- 2018
  - 2018-08-01 US US16/052,335 patent/US20180364894A1/en not_active Abandoned
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120260288A1 (en) * | 2011-04-11 | 2012-10-11 | Sony Corporation | Information processing apparatus, information processing method, and program |
US9021540B2 (en) * | 2011-04-11 | 2015-04-28 | Sony Corporation | Information processing apparatus, information processing method, and program |
US20150264120A1 (en) * | 2011-04-11 | 2015-09-17 | Sony Corporation | Information processing apparatus, information processing method, and program |
US9942308B2 (en) * | 2011-04-11 | 2018-04-10 | Sony Corporation | Performing communication based on grouping of a plurality of information processing devices |
US9756549B2 (en) | 2014-03-14 | 2017-09-05 | goTenna Inc. | System and method for digital communication between computing devices |
US10015720B2 (en) | 2014-03-14 | 2018-07-03 | GoTenna, Inc. | System and method for digital communication between computing devices |
US10602424B2 (en) | 2014-03-14 | 2020-03-24 | goTenna Inc. | System and method for digital communication between computing devices |
US20190138115A1 (en) * | 2016-07-20 | 2019-05-09 | Hewlett-Packard Development Company, L.P. | Visibly opaque and near infrared transparent display border with underlying encoded pattern |
US10620716B2 (en) * | 2016-07-20 | 2020-04-14 | Hewlett-Packard Development Company, L.P. | Visibly opaque and near infrared transparent display border with underlying encoded pattern |
US10996840B1 (en) * | 2019-08-26 | 2021-05-04 | Juniper Networks, Inc. | Systems and methods for providing user-friendly access to relevant help documentation for software applications |
Also Published As
Publication number | Publication date |
---|---|
HK1177972A1 (en) | 2013-08-30 |
US20110161852A1 (en) | 2011-06-30 |
EP2519870B1 (en) | 2019-12-25 |
CN102782629B (en) | 2015-05-27 |
US8479107B2 (en) | 2013-07-02 |
WO2011080616A1 (en) | 2011-07-07 |
EP2519870A4 (en) | 2016-02-17 |
EP2519870A1 (en) | 2012-11-07 |
US20180364894A1 (en) | 2018-12-20 |
CN102782629A (en) | 2012-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180364894A1 (en) | Method and apparatus for fluid graphical user interface | |
JP7174734B2 (en) | Systems, devices, and methods for dynamically providing user interface controls on touch-sensitive secondary displays | |
US11050701B2 (en) | System and method of embedding rich media into text messages | |
US8910076B2 (en) | Social media platform | |
CN105531660B (en) | For supporting the subscriber terminal equipment and its method of user's interaction | |
CN103034406B (en) | Method and apparatus for the operating function in touching device | |
CN103582873B (en) | System and method for showing the notice received from multiple applications | |
CN103999028B (en) | Invisible control | |
WO2011113992A1 (en) | Method and apparatus for displaying relative motion of objects on graphical user interface | |
KR101948075B1 (en) | Device and method for providing carousel user interface | |
US20120278764A1 (en) | Platform agnostic ui/ux and human interaction paradigm | |
US20130074003A1 (en) | Method and apparatus for integrating user interfaces | |
US20140006949A1 (en) | Enhanced user interface to transfer media content | |
US20130318437A1 (en) | Method for providing ui and portable apparatus applying the same | |
US20120287154A1 (en) | Method and apparatus for controlling display of item | |
KR20140105736A (en) | Dynamic navigation bar for expanded communication service | |
US11822943B2 (en) | User interfaces for presenting information about and facilitating application functions | |
WO2013061156A2 (en) | Systems and method for implementing multiple personas on mobile technology platforms | |
US9535569B2 (en) | System and method for a home multimedia container | |
EP2507728A1 (en) | Method and apparatus for providing media content searching capabilities | |
KR20140105737A (en) | Docking and undocking dynamic navigation bar for expanded communication service | |
US20140351756A1 (en) | System and method for displaying a multimedia container | |
CN105468254A (en) | Content searching apparatus and method for searching content | |
JP2014052903A (en) | Input device, input device controlling method, controlling program, and recording medium | |
US20140351723A1 (en) | System and method for a multimedia container |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NOKIA TECHNOLOGIES OY, FINLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOKIA CORPORATION;REEL/FRAME:040486/0125
Effective date: 20150116
Owner name: NOKIA CORPORATION, FINLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VAINIO, JANNE;BERGMAN, JANNE;REEL/FRAME:040486/0047
Effective date: 20100127
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |