US20090254829A1 - User interface with visual progression - Google Patents

User interface with visual progression

Info

Publication number
US20090254829A1
US20090254829A1 (application US12/099,635)
Authority
US
United States
Prior art keywords
icon
progression
graphical
image object
voice message
Prior art date
Legal status
Granted
Application number
US12/099,635
Other versions
US8489992B2
Inventor
Ruben Rohde
Current Assignee
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date
Filing date
Publication date
Application filed by Cisco Technology Inc
Priority to US12/099,635 (granted as US8489992B2)
Assigned to Cisco Technology, Inc.; assignor: Ruben Rohde
Priority to PCT/US2009/039634 (published as WO2009126565A1)
Priority to CN2009801115531A (published as CN101981904A)
Priority to EP09731322A (published as EP2263368A1)
Publication of US20090254829A1
Application granted
Publication of US8489992B2
Legal status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/247 Telephone sets including user guidance or feature selection means facilitating their use
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72433 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72469 User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons

Definitions

  • the present disclosure relates generally to data communication and user interfaces.
  • Communication takes many forms. For example, ideas, feelings, and questions are often conveyed through the spoken word. Using technology, the spoken word can be captured, stored, and transmitted to several potential listeners. Music, voice messages, such as voicemails, and other speech-based content are becoming a significant part of society. Users and listeners can use devices, such as computers, telephones, or personal digital assistants (“PDAs”), to listen to a variety of information at desired times. However, as more and more audio content is acquired, users are seeking to review and/or listen to the audio content in more efficient and comfortable ways.
  • FIG. 1 illustrates one embodiment of a data communication system
  • FIG. 2 illustrates an embodiment of a user device in communication with a network of a system, such as the system of FIG. 1 ;
  • FIG. 3 illustrates an embodiment of a graphics user interface of a user device, such as the user device of FIG. 2 ;
  • FIG. 4 illustrates an alternate embodiment of a graphics user interface of a user device, such as the user device of FIG. 2 ;
  • FIG. 5 illustrates one embodiment of a method for generating data for a graphics user interface, such as the graphics user interface of FIG. 3 and/or FIG. 4 ;
  • FIG. 6 illustrates an embodiment of a method for executing a graphics user interface, such as the graphics user interface of FIG. 3 and/or FIG. 4 .
  • the example embodiments described below include a graphics user interface and associated methods.
  • the graphics user interface includes one or more icons identifying respective audio files or data, such as voice messages. Progression of playback of the audio file or data is displayed within each of the respective icons.
  • a graphics user interface includes a plurality of graphical representations identifying separate audio data, respectively. Each of the plurality of graphical representations is configured in a list to be selected for playback of the respective audio data.
  • a progression icon is displayed in each of the respective graphical representations. Each progression icon illustrates a temporal progression of the playback of the respective audio data.
  • a data signal is received.
  • the data signal is stored as an audio file.
  • An image object identifying the audio file is generated.
  • a progression icon to be displayed in the image object is generated.
  • the progression icon extends over an entire height of the image object. Movement of the progression icon during playback of the audio file corresponds to temporal progression of the audio file.
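The generation flow in the bullets above — receive a data signal, store it as an audio file, and generate an image object containing a full-height progression icon — can be sketched in Python. All class and function names here are illustrative assumptions, not identifiers from the patent.

```python
import tempfile
from dataclasses import dataclass, field

@dataclass
class ProgressionIcon:
    """Progression marker displayed within an image object."""
    position: float = 0.0     # elapsed fraction of playback, 0.0 to 1.0
    full_height: bool = True  # extends over the entire height of the object

@dataclass
class ImageObject:
    """Graphical representation identifying one stored audio file."""
    audio_path: str
    icon: ProgressionIcon = field(default_factory=ProgressionIcon)

def handle_data_signal(signal: bytes) -> ImageObject:
    """Store a received data signal as an audio file and generate its icon."""
    f = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
    f.write(signal)
    f.close()
    return ImageObject(audio_path=f.name)
```

During playback, advancing `icon.position` from 0.0 toward 1.0 would correspond to the temporal progression of the audio file.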
  • a first graphical icon is displayed.
  • a second graphical icon is displayed simultaneously with the first graphical icon.
  • the first graphical icon and the second graphical icon represent separate voice messages, respectively.
  • a progression image object is displayed in each of the first and second graphical icons. Selection of the first graphical icon is received for playback of the respective voice message.
  • the location of the respective progression image object illustrates a temporal position of a current audio playback in the selected voice message.
  • a list of graphical representations of voice messages, such as visual voicemails, or other audio files or data is displayed.
  • each audio file or voice message is associated with a visual progression bar.
  • a user can see all of the voice message or audio file icons with their respective progression bars.
  • the user can playback audio and/or a voice message from the list, and during playback, the respective progression bar indicates the timing or temporal progression of the playback of the voice message and/or audio file.
  • the user can stop the audio file or voice message, and the progression bar will remain or rest at or indicate the point in time where the audio file or voice message was stopped. This way, a user can go to any part of an audio file or voice message without listening to the entire message or repeated portions.
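A minimal sketch of the stop-and-rest behavior described above, assuming a simple playback-state object (the names are hypothetical): the bar advances only while playing, and rests at the stop point so the user can later resume without replaying the entire message.

```python
class PlaybackState:
    """Tracks temporal progression of one voice message or audio file."""

    def __init__(self, duration_s: float) -> None:
        self.duration_s = duration_s
        self.elapsed_s = 0.0
        self.playing = False

    def play(self) -> None:
        self.playing = True

    def tick(self, dt_s: float) -> None:
        """Advance the progression bar only while playback is active."""
        if self.playing:
            self.elapsed_s = min(self.duration_s, self.elapsed_s + dt_s)

    def stop(self) -> None:
        """Stop playback; the bar rests at the current temporal position."""
        self.playing = False

    @property
    def fraction(self) -> float:
        """Relative position shown by the progression bar (0.0 to 1.0)."""
        return self.elapsed_s / self.duration_s
```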
  • FIG. 1 shows a data communication system 100 (hereinafter referred to as “system 100 ”).
  • the system 100 is an Internet protocol-based system, an Intranet system, a telephony system, a voice over Internet protocol (“VoIP”) system, a cellular based system, a message system, a wireless or wired audio/visual data communication system, and/or any known or future data communication system.
  • the system 100 includes, but is not limited to, a user or client device 104 , a network 112 , and another user or client device 108 .
  • the network 112 includes a server 116 and a repository or database 120 . Additional, different, or fewer devices or components may be provided. For example, a proxy server, a billing server, a name server, a switch or intelligent switch, other computers or workstations, administrative components, such as an administrative workstation, a gateway device, a backbone, ports, network connections, and network interfaces may be provided. While the components in FIG. 1 are shown as separate from one another, one or more of these components may be combined.
  • the user device 104 is a wireless device (e.g., a cellular phone, a PDA, a wireless computer), a wired or cabled device (e.g., a desktop computer using a broadband cable or digital subscriber line (“DSL”) connection), a landline based or VoIP telephone, or any other data communication device that can transmit or convey aural content, speech, or voice messages.
  • a user uses the device 104 to initiate and/or conduct voice or speech conversations as well as leave voice messages, such as voicemails, with an intended recipient.
  • the user device 104 communicates with the user device 108 or the server 116 associated with the user device 108 via the network 112 .
  • the user device 104 includes a memory 124 , a processor 128 , and a display 130 . Additional, different, or fewer components may be provided.
  • an input device is provided, such as a button, keypad, keyboard, mouse, trackball, rocker switch, touch pad, or voice recognition circuit.
  • Audio components may be provided.
  • a speaker, one or more microphones, an antenna, a transceiver, audio jacks, and/or other components for outputting or receiving audible or sound signals may be provided.
  • the audio components may be part of a separate device or are separate devices that may be placed in communication with the user device 104 .
  • the processor 128 is in communication with the display 130 and the memory 124 .
  • the processor 128 may be in communication with more or fewer components.
  • the processor 128 is a general processor, application-specific integrated circuit (“ASIC”), digital signal processor, field programmable gate array (“FPGA”), digital circuit, analog circuit, or combinations thereof.
  • the processor 128 is one or more processors operable to control and/or communicate with the various electronics and logic of the user device 104 .
  • the processor 128 , the memory 124 , and other circuitry may be part of an integrated circuit.
  • the processor 128 is operable to generate voice or speech data. For example, analog aural or speech signals are received and processed into digital signals.
  • the digital signals include one or more packets of data corresponding to speech components.
  • the processor 128 may generate data packets that are to be converted into audio signals without receiving any input speech signals, such as a computer based voice message.
  • the processor in combination with a transmitter may generate radio frequency (“RF”) signals to transmit speech or voice content.
  • the server 116 or another device generates the voice or speech data from stored data or from signals received in an analog format.
  • the display 130 is any mechanical and/or electronic display positioned for accessible viewing in, on, or in communication with the user device 104 .
  • the display 130 is a touch screen, a liquid crystal display (“LCD”), a cathode ray tube (“CRT”) display, or a plasma display.
  • the display 130 is operable to display graphical representations of voicemails, emails, websites, and other data or media.
  • the memory 124 is any known or future storage device.
  • the memory 124 is a non-volatile and/or volatile memory, such as a Random Access Memory “RAM” (electronic), a Read-Only Memory “ROM” (electronic), or an Erasable Programmable Read-Only Memory (EPROM or Flash memory).
  • a memory network may be provided.
  • the memory 124 may be part of the processor 128 .
  • the user device 104 is a workstation, computer, database, or other device used to store and/or modify a variety of audio files or data.
  • the user device 104 is configured to store, modify, download, upload, and/or transmit music files, such as MPEG audio layer 3 (“MP3”) files, speech data, communication files, and/or other audio data or files.
  • the user device 104 is operable to communicate with the user device 108 via the network 112 .
  • the network 112 is the Internet, a cellular network, an intranet, a local area network (“LAN”), a wide area network (“WAN”), a virtual private network (“VPN”), a message network, a music file sharing network, a VoIP network, a telephone network, and/or any known or future network.
  • the network may contain cellular base stations, servers, computers, or other systems, devices, or components for transferring and/or modifying data.
  • the server 116 and the database 120 are shown to be within the network 112 . However, the server 116 and/or database 120 may be outside the network 112 or may be part of a separate network.
  • the server 116 communicates with the user device 104 , the user device 108 , and the database 120 .
  • the server 116 is a provider server, an application server, a communications server, a database server, a proxy server, a file server, a web server, a client server, a peer-to-peer server, and/or any known or future server.
  • the server 116 is a network access server, a gateway general packet radio service (“GPRS”) support node, and/or an authentication, authorization, and accounting (“AAA”) server.
  • the server 116 is operable to receive voice, speech, music, and/or audio data from the user device 104 and 108 .
  • the server 116 is a software and/or hardware implementation.
  • the server 116 is an application program.
  • the server 116 is a server computer or any other hardware that executes and runs server applications.
  • a hardware implementation of the server 116 includes, but is not limited to, a memory 144 and a processor 140 . Additional, different, or fewer components may be provided.
  • the processor 140 is in communication with the memory 144 .
  • the processor 140 may be in communication with more or fewer components.
  • the memory 144 and the processor 140 are similar to or different than the memory 124 and the processor 128 , respectively.
  • the processor 140 analyzes and/or modifies the voice, speech, music, and/or audio data and passes or transmits the data to the user device 108 .
  • the processor 140 may also store or save the voice, speech, or music data as a voice message or other audio file in the memory 144 or the database 120 .
  • the server 116 may generate and store voice messages or voicemails as well as store audio files, such as music files.
  • the server 116 is also configured to generate graphical data that represents or identifies the voice messages, music files, or other audio data.
  • the processor 140 generates icons, image objects, or graphical representations of the audio data or files.
  • the processor 140 also generates a progression icon or image object, such as a visual progression bar, that is displayed in the respective icons identifying the audio files and/or voice messages.
  • the progression icons illustrate or show temporal progression of the audio file during playback.
  • the server 116 is further operable to convert the voice or speech data into textual or word data.
  • the processor 140 or other component such as a converter, identifies speech content and associates words or phrases with the speech content to generate text corresponding to a voice message.
  • the server 116 generates a summary or textual summary of the voice message.
  • the summary includes a gist of the voice message.
  • the summary provides more than just a name, date, time, or number.
  • the summary may provide a central or main point, idea, or communication that is to be conveyed by the voice message.
  • the summary may be displayed with a respective icon identifying a voice message or visual voicemail.
  • the methods and features of U.S. patent application Ser. No. ______ entitled “USER INTERFACE WITH VOICE MESSAGE SUMMARY” (Attorney Docket Number 13522-15/Cisco No. 958331) may be used.
  • the functionality of the server 116 may be implemented on a different or separate device.
  • a gateway device, a switch, an intelligent switch, a router, or other device may be used to execute the tasks of the server 116 .
  • the database 120 is in communication with the server 116 .
  • the database 120 is a central repository, a cache network, distributed database of a plurality of databases, or any known or future data storage device or system.
  • the database 120 includes a memory 148 . Additional, different, or fewer components may be provided. For example, one or more processors may be provided.
  • the memory 148 is similar to or different than the memory 144 and/or 124 .
  • the database 120 receives and stores data, such as voice message data, music files, or other audio data. For example, when a first user attempts to call or have a conversation with a second user (e.g., the user device 104 attempts to transmit voice data to the user device 108 via the server 116 ), the first user may have to leave a voice message or voicemail if the second user is not available. In such a case, the server 116 generates a voice message from the voice data received and stores the voice message in the memory 144 and/or the database 120 (via the memory 148 ). The storage of the voice message in the memory 144 may be relatively temporary compared to the storage in the database 120 . The database 120 may partition voice messages based on different users, locations, timings, or other factors. Alternatively, the database 120 may also store the progression information corresponding to the respective voice messages.
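The storage behavior described above — the database 120 partitioning voice messages by user — might be modeled as follows. The class and field names are assumptions for illustration, not structures from the patent.

```python
from collections import defaultdict
from typing import NamedTuple

class VoiceMessage(NamedTuple):
    sender: str
    received_at: float  # epoch seconds
    audio_path: str

class MessageStore:
    """Server-side storage of voice messages, partitioned per recipient."""

    def __init__(self) -> None:
        self._by_user = defaultdict(list)

    def save(self, recipient: str, msg: VoiceMessage) -> None:
        """Store a generated voice message under the recipient's partition."""
        self._by_user[recipient].append(msg)

    def inbox(self, recipient: str) -> list:
        """Return the recipient's messages, most recent first."""
        return sorted(self._by_user[recipient],
                      key=lambda m: m.received_at, reverse=True)
```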
  • Stored voice messages may be retrieved from the database 120 or the server 116 by a user (e.g., via the user device 108 ) to listen to the respective voice message and/or to prepare a summary.
  • the database 120 may be integrated into the server 116 .
  • the server 116 may transmit the voice message to the user device 108 upon creation of the voice message to be stored on the user device 108 .
  • the database 120 contains a plurality of music files or audio data that can be downloaded or uploaded for listening purposes. For example, a user using either the user device 104 or 108 may desire to listen to a new or old song or other music file. The user downloads a song via the server 116 to listen to using the user device 104 or 108 .
  • the database 120 may also contain or store the graphical representations of the voice messages and/or audio files as well as the respective progression icons.
  • the user device 108 is similar to or different than the user device 104 .
  • the user device 108 includes a memory 132 , a processor 136 , and a display 138 . Additional, different, or fewer components may be provided, such as described in regard to the user device 104 .
  • the memory 132 , the processor 136 , and the display 138 are similar to or different than the memory 124 , the processor 128 , and the display 130 , respectively.
  • the user device 108 is used to view graphical representations or icons representing voice messages, such as visual voicemails, music or song files, or other audio data and is used to playback the audio data or files.
  • the user device 108 also displays visual progression, such as progression icons, image objects, or bars, within each of the graphical representations identifying the voice messages, music files, and/or other audio files.
  • the user device 108 may also perform the functionalities of the user device 104 and vice versa.
  • FIG. 2 shows a user device 201 in communication with a network 205 , such as the network 112 .
  • the user device 201 is similar to the user device 108 and/or the user device 104 .
  • the user device 201 is a cellular telephone, a digital telephone, a computer, or a PDA.
  • the user device 201 includes a screen or display 209 , such as the display 130 and/or 138 .
  • the screen 209 is used to view graphical representations of voice messages, visual voicemails, music or song files, or other audio data.
  • the user device 201 communicates with the network 205 .
  • the network 205 sends or transmits data 213 , such as one or more data packets, to the user device 201 .
  • the data 213 is used to make the user device 201 aware of a voice message or is used to display a representation of a music file and/or audio file.
  • the data 213 is received by the user device 201 , and based on the data 213 , a graphical representation, image object, or icon identifying the voice message, song, and/or audio file is generated or displayed on the user device 201 .
  • the graphical data, including progression icon information, that is to be displayed may be part of the data 213 or may be generated in the user device 201 .
  • the progression image data may be transmitted to the user device 201 separate from the data 213 .
  • the data 213 may be or include the voice message, music, and/or audio content that is to be played back via the user device 201 .
  • the user device may send a request to the network 205 (e.g., a server or database in the network 205 ) to obtain or retrieve the audio content, and the network transmits the audio content to the user device, such as via the data 213 .
  • the voice message, music, or other audio content may be stored on the user device 201 .
  • the progression image data such as the progression icon or bar data, may also be generated in the user device 201 rather than in a server or other network device.
  • FIG. 3 shows a graphics user interface 300 of a user device, such as the user device 104 , 108 , and/or 201 .
  • the graphics user interface 300 is executed on a display, such as the display 130 , 138 , and/or 209 .
  • the graphics user interface 300 includes one or more graphical representations, image objects, or icons 304 identifying respective voice messages, such as voicemails, music or song files, or other audio data.
  • a progression icon or image object 308 is displayed with a respective icon 304 . Additional, fewer, or different features may be displayed. For example, time data, personal information of a caller, caller identification, date data, subject data, song title data, audio file information, voice summary information, activation buttons or icons, or other content may also be displayed with the icons 304 .
  • a list of graphical representations 304 is displayed in the graphics user interface 300 .
  • the graphical representations 304 correspond to and identify separate voice messages or visual voicemails ready to be selected for playback by a user.
  • the graphical representations 304 are contained within a same screen shot or window.
  • the list of icons 304 is displayed within a voicemail inbox screen or equivalent screen for a user to view his or her voicemail messages.
  • multiple screens or windows may be used for displaying different icons 304 . For example, deleted icons 304 or icons 304 related to voicemails that have been listened to may be listed in a first window or screen shot, and saved icons 304 or icons 304 related to voicemails that have not been listened to may be listed in a second window or screen shot.
  • the list of graphical representations 304 corresponds to a list of different songs, music files, or other audio files.
  • a list of different song icons is shown in a same screen shot for a user to scroll through and select a desired song to listen to.
  • the image objects 304 have a substantially rectangular shape. Alternatively, the image objects 304 may have a substantially oval, circular, or other geometrical shape. In other alternatives, no shape is provided, such as where the objects 304 are merely listed.
  • the image objects 304 may be represented in a two dimensional or three dimensional perspective. Also, the image objects 304 may be illustrated with the same or similar color or shading or may be illustrated with a variety of or different colors or shadings.
  • the image objects 304 are listed in chronological order. For example, the most recent image object 304 may be listed at the top or beginning of the list. Alternatively, the image objects 304 may be listed based on the respective subjects, categories, or other user preferences (e.g., configure the list by sender, title, or musical group).
  • Listing of the icons 304 may take a variety of forms. For example, the graphics user interface 300 may list the icons 304 in a vertical, horizontal, circular, or other geometrical pattern. A user may scroll up or down to view all the icons 304 within a respective screen or window.
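One way to realize the configurable ordering described above — chronological by default, or by sender, title, or musical group per user preference — is a single sort helper. This sketch assumes icons are represented as plain dictionaries; the field names are illustrative.

```python
def order_icons(icons, key="received_at", reverse=True):
    """Order image objects chronologically or per user preference.

    `key` may be any field present on the icons, e.g. "sender" or "title".
    """
    return sorted(icons, key=lambda icon: icon[key], reverse=reverse)
```

With the defaults, the most recent image object lands at the top or beginning of the list, matching the chronological ordering described for FIG. 3.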
  • the progression icon 308 illustrates a temporal progression of the playback of the respective audio data.
  • the progression icon 308 is a visual or graphic bar that is configured to move across or over a respective icon or image object 304 .
  • the progression icon 308 may or may not be an interactive icon.
  • the movement of the progression icon 308 during playback of the audio file corresponds to temporal progression of the audio file. For example, if a user selects one image object 304 to listen to, the respective progression icon 308 begins to move across the image object 304 , identifying the current temporal location, point in time, or relative time to the overall length of the audio playback.
  • the portion or portions of the image object 304 that have been passed by the progression icon 308 change color or shade relative to portions of the image object that have not been passed by or over by the progression icon 308 .
  • the progression icon 308 for each image object 304 is either always visually present or appears when playback has been initiated.
  • the progression icon 308 may also be visually present in the respective image object 304 when playback is stopped to mark the temporal location of the stopped or paused audio file.
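The color or shade split described above follows directly from the elapsed fraction of playback. This sketch assumes a simple pixel-width model for the image object; it is an illustration, not code from the patent.

```python
def split_regions(object_width_px: int, elapsed_s: float, duration_s: float):
    """Return (passed_px, remaining_px) widths for the two shaded regions.

    The portion already passed by the progression icon is shaded
    differently from the portion not yet played back.
    """
    fraction = max(0.0, min(1.0, elapsed_s / duration_s))
    passed = round(object_width_px * fraction)
    return passed, object_width_px - passed
```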
  • before playback begins, the progression icon 308 may rest at an origin state (e.g., the beginning of the audio file).
  • the progression icons 308 have a substantially oval shape.
  • the progression icons 308 may be transparent or may be opaque.
  • the progression icons 308 may have a substantially rectangular, circular, triangular, or other geometrical shape.
  • the progression icons 308 may be a dividing line having little or no width that illustrates progression via color, shade, or another visual change.
  • the progression icons 308 are displayed in, on, or over each of the respective icons 304 . Multiple progression icons 308 may be displayed at the same time in the same list, screen, and/or window. For example, the progression icons 308 are in or within the respective visual borders of the graphical representations 304 . The progression icons 308 may extend across or over an entire height of each image object 304 , respectively. For example, the progression icons 308 may cover the area between a lower border limit and an upper border limit of the respective image objects 304 . The progression icons 308 may also extend beyond the borders, such as the lower and upper borders, of the respective image objects 304 .
  • FIG. 4 shows a graphics user interface 401 of a user device, such as the user device 104 , 108 , and/or 201 .
  • the graphics user interface 401 may be similar to or different than the graphics user interface 300 .
  • the graphics user interface 401 includes or displays date information 409 , time information 413 , sender information 417 , phone number information 421 , subject information 425 , and duration information 429 for each listed voice message icon 405 .
  • the icons 405 are similar to or different than the icons 304 .
  • progression icons or image objects 403 , such as the icons 308 , are provided.
  • the progression icons 403 are illustrated as transparent, but they may be provided as opaque objects.
  • the number of messages and/or audio files and activation icons or soft buttons 433 are also provided.
  • FIG. 4 illustrates a list of visual voicemails, but other audio files or data may be illustrated instead or in addition.
  • one or more song, music, or audio file icons may be provided in which the subject information 425 includes title, musical group, name, subject, or other information.
  • the date information 409 and the time information 413 represent when a user received the respective voice message.
  • the date and time information may be configured to user preferences, such as representing the date in a United States or European fashion.
  • the sender information 417 identifies the person who left the voice message. For example, a full or partial name may be displayed. The name may be extracted from the voice message itself or may be identified through a network database, caller identification, or a memory network. Entity or company names may be used instead of personal names.
  • the phone number information 421 corresponds to the phone number or Internet protocol address of the sender of the voice message.
  • the phone number information 421 may or may not include area code information or foreign numbers.
  • the phone number information 421 may also be extracted from the voice message or caller identification. For example, a person leaving the voice message may say or speak the number he or she is calling from or a contact number, and that number may be extracted from the voice message.
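The patent notes that a spoken contact number may be extracted from the voice message itself. Assuming a text transcript of the message is available (e.g., from the speech-to-text conversion performed by the server 116), a naive extraction might look like this; the pattern handles only North American-style numbers and is purely illustrative.

```python
import re
from typing import Optional

def extract_contact_number(transcript: str) -> Optional[str]:
    """Naively pull a spoken North American-style phone number from a transcript."""
    match = re.search(r"\b(\d{3})[-.\s]?(\d{3})[-.\s]?(\d{4})\b", transcript)
    return "-".join(match.groups()) if match else None
```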
  • the subject information 425 includes textual summaries or summaries, such as the summaries 308 , of respective voice messages.
  • One or more words or a phrase summarizing or providing a gist of the voice message is displayed.
  • the gist or summary may be more than a number, name, date, or time associated with the voice message.
  • the gist or summary may provide the essence or central message or communication (e.g., material or significant words) that is to be conveyed by the voice message. Examples of summaries include: “Will you be at the meeting,” “About the new project,” “Update on the plan,” “Next week travel plans,” and “Confirm the booking.”
  • the duration information 429 represents how long the voice message is temporally. A user may view the duration information 429 to determine which voice message will be quicker or faster to review. Alternatively, the duration information 429 corresponds to the length of a song, musical selection, or other audio file. Also, the duration information 429 may illustrate the change in time as audio content is played back instead of being a constant indicator of the entire length of the audio content. Or, a separate counter icon, such as a playback time associated with the progression icon 403 , may be provided.
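Whether the duration information 429 is shown as a constant length or as a running counter during playback, the display reduces to formatting seconds as minutes and seconds; a minimal sketch with illustrative function names:

```python
def format_duration(seconds: int) -> str:
    """Render a length in seconds as M:SS, as a duration column might."""
    minutes, secs = divmod(max(0, int(seconds)), 60)
    return f"{minutes}:{secs:02d}"

def format_counter(elapsed: int, total: int) -> str:
    """Render a running playback counter, e.g. '0:45 / 2:10'."""
    return f"{format_duration(elapsed)} / {format_duration(total)}"
```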
  • the activation button 433 is an icon or soft button to select, activate, or initiate a respective voice message, song, or other audio file. By selecting the activation button 433, a user may listen to the respective audio content or data via an audio output on the user device or in communication with the user device. The activation button 433 may be used to stop or pause playback of the audio content as well. Other activation methods may be used, such as “clicking” on the icon 405 for a given audio file.
  • the progression icons 403 are used to view a temporal progression of playback of the respective audio content.
  • the progression icons 403 are within each respective image object 405 and may extend an entire height of the respective image object 405 .
  • the progression icons 403 can be moved to a desired point in time across the respective image objects 405 (e.g., independent of playback, a user may move a progression icon 403 to a desired temporal location).
  • Progression icons 403 may not be visually present until initiation of playback or until a user designates, via playback or user control, a progression icon 403 to be at a temporal location above an origin point or zero time.
  • a user can select the image object 405, such as by clicking on it or placing a mouse or stylus pen on the image object 405, in which a progression icon 403 may appear and may be dragged to a desired temporal location.
  • a user can listen to a portion of audio content for one file or message, listen to another message or file, and then come back to the original message or file to finish playback from where, temporally, the audio content was stopped or paused. Therefore, a user can conveniently and comfortably listen to a variety of messages and/or audio files displayed within a same screen or window and view temporal progression or location of each audio file without opening or viewing another window or screen shot.
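The behavior described above — pausing one message, playing another, and returning to the first to finish from where it stopped — only requires that the interface remember a playback offset per message. A sketch of that bookkeeping; the class and method names are illustrative, not from this disclosure:

```python
class PlaybackPositions:
    """Track the paused/resume offset of each audio item in the list."""

    def __init__(self) -> None:
        self._positions: dict[str, float] = {}

    def pause(self, message_id: str, offset_seconds: float) -> None:
        """Record where playback of a message was stopped or paused."""
        self._positions[message_id] = offset_seconds

    def resume_offset(self, message_id: str) -> float:
        """Offset to resume from; 0.0 for a message never played."""
        return self._positions.get(message_id, 0.0)
```

A user who pauses message A at 12 seconds, listens to message B, and returns to A would then resume A at 12 seconds, while B keeps its own independent offset.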
  • FIG. 5 shows a method for generating data for a graphics user interface, such as the graphics user interface 300 and/or 401 . Fewer or more acts may be provided.
  • a data signal is received.
  • the data signal may be a voice signal, such as speech data, music data, or other audio data.
  • a first user initiates a call to a second user via a phone, computer, or PDA.
  • the second user is unavailable, and, therefore, the first user leaves a voice message for the second user.
  • Speech content from the first user is converted into voice data or signals.
  • Voice data, such as data packets, or voice signals are transmitted to a network, such as the network 112 or 205 .
  • a server or basestation, such as the server 116, or a user device receives the voice data or signal.
  • a music, song, or other audio signal is received by the server or basestation.
  • the data signal is stored or saved as a voice message, such as a voicemail, music or song file, or other audio file.
  • the server stores the voice message and/or audio file in a repository or database, such as the database 120 .
  • an image object, such as the image object 304 or 405, identifying the audio file is generated, such as by the server.
  • the image object may also be stored in the database.
  • the image object is transmitted to a respective user device, such as the user device 104 , 108 , or 201 , or user interface for display. Alternatively, the image object is generated in the user device.
  • a progression icon, such as the progression icon 308 or 403, is generated, such as by the server, to be displayed in the image object.
  • Alternatively, the progression icon is generated in the user device.
  • audio content and/or other features of the graphics user interface may be transmitted to the user device or may be generated by the user device.
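The acts of FIG. 5 — receive a data signal, store it as an audio file, generate an image object identifying it, and generate its progression icon — can be sketched as a simple server-side pipeline. Every name below is illustrative rather than taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class ImageObject:
    """Graphical representation identifying a stored audio file."""
    message_id: str
    sender: str
    duration_seconds: int
    progression_offset: float = 0.0  # progression icon position, in seconds

def store_and_represent(database: dict, message_id: str, sender: str,
                        audio_bytes: bytes, duration_seconds: int) -> ImageObject:
    """Store the received data signal as an audio file and build its image object."""
    database[message_id] = audio_bytes  # act: store the data signal as an audio file
    # acts: generate the identifying image object and its progression icon (at origin)
    return ImageObject(message_id, sender, duration_seconds)
```

The resulting `ImageObject` would then be transmitted to the user device for display, or the equivalent structure could be generated on the device itself, as the disclosure notes.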
  • FIG. 6 shows a method for executing a graphics user interface, such as the graphics user interface 300 and/or 401 . Fewer or more acts may be provided.
  • a first graphical icon, such as a graphical representation 304 or 405, is displayed.
  • the second user may turn on his or her computer, phone, PDA, or other user device and view a voicemail inbox, song or music selection screen, and/or graphics user interface screen or window identifying audio content.
  • the window displays an icon identifying a voice message, a song, or other audio files.
  • a second graphical icon, such as another graphical representation 304 or 405, is displayed simultaneously with the first graphical icon.
  • the second graphical icon identifies a separate or other voice message, song, or other audio data.
  • the second and first graphical icons are arranged in a list format. For example, the first and second graphical icons are displayed in the same window or screen. The second user may scroll up and down within the screen to view multiple icons identifying different voice messages, songs, or other audio files.
  • a progression image object, such as the progression icon 308 or 403, is displayed in each of the first and second graphical icons.
  • the progression image objects are always displayed, or are displayed when playback of a selected audio file is initiated or when a user stops, pauses, or selects a point in time after a zero time or origin point within a respective graphical icon.
  • selection of the first graphical icon is received for playback of the respective voice message, song, or other audio file.
  • the first graphical icon includes an activation button, such as the activation button 433 , and the second user selects the activation button for playback.
  • the progression image object may be selected or “clicked on” to initiate playback.
  • the user device requests the stored voice message, song, or audio content from the database or server in the network, and the database or server transmits the voice message, song, or audio file to the user device in response to the query.
  • the user device outputs audio signals to play the audio content.
  • the audio signals may be outputted by speakers in or on the user device or by another device in communication with the user device.
  • the voice message, song, or audio content is stored on or in the user device, and requesting the voice message, song, or audio content from the network may be avoided.
  • a command to stop playback is received. For example, a user selects or “clicks on” the activation/soft button or progression icon to stop or pause playback of the audio content.
  • the respective progression image object or icon rests or stops at a position corresponding to a time at which the selected audio content is stopped.
  • selection of the second graphical icon is received for playback of the respective voice message, song, or other audio file, similar to the selection of the first graphical icon. While playback of the audio content regarding the second graphical icon is occurring, the respective progression icon moves across the second graphical icon to track the temporal location of the audio playback.
  • the logic, software or instructions for implementing the processes, methods and/or techniques discussed above are provided on computer-readable storage media or memories or other tangible media, such as a cache, buffer, RAM, removable media, hard drive, other computer readable storage media, or any other tangible media.
  • the tangible media include various types of volatile and nonvolatile storage media.
  • the functions, acts or tasks illustrated in the figures or described herein are executed in response to one or more sets of logic or instructions stored in or on computer readable storage media.
  • the functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone or in combination.
  • processing strategies may include multiprocessing, multitasking, parallel processing and the like.
  • the instructions are stored on a removable media device for reading by local or remote systems.
  • the logic or instructions are stored in a remote location for transfer through a computer network or over telephone lines.
  • the logic or instructions are stored within a given computer, central processing unit (“CPU”), graphics processing unit (“GPU”) or system.

Abstract

In one embodiment, a graphics user interface is provided. The graphics user interface includes a plurality of graphical representations identifying separate audio data, respectively. Each of the plurality of graphical representations is configured in a list to be selected for playback of the respective audio data. A progression icon is displayed in each of the respective graphical representations. Each progression icon illustrates a temporal progression of the playback of the respective audio data.

Description

    BACKGROUND
  • The present disclosure relates generally to data communication and user interfaces.
  • Communication takes many forms. For example, ideas, feelings, and questions are many times conveyed using the spoken word. Using technology, the spoken word can be captured, stored, and transmitted to several potential listeners. Music, voice messages, such as voicemails, and other speech based content are becoming a significant part of society. Users and listeners can use devices, such as computers, telephones, or personal digital assistants (“PDAs”), to listen to a variety of information at desired times. However, as more and more audio content is acquired, users are seeking to review and/or listen to the audio content in more efficient and comfortable manners.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The components and the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
  • FIG. 1 illustrates one embodiment of a data communication system;
  • FIG. 2 illustrates an embodiment of a user device in communication with a network of a system, such as the system of FIG. 1;
  • FIG. 3 illustrates an embodiment of a graphics user interface of a user device, such as the user device of FIG. 2;
  • FIG. 4 illustrates an alternate embodiment of a graphics user interface of a user device, such as the user device of FIG. 2;
  • FIG. 5 illustrates one embodiment of a method for generating data for a graphics user interface, such as the graphics user interface of FIG. 3 and/or FIG. 4; and
  • FIG. 6 illustrates an embodiment of a method for executing a graphics user interface, such as the graphics user interface of FIG. 3 and/or FIG. 4.
  • DESCRIPTION OF EXAMPLE EMBODIMENTS Overview
  • By way of introduction, the example embodiments described below include a graphics user interface and associated methods. For example, the graphics user interface includes one or more icons identifying respective audio files or data, such as voice messages. Progression of playback of the audio file or data is displayed within each of the respective icons.
  • According to a first aspect, a graphics user interface is provided. The graphics user interface includes a plurality of graphical representations identifying separate audio data, respectively. Each of the plurality of graphical representations is configured in a list to be selected for playback of the respective audio data. A progression icon is displayed in each of the respective graphical representations. Each progression icon illustrates a temporal progression of the playback of the respective audio data.
  • According to a second aspect, a data signal is received. The data signal is stored as an audio file. An image object identifying the audio file is generated. A progression icon to be displayed in the image object is generated. The progression icon extends over an entire height of the image object. Movement of the progression icon during playback of the audio file corresponds to temporal progression of the audio file.
  • According to a third aspect, a first graphical icon is displayed. A second graphical icon is displayed simultaneously with the first graphical icon. The first graphical icon and the second graphical icon represent separate voice messages, respectively. A progression image object is displayed in each of the first and second graphical icons. Selection of the first graphical icon is received for playback of the respective voice message. The location of the respective progression image object illustrates a temporal position of a current audio playback in the selected voice message.
  • The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims. Further aspects and advantages of the invention are discussed below in conjunction with the preferred embodiments.
  • Example Embodiments
  • A list of graphical representations of voice messages, such as visual voicemails, or other audio files or data is displayed. For example, each audio file or voice message is associated with a visual progression bar. A user can see all of the voice message or audio file icons with their respective progression bars. The user can play back audio and/or a voice message from the list, and during playback, the respective progression bar indicates the timing or temporal progression of the playback of the voice message and/or audio file. The user can stop the audio file or voice message, and the progression bar will remain or rest at or indicate the point in time where the audio file or voice message was stopped. This way, a user can go to any part of an audio file or voice message without listening to the entire message or repeated portions.
  • FIG. 1 shows a data communication system 100 (hereinafter referred to as “system 100”). The system 100 is an Internet protocol-based system, an Intranet system, a telephony system, a voice over Internet protocol (“VoIP”) system, a cellular based system, a message system, a wireless or wired audio/visual data communication system, and/or any known or future data communication system.
  • The system 100 includes, but is not limited to, a user or client device 104, a network 112, and another user or client device 108. The network 112 includes a server 116 and a repository or database 120. Additional, different, or fewer devices or components may be provided. For example, a proxy server, a billing server, a name server, a switch or intelligent switch, other computers or workstations, administrative components, such as an administrative workstation, a gateway device, a backbone, ports, network connections, and network interfaces may be provided. While the components in FIG. 1 are shown as separate from one another, one or more of these components may be combined.
  • The user device 104 is a wireless device (e.g., a cellular phone, a PDA, a wireless computer), a wired or cabled device (e.g., a desktop computer using a broadband cable or digital subscriber line (“DSL”) connection), a landline based or VoIP telephone, or any other data communication device that can transmit or convey aural content, speech, or voice messages. A user uses the device 104 to initiate and/or conduct voice or speech conversations as well as leave voice messages, such as voicemails, with an intended recipient. For example, the user device 104 communicates with the user device 108 or the server 116 associated with the user device 108 via the network 112.
  • The user device 104 includes a memory 124, a processor 128, and a display 130. Additional, different, or fewer components may be provided. For example, an input device is provided, such as a button, keypad, keyboard, mouse, trackball, rocker switch, touch pad, or voice recognition circuit. Audio components may be provided. For example, a speaker, one or more microphones, an antenna, a transceiver, audio jacks, and/or other components for outputting or receiving audible or sound signals may be provided. Alternatively, the audio components may be part of a separate device or are separate devices that may be placed in communication with the user device 104.
  • The processor 128 is in communication with the display 130 and the memory 124. The processor 128 may be in communication with more or fewer components. The processor 128 is a general processor, application-specific integrated circuit (“ASIC”), digital signal processor, field programmable gate array (“FPGA”), digital circuit, analog circuit, or combinations thereof. The processor 128 is one or more processors operable to control and/or communicate with the various electronics and logic of the user device 104. The processor 128, the memory 124, and other circuitry may be part of an integrated circuit.
  • The processor 128 is operable to generate voice or speech data. For example, analog aural or speech signals are received and processed into digital signals. The digital signals include one or more packets of data corresponding to speech components. Alternatively, the processor 128 may generate data packets that are to be converted into audio signals without receiving any input speech signals, such as a computer based voice message. Also, the processor in combination with a transmitter may generate radio frequency (“RF”) signals to transmit speech or voice content. In alternative embodiments, the server 116 or other device generates the voice or speech data from stored data or from signals received in an analog format.
  • The display 130 is any mechanical and/or electronic display positioned for accessible viewing in, on, or in communication with the user device 104. For example, the display 130 is a touch screen, liquid crystal display (“LCD”), cathode ray tube (“CRT”) display, or a plasma display. The display 130 is operable to display graphical representations of voicemails, emails, websites, and other data or media.
  • The memory 124 is any known or future storage device. The memory 124 is a non-volatile and/or volatile memory, such as a Random Access Memory “RAM” (electronic), a Read-Only Memory “ROM” (electronic), or an Erasable Programmable Read-Only Memory (EPROM or Flash memory). A memory network may be provided. The memory 124 may be part of the processor 128.
  • In an alternate embodiment, the user device 104 is a workstation, computer, database, or other device used to store and/or modify a variety of audio files or data. For example, the user device 104 is configured to store, modify, download, upload, and/or transmit music files, such as MPEG audio layer 3 (“MP3”) files, speech data, communication files, and/or other audio data or files.
  • The user device 104 is operable to communicate with the user device 108 via the network 112. The network 112 is the Internet, a cellular network, an intranet, a local area network (“LAN”), a wide area network (“WAN”), a virtual private network (“VPN”), a message network, a music file sharing network, a VoIP network, a telephone network, and/or any known or future network. The network may contain cellular base stations, servers, computers, or other systems, devices, or components for transferring and/or modifying data. The server 116 and the database 120 are shown to be within the network 112. However, the server 116 and/or database 120 may be outside the network 112 or may be part of a separate network.
  • The server 116 communicates with the user device 104, the user device 108, and the database 120. The server 116 is a provider server, an application server, communications server, database server, proxy server, file server, web server, client server, peer-to-peer server, and/or any known or future server. For example, the server 116 is a network access server, a gateway general packet radio service (“GPRS”) support node, and/or an authentication, authorization, and accounting (“AAA”) server. The server 116 is operable to receive voice, speech, music, and/or audio data from the user device 104 and 108. The server 116 is a software and/or hardware implementation. For example, the server 116 is an application program. Alternatively, the server 116 is a server computer or any other hardware that executes and runs server applications.
  • A hardware implementation of the server 116 includes, but is not limited to, a memory 144 and a processor 140. Additional, different, or fewer components may be provided. The processor 140 is in communication with the memory 144. The processor 140 may be in communication with more or fewer components. The memory 144 and the processor 140 are similar to or different than the memory 124 and the processor 128, respectively. The processor 140 analyzes and/or modifies the voice, speech, music, and/or audio data and passes or transmits the data to the user device 108. The processor 140 may also store or save the voice, speech, or music data as a voice message or other audio file in the memory 144 or the database 120.
  • For example, the server 116 may generate and store voice messages or voicemails as well as store audio files, such as music files. The server 116 is also configured to generate graphical data that represents or identifies the voice messages, music files, or other audio data. For example, the processor 140 generates icons, image objects, or graphical representations of the audio data or files. The processor 140 also generates a progression icon or image object, such as a visual progression bar, that is displayed in the respective icons identifying the audio files and/or voice messages. The progression icons illustrate or show temporal progression of the audio file during playback.
  • The server 116 is further operable to convert the voice or speech data into textual or word data. For example, the processor 140 or other component, such as a converter, identifies speech content and associates words or phrases with the speech content to generate text corresponding to a voice message. The server 116 generates a summary or textual summary of the voice message. For example, the summary includes a gist of the voice message. The summary provides more than just a name, date, time, or number. For example, the summary may provide a central or main point, idea, or communication that is to be conveyed by the voice message. The summary may be displayed with a respective icon identifying a voice message or visual voicemail. For example, the methods and features of U.S. patent application Ser. No. ______ entitled “USER INTERFACE WITH VOICE MESSAGE SUMMARY” (Attorney Docket Number 13522-15/Cisco No. 958331) may be used.
  • The functionality of the server 116 may be implemented on a different or separate device. For example, a gateway device, a switch, an intelligent switch, a router, or other device may be used to execute the tasks of the server 116.
  • The database 120 is in communication with the server 116. The database 120 is a central repository, a cache network, distributed database of a plurality of databases, or any known or future data storage device or system. The database 120 includes a memory 148. Additional, different, or fewer components may be provided. For example, one or more processors may be provided. The memory 148 is similar to or different than the memory 144 and/or 124.
  • The database 120 receives and stores data, such as voice message data, music files, or other audio data. For example, when a first user attempts to call or have a conversation with a second user (e.g., the user device 104 attempts to transmit voice data to the user device 108 via the server 116), the first user may have to leave a voice message or voicemail if the second user is not available. In such a case, the server 116 generates a voice message from the voice data received and stores the voice message in the memory 144 and/or the database 120 (via the memory 148). The storage of the voice message in the memory 144 may be relatively temporary compared to the storage in the database 120. The database 120 may partition voice messages based on different users, locations, timings, or other factors. Alternatively, the database 120 may also store the progression information corresponding to the respective voice messages.
  • Stored voice messages may be retrieved from the database 120 or the server 116 by a user (e.g., via the user device 108) to listen to the respective voice message and/or to prepare a summary. The database 120 may be integrated into the server 116. Alternatively, the server 116 may transmit the voice message to the user device 108 upon creation of the voice message to be stored on the user device 108.
  • Alternatively, the database 120 contains a plurality of music files or audio data that can be downloaded or uploaded for listening purposes. For example, a user using either the user device 104 or 108 may desire to listen to a new or old song or other music file. The user downloads a song via the server 116 to listen to using the user device 104 or 108. The database 120 may also contain or store the graphical representations of the voice messages and/or audio files as well as the respective progression icons.
  • The user device 108 is similar to or different than the user device 104. The user device 108 includes a memory 132, a processor 136, and a display 138. Additional, different, or fewer components may be provided, such as described in regard to the user device 104. The memory 132, the processor 136, and the display 138 are similar to or different than the memory 124, the processor 128, and the display 130, respectively. The user device 108 is used to view graphical representations or icons representing voice messages, such as visual voicemails, music or song files, or other audio data and is used to playback the audio data or files. The user device 108 also displays visual progression, such as progression icons, image objects, or bars, within each of the graphical representations identifying the voice messages, music files, and/or other audio files. The user device 108 may also perform the functionalities of the user device 104 and vice versa.
  • FIG. 2 shows a user device 201 in communication with a network 205, such as the network 112. The user device 201 is similar to the user device 108 and/or the user device 104. For example, the user device 201 is a cellular telephone, a digital telephone, a computer, or a PDA. The user device 201 includes a screen or display 209, such as the display 130 and/or 138. The screen 209 is used to view graphical representations of voice messages, visual voicemails, music or song files, or other audio data.
  • The user device 201 communicates with the network 205. For example, when a server or device in the network 205 generates a voice message, the network 205 sends or transmits data 213, such as one or more data packets, to the user device 201. Or, when a music selection, song, or other audio file is being downloaded or acquired, the network 205 sends or transmits data 213, such as one or more data packets, to the user device 201. The data 213 is used to make the user device 201 aware of a voice message or is used to display a representation of a music file and/or audio file. For example, the data 213 is received by the user device 201, and based on the data 213, a graphical representation, image object, or icon identifying the voice message, song, and/or audio file is generated or displayed on the user device 201. The graphical data, including progression icon information, that is to be displayed may be part of the data 213 or may be generated in the user device 201. The progression image data may be transmitted to the user device 201 separate from the data 213.
  • Alternatively, the data 213 may be or include the voice message, music, and/or audio content that is to be played back via the user device 201. For example, if a user selects the graphical representation, image object, or icon identifying or representing a certain voice message, song, or other audio data, the user device may send a request to the network 205 (e.g., a server or database in the network 205) to obtain or retrieve the audio content, and the network transmits the audio content to the user device, such as via the data 213. Or, the voice message, music, or other audio content may be stored on the user device 201. The progression image data, such as the progression icon or bar data, may also be generated in the user device 201 rather than in a server or other network device.
  • FIG. 3 shows a graphics user interface 300 of a user device, such as the user device 104, 108, and/or 201. The graphics user interface 300 is executed on a display, such as the display 130, 138, and/or 209. The graphics user interface 300 includes one or more graphical representations, image objects, or icons 304 identifying respective voice messages, such as voicemails, music or song files, or other audio data. A progression icon or image object 308 is displayed with a respective icon 304. Additional, fewer, or different features may be displayed. For example, time data, personal information of a caller, caller identification, date data, subject data, song title data, audio file information, voice summary information, activation buttons or icons, or other content may also be displayed with the icons 304.
  • For example, a list of graphical representations 304 is displayed in the graphics user interface 300. The graphical representations 304 correspond to and identify separate voice messages or visual voicemails ready to be selected for playback by a user. The graphical representations 304 are contained within a same screen shot or window. For example, the list of icons 304 is displayed within a voicemail inbox screen or equivalent screen for a user to view his or her voicemail messages. Alternatively, multiple screens or windows may be used for displaying different icons 304. For example, deleted icons 304 or icons 304 related to voicemails that have been listened to may be listed in a first window or screen shot, and saved icons 304 or icons 304 related to voicemails that have not been listened may be listed in a second window or screen shot.
  • In an alternate embodiment, the list of graphical representations 304 corresponds to a list of different songs, music files, or other audio files. For example, a list of different song icons is shown in a same screen shot for a user to scroll through and select a desired song to listen to.
  • The image objects 304 have a substantially rectangular shape. Alternatively, the image objects 304 may have a substantially oval, circular, or other geometrical shape. In other alternatives, no shape is provided, such as where the objects 304 are merely listed. The image objects 304 may be represented in a two dimensional or three dimensional perspective. Also, the image objects 304 may be illustrated with the same or similar color or shading or may be illustrated with a variety of or different colors or shadings.
  • The image objects 304 are listed in chronological order. For example, the most recent image object 304 may be listed at the top or beginning of the list. Alternatively, the image objects 304 may be listed based on the respective subjects, categories, or other user preferences (e.g., configure the list by sender, title, or musical group). Listing of the icons 304 may take a variety of forms. For example, the graphics user interface 300 may list the icons 304 in a vertical, horizontal, circular, or other geometrical pattern. A user may scroll up or down to view all the icons 304 within a respective screen or window.
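Listing the image objects chronologically with the most recent first, or re-ordering them by sender, title, or another user preference, is a straightforward sort. A sketch with hypothetical field names (the disclosure does not name any particular data fields):

```python
from datetime import datetime

def order_messages(messages: list, key: str = "received") -> list:
    """Order image objects most-recent-first, or by another field ascending."""
    if key == "received":
        return sorted(messages, key=lambda m: m["received"], reverse=True)
    return sorted(messages, key=lambda m: m[key])
```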
  • The progression icon 308 illustrates a temporal progression of the playback of the respective audio data. For example, the progression icon 308 is a visual or graphic bar that is configured to move across or over a respective icon or image object 304. The progression icon 308 may or may not be an interactive icon. The movement of the progression icon 308 during playback of the audio file corresponds to temporal progression of the audio file. For example, if a user selects one image object 304 to listen to, the respective progression icon 308 begins to move across the image object 304, identifying the current temporal location, point in time, or relative time to the overall length of the audio playback. The portion or portions of the image object 304 that have been passed by the progression icon 308 change color or shade relative to portions of the image object that have not been passed by or over by the progression icon 308.
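The relation between elapsed playback time and the bar's position described above reduces to a simple proportion. The sketch below is an illustrative assumption about one possible rendering calculation; the function names and pixel geometry are hypothetical and not taken from the disclosure.

```python
def progression_fraction(elapsed_s, total_s):
    """Fraction of the audio already played, clamped to [0, 1]."""
    if total_s <= 0:
        return 0.0
    return max(0.0, min(1.0, elapsed_s / total_s))

def bar_x(elapsed_s, total_s, icon_left_px, icon_width_px):
    """Horizontal pixel position of the progression bar inside the icon;
    everything left of this x is drawn in the 'already played' shade."""
    return icon_left_px + progression_fraction(elapsed_s, total_s) * icon_width_px

# A 200-px-wide icon starting at x=40, 30 s into a 120 s message:
x = bar_x(30, 120, 40, 200)  # bar sits one quarter of the way across
played_px = x - 40           # width of the region rendered in the passed shade
```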
  • The progression icon 308 for each image object 304 may be visually present at all times or may appear when playback has been initiated. The progression icon 308 may also be visually present in the respective image object 304 when playback is stopped to mark the temporal location of the stopped or paused audio file. However, if a user moves or configures the progression icon 308 back to an origin state (e.g., the beginning of the audio file), then the progression icon 308 may disappear and reappear upon initiation or activation of the audio playback.
  • The progression icons 308 have a substantially oval shape. The progression icons 308 may be transparent or may be opaque. Alternatively, the progression icons 308 may have a substantially rectangular, circular, triangular, or other geometrical shape. Or, the progression icons 308 may be a dividing line having a small or no width that illustrates progression via color, shade, or other visual change.
  • The progression icons 308 are displayed in, on, or over each of the respective icons 304. Multiple progression icons 308 may be displayed at the same time in the same list, screen, and/or window. For example, the progression icons 308 are in or within the respective visual borders of the graphical representations 304. The progression icons 308 may extend across or over an entire height of each image object 304, respectively. For example, the progression icons 308 may cover the area between a lower border limit and an upper border limit of the respective image objects 304. The progression icons 308 may also extend beyond the borders, such as the lower and upper borders, of the respective image objects 304.
  • FIG. 4 shows a graphics user interface 401 of a user device, such as the user device 104, 108, and/or 201. The graphics user interface 401 may be similar to or different from the graphics user interface 300. The graphics user interface 401 includes or displays date information 409, time information 413, sender information 417, phone number information 421, subject information 425, and duration information 429 for each listed voice message icon 405. The icons 405 are similar to or different from the icons 304. Also, progression icons or image objects 403, such as the icons 308, are provided. The progression icons 403 are illustrated as transparent, but they may be provided as opaque objects. The number of messages and/or audio files and activation icons or soft buttons 433 are also provided. FIG. 4 illustrates a list of visual voicemails, but other audio files or data may be illustrated instead or in addition. For example, one or more song, music, or audio file icons may be provided in which the subject information 425 includes title, musical group, name, subject, or other information.
  • In describing FIG. 4 in terms of visual voicemails or voice messages, the date information 409 and the time information 413 represent when a user received the respective voice message. The date and time information may be configured to user preferences, such as representing the date in a United States or European fashion. The sender information 417 identifies the person who left the voice message. For example, a full or partial name may be displayed. The name may be extracted from the voice message itself or may be identified through a network database, caller identification, or a memory network. Entity or company names may be used instead of personal names. The phone number information 421 corresponds to the phone number or Internet protocol address of the sender of the voice message. The phone number information 421 may or may not include area code information or foreign numbers. The phone number information 421 may also be extracted from the voice message or caller identification. For example, a person leaving the voice message may say or speak the number he or she is calling from or a contact number, and that number may be extracted from the voice message.
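The extraction of a spoken-back contact number is described above only at a high level. One hypothetical way to pull such a number from a speech-to-text transcript is sketched below; the regular expression, accepted separator characters, and digit-count bounds are illustrative assumptions, not the disclosed method.

```python
import re

def extract_contact_number(transcript):
    """Find a run of 7-15 digits (optionally separated by spaces, dots,
    or dashes) in a message transcript and return the digits only.
    Returns None when no candidate number is present."""
    match = re.search(r'(?:\d[\s.\-]?){7,15}', transcript)
    if not match:
        return None
    return re.sub(r'\D', '', match.group())  # strip separators, keep digits

number = extract_contact_number("Hi, it's Dana, call me back at 555-867-5309.")
```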
  • The subject information 425 includes textual summaries of the respective voice messages. One or more words or a phrase summarizing or providing a gist of the voice message is displayed. The gist or summary may be more than a number, name, date, or time associated with the voice message. For example, the gist or summary may provide the essence or central message or communication (e.g., material or significant words) that is to be conveyed by the voice message. Examples of summaries include: "Will you be at the meeting," "About the new project," "Update on the plan," "Next week travel plans," and "Confirm the booking."
  • The duration information 429 represents how long the voice message is temporally. A user may view the duration information 429 to determine which voice message will be quicker to review. Alternatively, the duration information 429 corresponds to the length of a song, musical selection, or other audio file. Also, the duration information 429 may illustrate the change in time as audio content is played back instead of being a constant indicator of the entire length of the audio content. Or, a separate counter icon, such as a playback time associated with the progression icon 403, may be provided.
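Both presentations of the duration information 429 (a constant total length, or a countdown as content plays back) can be sketched as below. The m:ss display format and function names are illustrative assumptions.

```python
def format_duration(seconds):
    """Render a message length as m:ss for the duration column 429."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"

def remaining_label(elapsed_s, total_s):
    """Countdown variant: time left rather than the constant total length."""
    return format_duration(max(0, total_s - elapsed_s))

total = format_duration(95)     # constant indicator of the whole message
left = remaining_label(40, 95)  # updates as playback advances
```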
  • The activation button 433 is an icon or soft button to select, activate, or initiate a respective voice message, song, or other audio file. By selecting the activation button 433, a user may listen to the respective audio content or data via an audio output on the user device or in communication with the user device. The activation button 433 may be used to stop or pause playback of the audio content as well. Other activation may be used, such as “clicking” on the icon 405 for a given audio file.
  • The progression icons 403 are used to view a temporal progression of playback of the respective audio content. The progression icons 403 are within each respective image object 405 and may extend an entire height of the respective image object 405. The progression icons 403 can be moved to a desired point in time across the respective image objects 405 (e.g., independent of playback, a user may move a progression icon 403 to a desired temporal location). Progression icons 403 may not be visually present until initiation of playback or until a user designates, via playback or user control, a progression icon 403 to be at a temporal location above an origin point or zero time. For example, regarding an image object 405 that does not display a progression icon 403 initially (e.g., playback has not been selected), a user can select the image object 405, such as by clicking on it or placing a mouse or stylus pen on the image object 405, whereupon a progression icon 403 may appear and may be dragged to a desired temporal location.
  • A user can listen to a portion of audio content for one file or message, listen to another message or file, and then come back to the original message or file to finish playback from where, temporally, the audio content was stopped or paused. Therefore, a user can conveniently and comfortably listen to a variety of messages and/or audio files displayed within a same screen or window and view temporal progression or location of each audio file without opening or viewing another window or screen shot.
  • FIG. 5 shows a method for generating data for a graphics user interface, such as the graphics user interface 300 and/or 401. Fewer or more acts may be provided. In act 500, a data signal is received. The data signal may be a voice signal, such as speech data, music data, or other audio data. For example, a first user initiates a call to a second user via a phone, computer, or PDA. The second user is unavailable, and, therefore, the first user leaves a voice message for the second user. Speech content from the first user is converted into voice data or signals. Voice data, such as data packets, or voice signals are transmitted to a network, such as the network 112 or 205. A server or base station, such as the server 116, or a user device receives the voice data or signal. Alternatively, a music, song, or other audio signal is received by the server or base station.
  • In act 504, the data signal is stored or saved as a voice message, such as a voicemail, music or song file, or other audio file. For example, the server stores the voice message and/or audio file in a repository or database, such as the database 120. In act 508, an image object, such as the image object 304 or 405, identifying the audio file is generated, such as by the server. The image object may also be stored in the database. The image object is transmitted to a respective user device, such as the user device 104, 108, or 201, or user interface for display. Alternatively, the image object is generated in the user device.
  • In act 512, a progression icon, such as the progression icon 308 or 403, is generated, such as by the server, to be displayed in the image object. Alternatively, the progression icon is generated in the user device. Also, audio content and/or other features of the graphics user interface may be transmitted to the user device or may be generated by the user device.
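Acts 500 through 512 can be sketched as a small in-memory server flow: receive a data signal, store it as an audio file, then generate the image object and its progression icon for display. This is a hypothetical sketch; the `VoicemailServer` class, its dictionary-backed database, and the field names of the generated objects are illustrative, not the disclosed implementation.

```python
class VoicemailServer:
    """Hypothetical server mirroring the acts of FIG. 5."""

    def __init__(self):
        self.database = {}  # message_id -> stored audio bytes (the database 120 role)
        self.next_id = 1

    def receive_and_store(self, audio_bytes):
        """Acts 500 and 504: receive the data signal and store the audio file."""
        message_id = self.next_id
        self.next_id += 1
        self.database[message_id] = audio_bytes
        return message_id

    def build_display(self, message_id):
        """Acts 508 and 512: generate the image object identifying the audio
        file and the progression icon to be displayed in it."""
        image_object = {"id": message_id, "label": f"Voice message {message_id}"}
        progression_icon = {"owner": message_id, "position": 0.0}
        return image_object, progression_icon

server = VoicemailServer()
mid = server.receive_and_store(b"\x00\x01")
icon, bar = server.build_display(mid)  # transmitted to the user device for display
```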
  • FIG. 6 shows a method for executing a graphics user interface, such as the graphics user interface 300 and/or 401. Fewer or more acts may be provided. In act 601, a first graphical icon, such as a graphical representation 304 or 405, is displayed. For example, the second user may turn on his or her computer, phone, PDA, or other user device and view a voicemail inbox, song or music selection screen, and/or graphics user interface screen or window identifying audio content. The window displays an icon identifying a voice message, a song, or other audio files.
  • In act 605, a second graphical icon, such as another graphical representation 304 or 405, is displayed simultaneously with the first graphical icon. The second graphical icon identifies a separate or other voice message, song, or other audio data. The second and first graphical icons are arranged in a list format. For example, the first and second graphical icons are displayed in the same window or screen. The second user may scroll up and down within the screen to view multiple icons identifying different voice messages, songs, or other audio files.
  • In act 609, a progression image object, such as the progression icon 308 or 403, is displayed in each of the first and second graphical icons. The progression image objects are always displayed or displayed when playback of a selected audio file is initiated or when a user stops, pauses, or selects a point in time after a zero time or origin point within a respective graphical icon.
  • In act 613, selection of the first graphical icon is received for playback of the respective voice message, song, or other audio file. For example, the second user decides to listen to the voice message or audio content corresponding to the first graphical icon. The first graphical icon includes an activation button, such as the activation button 433, and the second user selects the activation button for playback. Alternatively, the progression image object may be selected or "clicked on" to initiate playback. Based on the selection, the user device requests the stored voice message, song, or audio content from the database or server in the network, and the database or server transmits the voice message, song, or audio file to the user device in response to the query. The user device outputs audio signals to play the audio content. While playback is occurring, the respective progression icon or image object moves across the first graphical icon to track the temporal location of the audio playback, and other progression icons or image objects are either not visually present or remain at rest at certain temporal locations. The audio signals may be outputted by speakers in or on the user device or by another device in communication with the user device. Alternatively, the voice message, song, or audio content is stored on or in the user device, and requesting the voice message, song, or audio content from the network may be avoided.
  • In act 617, a command to stop playback is received. For example, a user selects or "clicks on" the activation/soft button or progression icon to stop or pause playback of the audio content. The respective progression image object or icon rests or stops at a position corresponding to a time at which the selected audio content is stopped. In act 621, selection of the second graphical icon is received for playback of the respective voice message, song, or other audio file, similar to the selection of the first graphical icon. While playback of the audio content regarding the second graphical icon is occurring, the respective progression icon moves across the second graphical icon to track the temporal location of the audio playback.
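The per-icon playback state walked through in acts 601 through 621 can be sketched as follows: each message keeps its own progression position, so pausing one message and playing another leaves the first bar resting where it stopped. The `PlaybackList` class and its method names are hypothetical, illustrating the behavior rather than the disclosed implementation.

```python
class PlaybackList:
    """Hypothetical per-icon playback state for the method of FIG. 6."""

    def __init__(self, durations):
        self.durations = durations  # message_id -> length in seconds
        self.positions = {m: 0.0 for m in durations}  # each bar's resting point
        self.playing = None

    def play(self, message_id):
        """Acts 613 and 621: a graphical icon is selected for playback."""
        self.playing = message_id

    def tick(self, seconds):
        """Playback advances only the active bar; all others remain at rest."""
        if self.playing is not None:
            m = self.playing
            self.positions[m] = min(self.durations[m], self.positions[m] + seconds)

    def stop(self):
        """Act 617: the bar rests at the time playback was stopped."""
        self.playing = None

ui = PlaybackList({1: 60.0, 2: 30.0})
ui.play(1); ui.tick(20); ui.stop()  # listen to part of message 1, then pause
ui.play(2); ui.tick(5)              # switch to message 2; bar 1 stays at 20 s
```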
  • The logic, software or instructions for implementing the processes, methods and/or techniques discussed above are provided on computer-readable storage media or memories or other tangible media, such as a cache, buffer, RAM, removable media, hard drive, other computer readable storage media, or any other tangible media. The tangible media include various types of volatile and nonvolatile storage media. The functions, acts or tasks illustrated in the figures or described herein are executed in response to one or more sets of logic or instructions stored in or on computer readable storage media. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, microcode and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like. In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the logic or instructions are stored in a remote location for transfer through a computer network or over telephone lines. In yet other embodiments, the logic or instructions are stored within a given computer, central processing unit ("CPU"), graphics processing unit ("GPU") or system.
  • While the invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.

Claims (20)

1. A graphics user interface comprising:
a plurality of graphical representations identifying separate audio data, respectively, each of the plurality of graphical representations configured in a list to be selected for playback of the respective audio data; and
a progression icon displayed in each of the respective graphical representations,
wherein each progression icon illustrates a temporal progression of the playback of the respective audio data.
2. The graphics user interface of claim 1, wherein each progression icon extends over an entire height of the respective graphical representation.
3. The graphics user interface of claim 1, wherein each of the audio data comprises a voice message.
4. The graphics user interface of claim 3, wherein each of the graphical representations illustrates a time corresponding to the respective voice message.
5. The graphics user interface of claim 3, wherein each of the graphical representations illustrates a phone number corresponding to the respective voice message.
6. The graphics user interface of claim 1, wherein each progression icon comprises a graphical bar configured to move over the respective graphical representation.
7. The graphics user interface of claim 1, wherein the plurality of graphical representations are displayed on a screen of a user device.
8. The graphics user interface of claim 7, wherein the user device comprises a telephone, a computer, or a personal digital assistant.
9. The graphics user interface of claim 1, wherein each progression icon comprises an interactive icon.
10. A method comprising:
receiving a data signal;
storing the data signal as an audio file;
generating an image object identifying the audio file; and
generating a progression icon to be displayed in the image object, the progression icon extending over an entire height of the image object,
wherein movement of the progression icon during playback of the audio file corresponds to temporal progression of the audio file.
11. The method of claim 10, wherein movement of the progression icon comprises movement of a graphical bar across the image object.
12. The method of claim 10, wherein the audio file comprises a voice message.
13. The method of claim 10, wherein the progression icon remains at a corresponding temporal location in the image object when playback of the audio file is stopped.
14. The method of claim 10, wherein a portion of the image object changes color or shade as the progression icon moves across the portion.
15. The method of claim 10, wherein the image object illustrates a name or subject corresponding to the audio file.
16. A method comprising:
displaying a first graphical icon;
displaying a second graphical icon simultaneously with the first graphical icon, the first graphical icon and the second graphical icon representing separate voice messages, respectively;
displaying a progression image object in each of the first and second graphical icons; and
receiving selection of the first graphical icon for playback of the respective voice message, the location of the respective progression image object illustrating a temporal position of a current audio playback in the selected voice message.
17. The method of claim 16, wherein the first graphical icon and the second graphical icon are displayed in a list format in a same window.
18. The method of claim 17, wherein the progression image object of the selected first graphical icon moves during playback while the progression image object of the second graphical icon remains at rest.
19. The method of claim 16, wherein the first graphical icon and the second graphical icon are displayed in a chronological order.
20. The method of claim 16, further comprising:
receiving a command to stop playback of the selected voice message, the respective progression image object resting at a position corresponding to a time at which the selected voice message is stopped; and
receiving selection of the second graphical icon for playback of the voice message corresponding to the second graphical icon.
US12/099,635 2008-04-08 2008-04-08 User interface with visual progression Active 2031-09-25 US8489992B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US12/099,635 US8489992B2 (en) 2008-04-08 2008-04-08 User interface with visual progression
PCT/US2009/039634 WO2009126565A1 (en) 2008-04-08 2009-04-06 User interface with visual progression
CN2009801115531A CN101981904A (en) 2008-04-08 2009-04-06 User interface with visual progression
EP09731322A EP2263368A1 (en) 2008-04-08 2009-04-06 User interface with visual progression


Publications (2)

Publication Number Publication Date
US20090254829A1 true US20090254829A1 (en) 2009-10-08
US8489992B2 US8489992B2 (en) 2013-07-16

Family

ID=40810488

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/099,635 Active 2031-09-25 US8489992B2 (en) 2008-04-08 2008-04-08 User interface with visual progression

Country Status (4)

Country Link
US (1) US8489992B2 (en)
EP (1) EP2263368A1 (en)
CN (1) CN101981904A (en)
WO (1) WO2009126565A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572555B (en) * 2012-01-16 2014-06-18 深圳市龙视传媒有限公司 Method and system for realizing live video playback at HTTP live streaming (HLS) client
CN102821065A (en) * 2012-08-13 2012-12-12 上海量明科技发展有限公司 Method, client and system for outputting audio message display shape in instant messaging

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5838320A (en) * 1994-06-24 1998-11-17 Microsoft Corporation Method and system for scrolling through data
US6185527B1 (en) * 1999-01-19 2001-02-06 International Business Machines Corporation System and method for automatic audio content analysis for word spotting, indexing, classification and retrieval
US20030128820A1 (en) * 1999-12-08 2003-07-10 Julia Hirschberg System and method for gisting, browsing and searching voicemail using automatic speech recognition
US20060010217A1 (en) * 2004-06-04 2006-01-12 Business Instruments Corp. System and method for dynamic adaptive user-based prioritization and display of electronic messages
US20070293272A1 (en) * 2006-06-15 2007-12-20 Timothy Salmon System and method for processing a voice mail
US20080031595A1 (en) * 2006-08-07 2008-02-07 Lg Electronics Inc. Method of controlling receiver and receiver using the same
US20080055264A1 (en) * 2006-09-06 2008-03-06 Freddy Allen Anzures Voicemail Manager for Portable Multifunction Device
US20080207176A1 (en) * 2005-06-28 2008-08-28 Brackbill Douglas L Visual voicemail management
US7479949B2 (en) * 2006-09-06 2009-01-20 Apple Inc. Touch screen device, method, and graphical user interface for determining commands by applying heuristics
US7532913B2 (en) * 2003-04-22 2009-05-12 Spinvox Limited Method of managing voicemails from a mobile telephone
US20090177301A1 (en) * 2007-12-03 2009-07-09 Codentity, Llc Scalable system and method for an integrated digital media catalog, management and reproduction system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7440900B2 (en) 2002-03-15 2008-10-21 Microsoft Corporation Voice message processing system and method


Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8117163B2 (en) 2006-10-31 2012-02-14 Carbonite, Inc. Backup and restore system for a computer
US20080133622A1 (en) * 2006-10-31 2008-06-05 Brown Andrew P Backup and restore system for a computer
US8935208B2 (en) 2006-10-31 2015-01-13 Carbonite, Inc. Backup and restore system for a computer
USD969859S1 (en) 2007-10-29 2022-11-15 Carbonite, Inc. Display screen or portion thereof with an icon
USD857746S1 (en) 2007-10-29 2019-08-27 Carbonite, Inc. Display screen or portion thereof with an icon
US20090298556A1 (en) * 2008-05-30 2009-12-03 Raffle Hayes S Messaging device
US20100151830A1 (en) * 2008-05-30 2010-06-17 Raffle Hayes S Messaging device
US8442493B2 (en) * 2008-05-30 2013-05-14 Hayes S. Raffle Messaging device
US8548434B2 (en) * 2008-05-30 2013-10-01 Hayes S. Raffle Messaging device
US8707181B2 (en) * 2008-11-12 2014-04-22 Apple Inc. Preview of next media object to play
US20100122166A1 (en) * 2008-11-12 2010-05-13 Apple Inc. Preview of next media object to play
US20170229114A1 (en) * 2009-08-21 2017-08-10 Sony Corporation Apparatus, process, and program for combining speech and audio data
US10229669B2 (en) * 2009-08-21 2019-03-12 Sony Corporation Apparatus, process, and program for combining speech and audio data
US9158629B2 (en) 2009-11-06 2015-10-13 Carbonite Inc. Methods and systems for managing bandwidth usage among a plurality of client devices
US8386430B1 (en) 2009-11-06 2013-02-26 Carbonite, Inc. File storage method to support data recovery in the event of a memory failure
US9654417B2 (en) 2009-11-06 2017-05-16 Carbonite, Inc. Methods and systems for managing bandwidth usage among a plurality of client devices
US8352430B1 (en) 2009-11-06 2013-01-08 Carbonite, Inc. File storage system to support high data rates
US8296410B1 (en) 2009-11-06 2012-10-23 Carbonite, Inc. Bandwidth management in a client/server environment
US20110191721A1 (en) * 2010-02-04 2011-08-04 Samsung Electronics Co., Ltd. Method and apparatus for displaying additional information of content
KR101556662B1 (en) 2010-06-23 2015-10-02 주식회사 엘지유플러스 Receiving terminal for providing fast vms service and operating method thereof
US10637806B2 (en) * 2013-07-02 2020-04-28 Huawei Technologies Co., Ltd. User interface for a chatting application displaying a visual representation of a voice message with feature information indicating a mood
US10880244B2 (en) 2013-07-02 2020-12-29 Huawei Technologies Co., Ltd. Method, apparatus, and client for displaying media information, and method and apparatus for displaying graphical controls
US20170353408A1 (en) * 2013-07-02 2017-12-07 Huawei Technologies Co., Ltd. Method, apparatus, and client for displaying media information, and method and apparatus for displaying graphical controls
US11700217B2 (en) 2013-07-02 2023-07-11 Huawei Technologies Co., Ltd. Displaying media information and graphical controls for a chat application
US10345991B2 (en) * 2015-06-16 2019-07-09 International Business Machines Corporation Adjusting appearance of icons in an electronic device
US11029811B2 (en) 2015-06-16 2021-06-08 International Business Machines Corporation Adjusting appearance of icons in an electronic device
WO2016203472A1 (en) * 2015-06-18 2016-12-22 Googale (2009) Ltd A computerized system including rules for a rendering system accessible to non-literate users via a touch screen
US10853029B2 (en) 2015-06-18 2020-12-01 Googale (2009) Ltd. Computerized system including rules for a rendering system accessible to non-literate users via a touch screen
US10726118B2 (en) 2015-06-18 2020-07-28 Googale (2009) Ltd. Secured computerized system for children and/or pre-literate/illiterate users
US10198963B2 (en) 2015-06-18 2019-02-05 Googale (2009) Ltd. Secure computerized system, method and computer program product for children and/or pre-literate/illiterate users
CN105979083A (en) * 2016-04-29 2016-09-28 珠海市魅族科技有限公司 Method and device for displaying graph
US11418472B2 (en) * 2019-06-13 2022-08-16 Beijing Xiaomi Intelligent Technology Co., Ltd. Message processing method, apparatus and device
US10853513B1 (en) * 2019-07-30 2020-12-01 Capital One Services, Llc Data storage using image objects shown in a real-time view
US20210042436A1 (en) * 2019-07-30 2021-02-11 Capital One Services, Llc Data storage using image objects shown in a real-time view
US11720224B2 (en) * 2019-07-30 2023-08-08 Capital One Services, Llc Data storage using image objects shown in a real-time view
US11307825B1 (en) * 2021-02-28 2022-04-19 International Business Machines Corporation Recording a separated sound from a sound stream mixture on a personal device

Also Published As

Publication number Publication date
US8489992B2 (en) 2013-07-16
WO2009126565A1 (en) 2009-10-15
CN101981904A (en) 2011-02-23
EP2263368A1 (en) 2010-12-22

Similar Documents

Publication Title
US8489992B2 (en) User interface with visual progression
US8311188B2 (en) User interface with voice message summary
US8064888B2 (en) Communications system that provides user-selectable data when user is on-hold
US9749457B2 (en) Provisioning interfaces for accessing virtual private branch exchange services through a mobile device
US9521255B1 (en) Systems and methods for visual presentation and selection of IVR menu
US9112970B2 (en) Method and apparatus for data channel augmented auto attended voice response systems
US8320889B2 (en) Method for automatic presentation of information before connection
US8635554B2 (en) Enhanced telephony computer user interface allowing user interaction and control of a telephone using a personal computer
US9106672B2 (en) Method and apparatus for performing multiple forms of communications in one session
US20100246784A1 (en) Conversation support
US9369576B2 (en) Phone system with methodology for call parking
US10536577B2 (en) Method and apparatus for data channel augmented voice telephony systems
US9680994B2 (en) Method and apparatus for data channel augmented auto attended voice response systems
US9412088B2 (en) System and method for interactive communication context generation
US8537989B1 (en) Device and method for providing enhanced telephony
US8879698B1 (en) Device and method for providing enhanced telephony
Ahmed et al. Interactive voice response mashup system for service enhancement
CA2908943A1 (en) Systems and methods of providing communications services on a software platform

Legal Events

Date Code Title Description
AS Assignment

Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROHDE, RUBEN;REEL/FRAME:020778/0591

Effective date: 20080407

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8