US20080181371A1 - Systems and Methods for Producing Build Calls - Google Patents

Systems and Methods for Producing Build Calls

Info

Publication number
US20080181371A1
US20080181371A1 (application US12/022,723)
Authority
US
United States
Prior art keywords
call
information
called person
person
subsequent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/022,723
Inventor
Lucas Merrow
Alexandra Drane
Ivy Krull
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eliza Corp
Original Assignee
Eliza Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eliza Corp filed Critical Eliza Corp
Priority to US12/022,723
Assigned to ELIZA CORPORATION reassignment ELIZA CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DRANE, ALEXANDRA, KRULL, IVY, MERROW, LUCAS
Assigned to ELIZA CORPORATION reassignment ELIZA CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MERROW, LUCAS
Publication of US20080181371A1
Assigned to SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT reassignment SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT SECURITY AGREEMENT Assignors: ELIZA CORPORATION, ELIZA HOLDING CORP., ELIZALIVE, INC.
Assigned to SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT reassignment SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NUMBER PREVIOUSLY RECORDED AT REEL: 028374 FRAME: 0586. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY AGREEMENT. Assignors: ELIZA CORPORATION, ELIZA HOLDING CORP., ELIZALIVE, INC.
Assigned to ELIZA CORPORATION, ELIZA HOLDING CORP., ELIZALIVE, INC. reassignment ELIZA CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT
Assigned to CITIBANK, N.A., AS COLLATERAL AGENT reassignment CITIBANK, N.A., AS COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ELIZA CORPORATION
Priority to US15/840,865 (granted as US10536582B2)
Assigned to ELIZA CORPORATION reassignment ELIZA CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CITIBANK, N.A.
Current legal status: Abandoned


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 11/00 Telephonic communication systems specially adapted for combination with other electrical systems
    • H04M 11/10 Telephonic communication systems specially adapted for combination with other electrical systems with dictation recording and playback systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/50 Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M 3/51 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M 3/5158 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing in combination with automated outdialling systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/36 Memories
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/40 Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M 2203/20 Aspects of automatic or semi-automatic exchanges related to features of supplementary services
    • H04M 2203/2011 Service processing based on information specified by a party before or during a call, e.g. information, tone or routing selection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/46 Arrangements for calling a number of substations in a predetermined sequence until an answer is obtained
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/50 Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M 3/51 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M 3/5166 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing in combination with interactive voice response systems or voice portals, e.g. as front-ends

Definitions

  • The present disclosure relates generally to automated telephone calling techniques and, more particularly, to methods and systems used to capture specific responses from an initial automated telephone conversation and use that data to build more personal and intelligent future interactions with the person involved in the initial telephone call.
  • Vowels and consonants are phonemes and have many different characteristics, depending on which components of human speech are used. The position of a phoneme in a word has a significant effect on the ultimate sound generated. A spoken word can have several meanings, depending on how it is said. Linguists have identified allophones as acoustic variants of phonemes and use them to more explicitly describe how a particular word is formed.
  • Automated telephone calls that use speech recognition are a cost effective method of engaging large populations; organizations use this methodology to reach out to thousands of people in a single day.
  • the present disclosure addresses the limitations and problems noted previously for prior art automated phone call techniques by providing methods and systems for capturing specific responses from an initial automated telephone conversation and using that data to build or create a more personal and intelligent subsequent interaction with the person involved with the initial telephone call.
  • A primary purpose of any such subsequent “build call” is that information conveyed or acquired during a previous or initial call, or information concerning an action (or lack thereof) requested during that call, is utilized (built upon) for the subsequent call.
  • An embodiment of the present disclosure includes a method of creating an engaging and intelligent series of speech-activated telephone calls, where a follow-up conversation with an individual builds upon responses gathered from a previous call that the system conducted with that person.
  • An initial telephone call can be made or conducted to a call recipient or targeted person. Information can be gathered or received from that person during the initial call and saved for subsequent use.
  • One or more subsequent calls can be made to the same person, with the one or more subsequent calls being built with or incorporating information received from the called person during the initial call.
  • An automated system can be used to make the initial call and/or the subsequent call as well as for recording responses of the called person.
  • Another embodiment of the present disclosure includes a system configured to initiate and conduct (hold) initial and/or subsequent calls to one or more targeted people over a telephone system.
  • the system can produce spoken voice prompts for telephony-based informational interaction.
  • the system can record responses given during an initial call.
  • the information recorded by the system can be used for one or more subsequent calls, or build calls, to the same individual(s).
  • Each subsequent call can incorporate or be based (or built) on information gathered from the called person during the previous call(s), forming a so-called “build call”.
  • the system can include an automated calling system, a storage system/medium, and a speech recognition system.
  • the speech recognition system can be speaker-independent so that it does not require any voice training by the individual call recipients.
  • FIG. 1 depicts a flow chart according to an exemplary embodiment of the present disclosure
  • FIG. 2 depicts a flow chart according to an exemplary embodiment of the present disclosure
  • FIG. 3 depicts further method portions in accordance with exemplary embodiments of the present disclosure.
  • FIG. 4 depicts a diagrammatic view of a system in accordance with an exemplary embodiment of the present disclosure.
  • aspects and embodiments of the present disclosure are directed to techniques, including methods and systems, for creating an engaging and intelligent series of speech-activated telephone calls, where a follow-up conversation with an individual builds upon responses gathered from a previous call the system conducted with that person.
  • Embodiments of the present disclosure provide for successive automated calls that are personalized, context-sensitive, and thus life-like.
  • A primary purpose of any such subsequent “build call” is that information conveyed or acquired during a previous or initial call, or information concerning an action (or lack thereof) requested during that call, is utilized (built upon) for the subsequent call.
  • automated telephone calls that use speech recognition are a cost effective method of engaging large populations; organizations use this methodology to reach out to thousands of people in a single day.
  • embodiments of the present disclosure utilize a unique approach that includes “remembering” past interactions and intelligently using that information to engage someone in a subsequent follow-up conversation. Such building on an initial conversation does not require the caller to access a record created on a past call, even if the person is calling inbound to engage in the second conversation. Rather, embodiments of the present disclosure include the ability to dynamically recognize an inbound or outbound caller and share relevant information based upon the last time the system “spoke” (e.g., interacted) with them.
  • FIG. 1 depicts a flow chart according to an exemplary embodiment of the present disclosure.
  • An initial telephone call can be designed and conducted to engage individuals, as described at 102.
  • Information or data can be gathered from the individual during the initial call, as described at 104.
  • One or more subsequent calls can be designed for the called person, with the subsequent call(s) building upon and utilizing information from the previous conversation of the initial call, as described at 106.
  • the subsequent calls can be inbound or outbound calls, i.e., the initially called person can make or receive the subsequent build call(s).
  • During (or subsequent to) the initial conversation, each called person can be recognized, as described at 108.
  • During the subsequent call(s), the called person (e.g., caller) can be recognized and that recognition can be conveyed to the caller, as described at 110.
  • In such a way, the called person can be engaged in a personal and intelligent conversation based upon past interactions with that person (and his/her responses).
  • FIG. 2 depicts a flow chart according to an exemplary embodiment of the present disclosure.
  • An automated system can be used to convey voice prompts to the called person during an initial call, as described at 202.
  • An automated system can likewise be used to convey voice prompts to the called person during one or more subsequent calls (inbound or outbound), as described at 204.
  • Responses to specific questions posed during the initial call can be stored and/or processed, e.g., by a voice recognition system, as described at 206.
  • Information and/or data from the initial call can be taken and used to build or schedule a subsequent call, as described at 208.
  • FIG. 3 depicts further method portions in accordance with exemplary embodiments of the present disclosure.
  • An automated system, e.g., as described for FIG. 4, may be used to place an initial call to a called/targeted person, as described at 302.
  • An automated system may likewise be used to place one or more subsequent calls to the called/targeted person, as described at 304.
  • Information about the called/targeted person from an external data source can be used for one or more subsequent calls to that person, as described at 306.
  • The external data or information can be used in conjunction with the information/data gathered during the initial call.
  • News alerts and/or other information may be conveyed to the called person during a subsequent call, as described at 308.
  • the external data source can include insurance claim data, census demographic data, consumer purchase data, community service information, police alerts, commuter system information, and the like.
  • the follow-up call builds on a past conversation by referring to a prior call, referencing information the person shared in the earlier call, and cuing up specific questions based upon an individual's response to a question in the past, all of which approximate a live conversation between two human beings.
  • FIG. 4 depicts a diagrammatic view of a system 400 in accordance with an exemplary embodiment of the present disclosure.
  • System 400 can be used in conjunction with methods of the present disclosure, e.g., as shown and described for FIGS. 1-3 , and can include an automated subsystem 412 that includes an automated telephone calling system 414 and a speech recognition system 416 .
  • System 400 can include a called party telephone 418 , and a storage system 420 , as shown.
  • Storage system 420 can include any suitable voice recording device and/or voice recording media, e.g., magnetic tape, flash memory, etc. for recording information from the called person during an initial call and/or subsequent build calls.
  • the automated telephone calling system 414 can be of any suitable kind, and may include a personal computer, although a mainframe computer system can also (or alternatively) be used. All of the components of telephone calling system 414 can reside on a particular computer system, thus enabling the system to independently process data received from a respondent in the manner described below. Alternatively, the components of system 414 may be included in different systems that have access to each other via a LAN or similar network. For example, the automated telephone calling device 414 may reside on a server system that is configured to receive the audio response from a telephone 418 and transmit the response to the speech recognition device 416.
  • the automated telephone calling system 414 may also include a network interface that facilitates receipt of audio information by any of a variety of networks, such as telephone networks, cellular telephone networks, the Web, Internet, local area networks (LANs), wide area networks (WANs), private networks, virtual private networks (VPNs), intranets, extranets, wireless networks, and the like, or some combination thereof.
  • the automated subsystem 412 may be accessible by any one or more of a variety of input devices capable of communicating audio information. Such devices may include, but are not limited to, a standard telephone or cellular telephone 418 .
  • automated telephone calling system 414 can include a database of persons to whom the automated subsystem 412 is capable of initiating telephone calls, a telephone number associated with each person and a recorded data file that includes the target person's name. Such automated telephone calling devices are known in the art. As is described below, the automated telephone calling system 414 is capable of initiating a telephone call to a target person and playing a prerecorded greeting prompt, asking for the target person, and/or other voice prompts and then recording responses of the called/target person. System 414 can then interact with speech recognition system 416 to analyze responses received from the person on telephone 418 . The automated subsystem 412 can also respond to an inbound call from (directly or indirectly) the initially called person.
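As an illustration only, the database of target persons described above, with a telephone number and a recorded name file per person, might be modeled as follows. All class, field, and method names here are assumptions for this sketch, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TargetPerson:
    """A call recipient in the automated calling system's database."""
    name: str
    phone_number: str
    recorded_name_file: str  # path to prerecorded audio of the person's name
    responses: dict = field(default_factory=dict)  # question id -> recognized answer

class CallDatabase:
    """Holds the persons to whom the automated subsystem can initiate calls."""
    def __init__(self):
        self._by_phone = {}

    def add(self, person: TargetPerson):
        self._by_phone[person.phone_number] = person

    def lookup(self, phone_number: str):
        """Recognize an inbound or outbound party by phone number; None if unknown."""
        return self._by_phone.get(phone_number)
```

A lookup that returns an existing record is what would allow the system to respond to an inbound call from an initially called person, as the bullet above notes.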
  • Speech recognition system 416 can function as an automated system on which a speech recognition application, including a series of acoustic outputs or voice prompts, which can comprise queries about a particular topic, are programmed so that they can be presented to a respondent, preferably by means of a telephonic interaction between the querying party and the respondent.
  • a speech recognition application may be any interactive application that collects, provides, and/or shares information, or that is capable of such.
  • the speech recognition system can be speaker-independent so that it does not require any voice training by the individual call recipients.
  • Exemplary embodiments of systems and methods of the present disclosure are not limited to gathering healthcare or health plan information; the information gathered can also be personal, such as language preference, preferred time of day for a call, thoughts on a program, or planned behavior relating to health or other life events.
  • A news alert in a subsequent call can provide community event information, such as local health clinics, seminars, and the like.
  • a build call can be centered on or based upon the absence of a response. For example, an attempt may have been made to contact someone who did not call back or take the action requested in the initial call (a lack of action that could be ascertained from external data); this knowledge could be utilized in the build call.
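As an illustrative sketch of building a follow-up on prior responses or their absence, the branching might look like the following. The function name, the `requested_action_completed` flag (standing in for external data), and all prompt wording are assumptions, not taken from the patent:

```python
def build_followup_dialog(person_responses, requested_action_completed):
    """Choose follow-up prompts based on prior responses or their absence.

    `person_responses` maps question ids from the initial call to recognized
    answers; `requested_action_completed` would come from an external data
    source (e.g., claims data showing whether the requested action occurred).
    """
    prompts = []
    if not person_responses:
        # No response at all on the initial attempt: acknowledge that directly.
        prompts.append("We tried to reach you earlier but didn't connect.")
    elif not requested_action_completed:
        # The person answered before but did not take the requested action.
        prompts.append("Last time we spoke you planned to schedule a visit; "
                       "our records show that hasn't happened yet.")
    else:
        prompts.append("Thanks for following up on what we discussed last time.")
    return prompts
```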
  • External data sources can include publicly available data, e.g., from the Behavioral Risk Factor Surveillance System (BRFSS) of the National Center for Chronic Disease Prevention and Health Promotion (i.e., the public health data gathered through the CDC and state public health departments). Additionally, cultural indicator data, such as demographics linking a person to a particular culture, may also be utilized.
  • a speech application may be any of a group of interactive applications, including consumer survey applications; Web access applications; educational applications, including computer-based learning and lesson applications and testing applications; screening applications; consumer preference monitoring applications; compliance applications, including applications that generate notifications of compliance related activities, including notifications regarding product maintenance; test result applications, including applications that provide at least one of standardized tests results, consumer product test results, and maintenance results; and linking applications, including applications that link two or more of the above applications.
  • Embodiments according to the present disclosure may also be used with or implement specifically constructed voice prompts having specifically constructed speech parameters, such as those disclosed in U.S. patent application Ser. No. 12/020,217 filed 25 Jan. 2008, entitled “Systems and Techniques for Producing Spoken Voice Prompts,” assigned to the assignee of the subject disclosure; the entire content of which is incorporated herein by reference.
  • Embodiments of the present disclosure can provide advantages relative to prior art automated phone techniques, as described herein.
  • Embodiments can be utilized to build more personal and engaging subsequent interactions, can be utilized on outbound, as well as inbound calls and can simulate a human being's ability to listen, remember and refer to past conversations, making the automated telephone calls more interactive and effective.

Abstract

Techniques are disclosed for making an automated telephone call more interactive and intelligent by saving responses gathered from a previous call and using that information to build more personal and engaging subsequent interactions. An initial telephone call can be designed with data needs in mind. Relevant responses from the initial calls can be captured and stored and a follow-up call can be created that includes dialogs that reference specific information from the previous interactions with the system. Such build call techniques can be utilized on outbound, as well as inbound calls, and can simulate a human being's ability to listen, remember and refer to past conversations, making the automated telephone calls more interactive and effective.

Description

    RELATED APPLICATION
  • This application claims the benefit of U.S. Provisional Patent Application No. 60/898,351 filed 30 Jan. 2007, the entire content of which is incorporated herein by reference.
  • FIELD OF THE DISCLOSURE
  • The present disclosure relates generally to automated telephone calling techniques and, more particularly, to methods and systems used to capture specific responses from an initial automated telephone conversation and use that data to build more personal and intelligent future interactions with the person involved in the initial telephone call.
  • BACKGROUND OF THE DISCLOSURE
  • In the new, connected economy, it has become increasingly important for companies or service providers to become more in tune with their clients and customers. Such contact can be facilitated with automated telephonic transaction systems, in which interactively-selected prompts are played in the context of a telephone transaction, and the replies of a human user are recognized by an automatic speech recognition system.
  • The answers given by the respondent are processed by the system in order to convert the spoken words to meaning, which can then be utilized interactively, or stored in a database. One example of such a system is described in U.S. Pat. No. 6,990,179, issued in the names of Lucas Merrow et al. on 24 Jan. 2006 and assigned to the assignee of the present application, further discussed below, the entire content of which is incorporated herein by reference.
  • In order for a computer system to recognize the words that are spoken and convert these words to text, the system must be programmed to phonetically break down the spoken words and convert portions of the words to their textural equivalents. Such a conversion requires an understanding of the components of speech and the formation of the spoken word. The production of speech generates a complex series of rapidly changing acoustic pressure waveforms. These waveforms comprise the basic building blocks of speech, known as phonemes.
  • Vowels and consonants are phonemes and have many different characteristics, depending on which components of human speech are used. The position of a phoneme in a word has a significant effect on the ultimate sound generated. A spoken word can have several meanings, depending on how it is said. Linguists have identified allophones as acoustic variants of phonemes and use them to more explicitly describe how a particular word is formed.
  • Automated telephone calls that use speech recognition are a cost effective method of engaging large populations; organizations use this methodology to reach out to thousands of people in a single day.
  • While such prior art automated telephone call techniques can be effective for their intended purposes, such techniques can present certain problems and limitations. For example, if the telephone calls are perceived by the recipient as being impersonal or context-insensitive, and thus not approximating a conversation with a live human being, the call(s) can be ineffective.
  • SUMMARY OF THE DISCLOSURE
  • The present disclosure addresses the limitations and problems noted previously for prior art automated phone call techniques by providing methods and systems for capturing specific responses from an initial automated telephone conversation and using that data to build a more personal and intelligent subsequent interaction with the person involved in the initial telephone call. A primary purpose of any such subsequent “build call” is that information conveyed or acquired during a previous or initial call, or information concerning an action (or lack thereof) requested during that call, is utilized (built upon) for the subsequent call.
  • An embodiment of the present disclosure includes a method of creating an engaging and intelligent series of speech-activated telephone calls, where a follow-up conversation with an individual builds upon responses gathered from a previous call that the system conducted with that person. An initial telephone call can be made or conducted to a call recipient or targeted person. Information can be gathered or received from that person during the initial call and saved for subsequent use. One or more subsequent calls can be made to the same person, with the one or more subsequent calls being built with or incorporating information received from the called person during the initial call. An automated system can be used to make the initial call and/or the subsequent call as well as for recording responses of the called person.
  • Another embodiment of the present disclosure includes a system configured to initiate and conduct (hold) initial and/or subsequent calls to one or more targeted people over a telephone system. For such calls, the system can produce spoken voice prompts for telephony-based informational interaction. The system can record responses given during an initial call. The information recorded by the system can be used for one or more subsequent calls, or build calls, to the same individual(s). Each subsequent call can incorporate or be based (or built) on information gathered from the called person during the previous call(s), forming a so-called “build call”. The system can include an automated calling system, a storage system/medium, and a speech recognition system. For populations of targeted or potential call recipients, which may be large, the speech recognition system can be speaker-independent so that it does not require any voice training by the individual call recipients.
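The three components named above (an automated calling system, storage, and a speech recognition system) could be wired together roughly as follows. This is a sketch under assumed interfaces, not the patent's design; the class and method names are invented for illustration:

```python
class BuildCallSystem:
    """Illustrative wiring of a calling system, an ASR engine, and a store."""

    def __init__(self, calling_system, speech_recognizer, storage):
        self.calling_system = calling_system  # places/answers telephone calls
        self.recognizer = speech_recognizer   # speaker-independent ASR
        self.storage = storage                # dict-like response store

    def conduct_call(self, phone_number, prompts):
        """Play each prompt, recognize the reply, and save it for build calls."""
        self.calling_system.dial(phone_number)
        record = self.storage.setdefault(phone_number, {})
        for prompt in prompts:
            audio = self.calling_system.play_and_listen(prompt)
            record[prompt] = self.recognizer.transcribe(audio)
        return record
```

A later build call would read the same `storage` entry back and use it to assemble personalized prompts.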
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Aspects and embodiments of the present disclosure may be more fully understood from the following description when read together with the accompanying drawings, which are to be regarded as illustrative in nature, and not as limiting. The drawings are not necessarily to scale, emphasis instead being placed on the principles of the disclosure. In the drawings:
  • FIG. 1 depicts a flow chart according to an exemplary embodiment of the present disclosure;
  • FIG. 2 depicts a flow chart according to an exemplary embodiment of the present disclosure;
  • FIG. 3 depicts further method portions in accordance with exemplary embodiments of the present disclosure; and
  • FIG. 4 depicts a diagrammatic view of a system in accordance with an exemplary embodiment of the present disclosure.
  • While certain embodiments are depicted in the drawings, one skilled in the art will appreciate that the embodiments depicted are illustrative and that variations of those shown, as well as other embodiments described herein, may be envisioned and practiced within the scope of the present disclosure.
  • DETAILED DESCRIPTION
  • Aspects and embodiments of the present disclosure are directed to techniques, including methods and systems, for creating an engaging and intelligent series of speech-activated telephone calls, where a follow-up conversation with an individual builds upon responses gathered from a previous call the system conducted with that person.
  • To utilize automated telephone calls to interact successfully with a broad range of people, it is desirable for such communication to be as precise and personal as possible. Embodiments of the present disclosure provide for successive automated calls that are personalized, context-sensitive, and thus life-like. A primary purpose of any such subsequent “build call” is that information conveyed or acquired during a previous or initial call, or information concerning an action (or lack thereof) requested during that call, is utilized (built upon) for the subsequent call.
  • As was described previously, automated telephone calls that use speech recognition are a cost effective method of engaging large populations; organizations use this methodology to reach out to thousands of people in a single day. Research has shown that automated calls can be more effective to the extent they are personalized, context-sensitive, and thus approximate a conversation with a live human being. People are more likely to engage in an automated telephone call, using speech recognition technology, if the conversation approximates an interaction between two human beings, instead of the more traditional approach to automated calls, which often involves one-way, repetitive communication from the computer to the human being at the other end of the telephone line.
  • There are a number of ways to make an automated conversation more “real,” as described herein; embodiments of the present disclosure utilize a unique approach that includes “remembering” past interactions and intelligently using that information to engage someone in a subsequent follow-up conversation. Such building on an initial conversation does not require the caller to access a record created on a past call, even if the person is calling inbound to engage in the second conversation. Rather, embodiments of the present disclosure include the ability to dynamically recognize an inbound or outbound caller and share relevant information based upon the last time the system “spoke” (e.g., interacted) with them.
  • Referring now to the drawings, FIG. 1 depicts a flow chart according to an exemplary embodiment of the present disclosure. An initial telephone call can be designed and conducted to engage individuals, as described at 102. Information or data can be gathered from the individual during the initial call, as described at 104. One or more subsequent calls can be designed for the called person, with the subsequent call(s) building upon and utilizing information from the previous conversation of the initial call, as described at 106. The subsequent calls can be inbound or outbound calls, i.e., the initially called person can make or receive the subsequent build call(s).
  • During (or subsequent to) the initial conversation, each called person can be recognized, as described at 108. During the subsequent call(s), the called person (e.g., caller) can be recognized and that recognition can be conveyed to the caller, as described at 110. In such a way, the called person can be engaged in a personal and intelligent conversation based upon past interactions with that person (and his/her responses).
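For illustration, the recognition described at 108-110 can be modeled as a small record store keyed by telephone number. This is a minimal sketch, not the disclosed implementation; the class and function names (`CallRecordStore`, `greet`) and the greeting wording are invented for the example.

```python
class CallRecordStore:
    """Keeps one record per telephone number across calls (sketch only)."""

    def __init__(self):
        self._records = {}

    def save(self, phone, responses):
        # Merge new responses into any existing record for this caller.
        self._records.setdefault(phone, {}).update(responses)

    def recognize(self, phone):
        # Return the prior record, or None for a first-time caller.
        return self._records.get(phone)


def greet(store, phone):
    """Convey recognition to a returning caller (inbound or outbound)."""
    record = store.recognize(phone)
    if record is None:
        return "Hello! This is our first conversation."
    return "Welcome back. Last time we spoke about: " + ", ".join(sorted(record))


store = CallRecordStore()
store.save("555-0100", {"oil_change": "no"})
print(greet(store, "555-0100"))
print(greet(store, "555-0199"))
```

Because the lookup is keyed by telephone number alone, the same record can serve both an outbound build call and an inbound call from the initially called person.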
  • FIG. 2 depicts a flow chart according to an exemplary embodiment of the present disclosure. An automated system can be used to convey voice prompts to the called person during an initial call, as described at 202. Similarly, an automated system can be used to convey voice prompts to the called person during one or more subsequent calls (inbound or outbound), as described at 204.
  • Continuing with the description of method 200, responses to specific questions posed during the initial call can be stored and/or processed, e.g., by a voice recognition system, as described at 206. Information and/or data from the initial call can be taken and used to build or schedule a subsequent call, as described at 208.
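The storing and building steps at 206-208 might look like the following sketch, in which each recognized answer is stored under a question identifier and later used to open the subsequent call. The identifiers and prompt wording are assumptions for illustration.

```python
def store_response(record, question_id, answer):
    """Store a recognized response from the initial call (step 206)."""
    record[question_id] = answer
    return record


def build_followup_prompt(record):
    """Use stored information to build a subsequent call (step 208)."""
    if record.get("oil_change_arranged") == "no":
        # Reference the earlier answer so the follow-up "remembers" it.
        return ("Last time we spoke, your car was due for an oil change. "
                "Have you had a chance to get the oil changed?")
    return "Last time you told us the oil change was taken care of."


record = store_response({}, "oil_change_arranged", "no")
print(build_followup_prompt(record))
```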
  • FIG. 3 depicts further method portions in accordance with exemplary embodiments of the present disclosure. An automated system, e.g., as described for FIG. 4, may be used to place an initial call to a called/targeted person, as described at 302. Similarly, an automated system may be used to place one or more subsequent calls to the called/targeted person, as described at 304.
  • Continuing with the description of method 300, information about the called/targeted person from an external data source (relative to the initial call) can be used for one or more subsequent calls to that person, as described at 306. The external data or information can be used in conjunction with the information/data gathered during the initial call. Moreover, news alerts and/or other information may be conveyed to the called person during a subsequent call, as described at 308. In exemplary embodiments, and without limitation, the external data source can include insurance claim data, census demographic data, consumer purchase data, community service information, police alerts, commuter system information, and the like.
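Combining initial-call data with an external source (306) and a news alert (308) could be sketched as below. The field names (`flu_shot`, `recent_er_visit`) and all data are invented for the example; a real system would draw on sources such as the insurance claim or census data listed above.

```python
def assemble_build_call(call_record, external_data, news_alerts):
    """Assemble subsequent-call content from initial-call data,
    an external data source, and optional news alerts (sketch)."""
    segments = []
    # Information gathered during the initial call.
    if call_record.get("flu_shot") == "no":
        segments.append("You mentioned you had not yet had a flu shot.")
    # External data used in conjunction with the initial-call data (306).
    if external_data.get("recent_er_visit"):
        segments.append("Our records suggest a recent emergency-room visit; "
                        "a follow-up appointment may be worthwhile.")
    # News alerts or community information conveyed on the build call (308).
    segments.extend(news_alerts)
    return " ".join(segments)


script = assemble_build_call(
    {"flu_shot": "no"},
    {"recent_er_visit": True},
    ["A free health clinic is open Saturday at the community center."],
)
print(script)
```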
  • The following example is provided for further understanding of the methods and systems of the present disclosure.
  • EXAMPLE: A Follow-up Conversation from a Car Dealership
  • In our last conversation, we spoke about the importance of routine maintenance for your car, most of which is covered under the warranty you presently have on your automobile.
  • NOTE: A caller at the targeted telephone number said no to having arranged an oil change when queried in a previous call.
  • Specifically, we talked about the importance of having your oil changed every 5,000 miles. Last time we spoke, your car was due for this maintenance. Please tell me, have you had a chance to get the oil changed?
  • Yes—That's excellent! CONTINUE to next flagged maintenance.
  • No—Okay. Are you planning to get it changed?
  • Yes—That's excellent. Remember, having the oil changed every 5,000 miles can have a significant impact on the health of your engine. CONTINUE TO next flagged maintenance.
  • No—All right. Please do consider following up with the dealership to get your oil changed since it can have a significant impact on the health of your engine. We can even have one of our technicians come to your home or work place to change the oil for you. CONTINUE TO next flagged maintenance.
  • NOTE: If yes to fluids in previous call, Go to electrical; if no to fluids, CONTINUE
  • We also talked about the importance of getting your car's fluids checked every three months. Have you had a chance to get your car's fluids checked since our last conversation?
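The branching in the example above can be represented as a small decision function plus a topic selector driven by the previous call's answers. The wording below paraphrases the example script; the function names are invented.

```python
def oil_change_branch(answer_now, planning=None):
    """Pick the response to the oil-change question, mirroring the example."""
    if answer_now == "yes":
        return "That's excellent!"
    if planning == "yes":
        return ("That's excellent. Remember, having the oil changed every "
                "5,000 miles can have a significant impact on the health "
                "of your engine.")
    return "Please do consider following up with the dealership to get your oil changed."


def next_topic(previous_call):
    # Per the NOTE above: if the caller said yes to fluids in the previous
    # call, go to electrical; otherwise continue with the fluids question.
    return "electrical" if previous_call.get("fluids") == "yes" else "fluids"


print(oil_change_branch("no", planning="yes"))
print(next_topic({"fluids": "yes"}))
```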
  • As the preceding example indicates, the follow-up call builds on a past conversation by referring to a prior call, referencing information the person shared in the earlier call, and cuing up specific questions based upon an individual's response to a question in the past, all of which approximate a live conversation between two human beings.
  • FIG. 4 depicts a diagrammatic view of a system 400 in accordance with an exemplary embodiment of the present disclosure. System 400 can be used in conjunction with methods of the present disclosure, e.g., as shown and described for FIGS. 1-3, and can include an automated subsystem 412 that includes an automated telephone calling system 414 and a speech recognition system 416. System 400 can include a called party telephone 418, and a storage system 420, as shown. Storage system 420 can include any suitable voice recording device and/or voice recording media, e.g., magnetic tape, flash memory, etc. for recording information from the called person during an initial call and/or subsequent build calls.
  • The automated telephone calling system 414 can be of any suitable kind, and may include a personal computer, although a mainframe computer system can also (or alternatively) be used. All of the components of telephone calling system 414 can reside on a particular computer system, thus enabling the system to independently process data received from a respondent in the manner described below. Alternatively, the components of system 414 may be included in different systems that have access to each other via a LAN or similar network. For example, the automated telephone calling device 414 may reside on a server system that is configured to receive the audio response from a telephone 418 and transmit the response to the speech recognition device 416.
  • The automated telephone calling system 414 may also include a network interface that facilitates receipt of audio information by any of a variety of networks, such as telephone networks, cellular telephone networks, the Web, Internet, local area networks (LANs), wide area networks (WANs), private networks, virtual private networks (VPNs), intranets, extranets, wireless networks, and the like, or some combination thereof. The automated subsystem 412 may be accessible by any one or more of a variety of input devices capable of communicating audio information. Such devices may include, but are not limited to, a standard telephone or cellular telephone 418.
  • With continued reference to FIG. 4, automated telephone calling system 414 can include a database of persons to whom the automated subsystem 412 is capable of initiating telephone calls, a telephone number associated with each person and a recorded data file that includes the target person's name. Such automated telephone calling devices are known in the art. As is described below, the automated telephone calling system 414 is capable of initiating a telephone call to a target person and playing a prerecorded greeting prompt, asking for the target person, and/or other voice prompts and then recording responses of the called/target person. System 414 can then interact with speech recognition system 416 to analyze responses received from the person on telephone 418. The automated subsystem 412 can also respond to an inbound call from (directly or indirectly) the initially called person.
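How the pieces of system 400 cooperate can be sketched with three stand-in classes: a calling system (414) that poses a prompt, a recognizer (416) that interprets the reply, and storage (420) that records it. The classes below are toy placeholders (the "recognizer" just normalizes text to yes/no), not the telephony or speech recognition systems themselves.

```python
class SpeechRecognizer:
    """Stand-in for speech recognition system 416."""

    def interpret(self, audio):
        # Toy recognition: treat the "audio" as text, normalize to yes/no.
        return "yes" if audio.strip().lower().startswith("y") else "no"


class Storage:
    """Stand-in for storage system 420."""

    def __init__(self):
        self.responses = []

    def record(self, phone, prompt, answer):
        self.responses.append((phone, prompt, answer))


class CallingSystem:
    """Stand-in for automated telephone calling system 414."""

    def __init__(self, recognizer, storage):
        self.recognizer = recognizer
        self.storage = storage

    def ask(self, phone, prompt, audio_reply):
        # Pose the prompt, interpret the reply, and record the result.
        answer = self.recognizer.interpret(audio_reply)
        self.storage.record(phone, prompt, answer)
        return answer


system = CallingSystem(SpeechRecognizer(), Storage())
answer = system.ask("555-0100", "Have you had the oil changed?", "Yes, I have")
print(answer)
```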
  • Speech recognition system 416 can function as an automated system on which a speech recognition application, including a series of acoustic outputs or voice prompts that can comprise queries about a particular topic, is programmed so that the prompts can be presented to a respondent, preferably by means of a telephonic interaction between the querying party and the respondent. A speech recognition application, however, may be any interactive application that collects, provides, and/or shares information, or that is capable of such. For large populations of targeted or potential call recipients, the speech recognition system can be speaker-independent so that it does not require any voice training by the individual call recipients.
  • Exemplary embodiments of the systems and methods of the present disclosure are not limited to gathering healthcare or health plan information; the information gathered can also include personal information such as language preference, preferred time of day for a call, thoughts on a program, or planned behavior relating to health or other life events.
  • Moreover, a news alert of a subsequent call can provide community event information; examples include local health clinics, seminars, etc. In certain embodiments, a build call can be centered on or based upon the absence of a response. For example, an attempt may have been made to contact someone who then neither called back in nor took the action requested in the initial call (a lack of action that could be ascertained or known from external data); this knowledge could be utilized in the build call. The external data, it should be noted, can include publicly available data, e.g., from the Behavioral Risk Factor Surveillance System (BRFSS) of the National Center for Chronic Disease Prevention and Health Promotion (i.e., the public health data gathered through the CDC and state public health departments). Additionally, cultural indicator data, such as demographics linking a person to a particular culture, may also be utilized.
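A build call centered on the absence of a response could be triggered by a check like the following. The seven-day threshold and the record fields are assumptions for the sketch; the `external_action_taken` flag stands in for knowledge obtained from an external data source.

```python
import datetime


def needs_absence_build_call(contact, today, external_action_taken):
    """Decide whether to place a build call because the person neither
    called back nor took the requested action (sketch only)."""
    days_since_attempt = (today - contact["last_attempt"]).days
    no_callback = not contact["called_back"]
    return no_callback and not external_action_taken and days_since_attempt >= 7


contact = {"last_attempt": datetime.date(2008, 1, 2), "called_back": False}
print(needs_absence_build_call(contact, datetime.date(2008, 1, 30), False))
```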
  • As examples, in the present disclosure, a speech application may be any of a group of interactive applications, including consumer survey applications; Web access applications; educational applications, including computer-based learning and lesson applications and testing applications; screening applications; consumer preference monitoring applications; compliance applications, including applications that generate notifications of compliance related activities, including notifications regarding product maintenance; test result applications, including applications that provide at least one of standardized tests results, consumer product test results, and maintenance results; and linking applications, including applications that link two or more of the above applications.
  • Exemplary voice/speech recognition techniques that can be implemented within the scope of the present disclosure are described in U.S. patent application Ser. No. 11/219,593 filed 2 Sep. 2005, entitled “Speech Recognition Method and System for Determining the Status of an Answered Telephone During the Course of an Outbound Telephone Call,” which is a continuation of U.S. patent application Ser. No. 09/945,282 filed 31 Aug. 2001, entitled “Speech Recognition Method and System for Determining the Status of an Answered Telephone During the Course of an Outbound Telephone Call,” now U.S. Pat. No. 6,990,179 (referenced above); the entire contents of all of which are incorporated herein by reference. It should be understood that such systems (or techniques) can further include inbound calling features with interaction between the caller and the speech recognition system.
  • Embodiments according to the present disclosure may also be used with or implement specifically constructed voice prompts having specifically constructed speech parameters, such as those disclosed in U.S. patent application Ser. No. 12/020,217 filed 25 Jan. 2008, entitled “Systems and Techniques for Producing Spoken Voice Prompts,” assigned to the assignee of the subject disclosure; the entire content of which is incorporated herein by reference.
  • Accordingly, embodiments of the present disclosure can provide advantages relative to prior art automated phone techniques, as described herein. Embodiments can be utilized to build more personal and engaging subsequent interactions, can be utilized on outbound as well as inbound calls, and can simulate a human being's ability to listen, remember, and refer to past conversations, making the automated telephone calls more interactive and effective.
  • While certain embodiments have been described herein, it will be understood by one skilled in the art that the methods, systems, and apparatus of the present disclosure may be embodied in other specific forms without departing from the spirit thereof.
  • Accordingly, the embodiments described herein, and as claimed in the attached claims, are to be considered in all respects as illustrative of the present disclosure and not restrictive.

Claims (37)

1. A method of producing a telephone call for telephony-based informational interaction, the method comprising:
conducting an initial call with a called person;
gathering information from the called person during the initial call; and
conducting at least one subsequent call with the called person, the subsequent call utilizing information from the called person from the initial call and any other previous call.
2. The method of claim 1, wherein conducting an initial call comprises recognizing a called person.
3. The method of claim 1, wherein gathering information from the called person during the initial call comprises storing responses to specific questions posed during the call.
4. The method of claim 1, wherein conducting a subsequent call comprises recognizing the called person.
5. The method of claim 4, wherein recognizing the called person comprises conveying recognition to the called person in the subsequent call.
6. The method of claim 1, wherein conducting an initial call comprises using an automated system to place the initial call to the called person.
7. The method of claim 1, wherein conducting an initial call comprises using an automated system to convey voice prompts to the called person.
8. The method of claim 1, wherein conducting a subsequent call comprises using an automated system to place the subsequent call to the called person.
9. The method of claim 1, wherein conducting a subsequent call comprises using an automated system to convey voice prompts to the called person.
10. The method of claim 1, wherein information gathered from the called person comprises healthcare information.
11. The method of claim 1, wherein information gathered from the called person comprises health plan information.
12. The method of claim 1, further comprising utilizing information from an external data source outside of the initial call to the called person.
13. The method of claim 12, wherein the external data source comprises insurance claim data.
14. The method of claim 12, wherein the external data source comprises census demographic data.
15. The method of claim 12, wherein the external data source comprises consumer purchase data.
16. The method of claim 1, further comprising conveying a news alert to the called person during the subsequent call.
17. The method of claim 16, wherein the news alert comprises a public safety message.
18. The method of claim 16, wherein the news alert comprises a weather report.
19. The method of claim 1, wherein the subsequent call includes healthcare information.
20. The method of claim 1, wherein the subsequent call includes health plan information.
21. The method of claim 1, wherein information gathered from the called person comprises language preference, time of day for a call, thoughts on a program, planned behavior that relates to health, or other life events.
22. The method of claim 16, wherein the news alert comprises community event information.
23. The method of claim 22, wherein the community event information comprises information about local health clinics or seminars.
24. The method of claim 12, wherein the external data source comprises publicly accessible data.
25. The method of claim 24, wherein the external data comprises information from the Behavioral Risk Factor Surveillance System (BRFSS) of the National Center for Chronic Disease Prevention and Health Promotion.
26. The method of claim 24, wherein the publicly accessible data comprises cultural indicator data.
27. A system for conducting build calls to a target person, the system comprising:
an automated calling system configured and arranged to place an automated initial call including one or more spoken voice prompts to a target person at a called party telephone and to conduct one or more subsequent build calls to the target person; and
a storage system configured and arranged to record responses of the target person.
28. The system of claim 27, further comprising an automated speech recognition system configured and arranged to process auditory responses of the target person as made in response to the one or more voice prompts.
29. The system of claim 27, wherein the one or more build calls incorporate information related to a response from the target person during the initial call.
30. The system of claim 27, wherein the one or more build calls incorporate information from an external data source external to the initial call.
31. The system of claim 30, wherein the external data source comprises insurance claim data.
32. The system of claim 30, wherein the external data source comprises census demographic data.
33. The system of claim 30, wherein the external data source comprises consumer purchase data.
34. The system of claim 27, wherein the system is configured and arranged to dynamically recognize an inbound or outbound call conducted with the targeted caller.
35. The system of claim 27, wherein the one or more build calls include information regarding an absence of a response from the called person.
36. The system of claim 35, wherein the information includes that the called person did not take an action that the called person was requested to take.
37. The system of claim 36, wherein the action requested to take is a call to a particular phone number.
US12/022,723 2007-01-30 2008-01-30 Systems and Methods for Producing Build Calls Abandoned US20080181371A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/022,723 US20080181371A1 (en) 2007-01-30 2008-01-30 Systems and Methods for Producing Build Calls
US15/840,865 US10536582B2 (en) 2007-01-30 2017-12-13 Systems and methods for producing build calls

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US89835107P 2007-01-30 2007-01-30
US12/022,723 US20080181371A1 (en) 2007-01-30 2008-01-30 Systems and Methods for Producing Build Calls

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/840,865 Continuation US10536582B2 (en) 2007-01-30 2017-12-13 Systems and methods for producing build calls

Publications (1)

Publication Number Publication Date
US20080181371A1 true US20080181371A1 (en) 2008-07-31

Family

ID=39667985

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/022,723 Abandoned US20080181371A1 (en) 2007-01-30 2008-01-30 Systems and Methods for Producing Build Calls
US15/840,865 Active US10536582B2 (en) 2007-01-30 2017-12-13 Systems and methods for producing build calls

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/840,865 Active US10536582B2 (en) 2007-01-30 2017-12-13 Systems and methods for producing build calls

Country Status (5)

Country Link
US (2) US20080181371A1 (en)
EP (1) EP2108158B1 (en)
JP (1) JP2010517450A (en)
CA (1) CA2675042A1 (en)
WO (1) WO2008095002A1 (en)


Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5664003A (en) * 1996-01-23 1997-09-02 At & T Personal mobile communication system with two points of entry
JP2000049974A (en) 1998-07-27 2000-02-18 Fujitsu Denso Ltd Emergency notice system
JP3636428B2 (en) 2000-06-12 2005-04-06 株式会社コミュニケーション・ドット・コム Calling device and method for making a call with designated subscriber device
JP2002252725A (en) 2001-02-23 2002-09-06 Hitachi Plant Eng & Constr Co Ltd Emergency notice system and communication terminal
JP2003005778A (en) 2001-06-21 2003-01-08 Niyuuzu Line Network Kk Voice recognition portal system
JP2003219038A (en) 2001-10-22 2003-07-31 Ntt Comware Corp Call center system apparatus and call method in interlocking with customer information
JP2003169147A (en) * 2001-11-30 2003-06-13 Buzzhits Kk Client response system and method
AU2003222132A1 (en) * 2002-03-28 2003-10-13 Martin Dunsmuir Closed-loop command and response system for automatic communications between interacting computer systems over an audio communications channel
US8494868B2 (en) * 2002-05-07 2013-07-23 Priority Dispatch Corporation Method and system for a seamless interface between an emergency medical dispatch system and a nurse triage system
JP2004080717A (en) 2002-06-21 2004-03-11 Aioi Insurance Co Ltd Notification system and adapter for notification
JP2005063077A (en) 2003-08-08 2005-03-10 R & D Associates:Kk Method and device for personal authentication and connector
AU2004266654B2 (en) * 2003-08-12 2011-07-21 Loma Linda University Medical Center Modular patient support system
JP2005062240A (en) * 2003-08-13 2005-03-10 Fujitsu Ltd Audio response system
US7065349B2 (en) * 2003-09-29 2006-06-20 Nattel Group, Inc. Method for automobile safe wireless communications
JP4001574B2 (en) 2003-10-31 2007-10-31 トッパン・フォームズ株式会社 Telephone response system and telephone response server
JP2006093960A (en) 2004-09-22 2006-04-06 Oki Electric Ind Co Ltd Information-providing system
JP2006229753A (en) 2005-02-18 2006-08-31 Tokyo Electric Power Co Inc:The Call center system and call back device
JP2006245758A (en) 2005-03-01 2006-09-14 Nec Fielding Ltd Call center system, operator evaluation method, and program
JP2006339810A (en) 2005-05-31 2006-12-14 Tsubame Kotsu Kyodo Kumiai Vehicle dispatch accepting system
KR100663477B1 (en) * 2005-12-23 2007-01-02 삼성전자주식회사 Apparatus and method for providing receiving/sending information
US8363791B2 (en) * 2006-12-01 2013-01-29 Centurylink Intellectual Property Llc System and method for communicating medical alerts
US8311199B2 (en) * 2006-12-28 2012-11-13 Verizon Services Corp. Methods and systems for configuring and providing conference calls with customized caller id information
US8175248B2 (en) * 2007-01-29 2012-05-08 Nuance Communications, Inc. Method and an apparatus to disambiguate requests

Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6567504B1 (en) * 1994-06-20 2003-05-20 Sigma Communications, Inc. Automated calling system with database updating
US20130223434A1 (en) * 1995-09-25 2013-08-29 Pragmatus Telecom, Llc Method and system for coordinating data and voice communications via customer contact channel changing system
US6226360B1 (en) * 1998-05-19 2001-05-01 At&T Corp. System and method for delivery of pre-recorded voice phone messages
US20030165223A1 (en) * 2000-03-07 2003-09-04 Timmins Timothy A. Technique for providing a telecommunication service including information assistance
US20060276196A1 (en) * 2000-08-17 2006-12-07 Mobileum, Inc. Method and system for wireless voice channel/data channel integration
US6990179B2 (en) * 2000-09-01 2006-01-24 Eliza Corporation Speech recognition method of and system for determining the status of an answered telephone during the course of an outbound telephone call
US20060056600A1 (en) * 2000-09-01 2006-03-16 Lucas Merrow Speech recognition method of and system for determining the status of an answered telephone during the course of an outbound telephone call
US20120077472A1 (en) * 2001-05-25 2012-03-29 Timmins Timothy A Technique for effectively providing a personalized information assistance service
US20030001727A1 (en) * 2001-06-29 2003-01-02 Steinmark Daniel E. System and method for creating an adjusted alarm time
US20040234065A1 (en) * 2003-05-20 2004-11-25 Anderson David J. Method and system for performing automated telemarketing
US20100074421A1 (en) * 2003-05-20 2010-03-25 At&T Intellectual Property I, L.P. Method and system for performing automated telemarketing
US7885391B2 (en) * 2003-10-30 2011-02-08 Hewlett-Packard Development Company, L.P. System and method for call center dialog management
US20060154642A1 (en) * 2004-02-20 2006-07-13 Scannell Robert F Jr Medication & health, environmental, and security monitoring, alert, intervention, information and network system with associated and supporting apparatuses
US20050201533A1 (en) * 2004-03-10 2005-09-15 Emam Sean A. Dynamic call processing system and method
US20070016474A1 (en) * 2005-07-18 2007-01-18 24/7 Customer Method, system and computer program product for executing a marketing campaign
US20120321055A1 (en) * 2005-09-01 2012-12-20 Vishal Dhawan System and method for placing telephone calls using a distributed voice application execution system architecture
US8320531B2 (en) * 2006-06-16 2012-11-27 Applied Voice & Speech Technologies, Inc. Template-based electronic message generation using sound input
US20080095355A1 (en) * 2006-07-24 2008-04-24 Fmr Corp. Predictive call routing
US20080165948A1 (en) * 2007-01-09 2008-07-10 Steven Ryals Adaptive Incoming Call Processing
US20080205601A1 (en) * 2007-01-25 2008-08-28 Eliza Corporation Systems and Techniques for Producing Spoken Voice Prompts
US7818176B2 (en) * 2007-02-06 2010-10-19 Voicebox Technologies, Inc. System and method for selecting and presenting advertisements based on natural language processing of voice-based input
US20090268882A1 (en) * 2008-04-25 2009-10-29 Delta Electronics, Inc. Outbound dialogue system and dialogue operation method
US20100226482A1 (en) * 2009-03-09 2010-09-09 Jacob Naparstek System and method for recording and distributing customer interactions
US20110077988A1 (en) * 2009-04-12 2011-03-31 Cates Thomas M Emotivity and Vocality Measurement
US20100332287A1 (en) * 2009-06-24 2010-12-30 International Business Machines Corporation System and method for real-time prediction of customer satisfaction
US20120008762A1 (en) * 2010-07-08 2012-01-12 David Gehm System and method for personalized services network call center
US20120057689A1 (en) * 2010-09-07 2012-03-08 Research In Motion Limited Callback option
US20120064862A1 (en) * 2010-09-10 2012-03-15 Google Inc. Call status sharing
US20130129058A1 (en) * 2011-11-22 2013-05-23 Incontact, Inc. Systems and methods of using machine translation in contact handling systems

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8767927B2 (en) 2011-12-15 2014-07-01 Nuance Communications, Inc. System and method for servicing a call
US8767928B2 (en) 2011-12-15 2014-07-01 Nuance Communications, Inc. System and method for servicing a call
US9288320B2 (en) 2011-12-15 2016-03-15 Nuance Communications, Inc. System and method for servicing a call
US10560575B2 (en) 2016-06-13 2020-02-11 Google Llc Escalation to a human operator
US20180227418A1 (en) * 2016-06-13 2018-08-09 Google Llc Automated call requests with status updates
US20170358296A1 (en) 2016-06-13 2017-12-14 Google Inc. Escalation to a human operator
US11936810B2 (en) 2016-06-13 2024-03-19 Google Llc Automated call requests with status updates
US10542143B2 (en) 2016-06-13 2020-01-21 Google Llc Automated call requests with status updates
US11012560B2 (en) 2016-06-13 2021-05-18 Google Llc Automated call requests with status updates
US10574816B2 (en) 2016-06-13 2020-02-25 Google Llc Automated call requests with status updates
US10582052B2 (en) 2016-06-13 2020-03-03 Google Llc Automated call requests with status updates
US10721356B2 (en) 2016-06-13 2020-07-21 Google Llc Dynamic initiation of automated call
US10827064B2 (en) 2016-06-13 2020-11-03 Google Llc Automated call requests with status updates
US10893141B2 (en) * 2016-06-13 2021-01-12 Google Llc Automated call requests with status updates
US10917522B2 (en) * 2016-06-13 2021-02-09 Google Llc Automated call requests with status updates
US11563850B2 (en) 2016-06-13 2023-01-24 Google Llc Automated call requests with status updates
US10057418B1 (en) 2017-01-27 2018-08-21 International Business Machines Corporation Managing telephone interactions of a user and an agent
US10148815B2 (en) 2017-01-27 2018-12-04 International Business Machines Corporation Managing telephone interactions of a user and an agent
US10944869B2 (en) 2019-03-08 2021-03-09 International Business Machines Corporation Automating actions of a mobile device during calls with an automated phone system
US11468893B2 (en) * 2019-05-06 2022-10-11 Google Llc Automated calling system
US11303749B1 (en) 2020-10-06 2022-04-12 Google Llc Automatic navigation of an interactive voice response (IVR) tree on behalf of human user(s)
US20220201119A1 (en) 2020-10-06 2022-06-23 Google Llc Automatic navigation of an interactive voice response (ivr) tree on behalf of human user(s)
US11843718B2 (en) 2020-10-06 2023-12-12 Google Llc Automatic navigation of an interactive voice response (IVR) tree on behalf of human user(s)

Also Published As

Publication number Publication date
US20180103152A1 (en) 2018-04-12
EP2108158A4 (en) 2011-12-07
JP2010517450A (en) 2010-05-20
CA2675042A1 (en) 2008-08-07
EP2108158A1 (en) 2009-10-14
US10536582B2 (en) 2020-01-14
WO2008095002A1 (en) 2008-08-07
EP2108158B1 (en) 2019-03-13

Similar Documents

Publication Publication Date Title
US10536582B2 (en) Systems and methods for producing build calls
US10104233B2 (en) Coaching portal and methods based on behavioral assessment data
US9330658B2 (en) User intent analysis extent of speaker intent analysis system
US10129402B1 (en) Customer satisfaction analysis of caller interaction event data system and methods
US7822611B2 (en) Speaker intent analysis system
US10728384B1 (en) System and method for redaction of sensitive audio events of call recordings
US10522144B2 (en) Method of and system for providing adaptive respondent training in a speech recognition application
US8045699B2 (en) Method and system for performing automated telemarketing
US20060265089A1 (en) Method and software for analyzing voice data of a telephonic communication and generating a retention strategy therefrom
EP4016355B1 (en) Anonymized sensitive data analysis

Legal Events

Date Code Title Description
AS Assignment

Owner name: ELIZA CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MERROW, LUCAS;REEL/FRAME:020701/0410

Effective date: 20080130

Owner name: ELIZA CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MERROW, LUCAS;DRANE, ALEXANDRA;KRULL, IVY;REEL/FRAME:020701/0549

Effective date: 20080319

AS Assignment

Owner name: SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT, MASSACHUSETTS

Free format text: SECURITY AGREEMENT;ASSIGNORS:ELIZA CORPORATION;ELIZA HOLDING CORP.;ELIZALIVE, INC.;REEL/FRAME:028374/0586

Effective date: 20120614

AS Assignment

Owner name: SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT, MASSACHUSETTS

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NUMBER PREVIOUSLY RECORDED AT REEL: 028374 FRAME: 0586. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY AGREEMENT;ASSIGNORS:ELIZA CORPORATION;ELIZA HOLDING CORP.;ELIZALIVE, INC.;REEL/FRAME:042098/0746

Effective date: 20120614

AS Assignment

Owner name: ELIZALIVE, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT;REEL/FRAME:042150/0300

Effective date: 20170417

Owner name: ELIZA CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT;REEL/FRAME:042150/0300

Effective date: 20170417

Owner name: ELIZA HOLDING CORP., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT;REEL/FRAME:042150/0300

Effective date: 20170417

AS Assignment

Owner name: CITIBANK, N.A., AS COLLATERAL AGENT, DELAWARE

Free format text: SECURITY INTEREST;ASSIGNOR:ELIZA CORPORATION;REEL/FRAME:042441/0573

Effective date: 20170517

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: ELIZA CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:055812/0290

Effective date: 20210401