METHOD, SYSTEM, AND APPARATUS FOR AUTOMATING THE CREATION OF CUSTOMER-CENTRIC INTERFACE
TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to interface designs, and more specifically relates to a system and method for implementing customer-centric interfaces.
BACKGROUND OF THE INVENTION
Every year, company service centers typically receive numerous telephone calls from customers seeking assistance with particular tasks. The customers often speak with customer service representatives (CSR) to complete their tasks. Because of the cost associated with CSR time, companies are switching over to automated systems such as interactive voice response (IVR) systems where IVR systems answer the customer phone calls and direct the customer phone calls to the correct service center using one or more menus of options. The IVR systems allow customers to complete their tasks without the assistance of a CSR.
In order to maintain a high level of customer satisfaction, an IVR system must be designed so that customers can easily navigate the various menus and accomplish their tasks without spending too much time on the telephone and becoming frustrated and unsatisfied with the company and its customer service. Therefore, companies must design and continually test, update, and improve the IVR systems, including the IVR menus, so that the IVR systems function efficiently and customers remain satisfied with the level of customer service.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
FIGURE 1 illustrates a block diagram showing a system incorporating teachings of the present invention;
FIGURE 2 depicts a flow diagram of a method for automating the creation of a customer-centric interface;
FIGURE 3 depicts an example task frequency table;
FIGURE 4 illustrates a block flow diagram of various components of the system for the automated creation of a customer-centric interface;
FIGURE 5 illustrates a flow diagram of a method for creating customer-centric menu prompts;
FIGURE 6 illustrates an example graphical user interface for the categorization of statements within a customer-centric interface;
FIGURE 7 depicts a flow diagram of a method for the automated categorization of statements within a customer-centric interface;
FIGURE 8 illustrates an example graphical user interface for the analysis of performance data within a customer-centric interface;
FIGURE 9 depicts an example log file including performance data;
FIGURE 10 illustrates a flow diagram of a method for the automated analysis of performance data within a customer-centric interface; and
FIGURES 11 - 13 illustrate flow diagrams depicting a method for conducting a dialog exchange between an interactive voice response system and a user.
DETAILED DESCRIPTION OF THE INVENTION
Preferred embodiments and their advantages are best understood with reference to the figures, wherein like numbers may be used to indicate like and corresponding parts.
Many companies that have customer service programs and/or call centers, such as telephone companies, Internet service providers, and credit card companies, typically have automated systems such as interactive voice response (IVR) systems that answer and direct customer phone calls when a customer calls seeking assistance for a particular task such as to change an address or inquire about payment of a bill. If a customer does not reach an IVR system when calling a service number, the customer may speak with a customer service representative (CSR) who either helps the customer or transfers the customer to an IVR. Within the IVR, the customer listens to one or more prerecorded menus or prompts and provides responses using touch-tone input and/or speech input in order to accomplish their task. Therefore, the content and structure of the IVR including the prerecorded menus or prompts needs to allow for customers to easily and quickly accomplish their tasks with little frustration.
The typical approach to IVR system interface design involves a company design team creating a set of requirements where the design team is comprised of various individuals representing different departments
within the company. The design team incorporates various perspectives and documents from the team members in designing the IVR interface. The design team decides how best to structure the IVR interface based on their understanding of the underlying system and/or the organization of the company. The customers' preferences and level of knowledge are generally not taken into account.
Once designed, the IVR interface is tested to ensure functionality and that it is error free. The inclusion of customers into the design process occurs late in the development phase, if at all, through usability testing. But much of the customer input gathered in the usability testing will not be implemented into the IVR interface because of the costs involved with making changes late in the development phase. Only significant errors discovered through the usability testing are generally corrected. The result is an IVR interface having a business-centric organization and structure, where the menu options and prompts are structured according to the organization of the company and are worded using company terminology.
When calling a customer service number, customers know why they are calling (to accomplish a specific task) but typically do not know which department within a company handles specific tasks. Therefore, business-centric interfaces generally do not allow for customers to easily and quickly navigate and accomplish their tasks with little frustration, since business-centric interfaces are designed around a company's organization and way of thinking. When customers cannot quickly and easily accomplish their tasks, they generally make
incorrect selections within the IVR interface, resulting in misdirected calls. Misdirected calls are expensive to companies both in the time and money spent dealing with a misdirected call and in lower levels of customer satisfaction resulting from unpleasant customer experiences with business-centric interfaces, which can lead to negative feelings towards the company.
In most IVR system implementations, a single persona may be assigned to the IVR system. However, according to behavioral research, users of automated systems tend to view automated systems more favorably when the persona or personality of the system matches the user's personality. For example, a recent study suggests that introverts and extroverts tend to be more satisfied with, more likely to trust, and more likely to make a purchase from an automated system possessing voice characteristics similar to their own. Similarly, evidence exists showing that many IVR system users prefer an IVR system having a system voice matching their own gender. In order for IVR systems to meet customers' needs and be more customer-centric, the usability of IVR systems is tested and improved by conducting laboratory studies or tests where test participants are asked to accomplish sets of tasks using the IVR system. An example of a task to accomplish may be, "Call Telephone
Company at 555-1111 and change your billing address." In these studies or tests, the participants use telephones to interact with an IVR simulation application which is presented by a laboratory computer. The simulated IVR application plays prerecorded announcements or prompts to the participants in the form of a series of menus and records information regarding the participants' responses
such as the menu name, the amount of time the prerecorded prompt played before the participant made a selection or pressed a key, and the key that the participant pressed. Once the study is completed, the recorded information regarding the participants' responses or the performance data is compiled into a log file with information regarding each task test stored as an individual performance data set.
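As a minimal sketch of how one such performance data set could be scored automatically, consider the following; the event-tuple layout, field names, and success criterion are illustrative assumptions, not the patent's actual log format:

```python
# Hypothetical sketch of scoring one performance data set from an IVR
# simulation log. The (menu_name, seconds_in_menu, key_pressed) layout
# and the success criterion are illustrative assumptions.

def score_data_set(events, target_menu):
    """Score a single task test: did the caller reach the menu that
    accomplishes the task, and how long was spent listening overall?"""
    total_time = sum(seconds for _, seconds, _ in events)
    accomplished = any(menu == target_menu for menu, _, _ in events)
    return {"accomplished": accomplished, "time_in_menus": total_time}

# One participant's recorded responses for a "change billing address" task.
events = [("main_menu", 12.5, "3"), ("billing_menu", 8.0, "2")]
result = score_data_set(events, "billing_menu")
```

The same two factors discussed in connection with the log file (task accomplishment and time spent in the menus) drive the score.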
To analyze the performance data collected by the IVR simulation application in the log file, the company may score participants' call routing performance based on two factors - accomplishment of the task and the time spent in the IVR simulation application attempting to accomplish the task. Analysis of the log file and the performance data is typically done as a manual process where one or more persons manually examine each performance data set noting the task, determining if the participant accomplished the task, and calculating the time spent listening to the menus or prompts and then manually creating an output file containing the findings of the IVR simulation. Given that a typical IVR study generally includes many participants each performing several different tasks, the manual analysis of the performance data is a very time consuming, labor intensive, and resource intensive process. In addition, the manual analysis of the performance data is also subject to human error such as math errors in calculating time spent in the menus and in omitting particular data points.
Furthermore, many companies often track statements made by customers when the customers contact the company with problems or questions about a product or service or
to alter a product or service. When a customer calls a service number and speaks to a CSR, the customer typically tells the CSR the purpose of the call in the first substantive statement the customer makes. Alternatively, a customer may contact a company via the company web site or email and generally the first substantive statement made in the email or web site response includes the customer's purpose for contacting the company. These initial statements containing the purpose of the customer's call are often referred to as opening statements.
These opening statements can be used by companies to better design IVR systems, web sites, and any other customer interfaces between a company and the customers and allow for a more customer-centric interface design. One effective way to design an IVR system or a web site interface is to analyze the scripts of incoming calls or emails to a customer support center or call center to locate the opening statements and identify the purpose of each call or email by classifying or categorizing each opening statement. Once categorized, a frequency report can be created that details how often customers are calling with specific problems or questions about specific products or services. For example, a telephone company may want to know how many customers are calling or emailing about a problem with their bill or to add a new product to their telephone service. Once a company knows the frequency of customer complaints and questions, an IVR system can be designed that incorporates the frequencies so that customers calling with common problems, complaints, or questions can be serviced quickly and efficiently. For example, a company would be
able to determine, of the 5,000 service calls received in one month, what percentage of the calls were about particular topics and also rank the reasons why the customers called or emailed customer support. In order to maximize the utilization of the statements given by the customers in a customer-centric interface design, a company therefore needs to track and categorize the statements. Typically, companies have manually tracked and manually categorized opening statements. The company manually tracks each call and manually records and transcribes each opening statement spoken to a CSR or received via email and then creates a list of opening statements. An employee of the company reads the long list of opening statements with a list of categories in front of him/her and assigns a category label to each opening statement. This is a very time consuming and costly process: one or more people must manually examine every opening statement and decide how to categorize it in accordance with multiple category labels, consuming employee time that is expensive and would be better utilized in a revenue generating task.
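The frequency report described above can be sketched in a few lines of code; the category labels and counts below are illustrative assumptions, not actual call data:

```python
from collections import Counter

# Sketch: tallying categorized opening statements into a frequency
# report of (task, count, percent) rows. Labels are hypothetical.
labels = ["billing", "new service", "billing", "repair", "billing"]
counts = Counter(labels)
total = len(labels)
report = [(task, n, 100.0 * n / total) for task, n in counts.most_common()]
```

The rows come out in descending frequency order, directly answering "what percentage of calls were about each topic."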
In addition to the cost and man-power required for the manual categorization of opening statements, there is also a subjective element to the manual categorization of opening statements which affects the reliability of the categorization results. The category labels used to manually categorize the opening statements are generally designed to be objective but when applied by a person, the person's subjective thinking and opinions affect how they categorize the opening statements. For instance, an opening statement such as "I am calling about my bill for
the charges for Call Waiting" may be categorized by one person as a billing inquiry and by another person as a call waiting inquiry. Therefore, even though multiple people may use the same category labels to categorize the opening statements, they might categorize the same opening statement differently because the categorization is partly a matter of opinion. This human opinion factor and subjectivity create an inconsistency in the categorization data and frequency reports that results in unreliable data and a customer interface design that is not optimized with respect to the opening statements and the way customers think.
By contrast, the example embodiment described herein allows for the automated creation of a customer-centric interface. The customer-centric interface is designed to best represent the customers' preferences and levels of knowledge and understanding. Additionally, the example embodiment allows for the inclusion of the customers in the design process from the beginning to ensure that the customer-centric interface is both usable and useful for the customers. The customer-centric interface allows for the customers to quickly and easily navigate the various menus within the customer-centric interface to accomplish their tasks with high levels of customer satisfaction. The customer-centric design also allows for increased call routing accuracy and a reduction in the number of misdirected calls. Therefore, companies save time and money because less time is spent dealing with misdirected calls and fewer resources are used by the customers since the customers spend less time within the customer-centric interface accomplishing their tasks.
Furthermore, the example embodiment described herein allows for the automated categorization of statements to better enable the creation of a customer-centric interface. Additionally, the example embodiment allows for the creation of objective rules to categorize the statements which results in reliable and consistent categorization data. Time and money are saved because employees are no longer manually looking through lists of statements trying to categorize the statements using only category labels. Therefore, employees' time may be better utilized in revenue generating projects. Furthermore, the objective rules for categorizing the statements eliminate the subjective aspect of the categorization scheme allowing for the same statement to be categorized with the same category label as long as the same set of rules is used to categorize the statements. This results in consistent and reliable categorization and frequency data which can be used in the design and creation of customer interfaces that reflect the customers' view of how the interface should operate.
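As a sketch of such objective rules, an ordered keyword-rule list guarantees that the same statement always receives the same label; the keyword/label pairs below are illustrative assumptions, not rules from the patent:

```python
# Sketch of rule-based categorization: rules are applied in a fixed
# order, so categorization is deterministic and repeatable. The
# keyword/label pairs are hypothetical examples.
RULES = [
    ("not working", "repair"),
    ("bill", "billing"),
    ("order", "new service"),
]

def categorize(statement, rules=RULES, default="other"):
    text = statement.lower()
    for keyword, label in rules:
        if keyword in text:
            return label
    return default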
Furthermore, the example embodiment described herein allows for the automated analysis of performance data to better enable the creation of a customer-centric interface. Additionally, the example embodiment allows for the consistent analysis of performance data free of human error. Time and money are saved because employees no longer manually examine the performance data determining if the task was accomplished and manually calculating the time required to accomplish each task. Therefore, employees' time may be better utilized in other revenue generating projects since less time is
required to analyze the performance data. Furthermore, the analysis of the performance data is more reliable because the analysis is not subject to human error such as calculation errors and different people are not interpreting the performance data in different manners.
FIGURE 1 generally illustrates one embodiment of a customer-centric interface solution incorporating teachings of the present invention and operable to provide automated or computer based customer service to callers using an interactive voice response (IVR) system. As depicted in FIGURE 1, system 10 preferably includes at least one IVR system 12. In one embodiment, IVR system 12 may include one or more traffic handling devices 14. Traffic handling devices may include, but are not limited to, such devices as routers, switches, hubs, bridges, content accelerators, or other similar devices. As depicted, one or more traffic handling devices 14 may be coupled between communications link 16 and computer system 18. Computer system 18 may be a personal computer, a server, or any other appropriate computing device. Communications technologies which may be used as communications link 16 include, but are not limited to, a PSTN (public switched telephone network), the Internet using voice over IP (Internet Protocol), such mobile technologies as satellite and PCS (personal communication service), as well as others.
In an embodiment of IVR system 12 having a component or storage system 20 which is maintained separately from computer system 18, as depicted in FIGURE 1, one or more traffic handling devices 14 may be included and coupled between computer system 18 and such a storage system 20. As described below, storage system 20 or portions thereof
may be incorporated into computer system 18, according to teachings of the present invention.
Computer system 18 may be constructed according to a variety of configurations. Preferably, however, computer system 18 includes one or more processors or microprocessors 22. Processors or microprocessors 22 may include such computer processing devices as those manufactured by Intel, Advanced Micro Devices, Motorola, Transmeta, as well as others. Operably coupled to microprocessor(s) 22 are one or more memory devices 24. Memory devices 24 may include, but are not limited to, such memory devices as SDRAM (synchronous dynamic random access memory), RDRAM (Rambus dynamic random access memory), FLASH memory, or other memory devices operable to function with the microprocessor(s) 22 of choice.
Also operably coupled to microprocessor(s) 22 are one or more communications interfaces 26. Communications interface 26 may employ wire-line and/or wireless technologies. For example, wire-line based communications interfaces 26 may include, but are not limited to, such wire-line technologies as PSTN (public switched telephone networks), Ethernet, Token-Ring, coaxial, fiber optic, as well as others. Examples of wireless technology based communications interfaces 26 may include, but are not limited to, such wireless technologies as Bluetooth and IEEE (Institute of Electrical and Electronic Engineers) 802.11b, as well as others.
One or more component systems interfaces 28 are also preferably included and coupled to microprocessor 22. According to teachings of the present invention, component systems interfaces 28 preferably couple one or
more component systems to microprocessor(s) 22 such that microprocessor(s) 22 may access the functionality included therein. Examples of component systems include storage system 20, video displays, storage devices, scanners, CD-ROM (compact-disc-read only memory) systems, input/output devices, etc. Component systems interfaces 28 may include, for example, ISA (industry standard architecture) connections, PCI (peripheral component interconnect) connections, PCI-X (peripheral component interconnect-extended) connections, SCSI (small computer systems interface) connections, USB (universal serial bus) connections, FC-AL (fibre-channel arbitrated loop) connections, serial connections, parallel connections, Ethernet connections, IEEE 802.11b receivers/transmitters, Bluetooth receivers/transmitters, as well as others. In addition, component systems interfaces 28 may be provided to couple one or more component systems internal to computer system 18, such as hard disc drive (HDD) devices, CD-ROM read/write devices, etc., to microprocessor(s) 22.
Computer system 18 further includes hard disk drive (HDD) 30 containing databases 32, 33, 34, 36, 38, and 40. Processor 22, memory 24, communications interface 26, component systems interface 28, and HDD 30 communicate and may work together via bus 42 to provide the desired functionality. The various hardware and software components may also be referred to as processing resources. Computer system 18 further includes display 44 for presenting graphical user interface (GUI) 46 and input and output devices such as a mouse and a keyboard. Computer system 18 also includes rule engine 48, task engine 50, collection engine 52, customer language engine
54, performance engine 56, and customer structure engine 58, which reside in memory such as HDD 30 and are executable by processor 22 through bus 42. In other embodiments, HDD 30 may include more or fewer than six databases and may be remotely located in storage system 20. Display 44 presents GUI 46, which allows a user or an operator to interact with IVR system 12 and computer system 18. Shown in FIGURES 6 and 8 are various example GUIs 46. GUI 46 includes a plurality of screens and buttons that allow the users and the operators to access and control the operation of IVR system 12 and computer system 18.
As illustrated in FIGURE 1 and mentioned above, one or more traffic handling devices 14 may be coupled between computer system 18 and storage system 20. In another embodiment, however, storage system 20 may be included within or internal to computer system 18. In such an embodiment, storage system 20 or one or more components thereof may be directly coupled to the one or more component systems interfaces 28.
Component or storage system 20 may include a variety of computing devices and is preferably not limited to one or more types of storage device. In the embodiment of storage system 20 illustrated in FIGURE 1, a plurality of storage devices, preferably storing one or more applications and databases for use in accordance with teachings of the present invention, may be provided. Specifically, component or storage system 20 may include one or more supplemental hard disc drive (HDD) devices 60, digital linear tape (DLT) libraries (not expressly shown), CD-ROM libraries and/or one or more storage area networks (SAN) 62. In yet another embodiment of IVR
system 12, one or more HDD devices 60 may be included in computer system 18 with one or more SANs 62 included in storage system 20.
As with many computer systems, a variety of applications 64 may be used to leverage the functionality or processing capability of computer system 18. In the present invention, a plurality of applications 64 may be effectively included in storage system 20, on one or more HDD devices 60 and/or on one or more SANs 62. For example, one or more communications applications operable to establish a communication connection with one or more users via communication link 16 may be included in storage system 20. In addition, one or more speech recognition or voice analysis applications are included on HDD devices 60 and/or SAN 62 for use as described below. A variety of additional applications 64 may also be included on one or more of HDD devices 60 and/or SANs 62.
As will be described in more detail below with respect to one embodiment of a method according to the present invention, one or more persona libraries 66 are preferably included on storage system 20. Persona libraries 66 preferably include a plurality of IVR system personas, one or more of which may be selected for use during a transaction with a given user. In one embodiment, the personas stored in persona libraries 66 may be pre-existing, i.e., a complete persona or one having a defined gender, rate of speech, system prompt menu, etc., needing only to be selected and activated for use in the IVR system. Each persona in the library may also include a number of styles or strategies. For example, within a persona designed to communicate like a
calm, caring, mature female (i.e., a motherly persona), the library may include a first subset of prompts or scripted dialog designed to help novice callers, a second subset designed to help expert callers, a third subset designed to sound sympathetic and soothing and a fourth subset designed to be more abrupt. These different subsets or styles may be produced by altering characteristics of the persona such as speaking rate, choice of formal or informal words, use of terse or verbose utterances, etc. As described below, IVR system 12 may dynamically change from one style to another in response to detected changes in the speech characteristics of a caller.
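One way such dynamic style switching could be sketched is a simple mapping from detected speech characteristics to a prompt-style label; the input measures, threshold values, and style names below are assumptions for illustration, not values specified by the embodiment:

```python
# Hypothetical sketch: selecting a prompt style within one persona
# from detected caller speech characteristics. Thresholds and style
# names are illustrative assumptions.
def pick_style(words_per_minute, frustration_score):
    if frustration_score > 0.7:
        return "sympathetic"        # soothing, sympathetic subset of prompts
    if words_per_minute > 180:
        return "terse_expert"       # abrupt, terse utterances for fast talkers
    if words_per_minute < 110:
        return "verbose_novice"     # slower, more verbose help
    return "neutral"
```

IVR system 12 could re-evaluate such a mapping as the call proceeds, switching styles when the caller's measured characteristics change.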
In another embodiment, persona libraries 66 may contain a plurality of IVR system persona components, such as gender, rate of speech, tone, inflection, prompt menus, etc. An overall IVR system persona may be selected and compiled from selected components to create an IVR system persona which has been determined, according to teachings of the present invention, to be the persona most likely to elicit favorable responses from the user as well as to achieve other benefits.
Also as described below with respect to one embodiment of a method of the present invention, one or more user persona profiles 68 may be stored on HDD 30, HDD devices 60, and/or on SANs 62. According to teachings of the present invention, when a repeat user contacts IVR system 12, the IVR system 12 may be implemented such that the user can be identified, e.g., from one or more call characteristics, and the user's preferred or most recent IVR system persona may be initiated by the IVR system 12. As will be described in
more detail below, one or more responses from the user to prompts from the user's stored persona 68 may initiate a change in the persona used to complete the user's desired transaction. In other embodiments, the above may be stored in HDD 30 instead of storage system 20.
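A repeat-caller lookup of this kind might be sketched as follows; keying the profile by calling number and the profile field names are assumptions, since the embodiment says only that the user can be identified from one or more call characteristics:

```python
# Hypothetical sketch: initiating a repeat caller's preferred persona
# from a stored user persona profile. Keying by calling number and the
# profile fields are illustrative assumptions.
profiles = {"5551234567": {"persona": "motherly", "style": "expert"}}

def persona_for_caller(caller_id, default=("standard", "neutral")):
    profile = profiles.get(caller_id)
    if profile is None:
        return default              # new caller: start from a default persona
    return (profile["persona"], profile["style"])
```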
FIGURE 2 depicts a flow diagram of a method for automating the creation of a customer-centric interface. The method begins at step 70 and at step 72 collection engine 52 collects a plurality of customer opening statements. When a customer calls a service number and speaks to a CSR, the customer typically tells the CSR the purpose of the call in the first substantive statement the customer makes. Alternatively, a customer may contact a company via the company web site or email and generally the first substantive statement made in the email or web site response includes the customer's purpose for contacting the company. These initial statements containing the purpose of the customer's call are often referred to as customer opening statements. Collection engine 52 collects the customer opening statements from customer service centers and stores the customer opening statements in customer opening statement database 32.
The customer opening statements provide insight into the tasks that the customers inquire about as well as the language or terminology the customers use to describe the tasks. At step 74, customer language engine 54 analyzes the customer opening statements to determine the language or terminology used by the customers when referring to particular tasks. When customers call a service number, they are not concerned with how the company is going to accomplish the task just that the task gets accomplished.
Therefore, customer language engine 54 must learn and use the terminology of the customers in creating customer-centric menu prompts so that customers will be able to easily understand and identify how to accomplish their tasks when using the customer-centric interface. At step 76, customer task model 128 within collection engine 52 determines the different reasons why the customers contact the company in order to create a list of tasks for which the customers access the customer-centric interface. Analysis of the customer opening statements allows for the determined tasks to be tested to see if the list of tasks accounts for a majority of the reasons why the customers contact the company. The tasks may include such tasks as "telephone line is not working," "question about my bill," "order a new service," or any other appropriate reason for a customer to call seeking assistance regarding a product or service.
Once the list of tasks has been created and determined to cover the majority of the customers' reasons for calling, task engine 50 determines a task frequency of occurrence for each task at step 78. The task frequency of occurrence allows system 10 to recognize which tasks customers are calling about the most and which tasks the customers are calling about the least. Task engine 50 determines the task frequency of occurrence by examining and categorizing the customer opening statements. Each customer opening statement is examined to identify the purpose of the call and is then categorized as a particular task.
Once the customer opening statements have been categorized, task engine 50 creates a task frequency
table that ranks the tasks according to the task frequency of occurrence. The task frequency table details how often customers call with specific problems or questions about each particular task. An example task frequency table 120 for eighteen tasks 127 - 161 is shown in FIGURE 3 and includes column 122 for the frequency rank of the task, column 124 for the task, and column 126 for the frequency value. In other embodiments, task frequency table 120 may include more or fewer than eighteen tasks. Task frequency table 120 shows that eighteen tasks account for more than 80% of the customer opening statements or service calls received from the customers. Task frequency table 120 allows for system 10 to determine which tasks the customers call about the most and provides valuable information on how to arrange the customer-centric menu prompts within the customer-centric interface.
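The ranked table can be sketched as follows; the task names and counts here are illustrative, not the values shown in FIGURE 3:

```python
# Sketch: building a ranked (rank, task, percent) frequency table from
# raw task counts, in descending frequency order. Counts are hypothetical.
def frequency_table(task_counts):
    total = sum(task_counts.values())
    ranked = sorted(task_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [(rank, task, round(100.0 * n / total, 2))
            for rank, (task, n) in enumerate(ranked, start=1)]

table = frequency_table({"question about my bill": 40,
                         "telephone line is not working": 35,
                         "order a new service": 25})
```

Each row corresponds to columns 122, 124, and 126 of the example table: frequency rank, task, and frequency value.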
Task frequency table 120 is ordered in descending frequency order and is a statistically valid representation of the tasks that the customers inquire about when calling customer service centers. Because having a menu prompt for every single task results in numerous menu prompts making customer navigation of the customer-centric interface burdensome and slow, at step 80 task engine 50 determines which tasks are to be included in the customer-centric interface. In order to allow easy and quick navigation for the customers but at the same time not utilize too many company resources operating the customer-centric interface, only the most frequently occurring tasks are included within the customer-centric interface.
Task engine 50 utilizes task frequency table 120 to determine which tasks are to be included in the customer-centric interface. In one embodiment, task engine 50 includes only the tasks that have a frequency of occurrence of 1% or higher. Task frequency table 120 includes only the tasks having a frequency of occurrence of 1% or higher and includes eighteen tasks accounting for 80.20% of the tasks represented in the customer opening statements. In another embodiment, task engine 50 includes tasks so that the total number of included tasks accounts for a specified percentage coverage of the tasks represented in the customer opening statements. For instance, task engine 50 may include a specified number of tasks so that the total frequency of occurrence is a specific total percentage coverage value such as 85%, 90%, or any other appropriate percentage of coverage. Either embodiment typically allows for between fifteen and twenty tasks to be included in the customer-centric interface. For efficient operation, the customer-centric interface does not include an opening customer-centric menu prompt listing all of the included tasks in frequency order. Such an opening menu prompt would take too long for the customers to listen to and would not allow for quick and easy navigation of the customer-centric interface. Therefore, the customer-centric interface is of a hierarchical design with the tasks grouped together by task relationships.
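The two inclusion strategies just described - a 1% frequency floor, or a cumulative coverage target - can be sketched as follows; the frequency values are illustrative assumptions:

```python
# Sketch of the two task-inclusion strategies: keep every task at or
# above a frequency floor, or keep top tasks until a target cumulative
# coverage is reached. The frequency values are hypothetical.
def by_threshold(freqs, minimum=1.0):
    return [task for task, f in freqs if f >= minimum]

def by_coverage(freqs, target=85.0):
    chosen, covered = [], 0.0
    for task, f in sorted(freqs, key=lambda kv: kv[1], reverse=True):
        if covered >= target:
            break
        chosen.append(task)
        covered += f
    return chosen

freqs = [("task a", 50.0), ("task b", 30.0), ("task c", 10.0), ("task d", 0.5)]
```

With these numbers, both strategies happen to select the same three tasks, but on real data they can differ, which is why the embodiment presents them as alternatives.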
In order for the customer-centric interface to be organized from the vantage point of the customers, the included tasks need to be grouped according to how the customers perceive the tasks to be related. Therefore, at step 82,
customer structure engine 58 elicits from one or more test customers each customer's perceptions as to how the included tasks relate to each other in order to create the interface structure for the customer-centric interface. Interface structure is how the tasks are placed within the customer-centric interface and organized and grouped within the customer-centric menu prompts. For instance, the interface structure of a web page refers to how the pages, objects, menu items, and information are organized relative to each other, while the interface structure for an IVR system refers to the sequence and grouping of the tasks within the customer-centric menu prompts. The interface structure for the customer-centric interface needs to allow the customers to find information and complete tasks as quickly as possible without confusion. Customer structure engine 58 uses tasks 127-161 from task frequency table 120 and performs customer exercises with the customers to elicit customer feedback regarding how the customers relate and group together tasks 127-161. For instance, customer structure engine 58 may require a group of test customers to sort tasks 127-161 into one or more groups of related tasks. In addition, customer structure engine 58 may also require the test customers to make comparative judgments regarding the similarity of two or more of the tasks, where the test customers state how related or unrelated they believe the tasks to be. Furthermore, customer structure engine 58 may require the test customers to rate the relatedness of the tasks on a scale. Customer structure engine 58 performs the customer exercises using a test IVR system, a web site, or any other appropriate testing means. In addition to eliciting task relationships, customer structure engine 58 also elicits from the test customers general names or headings that can be used to describe the groups of tasks in the customers' own language or terminology. Once customer structure engine 58 elicits from the test customers how the customers perceive tasks 127-161 to relate to each other, customer structure engine 58 aggregates and analyzes the customer feedback to determine customer perceived task relationships. The customer perceived task relationships are how the customers perceive the tasks to be related. Customer structure engine 58 represents the customer perceived task relationships in a numerical data matrix of relatedness scores that collectively represents the customers' perceived relatedness of the included tasks. At step 84, customer structure engine 58 utilizes the customer perceived task relationships and the numerical data matrix and combines the included tasks into one or more groups of related tasks. For example, using the customer feedback from the customer exercises, customer structure engine 58 determines that the customers perceive tasks 133, 155, and 159 as related (group one), tasks 147, 149, and 157 as related (group two), tasks 127, 129, 131, 135, 139, 141, 143, 145, 153, and 161 as related (group three), and tasks 137 and 151 as related (group four). To aid in the grouping of the tasks and to better enable the company to understand the structure and grouping of the tasks, customer structure engine 58 represents the customer perceived task relationships and numerical data matrix in a graphical form. For instance, customer structure engine 58 may
generate a flow chart or dendrogram illustrating a customer-centric call flow for the groups of tasks.
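One plausible way to aggregate the customer grouping exercises into the numerical data matrix of relatedness scores is to score each task pair by the fraction of test customers who placed the two tasks in the same group. This is an assumed method for illustration; the task numbers match the example above but the card-sort results are hypothetical.

```python
from itertools import combinations

# Hypothetical sorting exercises: each test customer groups task numbers
# from task frequency table 120 into piles of related tasks.
customer_sorts = [
    [{133, 155, 159}, {147, 149, 157}, {137, 151}],
    [{133, 155}, {159, 147, 149, 157}, {137, 151}],
    [{133, 155, 159}, {147, 149}, {157, 137, 151}],
]

def relatedness_matrix(sorts):
    """Score each task pair by the fraction of customers who grouped
    the two tasks together, yielding the numerical data matrix."""
    tasks = sorted(set().union(*[pile for s in sorts for pile in s]))
    scores = {}
    for a, b in combinations(tasks, 2):
        together = sum(any(a in pile and b in pile for pile in s)
                       for s in sorts)
        scores[(a, b)] = together / len(sorts)
    return scores

scores = relatedness_matrix(customer_sorts)
```

Pairs with high scores (such as tasks 133 and 155 here) end up in the same group; the matrix can also feed a clustering routine that produces the dendrogram.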
At step 86, task engine 50 orders the groups of tasks and the tasks within each group based on the task frequency of occurrence. Task engine 50 determines a frequency of occurrence for each group of tasks by summing the individual frequencies of occurrence for each task within each group. From the example above, group one has a group frequency of occurrence of 8.9% (6.7% + 1.1% + 1.1%), group two has a group frequency of occurrence of 6.2% (3% + 2.1% + 1.1%), group three has a group frequency of occurrence of 59.4% (14% + 11.6% + 11.3% + 5.6% + 3.8% + 3.8% + 3.5% + 3.4% + 1.4% + 1.0%), and group four has a group frequency of occurrence of 5.7% (3.8% + 1.9%). Task engine 50 orders the groups within the customer-centric interface in descending frequency order so that the tasks having the highest frequency of occurrence are heard first by the customers when the customers listen to the customer-centric menu prompts within the customer-centric interface. Since 59.4% of the customers will be calling about a task in group three, task engine 50 orders group three first followed by group one, group two, and group four.
In addition to ordering the groups of tasks, task engine 50 also orders the tasks within each group. Task engine 50 orders the tasks within each group according to each task's frequency of occurrence, from the highest frequency of occurrence to the lowest. For instance, the tasks in group one are ordered as task 133, task 155, and task 159. The tasks in group two are ordered as task 147, task 149, and task 157. The tasks in group three are ordered as task 127,
task 129, task 131, task 135, task 139, task 141, task 143, task 145, task 153, and task 161. The tasks in group four are ordered as task 137 and task 151. The grouping and ordering of the tasks make the high frequency tasks more accessible to the customers than the low frequency tasks by placing the tasks having higher frequencies of occurrence higher, or earlier, in the customer-centric interface menu prompts.
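The ordering at step 86 can be sketched directly from the example percentages above: sort the tasks within each group by individual frequency, then sort the groups by summed frequency, both descending. The sketch below is illustrative only.

```python
# Example frequencies (percent) from the text, keyed by task number.
freqs = {133: 6.7, 155: 1.1, 159: 1.1,                     # group one
         147: 3.0, 149: 2.1, 157: 1.1,                     # group two
         127: 14.0, 129: 11.6, 131: 11.3, 135: 5.6, 139: 3.8,
         141: 3.8, 143: 3.5, 145: 3.4, 153: 1.4, 161: 1.0, # group three
         137: 3.8, 151: 1.9}                               # group four

groups = [[133, 155, 159], [147, 149, 157],
          [127, 129, 131, 135, 139, 141, 143, 145, 153, 161],
          [137, 151]]

def order_interface(groups, freqs):
    """Order tasks within each group, then order the groups by the
    summed group frequency of occurrence, both descending."""
    ordered = [sorted(g, key=lambda t: freqs[t], reverse=True)
               for g in groups]
    return sorted(ordered, key=lambda g: sum(freqs[t] for t in g),
                  reverse=True)

ordered_groups = order_interface(groups, freqs)
```

With these numbers, group three (59.4%) comes first, then group one (8.9%), group two (6.2%), and group four (5.7%), matching the ordering described in the text.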
At step 88, customer language engine 54, task engine 50, and customer structure engine 58 work together to create and order the customer-centric menu prompts for the customer-centric interface. Task engine 50 and customer structure engine 58 do not take into account customer terminology when calculating task frequencies, grouping the tasks, and ordering the tasks. So once task engine 50 and customer structure engine 58 create the interface structure, including ordering the included tasks, customer language engine 54 creates customer-centric menu prompts using the customers' own terminology. Customer-centric menu prompts in the language of the customers allow the customers to more easily recognize what each menu prompt is asking and to accomplish their tasks quickly and with little frustration. In other embodiments, customer language engine 54 may create customer-centric menu prompts using action specific object words in addition to the customers' own terminology. The use of action specific object words to create menu prompts is described in further detail below with respect to FIGURE 5. Once system 10 creates the customer-centric menu prompts and the customer-centric interface, performance engine 56 tests the customer-centric interface at step 90
by performing usability tests. Performance engine 56 performs the usability tests in order to locate and fix any problems with the customer-centric interface before the customer-centric interface is implemented for use by all customers. The usability tests involve laboratory tests where test customers are asked to accomplish sets of tasks using the customer-centric interface such as "Call Telephone Company at 555-1111 and change your billing address." In these tests, the test customers use telephones to interact with the customer-centric interface. The customer-centric interface plays the prerecorded customer-centric menu prompts to the test customers and performance engine 56 records information regarding the test customers' responses such as the menu name for the menus accessed, the amount of time the prerecorded menu prompt played before the test customer made a selection or pressed a key, and the key that the test customer pressed.
When the usability tests conclude, at step 92 performance engine 56 analyzes the results of the usability tests. With respect to the results, performance engine 56 focuses on three different usability test results: customer satisfaction, task accomplishment, and response times. Customer satisfaction is whether or not the test customer was satisfied using the customer-centric interface. Performance engine 56 gathers customer satisfaction by asking the test customers a variety of questions regarding their experiences in interacting with the customer-centric interface such as how satisfied the test customer was in accomplishing the assigned tasks, how confident the test customer was about being correctly
routed, the level of agreement between the selected menu prompts and test customers' assigned tasks, and whether the test customers would want to use the customer-centric interface again. Performance engine 56 also determines a task accomplishment or call routing accuracy score. Task accomplishment measures whether a test customer successfully completes an assigned task and is based on a sequence of key presses necessary to navigate the customer-centric interface and accomplish the task.
Performance engine 56 determines if the test customers actually accomplished their assigned tasks. For example, if a test customer was assigned the task of using the customer-centric interface to inquire about a bill, the question is whether the test customer correctly navigated the customer-centric menu prompts in order to inquire about the bill. Performance engine 56 examines all the different menu prompts accessed by the test customers and compares the test customer key sequences with the correct key sequences in order to determine if the test customers accomplished the assigned tasks.
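The task-accomplishment check amounts to comparing each test customer's key presses against the correct key sequence for the assigned task. A minimal sketch, with hypothetical task names and key sequences:

```python
# Hypothetical correct key sequences per assigned task.
correct_sequences = {"bill inquiry": ["1", "3"],
                     "order service": ["2", "1"]}

def accomplished(task, key_presses):
    """True if the customer's key presses match the correct sequence."""
    return key_presses == correct_sequences[task]

def accomplishment_rate(results):
    """results: list of (assigned task, keys the customer pressed).
    Returns the fraction of assigned tasks that were accomplished."""
    hits = sum(accomplished(task, keys) for task, keys in results)
    return hits / len(results)

rate = accomplishment_rate([("bill inquiry", ["1", "3"]),
                            ("bill inquiry", ["1", "2"]),
                            ("order service", ["2", "1"])])
```

The resulting rate is the task accomplishment or call routing accuracy score for the test session.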
In addition to customer satisfaction and task accomplishment, performance engine 56 also calculates a response time or cumulative response time (CRT) for each customer-centric menu prompt accessed by the test customers. The response time indicates the amount of time a test customer spends interacting with each customer-centric menu prompt and the customer-centric interface. The response times reflect the amount of time the test customers listen to a menu prompt versus the amount of time it takes for the menu prompt to play in its entirety. The amount of time the test customers
spend listening to the menu prompt is not a very valuable number unless menu duration times are also taken into account. A menu duration time is the amount of time it takes for a menu prompt to play in its entirety. For instance, a menu prompt may have five different options to choose from and the menu duration time is the amount of time it takes for the menu prompt to play through all five options.
Performance engine 56 records a listening time for each test customer for each menu prompt. The listening time is the time the test customers actually spend listening to a menu prompt before making a selection. Performance engine 56 also has access to the menu duration times for all of the customer-centric menu prompts in the customer-centric interface. Performance engine 56 calculates a response time for a menu prompt, which is the difference between the listening time and the menu duration time, by subtracting the menu duration time from the listening time. For example, if the introductory menu prompt of the customer-centric interface requires 20 seconds to play in its entirety (menu duration time) and the test customer listens to the whole menu and then makes a selection, the test customer has a listening time of 20 seconds and receives a CRT score or response time of 0 (20 - 20 = 0). If the test customer only listens to part of the menu prompt, hears his choice and chooses an option before the whole menu plays, then the test customer receives a negative CRT score or response time. For instance, if the test customer chooses option three 15 seconds
(listening time) into the four-option, 20-second menu prompt, the test customer receives a CRT score or response time of "-5" (15 - 20 = -5). Conversely, the test customer has a response time of +15 if the test customer repeats the menu prompt after hearing it once and then chooses option three 15 seconds (35-second listening time) into the second playing of the menu (35 - 20 = 15).
A negative response time is good because the test customers spent less time in the customer-centric interface than they could have and a positive response time is bad because the test customers spent more time than they should have in the customer-centric interface. In addition to calculating response times for individual menu prompts, performance engine 56 may also calculate response times for entire tasks for each test customer by summing the menu duration times and the listening times for each menu prompt required to accomplish the task and subtracting the total menu duration time from the total listening time.
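The CRT arithmetic above reduces to a pair of subtractions, sketched here for illustration:

```python
def response_time(listening_time, menu_duration):
    """CRT for one menu prompt: listening time minus menu duration.
    Negative means the customer selected before the prompt finished."""
    return listening_time - menu_duration

def task_response_time(menus):
    """CRT for an entire task. menus: list of (listening time,
    menu duration) pairs for every prompt traversed for the task."""
    total_listening = sum(listen for listen, _ in menus)
    total_duration = sum(duration for _, duration in menus)
    return total_listening - total_duration
```

The worked examples from the text fall out directly: a full listen of a 20-second prompt scores 0, selecting at 15 seconds scores -5, and repeating the prompt before selecting scores +15.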
Once performance engine 56 has determined customer satisfaction, task accomplishment, and response times, performance engine 56 generates a performance matrix which charts customer satisfaction, task accomplishment, and response times for each test customer, each customer- centric menu prompt, and each task. The performance matrix allows for performance engine 56 to determine if any of the customer-centric menu prompts or tasks have unsatisfactory performance at step 94 by examining the combination of customer satisfaction, task accomplishment, and response times and thereby evaluating how well the customer-centric interface performs.
Ideally a customer-centric menu prompt and task have a high level of customer satisfaction, a negative or zero
response time, and a high rate of task accomplishment. For unsatisfactory performance, performance engine 56 looks for low customer satisfaction, low task completion, or a high positive response time. By charting the customer satisfaction, task accomplishment, and response times on the performance matrix, performance engine 56 can determine when one of the test results is not satisfactory.
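The step 94 check over the performance matrix can be sketched as a simple threshold test. The threshold values below are illustrative assumptions; the text does not specify numeric cutoffs.

```python
# Assumed cutoffs for illustration only.
SATISFACTION_FLOOR = 0.8    # minimum acceptable satisfaction score
ACCOMPLISHMENT_FLOOR = 0.8  # minimum acceptable task completion rate
RESPONSE_CEILING = 0.0      # positive response times are suspect

def unsatisfactory(satisfaction, accomplishment, response_time):
    """A prompt or task fails on low satisfaction, low task
    completion, or a high positive response time."""
    return (satisfaction < SATISFACTION_FLOOR
            or accomplishment < ACCOMPLISHMENT_FLOOR
            or response_time > RESPONSE_CEILING)

# Hypothetical performance matrix rows:
# task -> (satisfaction, accomplishment, response time in seconds)
performance_matrix = {
    "bill inquiry":  (0.9, 0.95, -3.0),
    "order service": (0.9, 0.90, +6.5),  # accomplished, but too slow
}

flagged = [task for task, row in performance_matrix.items()
           if unsatisfactory(*row)]
```

In this hypothetical matrix, "order service" is flagged despite good satisfaction and accomplishment, mirroring the positive-response-time example discussed below.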
If a customer-centric menu prompt or task has unsatisfactory performance at step 94, then at step 96 performance engine 56 selects the menu prompt or task, at step 98 determines the reason for the unsatisfactory performance, and at step 100 modifies the customer- centric menu prompt or task to correct for the unsatisfactory performance. For example, a task may have a high level of customer satisfaction and high rate of task accomplishment but a positive response time. The test customers are accomplishing the task and are satisfied when interacting with the customer-centric interface but are spending too much time interacting with the customer-centric interface as indicated by the positive response time. The positive response time is not good for the customer-centric interface because the customers are using unnecessary resources from the customer-centric interface in the form of too much time in accomplishing the task. By examining the menu prompts for the task, performance engine 56 determines that the terminology used in the menu prompts for the task is not the terminology used by the customers. Therefore, performance engine 56 alerts customer language engine 54 to the terminology problem and customer language engine
54 rewords the menu prompts for the task using the customers' own terminology.
Once performance engine 56 locates and corrects the problem, performance engine 56 determines if there are additional menu prompts or tasks that have unsatisfactory performance at step 102. If at step 102 there are additional menu prompts or tasks having unsatisfactory performance, then at step 104 performance engine 56 selects the next menu prompt or task having unsatisfactory performance and returns to step 98.
Performance engine 56 repeats steps 98, 100, 102, and 104 until there are no additional menu prompts or tasks at step 102 having unsatisfactory performance. When there are no additional menu prompts or tasks having unsatisfactory performance at step 102, the process returns to step 90 and performance engine 56 tests the customer-centric interface having the modified menu prompts or tasks. Performance engine 56 repeats steps 90, 92, 94, 96, 98, 100, 102, and 104 until there are no customer-centric menu prompts or tasks having unsatisfactory performance at step 94.
When there are no customer-centric menu prompts or tasks having unsatisfactory performance at step 94, at step 106 system 10 implements the customer-centric interface for use by the customers. As customers use the customer-centric interface, system 10 and performance engine 56 continually monitor the performance of the customer-centric interface checking for low customer satisfaction levels, low task completion rates, or high positive response times at step 108. When system 10 discovers an unsatisfactory post-implementation result such as those described above, system 10 determines the
cause of the problem and modifies the customer-centric interface to correct the problem at step 110. As long as the customer-centric interface is accessible by the customers, system 10 monitors the customer-centric interface performance and modifies the customer-centric interface to allow for customer-centric menu prompts that are worded in the terminology of the customers, that directly match the tasks that the customers are trying to accomplish, and that are ordered and grouped by customer task frequencies and the customers' perceptions of task relationships.
FIGURE 4 illustrates a block flow diagram of how collection engine 52, customer language engine 54, task engine 50, customer structure engine 58, and performance engine 56 of system 10 interact and interoperate to automatically create the customer-centric interface. In addition, FIGURE 4 also represents the various functions for collection engine 52, customer language engine 54, task engine 50, customer structure engine 58, and performance engine 56.
Collection engine 52 gathers customer intention information from the customer opening statements and includes customer task model 128 which includes the list of tasks for which the customers access and use the customer-centric interface. Customer language engine 54, task engine 50, and customer structure engine 58 perform their various functions by processing and manipulating the customer intention information and task list.
Customer language engine 54 develops customer-centric menu prompts for the customer-centric interface using the customers' own terminology. Customer language engine 54 analyzes the customers' language by analyzing
and tracking every word used by the customers in the customer opening statements to get a feel for how the customers refer to each of the tasks. Customer language engine 54 counts each word in each customer opening statement to determine which words the customers use the most and thereby recognize which of the customers' words are best to use in creating customer-centric menu prompts in the customers' own terminology.
In addition to creating customer-centric menu prompts using the customers' own terminology, in other embodiments of system 10 customer language engine 54 may also create customer-centric menu prompts using action specific object words taken from the customer opening statements. FIGURE 5 illustrates a flow diagram for creating customer-centric menu prompts utilizing action specific object words. Customer wordings of tasks in customer opening statements are generally in four different styles: action-object ("I need to order CALLNOTES"); action ("I need to make changes"); object ("I don't understand my bill"); and general ("I have some questions"). Menu prompts are typically worded in one of four styles: action specific object ("To order CALLNOTES press one"); specific object ("For CALLNOTES press two"); general object ("To order a service press three"); and action general object ("For all other questions press four").
The style of the menu prompt wording can have an effect on the performance of the menu prompt due to the customers' interaction with the menu prompt. Wording menu prompts as action specific object is typically the best way to word customer-centric menu prompts because upon
hearing an action specific object menu prompt, the customer generally knows that it is the menu prompt they want to select, and therefore response times decrease because customers do not have to repeat the menu prompts in order to make a selection. For example, if a customer calls wanting to order CALLNOTES and the second option in the six-option menu prompt is "To order CALLNOTES press two" then the customer will typically press two without listening to the rest of the menu prompts and therefore have a negative response time, high customer satisfaction, and a high task accomplishment rate.
In order to create customer-centric menu prompts using action specific object words, customer language engine 54 determines the action words and object words used by the customers. At step 132, customer language engine 54 analyzes the customer opening statements in customer opening statement database 32 in order to identify the action words and the object words used by the customers in their opening statements. In addition to identifying the action words and the object words, customer language engine 54 also determines which of the action words are specific action words and which of the object words are specific object words. For instance, "order" and "pay" are specific action words and "CALLNOTES" and "Call Waiting" are specific object words while "service" and "question" are not specific object words.
At step 134, customer language engine 54 saves the specific action words in specific action database 34 and the specific object words in specific object database 36. When saving the specific action words and the specific object words, customer language engine 54 identifies and
maintains the relationships between the specific action words and the specific object words by linking the specific action words with the specific object words that were used together by the customers as shown by arrows 199 in FIGURE 5. For example, for the customer opening statements of "I want to buy CALLNOTES" and "I want to inquire about my bill," "buy" and "inquire" are the specific action words and "CALLNOTES" and "bill" are the specific object words. When customer language engine 54 saves the respective specific action words and specific object words in databases 34 and 36, a link will be maintained between "buy" and "CALLNOTES" and between "inquire" and "bill." Maintaining how the customers use the action words and object words in databases 34 and 36 prevents erroneous combinations of specific action words and specific object words when creating customer-centric menu prompts. An example erroneously combined menu prompt is "To buy a bill press one" since the statement would not make sense to the customer. The linking of the specific action words with the specific object words which the customers used together allows for the formation of correct customer-centric menu prompts that make sense to the customers.
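The linking step can be sketched as extracting an (action, object) pair from each opening statement. The word lists below are tiny illustrative assumptions; a real lexicon would be far larger and would need phrase matching for multi-word objects such as "Call Waiting".

```python
# Assumed mini-lexicons for illustration only.
SPECIFIC_ACTIONS = {"buy", "order", "inquire", "pay", "purchase"}
SPECIFIC_OBJECTS = {"callnotes", "bill"}

def link_action_object(statement):
    """Return the (specific action, specific object) pair the customer
    used together in one opening statement, preserving their link."""
    words = statement.lower().replace(",", "").split()
    action = next((w for w in words if w in SPECIFIC_ACTIONS), None)
    obj = next((w for w in words if w in SPECIFIC_OBJECTS), None)
    return (action, obj)

links = [link_action_object(s) for s in
         ["I want to buy CALLNOTES",
          "I want to inquire about my bill"]]
```

Storing the pairs rather than the words in isolation is what blocks nonsense combinations like "To buy a bill press one".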
In addition to storing the specific action words and the specific object words in databases 34 and 36, customer language engine 54 also calculates a frequency of occurrence for each specific action word and each specific object word and stores the specific action words and the specific object words in databases 34 and 36 in accordance with the frequency of occurrence in descending frequency order. Therefore, the specific action words having the highest frequency of occurrence are stored at
the top of specific action database 34 and the specific object words having the highest frequency of occurrence are stored at the top of specific object database 36. Once customer language engine 54 determines the frequency of occurrence and stores the specific action words and the specific object words, at step 136 customer language engine 54 generalizes the specific action words into general groups of specific action words and generalizes the specific object words into general groups of specific object words. Customer language engine 54 examines the specific action words and the specific object words for commonalities and then groups the specific action words and the specific object words together in groups based on the commonalities. For example, the specific action words of "buy," "order," and "purchase" all share the commonality of acquiring something and may be grouped together. The specific object words of "CALLNOTES" and "Call Waiting" share the commonality of being residential telephone services and therefore may be grouped together. Customer language engine 54 assigns names for each of the general groups of specific action words and specific object words and saves the general action words in general action database 38 and the general object words in general object database 40 at step 138.
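The generalization at step 136 can be sketched with a hand-built commonality mapping. The group names and memberships below are assumptions standing in for whatever similarity analysis the engine actually uses.

```python
# Assumed general groups: name -> member specific words.
GENERAL_ACTIONS = {"acquire": {"buy", "order", "purchase"},
                   "change":  {"change", "switch", "update"}}
GENERAL_OBJECTS = {"residential services": {"CALLNOTES", "Call Waiting"}}

def generalize(word, groups):
    """Return the general group name for a specific word, or None if
    the word has not been generalized into any group."""
    for name, members in groups.items():
        if word in members:
            return name
    return None
```

The group names ("acquire", "residential services") are what would be stored in general action database 38 and general object database 40 and used to word upper-level menu prompts.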
Having specific action database 34, specific object database 36, general action database 38, and general object database 40 gives customer language engine 54 a rich resource for locating customer terminology when creating customer-centric menu prompts. For creating upper level hierarchical menu prompts, customer language engine 54 uses words from general
action database 38 and general object database 40. To create action specific object menu prompts in the words of the customers for lower level hierarchical menu prompts, customer language engine 54 uses words from specific action database 34 and specific object database 36. Because the specific action words and the specific object words are ordered by frequency in databases 34 and 36, customer language engine 54 can create action specific object menu prompts using the customer terminology most often used by the customers.
While customer language engine 54 determines the customer terminology and wording to use for the customer- centric menu prompts, task engine 50 determines the frequency of occurrence for the tasks that the customers call about and also determines which tasks will be included in the customer-centric interface. Generally the customer opening statements are from more than one call center so when determining the frequency of occurrence for each task, task engine 50 takes into account the volume of calls into each call center when constructing the task frequency table so that the frequency results are accurate. Frequency of occurrence data must be weighted so that a call center receiving three million calls does not have the same weight as a call center receiving ten million calls.
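The call-volume weighting described above can be sketched as follows: each call center's task frequencies are weighted by that center's share of total call volume before being combined. The numbers are illustrative only.

```python
def weighted_frequencies(centers):
    """centers: list of (call volume, {task: local frequency}) pairs.
    Returns combined task frequencies weighted by call volume."""
    total_volume = sum(volume for volume, _ in centers)
    combined = {}
    for volume, freqs in centers:
        weight = volume / total_volume
        for task, freq in freqs.items():
            combined[task] = combined.get(task, 0.0) + freq * weight
    return combined

# A 3-million-call center must not carry the same weight as a
# 10-million-call center.
combined = weighted_frequencies([
    (3_000_000, {"billing": 0.20, "repair": 0.10}),
    (10_000_000, {"billing": 0.07, "repair": 0.10}),
])
```

Here the combined billing frequency is 10%, much closer to the large center's 7% than to the small center's 20%, which is exactly the effect the weighting is meant to produce.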
Once task engine 50 determines the tasks to be included in the customer-centric interface including all tasks down to 1% frequency or to a percentage coverage, customer structure engine 58 elicits customer perceived task relationships for the included tasks as described above. Utilizing the customer perceived task relationships, customer structure engine 58 creates
interface structure for the customer-centric interface and represents the interface structure both as a numerical data matrix and a graphical representation.
At box 130, customer language engine 54, task engine 50, and customer structure engine 58 work together to automatically create the customer-centric interface. Customer language engine 54 contributes the wording of the customer-centric menu prompts in the customers' own terminology for the customer-centric interface. Task engine 50 provides the tasks that are to be included in the customer-centric interface, the ordering of the groups of tasks in the menu prompts, and the ordering of the tasks within the groups of tasks. Customer structure engine 58 provides the interface structure or grouping of tasks for the customer-centric interface. After the automated creation of the customer-centric interface, performance engine 56 performs usability tests on the customer-centric interface as described above and evaluates and reconfigures the customer-centric interface based on customer satisfaction, task accomplishment, and response times during both the testing phase and implementation.
As described above, when creating a customer-centric interface, the customers' opening statements need to be categorized in order to determine what tasks the customers are calling about. Therefore, FIGURE 7 depicts a flow diagram of a method for the automated categorization of statements. The method begins at step 180 and at step 182 a user selects the statements to be categorized. Before system 10 can automatically categorize the statements, the user must have one or more statements to categorize and load the list of statements
into system 10. The statements may be opening statements as defined above, written statements from a training session, survey responses, search statements from a web site or pop-up window, statements evaluating a customer's experience and satisfaction in a test environment, or any other appropriate response to an open-ended question that can be analyzed using content text analysis.
Typically, the statements are recorded, transcribed, configured in a format that can be understood by system 10, and then placed in a text file which may be stored in database 32. Because there may be more than one list of statements and therefore more than one text file, the user chooses what list of statements to categorize by selecting a text file using open file button 144. Open file button 144 allows the user to view all the available files containing statements and then select the file containing the list of statements to be categorized. Once the list of statements has been selected, system 10 reads the list of statements from database 32. After the selection of the statements to be categorized, at step 184 the user decides whether to use rule engine 48 to create new rules to categorize the statements or use existing rules already stored in database 33 to categorize the statements. If at step 184 the user decides to create new rules, then at step 186 the user accesses rule engine 48 to create new rules. New rules are desirable when there have been new products or services recently made available to the customers and the existing rules do not reflect these new products or services or when the statements are from a new domain not covered by the existing rules, such as survey responses
where all the existing rules pertain to statements from customer service call centers.
The user utilizes rule engine 48 and rule creation screen 160 to create new rules and then edit the newly created rules. Creation of the rules involves the use of four include boxes 162, 163, 165, and 167 and two exclude boxes 169 and 171. In other embodiments, there may be more or fewer than four include boxes and more or fewer than two exclude boxes. The user inputs combinations of words and text strings that should be included in the statement in order for the statement to satisfy the rule in include boxes 162, 163, 165, and 167, and combinations of words and text strings that should not be in the statement in order for the statement to satisfy the rule in exclude boxes 169 and 171. Each rule is also associated with a particular category label which the user enters in category label box 164.
For example, a user may want to create a new rule to categorize statements with respect to the late payment of customer bills. Therefore "late" may be entered in include box 162, "bill" may be entered in include box 163, "paid" may be entered in exclude box 169, and "labill" may be entered in category label box 164. This allows for a rule that finds statements that contain the words "late" and "bill" but do not contain the word
"paid." If a statement contains the words "late" and "bill" and does not include the word "paid," then the statement would be categorized with the category label "labill," meaning the purpose of the statement is to inquire about a late bill that has not yet been paid. Once a user enters the desired words or text strings in include boxes 162, 163, 165, and 167 and
exclude boxes 169 and 171, the user selects apply rule button 166 and the rule appears in rule screen 170 and is available to be edited and used to categorize the statements. The user may then repeat the above process to create as many rules as needed. In addition, other embodiments allow for rules where a noun in the singular form in include box 162 includes all forms of the noun (singular and plural) and a verb in the present tense in include box 162 includes all tenses and forms of that verb. This allows for a higher hit rate when applying the rules to the statements, since one rule is satisfied by a statement containing any form of the noun or verb, and saves time because multiple rules are not required for each form of the noun or verb. After the creation of the rules, at step 188 the user groups the rules into sets of rules. There may be different sets of rules for different applications or divisions of a company. For example, the marketing division may have a set of rules to categorize a list of statements while the product development division may have a different set of rules to categorize the same list of statements. This is because different users may be interested in different terms with respect to a list of statements. In addition, different sets of rules may also be necessary for different kinds of statements or statements from different domains. A user may use one set of rules to categorize opening statements from a call center and a different set of rules to categorize survey responses from a web survey questionnaire. Therefore, rule engine 48 allows for the rules to be grouped into different sets of rules with the name for each set of rules displayed in set box 168 and the sets of rules
saved in database 33. In addition, the user may group only newly created rules together in a group or group together newly created rules with existing rules when creating sets of rules. At step 190, the rules must be arranged in a rule hierarchy that enables performance engine 56 to apply the rules in the correct order, thereby preventing inconsistent results. Typically the rule hierarchy runs from specific rules to general rules but may follow any other appropriate ordering. For a specific-to-general rule hierarchy, performance engine 56 applies the most specific rules to a statement first and then applies the more general rules if the statement does not satisfy any of the specific rules.
For example, a user wants to find both "phone" and "telephone" separately. A rule specifying "telephone" needs to be above the rule specifying "phone" in the rule hierarchy so that the "telephone" rule is applied to a statement before the "phone" rule is applied to a statement. If the "phone" rule is applied before the "telephone" rule, then when performance engine 56 locates a statement containing the word "telephone," performance engine 56 will find "phone" in "telephone" and categorize the statement with the "phone" category label instead of the "telephone" category label and the statement will be incorrectly categorized. But if the "telephone" rule is placed above the "phone" rule in the rule hierarchy, then performance engine 56 will find "telephone" in the statement, categorize that statement with the "telephone" category label and move on to the next statement without applying the "phone" rule. Therefore, the most specific
rules need to be placed at the top of the rule hierarchy and the most general rules need to be placed at the very end or bottom of the rule hierarchy with a gradual gradient from specific to general in-between. Once the rules have been grouped and ordered in a correct rule hierarchy, rule engine 48 stores the newly created rules, sets of rules, and rule hierarchy in database 33 at step 192 so that users and performance engine 56 may later access the rules. After rule engine 48 saves the rules, at step 194 the user selects the rule or the set of rules that the user wants to have performance engine 56 apply to the list of statements. If at step 184 the user decides to not create any new rules but instead to use existing rules, then at step 196 the user selects and edits rules from the lists of existing rules stored in database 33. Existing rules include rules that have already been created and saved by the process outlined above at steps 186 through 194. If a user has already created a set of rules that has worked well in the past in categorizing statements, then the user may want to use these rules instead of creating new rules. The user selects from the list of rules in set box 168 and the rules from the selected set of rules appear in rule screen 170. Once the rules appear in rule screen 170, the user may edit an existing rule such as rule 173 by selecting it in rule screen 170 and clicking edit rule button 156. The rule then appears in rule creation screen 160 and the user may modify include boxes 162, 163, 165, and 167 and exclude boxes 169 and 171. Once the user has a set of rules for performance engine 56 to apply to the list of statements, the process continues to step 198.
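The effect of rule ordering described in the "phone"/"telephone" example can be sketched in a few lines. The helper below is hypothetical and uses simple substring rules, with the first satisfied rule winning, which is why the more specific "telephone" rule must precede the more general "phone" rule:

```python
# Ordered (required_term, label) pairs: most specific rule first.
rules_in_order = [("telephone", "telephone"), ("phone", "phone")]

def categorize(statement, rules):
    """Apply rules top-down; the first rule the statement satisfies wins."""
    for term, label in rules:
        if term in statement:
            return label
    return "catch-all"

# Correct order: "telephone" is matched before "phone" can fire.
print(categorize("my telephone is broken", rules_in_order))  # telephone

# Reversed order: "phone" is found inside "telephone" and mislabels it.
print(categorize("my telephone is broken", list(reversed(rules_in_order))))  # phone
```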
At step 198, the user selects run button 148 and performance engine 56 applies the selected rules to the list of statements in order to determine a category label for each statement. Performance engine 56 cycles through the list of statements one statement at a time, applying the rules to a statement until the statement satisfies a rule. Performance engine 56 begins applying the rules to the list of statements at step 200 by applying the first rule in the rule hierarchy to the first statement in the list of statements. When performance engine 56 applies the rules to the statements, performance engine 56 strips the punctuation off the statements so that "bill," and "bill" do not appear as two different text strings.
At step 202, performance engine 56 determines if the statement satisfies the first rule. Performance engine 56 determines if a statement satisfies a rule by searching the statement for the presence of particular text string combinations or words and the exclusion of other text string combinations or words. For instance, rule 173 is the highest rule in the rule hierarchy shown in rule screen 170. Therefore, performance engine 56 searches the first statement to see if the text string "dsl" is present in the first statement. If "dsl" is not present in the first statement, then the first statement does not satisfy rule 173. If the statement does not satisfy the rule, then at step 204 performance engine 56 checks to see if there are additional rules in the set of rules to apply to the statement. If there are additional rules to apply to the statement, then at step 206 performance engine 56 applies the next rule in the rule hierarchy to the statement and the process returns to step 202 where performance engine 56 determines if the
statement satisfies this rule. Steps 202, 204, and 206 repeat until either the statement satisfies a rule at step 202 or until the statement does not satisfy any of the rules at step 202 and there are no more rules to apply to the statement at step 204.
If the statement satisfies a rule at step 202, then at step 208 performance engine 56 assigns the category label associated with the satisfied rule to the statement. So if the statement contained the text string "dsl," then performance engine 56 assigns the "dsl" category label to the statement. But if the statement does not satisfy any of the rules at step 202 and there are no more rules left to apply at step 204, then performance engine 56 applies a catch-all rule to the statement and labels the statement with the catch-all category label at step 210. The catch-all rule and category label is designed for statements that do not fit within any of the other rules. Performance engine 56 labels the statement as catch-all so that the statement may be examined at a later date to determine if the statement really does not satisfy any of the rules or if there is a malfunction of system 10 which resulted in the statement not satisfying any of the rules. A high number of catch-all category labels may indicate that system 10, rule engine 48, or performance engine 56 are not operating correctly and require attention.
After performance engine 56 assigns a category label to the statement at either step 208 or step 210, at step 212 performance engine 56 checks to see if there are additional statements in the list of statements that require categorization. If there are additional statements to be categorized at step 212, then at step
214 performance engine 56 selects the next statement to be categorized and applies the first rule in the rule hierarchy to the statement and then determines if the statement satisfies the rule at step 202. Performance engine 56 repeats steps 202 - 212 until performance engine 56 determines at step 212 that there are no additional statements to be categorized.
For instance, a statement to be categorized is "I cannot access my email account." Performance engine 56 applies the first rule in rule screen 170, rule 173, to the statement. Performance engine 56 applies rule 173 by searching the statement "I cannot access my email account" for the text string "dsl." Performance engine 56 determines that the statement does not contain the text string "dsl" and therefore the statement does not satisfy rule 173. Performance engine 56 then applies each rule below rule 173 to the statement one rule at a time until the statement satisfies a rule. When performance engine 56 gets to rule 175 and applies rule 175 to the statement, performance engine 56 determines that the statement includes the text string "email" and does not include the text strings "bill" and "can't comm." Therefore, the statement satisfies rule 175 and performance engine 56 assigns category label "email" to the statement.
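The categorization loop of steps 200 through 210, including the catch-all rule, can be sketched as follows. The rule contents mirror the rule 173 ("dsl") and rule 175 ("email") examples from the text, but the function and variable names are hypothetical:

```python
# Each rule is (includes, excludes, label), applied top-down in hierarchy
# order; the first satisfied rule wins, and unmatched statements receive
# the catch-all label for later review.
rules = [
    (["dsl"], [], "dsl"),                           # cf. rule 173
    (["email"], ["bill", "can't comm"], "email"),   # cf. rule 175
]

def label_statement(statement):
    text = statement.lower()
    for includes, excludes, label in rules:
        if (all(t in text for t in includes)
                and not any(t in text for t in excludes)):
            return label
    return "catch-all"

print(label_statement("I cannot access my email account"))  # email
print(label_statement("My dsl line is down"))               # dsl
print(label_statement("Where is my refund?"))               # catch-all
```

A high count of "catch-all" results would, as the text notes, suggest the rule set or the engine needs attention.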
When there are no additional statements to be categorized, performance engine 56 creates an output file at step 216 and the process ends at step 218. The output file includes all the statements from the list of statements and each corresponding category label. An example output file with three statements is shown in Table 1.
The output file allows system 10 or a user to determine the frequency of occurrence for each category label and therefore determine which categories customers are calling about the most. Knowing which categories the customers are calling about the most allows for a customer-centric interface design that takes into account the customers' way of thinking and is therefore easier for the customer to use. An interface design that is easier for the customer to use allows customers to accomplish their tasks in less time and in a more efficient manner, resulting in fewer company resources being used in servicing the customers and therefore lower costs for the company.
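Determining the frequency of occurrence for each category label from the output file is a simple tally. A minimal sketch, assuming the output file has been read into (statement, label) rows (the rows here are illustrative, not from the actual Table 1):

```python
from collections import Counter

# Tally category labels from output-file rows to see which categories
# customers are calling about the most.
labeled = [
    ("I cannot access my email account", "email"),
    ("My dsl line keeps dropping", "dsl"),
    ("Email attachments will not open", "email"),
]
frequency = Counter(label for _, label in labeled)
print(frequency.most_common())  # [('email', 2), ('dsl', 1)]
```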
In order to make the customer-centric interface accessible and easy to use for the customers, the customer-centric interface needs to be continually tested and modified using both actual and test data. FIGURE 10 depicts a flow diagram of a method for the automated analysis of performance data. The method begins at step 310 and at step 312 a user or an operator of system 10 selects the performance data to be analyzed. System 10 allows for up to three different log files to be analyzed at one time. In other embodiments, system 10 may analyze more than three log files at the same time. Each time an IVR study or test occurs, a log file containing
performance data from that test is created. So if there are three IVR tests in one day - one in the morning, one in the afternoon, and one in the evening - then there will be three log files at the end of the day. System 10 and GUI 46 allow for simultaneous analysis of the three log files at the same time to allow for more efficient operation of system 10.
To analyze more than one log file at a time, the user selects the log file to be analyzed in input windows 230, 232, and 234. If only one log file is to be analyzed, the user selects the log file in input window 230. If more than one log file is to be analyzed, the first log file is selected in input window 230, the second log file is selected in input window 232, and the third log file is selected in input window 234. When selecting the log files to be analyzed, the user may also want to select the location to save the output file which can be done in output window 236.
Once the log files to be analyzed have been selected, the user presses process button 238 and system 10 begins to automatically analyze the performance data contained in the log file. At step 314, system 10 selects the first performance data set in the log file to analyze. A performance data set is the recorded data and information regarding one specific participant and one specific task for that participant. Generally in an IVR test, a participant is given four different tasks to accomplish such as "order DSL service" or "change your billing address." For example, a performance data set
would contain the recorded information for participant A and the task of ordering DSL service.
An example log file 250 including two example performance data sets 252 and 254 is shown in FIGURE 10. A performance data set includes such information as the start time of the task, each menu accessed by the participant within the IVR, the time each menu was accessed, how long the participant listened to each menu, the key the participant pressed in response to the menu, and the total time the participant interacted with the IVR system.
Performance data sets are separated in a log file by start lines and end lines. Performance data set 252 includes start line 251 and end line 263 while performance data set 254 includes start line 265 and end line 277. Start lines 251 and 265 include the date of the IVR test, what IVR system is being tested, and the time that the first IVR menu begins to play. In start line 251, the date of the test is April 5, 2002, the IVR system being tested is Yahoo2 - Version B, and the first menu began playing at 8:23:53 AM. End lines 263 and 277 include total listening times 276 and 298, which are the total time that the participant spends listening to the menus and interacting with the IVR system. Performance data set 252 has total listening time 276 of 83 seconds and performance data set 254 has total listening time 298 of 64 seconds. Each line in-between start lines 251 and 265 and end lines 263 and 277 provides information regarding various submenus within the IVR accessed by the participant. For performance data set 252 and line 253, BMainMenu was accessed at 8:23:53 AM, the participant listened to BMainMenu for 30 seconds (listening time 256),
pressed the "2" key (key 258), and BMainMenu stopped playing at 8:24:23 AM. Lines 255, 257, 259, and 261 supply the same type of information for each respective menu. Key 258 in line 261 is "TO" which indicates that the participant never made a selection in response to the "B22110" menu and therefore the participant was timed out of the menu.
Once system 10 has selected the performance data set to be analyzed, task engine 50 determines a task code and task for the selected data set at step 316. The performance data sets do not contain a participant number identifying the participant or the task. But the participant number is stored in database 33 in a log-in call record file. When the participants access the IVR simulation application, system 10 stores in database 33 each participant's participant number and the tasks they are to accomplish. Participants are generally given more than one task to accomplish and the participants are to attempt the tasks in a pre-specified order and the log files reflect this specified order of tasks. For example, if each participant is given four tasks to accomplish, then the log file includes four performance data sets for each participant where the four performance data sets for each participant are grouped together in the same sequence as the participant attempted each task. So if participant A was given the four tasks of "order DSL service," "change your billing address," "inquire about a bill payment," and "add call-forwarding," the log file has the four performance data sets for participant A one after the other in the same order as participant A was specified to attempt the tasks. Therefore, task engine 50 locates the participant number in database 33,
determines what tasks the participant was supposed to accomplish and the order in which the tasks were to be accomplished, and determines which performance data sets correlate with which participants and tasks. After task engine 50 determines the correct task for the selected performance data set, at step 318 task engine 50 retrieves from database 33 the correct key sequence for the corresponding task for the selected performance data set. Each task has a distinct correct key sequence so that, for example, the correct key sequence for "ordering DSL service" is different from the correct key sequence for "changing your billing address." The correct key sequence is the sequence of keys pressed in response to the IVR menu prompts that allows the participant to navigate the IVR menus and successfully accomplish the assigned task. For instance, the task of "ordering DSL service" requires the participant to navigate through and listen to three different menus in order to order DSL service. After the first menu, the participant needs to press the "3" key which sends the participant to the second menu. After the second menu the participant needs to press the "2" key which sends the participant to the third menu. After the third menu the participant needs to press the "4" key after which the participant has ordered DSL service and successfully completed the task. Therefore the correct key sequence for the task of "order DSL service" is "3, 2, 4."
At step 320, performance engine 56, having the correct key sequence from task engine 50, searches the selected performance data set for the correct key sequence. Performance engine 56 searches the last few keys 280 for the correct key sequence. Performance
engine 56 starts with the line right above end line 277 and searches up through lines 275, 273, 271, 269, and 267 to start line 265 looking for the correct key sequence. Performance engine 56 examines the end of the selected performance data set because that is the only location where the correct key sequence can appear: when the participant enters the correct key sequence, the task is accomplished, the performance data set ends, and the participant moves on to the next assigned task. Therefore, once the participant enters the last key of the correct key sequence, the next line in the performance data set is end line 277 and a new performance data set begins.
Performance engine 56 compares the recorded key sequence entered by the participant with the correct key sequence at step 322. For example, performance data set 254 is for the task of "changing your billing address" and the task has a correct key sequence of "2, 2, 1, 1, 5." Performance engine 56 compares the correct key sequence with the recorded key sequence in performance data set 254 beginning with line 275 which has "5" as key 280. Performance engine 56 then moves up to line 273 to look for "1" as key 280 and finds "1" as key 280. Performance engine 56 repeats this process for lines 271, 269, and 267 until a line does not have the correct key 280 or until performance engine 56 determines that the recorded key sequence of performance data set 254 is the same as the correct key sequence.
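The success test of steps 320 through 322 reduces to checking whether the recorded key sequence ends with the correct key sequence, since the correct sequence can only appear at the end of a data set. A minimal sketch with hypothetical names:

```python
# A task passes when the recorded keys end with the correct key sequence.
# "TO" marks a menu time-out and can never be part of a correct sequence.
def task_accomplished(recorded_keys, correct_sequence):
    if len(recorded_keys) < len(correct_sequence):
        return False
    return recorded_keys[-len(correct_sequence):] == correct_sequence

# The "changing your billing address" example, correct sequence "2,2,1,1,5":
print(task_accomplished(["2", "2", "1", "1", "5"], ["2", "2", "1", "1", "5"]))  # True
print(task_accomplished(["2", "1", "TO"], ["2", "2", "1", "1", "5"]))           # False
```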
Once performance engine 56 compares the correct key sequence with the recorded key sequence for the selected performance data set at step 322, at step 324 performance engine 56 determines if the task for the selected
performance data set was successfully accomplished. The task is successfully accomplished if the recorded key sequence includes the correct key sequence. The task is not successfully accomplished or is a failure if the recorded key sequence does not include the correct key sequence. If the task is not accomplished, then at step 326 performance engine 56 marks the selected performance data set as a failure. If the task is successfully accomplished, then at step 328 performance engine 56 marks the selected performance data set as a success or as passing. For example, performance data set 252 timed out ("TO") in line 261 because the participant made no selection and therefore performance data set 252 cannot have the correct key sequence and performance engine 56 marks performance data set 252 as failing. Determining whether the selected performance data set accomplished the task allows for an objective performance measure and provides a call-routing accuracy.
In addition to call-routing accuracy, system 10 also provides for another objective performance measure - the amount of time the participant listens to each IVR menu and the total amount of time spent attempting to accomplish or accomplishing the assigned task. The amount of time the participant spends listening to the menu is not a very valuable number unless menu duration times are also taken into account. A menu duration time is the amount of time it takes for a menu to play in its entirety. For instance, a menu may have five different options to choose from and the menu duration time is the amount of time it takes for the menu to play through all five options.
At step 330, performance engine 56 obtains the menu duration time from database 33 for the first menu in the selected performance data set. Performance engine 56 also obtains the listening time for the first menu in the selected performance data set. The listening time is the time a participant actually spends listening to a menu before making a selection. For instance, performance data set 254 contains the first menu BMainMenu that has listening time 278 of 30 seconds (line 267). From database 33, performance engine 56 retrieves that menu BMainMenu has a menu duration time of 30 seconds. Once performance engine 56 obtains both the listening time and the menu duration time, performance engine 56 calculates the response time or the cumulative response time (CRT) for the first menu at step 332. The response time is the difference between the listening time and the menu duration time. Performance engine 56 calculates the response time by subtracting the menu duration time from the listening time. For example, if the main menu of the IVR is 20 seconds in length, and the participant listens to the whole menu and then makes a selection, the participant has a listening time of 20 seconds and receives a CRT score or response time of 0 (20 - 20 = 0). If the participant only listens to part of a menu, hears their choice and chooses an option before the whole menu plays, then the participant receives a negative CRT score or response time. For instance, if the participant chooses option three 15 seconds (listening time) into a four-option, 20 second menu, the participant receives a CRT score or response time of "-5" (15 - 20 = -5). Conversely, the participant has a response time of +15 if the participant were to repeat
the menu after hearing it once, and then choose option three 15 seconds (35 second listening time) into the second playing of the menu (35 - 20 = 15). For performance data set 254 and line 267, the participant has a response time or CRT score of 0 because the participant has a listening time of 30 seconds and the BMainMenu menu has a menu duration time of 30 seconds (30 - 30 = 0).
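The CRT calculation above is a single subtraction, sketched here with the three worked values from the text (the function name is hypothetical):

```python
# Response time (CRT) = listening time - menu duration time.
# 0 = chose as the menu ended; negative = chose early; positive = replays.
def response_time(listening_time, menu_duration):
    return listening_time - menu_duration

print(response_time(20, 20))  # 0   (listened to the whole menu, then chose)
print(response_time(15, 20))  # -5  (chose before the menu finished playing)
print(response_time(35, 20))  # 15  (repeated the menu once before choosing)
```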
After the calculation of the response time for the first menu, performance engine 56 at step 334 determines if the selected performance data set has additional menus for which a response time needs to be calculated. If there are additional menus within the selected performance data set at step 334, then at step 336 performance engine 56 obtains the menu duration time from database 33 for the next menu and the listening time for the next menu in the same manner as performance engine 56 obtained the menu duration time and listening time for the first menu at step 330. So for performance data set 254, performance engine 56 obtains the menu duration time and listening time for line 269 and menu "B20." Once performance engine 56 obtains the menu duration time and the listening time for the next menu, at step 338 performance engine 56 calculates the response time for the next menu in the same manner as described above at step 332. The method then returns to step 334 where performance engine 56 determines if the selected performance data set has additional menus that have not yet been analyzed. Steps 334, 336, and 338 repeat until there are no additional menus to be analyzed within the selected performance data.
If there are no additional menus within the selected performance data set at step 334, then at step 340 performance engine 56 calculates the total response time for the selected performance data set. The total response time is the difference between the total listening time and the total menu duration time. Performance engine 56 calculates the total response time by first summing the menu duration times and the listening times for each menu within the selected performance data set. Once performance engine 56 has both a total menu duration time and a total listening time, performance engine 56 calculates the total response time for the selected performance data set by subtracting the total menu duration time from the total listening time. A negative total response time indicates that less time was used than required to accomplish the task, a zero response time indicates that the exact amount of time was used by the participant to accomplish the task, and a positive response time indicates that more time was used than required to accomplish the task. For instance, performance data set 254 has a total listening time 298 of 64 seconds and a total menu duration time of 75 seconds. Therefore, performance data set 254 has a total response time of -11 seconds (64 - 75 = -11). Once performance engine 56 calculates the total response time for the selected performance data set, at step 342 system 10 determines if there are additional performance data sets within the log file to be analyzed. If at step 342 there are additional performance data sets, then system 10 selects the next performance data set at step 344 and the method returns to step 316 and repeats as described above until there are no additional
performance data sets in the log file or files to be analyzed at step 342.
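The total response time of step 340 sums both columns first and then subtracts, matching the data set 254 example (64 seconds listened against 75 seconds of menu audio). The per-menu values below are hypothetical; only their totals come from the text:

```python
# Total response time = total listening time - total menu duration time.
listening_times = [30, 10, 8, 6, 10]   # hypothetical per-menu values, sum = 64
menu_durations  = [30, 15, 10, 8, 12]  # hypothetical per-menu values, sum = 75
total_response = sum(listening_times) - sum(menu_durations)
print(total_response)  # -11 (negative: less time used than the menus require)
```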
When there are no additional performance data sets to be analyzed at step 342, system 10 and performance engine 56 generate an output file at step 346 and the method ends at step 348. The output file is similar in structure to the log file and performance data and is sorted by participant and sequence of task. The output file includes all the information in the log file as well as additional information such as the participant number, the assigned task, the IVR system used by the participant, the response time for each menu, the total response time, and whether the task was successfully accomplished. The output file may also contain the correct key sequence for each performance data set. The output file allows a user of system 10 to determine which IVR menu and tasks may need to be redesigned based on high positive response times or failures to accomplish tasks. For example, a performance data set for a particular task that was successfully accomplished but has very high response times may indicate that the menus need to be redesigned or reworded because although the participants accomplished the task, they had to listen to the menus several times before being able to make a selection.
In addition to the output file, GUI 46 has an additional feature that allows a user of system 10 to quickly determine the reliability of IVR test results. Summary window 240 allows the user to quickly determine the pass/fail results for task accomplishment for each participant. Because some participants may not take the IVR test seriously and others may be taking the test only to be paid, not all of the participants actually attempt to accomplish the assigned tasks. A participant intentionally failing all assigned tasks skews the overall test results and affects the analysis of the IVR system. A participant failing all of their assigned tasks is a good indication that the participant did not really try and that the associated performance data should be ignored when analyzing the output file. Summary window 240 allows the user to quickly peruse each participant's pass/fail results and call-routing accuracy without having to examine the output file and therefore determine which performance data should be disregarded and which tasks need to be tested again.
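The screening described above, flagging participants who failed every assigned task so their data can be excluded, can be sketched as follows (the data structure and participant names are hypothetical):

```python
# Flag participants whose every task failed; their performance data is
# likely unreliable and can be disregarded during analysis.
results = {
    "participant_A": ["pass", "fail", "pass", "pass"],
    "participant_B": ["fail", "fail", "fail", "fail"],
}
suspect = [p for p, outcomes in results.items()
           if all(o == "fail" for o in outcomes)]
print(suspect)  # ['participant_B']
```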
The call-routing and response time results of the IVR usability test yield important information that can be used in the further development and refinement of IVR systems. Based on these measures, companies can select IVR designs associated with the best objective performance and usability score and have an IVR system that is efficient and satisfactory to the customers.
Further enabling a customer-centric interface design is the ability to tailor the persona of the IVR system to each individual customer. Referring now to FIGURES 11 through 13, a method for conducting a dialog exchange between a user and an IVR system 12 is generally depicted. The method preferably enables an operator of a customer service call center, for example, to achieve, among other benefits, greater numbers of favorable responses to system prompts by matching the active persona of the IVR system 12 to one or more personality traits or characteristics of a current caller into the call center.
In general, method 360 preferably identifies one or more personality traits or characteristics of a caller or user and activates an IVR system persona likely to put the user at ease during the user's interaction with the IVR system 12, elicit desirable responses to IVR system 12 prompts, such as sales prompts, as well as achieve other benefits.
Method 360 may be implemented in a variety of ways. For example, method 360 may be implemented in the form of a program of instructions storable on and readable or executable from one or more computer readable media such as floppy disks, CD-ROM, HDD devices, FLASH memory, etc. Alternatively, method 360 may be implemented in one or more ASICs (application-specific integrated circuits). In a further embodiment, method 360 may be implemented using both ASICs and computer readable media. Other methods of enabling method 360 to be stored and/or executed by a computer system, such as system 18, are contemplated and considered within the scope of the present invention. Other embodiments of the invention also include computer-usable media encoding logic such as computer instructions for performing the operations of the invention. Such computer-usable media may include, without limitation, storage media such as floppy disks, hard disks, CD-ROMs, read-only memory, and random access memory; as well as communications media such as wires, optical fibers, microwaves, radio waves, and other electromagnetic or optical carriers. The control logic may also be referred to as a program product. Specifically referring to FIGURE 11, method 360 begins at 362 where IVR system 12 is preferably
initialized. Upon initialization of IVR system 12 at 362, method 360 preferably proceeds to 364.
At 364, method 360 preferably remains or loops in a wait-state where a call from a user may be awaited. As with many computer-based systems, IVR system 12 may perform additional tasks, i.e., multi-task, while in a wait-state at 364. In other words, IVR system 12 may perform one or more other computing or data processing functions while awaiting an incoming call at 364. Method 360 preferably maintains IVR system 12 in a wait-state at 364 while there is no call detected or being received on communications link 16. As with many computer-based systems, one or more escape routines may be run alongside or in conjunction with method 360 which will enable a system administrator or other IVR system 12 operator to interrupt method 360 and thereby free up one or more resources of IVR system 12.
Once an incoming call is detected or being received at 364, method 360 preferably proceeds to 366 where a communication connection between IVR system 12 and the user's communication device 17 may be established. User communication device 17 is preferably operable to allow a user to submit voice, touch-tone or other responses to prompts communicated from IVR system 12. Examples of user communication devices include, but are not limited to, telephones, mobile phones, PDAs (personal digital assistants), personal computers, portable computers, etc.
As mentioned above, a user may contact IVR system 12 via communications link 16. Establishing a communications connection by IVR system 12 can include initiating a program or software sequence operable to accept an incoming call from a calling user.
Alternatively, IVR system 12 may include functionality operable to permit IVR system 12 to initiate contact with one or more users. For example, IVR system 12 may be configured with auto-dialer type capabilities. Other methods of establishing a communication connection between IVR system 12 and a user communication device 17 are contemplated within the scope and spirit of the present invention.
In one embodiment of method 360, IVR system 12 may be configured to evaluate or identify one or more characteristics of the call or communication connection from the current user at 368. For example, if the user dialed into a specific one of a plurality of IVR system 12 access numbers, the specific number dialed might be associated with users from a specific geographic region, a specific service, sale, lease or use of a specific product, etc. In addition, IVR system 12 may also be configured to determine whether the user is calling during a holiday, at a particular time of day, etc. IVR system 12 may be further configured with automatic number identification (ANI), enabling IVR system 12 to identify one or more personal, geographic or other characteristics of the user from their calling number by referring to a customer database. Using either the information gathered from the user's incoming call or from one or more IVR system 12 configurations and settings, method 360 preferably then proceeds to 370 where a first prompt for the user may be generated. Preferably, the first prompt generated by IVR system 12 includes a request for a user response.
Further, the request for a user response will preferably encourage the user to respond with a spoken or verbal
response. Depending on the type of communications link 16 and user communication device 17, the request for a user response may assume other preferred constructs. The text, voice, gender, rate of speech and other characteristics of the first prompt may be determined or dictated by the information gathered from the user's incoming call, from one or more IVR system 12 settings, as well as from other factors.
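The selection of first-prompt characteristics from call metadata, dialed access number, ANI lookup, and holiday or time-of-day checks, can be sketched as below. All names, access numbers, and the dict standing in for the customer database are hypothetical illustrations.

```python
def first_prompt_params(dialed_number, ani=None, is_holiday=False, profiles=None):
    """Choose first-prompt characteristics (blocks 368-370) from call
    metadata. The access-number map and profile store are invented
    stand-ins for the customer database and stored profiles 68."""
    ACCESS_NUMBERS = {  # hypothetical: access number -> caller context
        "800-555-0001": "midwest-dsl",
        "800-555-0002": "south-wireless",
    }
    params = {
        "context": ACCESS_NUMBERS.get(dialed_number, "general"),
        "greeting": "holiday" if is_holiday else "standard",
        "voice": "default",
        "rate_wpm": 150,
    }
    profile = (profiles or {}).get(ani)
    if profile:
        # Reuse the persona characteristics from the user's last call.
        params.update(voice=profile["voice"], rate_wpm=profile["rate_wpm"])
    return params

p = first_prompt_params(
    "800-555-0001", ani="314-555-1234",
    profiles={"314-555-1234": {"voice": "male-soft", "rate_wpm": 120}})
```

Here the ANI match overrides the default voice and speech rate, matching the text's use of a stored user persona profile when the caller can be identified before the first prompt.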
In one embodiment of the present invention, IVR system 12 may identify the user from one or more communication link characteristics of the user's incoming call and access a stored user persona profile for the calling user, such as a persona from stored user persona profiles 68. For example, ANI information may be used to identify the caller. The stored user persona profile may contain the IVR system 12 persona used during the user's last call, for example. The first prompt may be generated based on one or more speech parameters identified in the stored user persona profile. Upon generation of a first user prompt at 370, method 360 preferably proceeds to 372. At 372, the first user prompt may be communicated to the user. In general, IVR system 12 preferably communicates the first prompt to the user over communications link 16 to user communication device 17 via communications interface 26. The first prompt may be generated using one or more speech generation applications and/or hardware devices and according to the IVR system persona then in effect, e.g., a default IVR system persona or an IVR system persona identified from one or more call characteristics. Upon communication of the first prompt to the user at 372, method 360 preferably proceeds to 374. At 374,
IVR system 12 preferably awaits a user response to the first prompt. To avoid trapping IVR system 12 in a loop waiting for the current user to respond to the first prompt, if no response is detected within a reasonable delay after prompting, method 360 preferably proceeds to 376. At 376, a determination is made as to whether a predetermined amount of overall or total wait time for a user response has been exhausted. If the predetermined amount of overall or total wait time has not been exhausted, method 360 preferably loops at 376 until such an amount of time has lapsed or passed.
Once the predetermined wait time has been exhausted at 376, method 360 preferably proceeds to 378. At 378, IVR system 12 may determine whether a predetermined number of first prompt communication attempts have been exhausted. Again, to aid in the avoidance of locking IVR system 12 in a loop waiting for a user response, a limit to the number of first prompt communication attempts may be implemented in method 360. If at 374 a user response has not been received, at 376 the predetermined wait period for the most recent first prompt communication has been exhausted and at 378 the number of first prompt communication retries has not been exhausted, method 360 preferably returns to 372 where the first prompt may again be communicated to the user. Upon re-prompting the user, method 360 preferably reiterates through the processes indicated at 374, 376 and 378. However, if at 378 it is determined that the number of first prompt communication attempts has been exhausted, method 360 preferably proceeds to 380 where the communication connection with the current user is preferably severed. Once the communications link between
the current user and IVR system 12 has been severed, method 360 preferably returns to 364 where the next user call may be awaited. Other implementations of preventing IVR system 12 from being trapped in a loop awaiting a user response to an IVR system 12 prompt are contemplated and should be included within the spirit and scope of the present invention. For example, at block 380, IVR system 12 may transfer the caller to a human operator.
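The prompt/wait/retry logic of blocks 372-380 can be sketched as a bounded loop. This is an illustrative reduction; the callables and the `None` convention for an expired wait period are assumptions, not the claimed implementation.

```python
def prompt_with_retries(play_prompt, get_response, max_attempts=3):
    """Communicate a prompt, await a response, and re-prompt up to
    max_attempts times (blocks 372-378). Returning None signals that
    attempts are exhausted, so the call may be severed or transferred
    to a human operator (block 380). Names are illustrative."""
    for _ in range(max_attempts):
        play_prompt()                 # block 372: communicate the prompt
        response = get_response()     # blocks 374-376: None once the
        if response is not None:      # predetermined wait time lapses
            return response
    return None  # block 378 exhausted: sever or transfer to operator

# Example: two timed-out waits, then a response on the third attempt.
answers = iter([None, None, "balance"])
result = prompt_with_retries(lambda: None, lambda: next(answers))
```

Bounding both the per-attempt wait and the number of attempts is what keeps the system from being trapped in a loop, as the text emphasizes.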
Referring now to FIGURE 12, a flow diagram depicting one embodiment of the continuation of method 360 is illustrated. Method 360 preferably proceeds to 382 of FIGURE 12 as a result of the detection of a user response to the first IVR system prompt at 374 of FIGURE 11.
At 382, the detected and received user response is preferably interrogated or otherwise analyzed to identify one or more of its characteristics. For example, if the user response is verbal or spoken, IVR system 12 will preferably identify one or more speech characteristics associated with the verbal response. The characteristics of speech which may be analyzed by IVR system 12 include, but are not limited to, the speaker's gender, rate of speech, fundamental frequency, frequency range, and amplitude. According to behavioral research, for example, an introvert can be discerned from an extrovert by analyzing the rate, fundamental frequency, frequency range and amplitude of the speaker's speech. Many other speech characteristics may be identified from a speaker's speech and used in the method of the present invention. The delay between an IVR system 12 prompt and the user response may also be monitored and analyzed by IVR system 12, for example, to determine whether the current user is a novice or experienced IVR system 12 user. Additional
characteristics or parameters of a user's responses to IVR system 12 prompts may be monitored and analyzed without departing from the spirit and scope of the present invention. In one embodiment of method 360 of the present invention, IVR system 12 may begin processing the user response at 384 generally concurrently with the analysis of the user response at 382. For example, if the first prompt generated by IVR system 12 at 370 presented a plurality of transaction options from which the user was to select one, processing the user's response and selection of a desired transaction at 384 would allow IVR system 12 to initiate the desired user transaction. Alternatively, if the first prompt generated by IVR system 12 requested the caller's IVR system 12 user identifier, for example, any information associated with the user identifier stored by the IVR system 12 may be retrieved at 384 before, after or generally concurrent with the analysis and identification of the speech characteristics of the user's response at 382.
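The speech-characteristic analysis of block 382 can be sketched with simple derived features. The feature inputs (word count, duration, amplitude samples) are assumed to come from an upstream speech recognizer, and the classification thresholds are invented for illustration; they are not drawn from the behavioral research the text cites.

```python
import statistics

def speech_features(words, duration_s, samples):
    """Derive simple response features (sketch of block 382): rate of
    speech in words per minute and mean amplitude. Fundamental
    frequency and frequency range are omitted from this sketch."""
    return {
        "rate_wpm": 60.0 * words / duration_s,
        "amplitude": statistics.fmean(abs(s) for s in samples),
    }

def classify_demeanor(features, rate_cut=160, amp_cut=0.5):
    """Toy rule echoing the text's introvert/extrovert distinction:
    fast, loud speech suggests an extrovert. Thresholds are
    illustrative assumptions only."""
    fast = features["rate_wpm"] >= rate_cut
    loud = features["amplitude"] >= amp_cut
    return "extrovert" if (fast and loud) else "introvert"

# Example: 30 words in 10 seconds at moderately high amplitude.
f = speech_features(words=30, duration_s=10.0, samples=[0.7, -0.8, 0.6])
```

The prompt-to-response delay the text mentions for distinguishing novice from experienced users could be added as one more feature in the same dictionary.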
Once the IVR system 12 has identified one or more characteristics or parameters of the user response relevant for its determination of the user's personality or demeanor at 382, method 360 preferably proceeds to 386. At 386, IVR system 12 may interrogate one or more of the persona libraries 66 preferably stored in storage system 20 on HDD device 60 and/or SAN 62. One goal of the persona library 66 interrogation at 386 is for IVR system 12 to identify a persona available in a persona library 66 which best comports with or matches the current personality or demeanor of the user. Alternatively, IVR system 12 may be configured to select
from a plurality of persona characteristics to create an IVR system persona which best matches the current personality or demeanor of the user. By activating an IVR system 12 persona that comports with or matches the current personality or demeanor of the user, according to teachings of the present invention, the user's interaction with IVR system 12 is more likely to be relaxed and the user is more likely to favorably respond to IVR system 12 prompts. According to teachings of the present invention, the personality or demeanor of a user may be defined in a variety of ways. For example, a user personality or demeanor may include IVR system 12 analysis to determine whether a user is likely a novice or experienced IVR system 12 user. Further, according to aspects of the present invention, a user's personality or demeanor may include the gender of the caller, whether the caller may be characterized as an introvert or extrovert, whether the user is agitated, seems confused or is questioning the system. For example, IVR system 12 may be configured to identify when a user is struggling with the system by recognizing that a user has increased the duration and amplitude of their speech. Further, IVR system 12 may be configured to identify tension in a user's voice. Other categories or types of user personalities or demeanors are considered within the scope of the present invention.
After interrogating one or more of the persona libraries 66 preferably included on one or more HDD device 60 or SAN 62 at 386 to identify at least one preferred or optimal IVR system persona, an IVR system persona is selected or created at 388. Upon selection of an IVR system persona at 388, the IVR system 12 persona
may be activated at 390. Activation of an IVR system persona may include, but is not limited to, loading one or more persona characteristics, e.g., gender, speech rate, etc., into a memory accessible to voice generation software or hardware. Once the selected IVR system 12 persona is activated at 390, method 360 preferably proceeds to 392.
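The interrogation of a persona library (block 386), selection (block 388), and activation by loading characteristics into a memory read by the voice layer (block 390) can be sketched as below. The library entries, the matching rule, and the dict standing in for voice-generation memory are all hypothetical.

```python
PERSONA_LIBRARY = [  # invented stand-in for a persona library 66
    {"name": "brisk-expert", "gender": "female", "rate_wpm": 180, "style": "brief"},
    {"name": "patient-guide", "gender": "male", "rate_wpm": 110, "style": "detailed"},
]

def select_persona(demeanor, library=PERSONA_LIBRARY):
    """Interrogate the library (blocks 386-388) for the persona that
    best comports with the user's demeanor. The matching rule here,
    experienced -> brief, otherwise detailed, is illustrative only."""
    style = "brief" if demeanor == "experienced" else "detailed"
    for persona in library:
        if persona["style"] == style:
            return persona
    return library[0]  # fall back to a default persona

def activate_persona(persona, voice_memory):
    """Block 390: load persona characteristics into a memory that a
    (hypothetical) voice-generation layer reads from."""
    voice_memory.update(persona)
    return voice_memory

mem = activate_persona(select_persona("novice"), {})
```

The alternative the text describes, composing a persona from individual characteristics rather than picking a whole library entry, would replace `select_persona` with a function that merges several characteristic dicts.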
According to teachings of the present invention, the persona of IVR system 12 can have a significant impact on the responsiveness of a user. Accordingly, selection of a preferable, optimal or appropriate IVR system persona and the subsequent dialog exchange with the user in accordance with the IVR system persona provide computer-based call centers an advantage over single persona IVR system 12 based call centers.
Generally at 392, a plurality of user prompts may be generated by IVR system 12 in accordance with the IVR system persona selected at 388. For example, if it is determined that the current user is an experienced IVR system 12 user, quick, brief system prompts may be included in the preferred persona. Similarly, if it is determined that the user is a novice user or is struggling with the system, slow, detailed instructions may be provided in accordance with the selected or created persona.
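The brief-versus-detailed prompt generation of block 392 can be sketched as a lookup keyed by transaction step and persona style. The prompt texts and step names are invented examples, not language from the claims.

```python
def render_prompt(step, persona_style):
    """Generate a transaction prompt (block 392) in the active
    persona's style: brief for experienced users, detailed for
    novices. Step names and wording are hypothetical."""
    prompts = {
        "account": {
            "brief": "Account number?",
            "detailed": "Please say or key in your ten-digit account "
                        "number, one digit at a time.",
        },
        "balance": {
            "brief": "Balance or statement?",
            "detailed": "Say 'balance' to hear your balance, or "
                        "'statement' to have one mailed to you.",
        },
    }
    return prompts[step][persona_style]

brief = render_prompt("account", "brief")
detailed = render_prompt("account", "detailed")
```

In a fuller system the speech rate and voice gender loaded at activation would shape how these texts are spoken, while the style key shapes what is said.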
The plurality of user prompts generated at 392 are generally directed to completing a user desired transaction, i.e., the purpose for which the current user contacted IVR system 12, such as to check a balance or pay on an account. Upon the generation of one or more user prompts directed to completing a desired user transaction, method 360 preferably proceeds to 394 where
the one or more user prompts may be communicated to the user. For example, if IVR system 12 determines at 382 that the current user is a shy male seeking to check an account balance, the selected IVR system persona may have the characteristics of being a soft spoken male that prompts the user for an account number, asks whether the user would like an account statement mailed to his address of record or whether the user would like his balance spoken to him over the communications link, etc. Once the next user prompt has been communicated to the user at 394, method 360 preferably proceeds to 396 where a user response to the prompt is awaited. In the event a user response is not detected within a reasonable delay after prompting, method 360 preferably proceeds to 398. At 398, a determination is made as to whether a predetermined overall or total wait period for a user response to the IVR system 12 prompt has been exhausted. In the event that the predetermined time period has not been exhausted, method 360 preferably loops at 398 until the predetermined time period has been exhausted. Once the predetermined time period has been exhausted, method 360 preferably proceeds to 400.
At 400, a determination is made as to whether a predetermined total number of IVR system 12 prompt retries has been exhausted. If the predetermined total number of IVR system 12 prompt retries has not been exhausted, method 360 preferably returns to 394 where the IVR system 12 prompt directed to completing the user desired transaction is preferably repeated to the user. However, if at 400 it is determined that the predetermined number of IVR system 12 prompt retries has been exhausted, method 360 preferably proceeds to 402
where the communications connection with the current user may be severed or a bail-out to a human operator effected. Upon severance of the current user's communications connection at 402, method 360 preferably returns to 364 where the next incoming call from a user is awaited.
Referring now to FIGURE 13, a continuation of method 360 as illustrated in FIGURES 11 and 12, is shown according to teachings of the present invention. Method 360 preferably proceeds to 404 of FIGURE 13 in response to detection or reception of a user response to the IVR system 12 prompt directed to completing the desired user transaction communicated at 394.
At 404, one or more parameters or characteristics of the user's response are preferably identified, analyzed or otherwise isolated. In one embodiment of the present invention, each user response to an IVR system prompt may be evaluated for a change in the user's personality, or demeanor. In a further embodiment, only selected user responses to IVR system prompts may be evaluated for a change in the user's personality or demeanor. As mentioned above, generally concurrently with or after receipt of a user response, IVR system 12 may process the user response in furtherance of the desired user transaction, as indicated at 406.
At 408, IVR system 12 preferably compares or otherwise determines whether any differences exist between the user's current personality or demeanor and the personality or demeanor previously detected, e.g., at 382 of FIGURE 12. Specifically, according to teachings of the present invention, IVR system 12 is attempting to monitor the user's personality or demeanor to determine
whether a new IVR system persona or change in style of the current persona is likely to elicit more favorable responses from the user, put the user at ease, or otherwise enhance the user's interaction with IVR system 12. In addition, IVR system 12 may be configured to detect whether the user is having difficulty using or interacting with the system and to access and communicate one or more help prompts to aid the user in such instances. If at 408 a change is detected in the user's demeanor or personality, method 360 may return to 386 of FIGURE 12 where the one or more persona libraries 66 may again be interrogated to identify one or more IVR system personas which best comport or match the user's current demeanor or personality. Alternatively, as mentioned above, the style of the current persona may be changed or one or more persona characteristics may be compiled to create an overall IVR system persona which best matches or comports with the user's present demeanor or personality. Upon a return to 386, method 360 preferably again proceeds through selection at 388 and activation at 390 of a new IVR system persona or style. If at 408 there is no change in the user's demeanor or personality detected, method 360 preferably proceeds to 410.
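The comparison at block 408, current demeanor features versus those detected earlier at 382, and the resulting branch back to persona re-selection can be sketched as follows. The feature names and change thresholds are illustrative assumptions.

```python
def demeanor_changed(previous, current, rate_tol=20.0, amp_tol=0.15):
    """Block 408: compare the demeanor features detected earlier
    (e.g., at 382) with those of the current response. Tolerances
    are invented for illustration."""
    return (abs(current["rate_wpm"] - previous["rate_wpm"]) > rate_tol
            or abs(current["amplitude"] - previous["amplitude"]) > amp_tol)

def next_action(previous, current):
    """Route the dialog: return to persona-library interrogation
    (block 386) on a detected change, otherwise continue toward
    transaction completion (block 410)."""
    if demeanor_changed(previous, current):
        return "reselect_persona"
    return "continue"

# Example: a marked rise in speech rate and amplitude, which the text
# associates with a user struggling with the system.
action = next_action({"rate_wpm": 150.0, "amplitude": 0.4},
                     {"rate_wpm": 200.0, "amplitude": 0.8})
```

A help-prompt branch of the kind the text describes could be added as a third action when the increase specifically matches the struggling-user pattern.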
At 410, IVR system 12 preferably determines whether the desired user transaction has been completed, i.e., whether the user has received all desired information or whether the user has provided all of the information requested by IVR system 12. If it is determined at 410 that the desired user transaction is incomplete, method 360 preferably returns to 392 of FIGURE 12 where the next
prompt in the sequence of prompts directed to completing a desired user transaction may be generated for communication at 394.
However, if at 410 it is determined that the desired user transaction has been completed, method 360 may proceed to 412. In one embodiment of the present invention, personas for users of IVR system 12 may be stored for use during subsequent transactions or dialog exchanges with the user. In such an IVR system 12, the persona for the last transaction with the current user, for example, may be stored in one or more stored user persona profiles 68 on one or more HDD devices 60 or SANs 62. As mentioned above, such stored user persona profiles 68 may be used by IVR system 12 in those instances where the caller can be identified prior to the communication of the first prompt to the user as well as in other instances.
After the persona for the current user has been stored, method 360 preferably proceeds to 414 where the communications link with the user may be severed. Once the communications link has been effectively severed, method 360 preferably returns to 364 of FIGURE 11 where IVR system 12 may await the next incoming call.
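Storing the persona used for the last transaction keyed by caller identity (block 412) can be sketched with a plain dict standing in for stored user persona profiles 68. The function and key names are hypothetical.

```python
def store_user_persona(profiles, caller_id, persona):
    """Block 412: persist the persona used during this call so a
    later call from the same number can start from it (see the
    first-prompt discussion). 'profiles' is a plain-dict stand-in
    for stored user persona profiles 68 on HDD 60 or SAN 62."""
    profiles[caller_id] = dict(persona)  # copy, so later edits don't leak
    return profiles

profiles = store_user_persona(
    {}, "314-555-1234", {"gender": "male", "rate_wpm": 120})
```

On a subsequent call, an ANI match against this store is what lets the first prompt be generated with the user's previous persona, closing the loop with block 370.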
In an embodiment of the present invention, a stored user persona 68 may be used to implement one or more security measures. For example, if a stored user persona 68 is supposed to be used by only one user, when IVR system 12 detects a suspect voice pattern a security alert may be generated. Such a security alert might prompt the user to enter an additional password.
Alternatively, such an alert might notify IVR system 12 personnel of the potential breach and leave the matter
for the personnel to address. Other embodiments of securing a user account using teachings of the present invention are contemplated and considered within the scope hereof. System 10 allows for the automated creation of a customer-centric interface that directly matches menu prompts with customer tasks, orders and groups the tasks and menu options by the task frequency of occurrence and the customer perceived task relationships, and states the menu prompts using the customers' own terminology.
Although the present invention has been described in detail with respect to an IVR system, system 10 may also be utilized for the automated creation of customer-centric interfaces for web sites with respect to developing content for the web site, designs of the web pages, and what tasks to locate on different web pages.
The present invention allows for the automated analysis of one or more log files containing performance data and the generation of an output file including the results of the analysis on the performance data.
Although the example embodiment is described in reference to IVR performance data, in other embodiments IVR system 12 and computer system 18 may also automatically analyze performance data from other systems in addition to IVR systems as well as any other appropriate type of data.
Although the present invention has been described in detail, it should be understood that various changes, substitutions and alterations can be made hereto without departing from the spirit and scope of the invention as defined by the appended claims.