WO2008075329A2 - Method and system for automatic quality evaluation - Google Patents

Method and system for automatic quality evaluation

Info

Publication number
WO2008075329A2
Authority
WO
WIPO (PCT)
Prior art keywords
interaction
score
feature
rule
personnel member
Prior art date
Application number
PCT/IL2006/001474
Other languages
French (fr)
Other versions
WO2008075329A3 (en)
Inventor
Yizhak Idan
Moshe Wasserblat
Offer Hassidi
Original Assignee
Nice Systems Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nice Systems Ltd. filed Critical Nice Systems Ltd.
Publication of WO2008075329A2 publication Critical patent/WO2008075329A2/en
Publication of WO2008075329A3 publication Critical patent/WO2008075329A3/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M3/00 Automatic or semi-automatic exchanges
    • H04M3/42 Systems providing special services or facilities to subscribers
    • H04M3/50 Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M3/51 Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M3/5175 Call or contact centers supervision arrangements

Definitions

  • the present invention relates to quality evaluation in general, and more specifically to a method and system for automatic quality assessment of performance in an organization.
  • Quality evaluation tools are intended for obtaining, recording or using productivity, quality or performance measures within an organization.
  • a key factor is quality monitoring of various elements, such as the proficiency of a personnel member interacting with calling parties, the impact of a campaign, the success of a product sale or a product, especially in relation to the competition, or the like.
  • An agent interacting with a customer represents the organization to that customer, and is responsible for a significant part of the customer experience.
  • a pleasant and professional agent can prove useful in customer service and customer retention as well as in influencing new customers to buy services or goods from the organization.
  • agents are a resource of the organization, and as such their time should be managed as efficiently as possible.
  • evaluations are done by an evaluator using an evaluation tool.
  • a supervisor listens to a randomly selected call of a specific agent, fills in an evaluation form, and attributes to the agent or to the call a quality score or other scores and indications.
  • the supervisor may talk to the agent, suggest a training session or take other measures.
  • the scores assigned to a call may be taken into account when evaluating or analyzing a campaign, a product, a product line or the like.
  • the traditional evaluation scheme described above has multiple deficiencies.
  • the evaluation capacity is relatively low due to the dependence of the evaluation process on the human evaluator.
  • the scope of the evaluation may be limited due to the range of factors that can be taken into account when evaluating an interaction, including the captured interaction itself, the agent's workload, the call center workload during the interaction time and its impact on the service quality (e.g. queue time before agent availability), the history of interactions between the agent and the specific customer, the contribution of other agents to an activity involving several agents, the details and behavior profile of the specific customer and the like.
  • Human evaluators may not be aware or capable of considering such factors which may be relevant to the interaction quality and its evaluation.
  • the overall evaluation may be biased due to the relatively small number of the interactions that can be evaluated using current techniques and methodologies.
  • the evaluator typically samples a fraction of the interactions made by an agent as a basis for the evaluation, which may be non-representative and may not indicate important issues.
  • the evaluation may be subjective and biased due to the dependence on the specific agent and evaluator involved, and possibly their relationship.
  • the evaluator may not be aware of this bias.
  • the evaluation is executed post activity and by another person, so factors that influenced the quality of the interaction at the time may not be available to the evaluator.
  • evaluations are based on evaluating the activity itself and do not incorporate external factors, such as the customer's satisfaction, as part of the quality evaluation. Moreover, little or no use is made of parameters that can be drawn from the interactions and used for calibrating business processes and policies (e.g. the relation between an interaction's quality and its duration, or the relation between queue time before the interaction and customer satisfaction, when available). Evaluations can further be used for other agent-related activities, such as recruitment (e.g. the predicted quality of a candidate agent, based on his background and skills profile), promotion and compensation.
  • a method for automated performance evaluation of a current interaction between a calling party and a personnel member of an organization, comprising: a training and calibration step for obtaining one or more rules for determining one or more scores for a historic interaction, said rules depending on one or more features; a feature evaluation step for determining a value of each feature, in association with the current interaction; and a score determination step for integrating the values into one or more score evaluations for the current interaction, using the rules.
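The three claimed steps can be illustrated with a minimal sketch. The linear form of the rule, the two feature names (hold time and emotion level), and the training data are illustrative assumptions only; the claims leave the concrete learning technique open.

```python
# Illustrative sketch of the claimed pipeline; the linear rule, the two
# features (hold_time, emotion_level) and the data are assumed, not taken
# from the patent, which leaves the learning technique open.

def train_rule(historic, scores, epochs=2000, lr=0.01):
    """Training and calibration step: fit a linear scoring rule
    (one weight per feature plus a bias) to scored historic interactions."""
    n = len(historic[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(historic, scores):
            err = b + sum(wi * xi for wi, xi in zip(w, x)) - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def evaluate_features(interaction):
    """Feature evaluation step: determine a value for each feature."""
    return [interaction["hold_time"], interaction["emotion_level"]]

def determine_score(values, rule):
    """Score determination step: integrate the values using the rule."""
    w, b = rule
    return b + sum(wi * vi for wi, vi in zip(w, values))

# Historic interactions: [hold time (minutes), emotion level] -> quality score
historic = [[1.0, 0.2], [5.0, 0.9], [2.0, 0.3], [6.0, 1.0]]
scores = [90.0, 40.0, 80.0, 30.0]
rule = train_rule(historic, scores)
current = {"hold_time": 1.5, "emotion_level": 0.25}
print(determine_score(evaluate_features(current), rule))
```

A production system would replace this hand-rolled regression with any of the techniques the text names (machine learning, fuzzy logic, statistics) over a much richer feature set.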
  • the method can further comprise a goal determination step for associating one or more labels to one or more goals associated with the current interaction.
  • the current interaction optionally comprises a vocal component.
  • the training step optionally comprises receiving the features and the rules.
  • the training step optionally comprises: receiving one or more historic interactions; receiving one or more labels for one or more goals for each of the historic interactions; and determining the one or more rules.
  • the method can further comprise a step of receiving the features or a step of deducing the features.
  • determining the rules is optionally performed using any one or more of the group consisting of: artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, or machine learning.
  • the method optionally comprises a step of visualizing the scores or the goals.
  • deducing the features is optionally performed using any one or more of the group consisting of: artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, or machine learning.
  • the method optionally comprises a partial score determination step for determining according to a second rule one or more partial scores for the current interaction, the partial score associated with the one or more features.
  • the method can further comprise a step of storing the partial scores or visualizing the partial scores.
  • the one or more features can be taken from the group consisting of: a word spotted in the interaction, an emotional level detected in the interaction, talk over percentage, number of bursts in the interaction, percentage of silence, number of participants in the interaction, number of transfers in the interaction, hold time in the interaction, abandon from hold time in the interaction, hang-up side of the interaction, abandon from queue time in the interaction, start and end time of the interaction, agent time in the interaction, customer time in the interaction, ring time in the interaction, call wrap up time of the interaction; personnel member name, personnel member status, personnel member hire date, personnel member grade, personnel member skills, personnel member department, personnel member location, personnel member working hours, personnel member workload, personnel member previous evaluations, a screen event on a computing platform operated by the personnel member, information from Customer Relationship Management system, information from billing system, or information relating to the customer.
  • the method optionally comprises a step of capturing the interactions or a step of capturing additional information.
  • the additional information optionally relates to any of the group consisting of: the interactions; the personnel member; the calling party; the organization, or a part of the organization.
  • the method optionally comprises a step of indicating the current interaction to an evaluator, or a step of performing further analysis related to the current interaction, or to the goal.
  • Each of the one or more scores may be related to the personnel member, to a product associated with the organization, or to a campaign associated with the organization.
  • Another aspect of the disclosed invention relates to an apparatus for automatically evaluating one or more interactions between a calling party and a personnel member of an organization, the apparatus comprising: a training component for obtaining one or more features and one or more rules for evaluating the interactions; and an automated quality monitoring component for obtaining one or more scores for the current interactions, using the rules.
  • the apparatus can further comprise a component for capturing the interactions or for capturing additional data.
  • the apparatus comprises an alert generation component for generating an alert when the score exceeds a predetermined threshold.
  • the apparatus can further comprise a storage device for storing the interactions or the additional data.
  • the apparatus optionally comprises a partial score determination component for determining according to a second rule a partial score for the current interaction, the partial score associated with the feature.
  • the apparatus can further comprise an alert generation component for generating an alert when the partial score exceeds a predetermined threshold.
  • a computer readable storage medium containing a set of instructions for a general purpose computer, the set of instructions comprising: a training step for obtaining at least one rule for determining at least one score for at least one historic interaction, said rule depending on one or more features; a feature evaluation step for determining one or more values of the feature, in association with the current interaction; and a score determination step for integrating the values into the score for the current interaction, using the rules.
  • Yet another aspect of the disclosed invention relates to a method for performance evaluation of an interaction between a calling party and a personnel member of an organization, the method comprising: reviewing the interaction; receiving one or more data items related to the interaction; and evaluating the interaction using the data items.
  • Fig. 1 is a block diagram of the main components in a typical environment in which the disclosed invention is used;
  • FIG. 2 is a flowchart of the automatic quality evaluation method, in accordance with a preferred embodiment of the disclosed invention.
  • FIG. 3 is a flowchart of the evaluation process itself, in accordance with a preferred embodiment of the disclosed invention.
  • the present invention overcomes the disadvantages of the prior art by providing a novel method and a system for automatic quality assessment of activities within an organization, such as agents or other personnel members interacting with customers in call centers or contact centers, the effectiveness of a campaign, the satisfaction level from a product, or the like.
  • a performance evaluation system is provided that substantially eliminates or reduces disadvantages or problems associated with the previously developed systems and processes.
  • the present invention evaluates one or more partial scores, total scores, or goals for an interaction, and assigns one or more labels to the interaction, wherein the scores, goals, labels are based on features, formulas, or rules for combining the features.
  • a partial score generally relates to one value associated with a feature regarding an interaction, and a total score generally refers to a combination of feature values combined into a result associated with the interaction.
  • a goal generally refers to a broader point of view of an interaction, wherein a feature generally refers to a specific aspect.
  • a goal, unlike a total score, is optionally named.
  • a goal may refer to "politeness", "customer satisfaction", or the like, while a feature may be "emotion level"; the partial score may be the actual emotion level assigned to a specific interaction, and a total score is a combination of one or more feature values associated with an interaction.
  • a label is generally the result assigned to a certain goal in association with a specific interaction, for example "a polite interaction", "a dissatisfied customer interaction" or the like.
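The four terms defined above can be made concrete with a short sketch; the feature names, weights, threshold, and label strings are illustrative assumptions.

```python
# Illustrative data model for the terms above: partial scores per feature,
# a total score combining them, and a label assigned to a named goal.
# Feature names, weights, and the threshold are assumptions.

from dataclasses import dataclass, field

@dataclass
class Interaction:
    features: dict                               # feature name -> partial score
    goals: dict = field(default_factory=dict)    # goal name -> label

def total_score(interaction, weights):
    """Total score: a combination of feature values for the interaction."""
    return sum(weights[f] * v for f, v in interaction.features.items())

def label_goal(interaction, goal, score, threshold):
    """Assign a label to a named goal based on the total score."""
    interaction.goals[goal] = ("a polite interaction" if score >= threshold
                               else "an impolite interaction")

call = Interaction(features={"emotion_level": 0.2, "greeting_spotted": 1.0})
weights = {"emotion_level": -50.0, "greeting_spotted": 60.0}
s = total_score(call, weights)      # 1.0*60 - 0.2*50 = 50.0
label_goal(call, "politeness", s, threshold=40.0)
print(call.goals["politeness"])
```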
  • the features and rules are deduced by training the system on interactions and user-supplied evaluations for the historic interactions. Alternatively, all or part of the features and the rules can be set manually by a user.
  • a user in this case can be an evaluator, such as a supervisor or a manager, or a person whose task is to introduce the information into the system. Such person can be an employee of the organization or belong to a third party organization responsible for integrating such system within the organization.
  • the present invention provides a performance evaluation system that extracts and evaluates one or more measures or features from an interaction and/or from data and metadata related to the interaction or to a personnel member such as an agent involved in the interaction, and automatically creates total evaluation score by considering the evaluated measures.
  • the features to be evaluated may include metadata related to the call, such as time of day, contact origin, IVR category selected by the calling party, duration, the calling party's hold time, number of call transfers during the interaction or the like; the type of contact media used during the interaction (e.g. voice, video, chat, etc.); data extracted from the interaction itself, such as spotted words, emotion levels, or the like; and additional data, such as data related to the shifts of the agent handling the call; data related to the calling person or the like.
  • the invention optionally extracts from previous interactions and evaluations the features to be extracted and evaluated, and the combination thereof for generating partial and total evaluation score, thus making the system independent of human definition of the features to be evaluated, the evaluation for different results associated with the features, and the way to integrate all results of all features into a total interaction evaluation score or any other measure.
  • interactions with one or more notable measures or a notable total evaluation score are notified to a human evaluator or a relevant system, preferably in real-time or near-real-time, i.e. during the interaction or a short time, in the order of magnitude of minutes, after an interaction ends.
  • the environment is an interaction-rich organization, typically a financial institute such as a bank, a trading floor, or an insurance company, a public safety contact center, a communications service provider contact center, customer service outsourcing center or the like. Interactions with customers, users, leads, employees, business partners, or other contacts are captured, thus generating input information of various types.
  • Each organization may comprise one or more sites, i.e. geographic locations in which interactions are handled.
  • the information types include vocal interactions, interactions comprising a vocal component, non-vocal interactions, organizational data and additional data.
  • Interactions comprising a vocal component optionally include telephone calls 112, made using any device, such as a landline phone or a cellular phone, and transmitted using any technology, such as analog lines, voice over IP (VoIP) or others.
  • the capturing of voice interactions can employ many forms and technologies, including trunk side, extension side, summed audio, separate audio, various encoding and decoding protocols such as G729, G726, G723.1, and the like.
  • the voice typically passes through a PABX (not shown), which in addition to the voice of the two or more sides participating in the interaction, collects additional information discussed below.
  • the interactions can further include face-to-face interactions, such as those recorded in a walk-in-center, and additional sources of vocal data, such as microphone, intercom, the audio part of a video capturing such as a video conference, vocal input by external systems or any other source.
  • Another source of collected information includes multi media information 116, which comprises interactions or parts thereof, such as video conferences, e- mails, chats, screen events including text entered by the agent, buttons pressed, field value change, mouse clicks, windows opened or closed, links to additional interactions in which one of the participants in the current interaction participated, or any other information relevant to the interaction or to the participants, which may reside within other applications or databases.
  • CTI Computer Telephony Integration
  • PABX information 120 including start and end time, ring time, hold time, queue time, call wrap up time, number of participants, stages (i.e. segments of the call during which the speakers do not change), hold time, abandon from hold, hang-up side, abandon from queue, number and length of hold periods, transfer events, number called, number called from, DNIS, VDN, ANI, or the like.
  • organization information 124 containing information such as customer feedback and partial or total scores collected for example via a customer survey taken after an interaction; agent information such as name, status such as temporary or not, hire date, grade, grade date, job function, job skills, training received, department, location, agent working parameters related to the interaction such as working hours and breaks during the shift, workload, quality of recent interactions, previous agent and evaluator partial or total scores and trends, average monthly agent evaluations, agent trend during the last predetermined period, service attrition indication, agent shift assignments, or the like.
  • Organization information 124 can further include relevant information from other systems such as Customer Relationship Management (CRM), billing, Workflow Management (WFM), the corporate Intranet, mail servers, the Internet, relevant information exchanged between the parties before, during or after the interaction, details of the shift the agent worked on that day, the agent's experience, information about previous evaluations of the same agent, documents and the like.
  • Yet another source of information relates to audio analysis information, i.e. results of processing vocal segments such as telephone interactions.
  • the results can include speech-to-text, words extracted from the interaction and their timing within the interaction, for example greetings, bad words, satisfaction or dissatisfaction, fulfillment, or others; talk-over percentage, number of bursts and identification of bursting side, percentage of silence, percentage of agent/customer speech time, excitement and emotions on both sides.
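As a rough illustration, word-based features of this kind could be derived from a speech-to-text transcript as follows; the word lists and feature names are invented for the example, and real systems spot words directly in the audio.

```python
# Hypothetical word-spotting features over a speech-to-text transcript.
# The phrase/word lists are assumptions, not taken from the patent.

GREETING_PHRASES = {"good morning", "how may i help you", "thank you for calling"}
ANGER_WORDS = {"unacceptable", "ridiculous"}

def spot_features(transcript):
    """Return simple features of the kind listed above: whether a greeting
    was spotted, and a count of anger-indicating words."""
    t = transcript.lower()
    return {
        "greeting_spotted": any(p in t for p in GREETING_PHRASES),
        "anger_words": sum(t.count(w) for w in ANGER_WORDS),
    }

sample = "Good morning, how may I help you? ... This delay is ridiculous."
print(spot_features(sample))
```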
  • Additional information 130 can also be introduced into the system for evaluation processes, including for example video analysis of video streams, or capturing of the participants' screen images. Data from all the above-mentioned sources and others is captured and preferably logged by capturing/logging unit 132.
  • Capturing/logging unit 132 comprises a computing platform running one or more computer applications as is detailed below.
  • the captured data is optionally stored in storage 134, which is preferably a mass storage device, for example an optical storage device such as a CD, a DVD, or a laser disk; a magnetic storage device such as a tape or a hard disk; a semiconductor storage device such as Flash device, memory stick, or the like.
  • the storage can be common or separate for different types of captured interactions and different types of additional data.
  • the storage can be remote from the site of capturing and can serve one or more sites of a multi-site organization.
  • Storage 134 further optionally stores features, parameters and rules 135, describing the features or measures to be extracted or evaluated from an interaction, such as spotted words, length of conversation, number of transfers, customer's satisfaction, or others, and the way to combine them into one or more total evaluation scores, or into goals (by assigning labels), referring for example to customer satisfaction, compliance with instructions, or similar goals. Labels, however, may alternatively be unrelated to goals, such as "for follow-up", "reconsider" or the like. Each goal may be associated with a different business need, such as agent evaluation, customer retention or others.
  • the rules can be, for example, a weighted sum, a logical or algebraic calculation or a mixture thereof, a multiplication, or other linear and/or non-linear functions connecting the partial scores assigned to features in connection with a certain interaction, and one or more labels assigned to goals in association with the interaction.
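A rule mixing an algebraic weighted sum with a logical condition, as described, might look like the following sketch; the weights, feature names, and the compliance cap are assumptions for illustration.

```python
# Sketch of a mixed rule: an algebraic weighted sum plus a logical cap.
# Weights, feature names and thresholds are illustrative assumptions.

def mixed_rule(partial):
    """partial: per-feature partial scores on a 0..100 scale."""
    base = (0.5 * partial["courtesy"]
            + 0.3 * partial["resolution"]
            + 0.2 * partial["hold_handling"])
    # logical component: a compliance breach caps the total score
    if partial.get("compliance", 100) < 50:
        base = min(base, 40.0)
    return round(base, 1)

print(mixed_rule({"courtesy": 80, "resolution": 90, "hold_handling": 70}))
print(mixed_rule({"courtesy": 80, "resolution": 90, "hold_handling": 70,
                  "compliance": 20}))
```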
  • Features, parameters, rules or labels 135 are either entered by a user such as an evaluator or determined by training module 141.
  • Training module 141 preferably receives historic interactions, evaluations, and/or feedback thereof, and deduces features, parameters, rules, or labels 135. Training module 141 can also extract features or statistical behavior that do not require human evaluation, such as the average and variance of call duration, and provide this information to automated quality monitoring 136 or other systems.
  • Such information can be used for business insight or for determining out-of-norm behavior that can be used as a basis for setting a partial or total score, evaluation prioritizing, alerting, or the like.
  • the data, features, parameters, or rules are transferred from storage 134 or directly from capturing/logging unit 132 without being stored, to automated quality monitoring component 136 which executes the actual evaluation method, detailed in association with Fig. 2 and Fig. 3 below, and obtains one or more partial scores for the interaction, each partial score associated with one or more features, and a total score for the interaction.
  • a command may be sent to alert generation component 138.
  • the alert can take any form, such as transferring a call, providing an on-screen alert, sending an e-mail, fax, SMS, telephone message or others to a person in charge, updating a database or other actions.
  • the person in charge preferably also receives the interaction or the relevant data.
  • the alert is a real-time alert
  • a live connection for monitoring or for intervening in the call as long as it continues is preferably provided to the person in charge, so that he or she can listen and take part in the call.
  • the call recording, or a link thereto, may be sent to the person.
  • the evaluation results are optionally transferred to inspection component 140, where a human evaluator or inspector preferably monitors the performance of automated quality monitoring component 136.
  • input from the human inspection is fed back into training module 141 for updating rules and parameters 135.
  • the evaluation information is transferred for storage purposes to result storage 144.
  • the evaluation information can be transferred for any other purpose or component 148 such as reporting, storage in a human resources (HR) system, reward calculation, as a feedback to the agent himself, as a call assignment parameter in Automatic Call Distribution (ACD) systems or other systems and purposes, input to service, marketing, or product departments, or the like.
  • All components of the system including capturing/logging components 132, automated quality monitoring component 136 and training module 141, preferably comprise one or more computing platforms, such as a personal computer, a mainframe computer, or any other type of computing platform that is provisioned with a memory device (not shown), a Central Processing Unit (CPU) or microprocessor device, and several I/O ports (not shown).
  • each component can be a Digital Signal Processing (DSP) chip, an Application Specific Integrated Circuit (ASIC) device storing the commands and data necessary to execute the methods of the present invention, or the like.
  • Each component can further include a storage device (not shown), storing the relevant applications and data required for processing.
  • Each computing platform runs one or more applications. The applications running on the capturing components, the training component, or the quality evaluation component are a set of logically inter-related computer programs, modules, or other units and associated data structures that interact to perform one or more specific tasks. All applications can be co-located and run on the same one or more computing platforms, or on different platforms.
  • the information sources, capturing platforms, computing platforms, or storage devices, or any combination thereof, can be located on or in association with one or more sites of a multi-site organization, and one or more evaluation components can be remotely located, evaluate interactions captured at one or more sites and store the segmentation results in a local, central, distributed or any other storage.
  • information stored in storage 134 can be utilized for monitoring prior to or simultaneously with the storage operation, such that the captured information is streamed to automated quality monitoring component 136, and evaluation results are available in real-time or near-real-time to the agent, to a supervisor, to a manager, or another person when immediate intervention is required.
  • audio analysis data 128 may be categorized under additional data 128, but may also be a product generated by automated quality monitoring component 136, and produced during its activity.
  • the disclosed description is meant to provide one preferred embodiment, wherein other preferred embodiments can be designed without departing from the spirit of the disclosed invention.
  • The method starts at capturing interactions step 200, during which interactions, including but not limited to vocal interactions, are captured.
  • the vocal interactions comprise interactions made from or to any type of communication device, including landline, mobile, cellular, or personal computing device, or other types of vocal interactions, such as the audio part of video capturing, a capturing of interactions in walk-in centers or the like.
  • At step 204, additional information is captured, including CTI information, multi-media information, and data from organizational databases. It will be appreciated by a person skilled in the art that step 200 and step 204 can be performed simultaneously or one after the other, in any order, and step 200 is not necessarily performed before step 204.
  • analysis engines such as word spotting, emotion detection and others are operated on one or more interactions at step 204, and supply indications related to the interactions.
  • the interactions captured at step 200 and/or the data captured at step 204 are optionally stored for later examination. This step may be omitted if further analysis related to performance is performed online and not upon stored data.
  • interactions to be evaluated are selected. If processing power is not limited, it would be desirable to evaluate all interactions, so all interactions are processed and recommendations are issued to a user for focusing on important interactions or groups of interactions. However, if this is not possible, a selection step is performed at step 212 for selecting interactions according to criteria.
  • the criteria can be related to the agent, to the interaction, to the product, to a campaign, to the environment or to the processing capacity of the computing platforms.
  • typical selection rules can be, for example: at least one interaction associated with each agent should be evaluated every predetermined period of time, for example a month; alternatively, an agent should be evaluated every week, for example, during his first month and only then every month; an evaluated interaction should have a minimal and/or a maximal duration, in order to avoid insignificant interactions on one hand, and overly long and processing-power-consuming interactions on the other hand; in case of back-log, a newer interaction should be prioritized for evaluation over an older one; interactions related to a predetermined department within an organization should take precedence over interactions related to other departments, or the like.
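The selection rules above can be expressed as a small filtering routine; the record fields, duration bounds, and one-per-agent-per-period policy are assumptions used for illustration.

```python
# Sketch of interaction selection under the example rules above. Record
# fields, duration bounds and the evaluation period are assumptions.

MIN_DURATION, MAX_DURATION = 30, 1800   # seconds

def select_for_evaluation(interactions, last_evaluated, now, period_days=30):
    """Pick at most one interaction per agent per period, with duration
    within bounds; iterate newest first to honour back-log priority."""
    period = period_days * 86400
    chosen = []
    for it in sorted(interactions, key=lambda i: i["timestamp"], reverse=True):
        overdue = now - last_evaluated.get(it["agent"], 0) >= period
        in_bounds = MIN_DURATION <= it["duration"] <= MAX_DURATION
        if overdue and in_bounds:
            chosen.append(it)
            last_evaluated[it["agent"]] = now   # one per agent per period
    return chosen

now = 100 * 86400
calls = [{"agent": "A7", "duration": 120, "timestamp": 1},
         {"agent": "A7", "duration": 240, "timestamp": 2},
         {"agent": "A7", "duration": 5,   "timestamp": 3}]   # too short
print(select_for_evaluation(calls, {}, now))
```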
  • the criteria may also relate to factors found by the analysis engines discussed in association with step 204 above.
  • the analysis engines may be operated as part of the selection step 212 rather than during data capturing step 204.
  • one or more interactions selected at step 212 are evaluated.
  • the interactions are evaluated using the interaction itself, together with relevant additional information, including for example: details about the agent, such as the shift he or she was working on the day of the interaction, his or her experience, previous scoring or the like; information about previous interactions of the same customer, and other information.
  • the evaluation is done according to features, parameters and rules gathered through training process 220 detailed below.
  • the training process outputs elements including features, parameters, statistics, or rules according to which an interaction should be evaluated.
  • the features preferably relate to aspects of the interaction that should be considered, including for example: call duration, hold time, spotted compliance words, such as "good morning, company X, this is Y speaking, how may I help you", "thank you for calling company X", etc., spotted words that indicate anger, emotion level of the different parties, number of bursts of parties into the other's party speech, crosstalk duration, number of transfers, or the like.
  • the features to be examined optionally depend on the objective of the evaluation, such as: evaluating an agent, a campaign, a product or any other factor within the organization.
  • the features may thus include characteristics of the agent, the call, the environment or other factors.
  • the parameters include for example the specific words to be spotted, the emotion level and emotion type that provide required indication, or the like.
  • the characteristics used may change according to other characteristics, for example, when an indication for a satisfied customer is available for the call, testing the emotion levels may be skipped, and higher importance may be assigned to other characteristics, such as spotted compliance words.
  • the rules preferably relate to the partial scoring that should be assigned to the results of applying each feature to the interaction, for example: if at least a predetermined percentage, such as 80% of the required compliance words are spotted with high certainty in an interaction, then the interaction receives the maximal scoring for this aspect.
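As a sketch, the static compliance-word rule can be expressed as below; the 80% threshold and the maximal score of 10 are illustrative values, and the behavior below the threshold is an assumption (the text only specifies the maximal-score case):

```python
def compliance_partial_score(spotted, required, max_score=10, threshold=0.8):
    """If at least `threshold` of the required compliance phrases were
    spotted, award the maximal partial score for this aspect; otherwise
    scale the partial score with the fraction spotted (an assumed
    fallback behavior)."""
    fraction = len(set(spotted) & set(required)) / len(required)
    if fraction >= threshold:
        return max_score
    return round(max_score * fraction, 2)
```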
  • the rules may be static or dynamic. A static rule for example would assign a fixed partial score to an emotional level.
  • a dynamic rule may take into account the department the agent is working for and assign a differential partial score to the emotional level, depending on the service type provided by the department, the average emotion levels of all calls or workload on the day of the interaction.
  • Another rule preferably refers to assigning one or more goals or labels to the interaction, optionally based on the partial scores of the various features.
  • Each of the total scores should preferably combine partial scores assigned to different features.
  • a total score can comprise a weighted sum of the partial scores assigned to spotted compliance words, emotional level above a certain threshold, and a maximal number of bursts, wherein a label assigned to such a call may be "an emotional interaction", or a similar name.
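A hedged sketch of such a total-score rule follows; the weights, thresholds, and feature names are illustrative assumptions:

```python
def total_score(partials, weights):
    """Weighted sum of the partial scores assigned to each feature."""
    return sum(weights[f] * s for f, s in partials.items())

def label_interaction(partials, emotion_threshold=7, burst_threshold=5):
    """Assign a label based on partial scores; thresholds are assumed."""
    if (partials.get("emotion", 0) > emotion_threshold
            or partials.get("bursts", 0) > burst_threshold):
        return "an emotional interaction"
    return "a regular interaction"
```

For example, partial scores of 10 (compliance), 8 (emotion) and 6 (bursts) with weights 0.5/0.3/0.2 yield a total score of 8.6.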
  • the rule for assigning a label or a total score may also be dynamic or static.
  • a dynamic rule may take into account the experience of an agent, and assign better total scores to a less experienced agent, wherein a more experienced agent would receive an inferior total score for the same performance.
  • interactions with a customer who called more than a predetermined number of times recently, may justify a better total score.
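A dynamic total-score rule of this kind might look as follows; the bonus size and thresholds are purely illustrative assumptions:

```python
def adjust_total_score(base_score, experience_months, recent_calls,
                       novice_months=3, bonus=0.5, repeat_threshold=3):
    """Dynamic rule sketch: a less experienced agent receives a better
    total score for the same performance, and an interaction with a
    customer who called more than a given number of times recently
    also justifies a better total score."""
    score = base_score
    if experience_months < novice_months:
        score += bonus  # leniency toward inexperienced agents
    if recent_calls > repeat_threshold:
        score += bonus  # frequent recent caller
    return score
```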
  • Fig. 3 shows a flow chart of the main steps associated with performance evaluation step 216 of Fig. 2.
  • the relevant features are selected for the interaction to be evaluated.
  • the features may be selected based on factors such as: the quality of the capturing, as some features, for example emotion detection, are more sensitive to the voice quality; computational complexity of features; automatic feature selection; the significance of the specific interaction, or others.
  • the features for the specific interaction, as selected by a user at step 232 or 236, or by the system at step 240, are determined.
  • the values of the features are determined. This may include performing analyses such as word spotting, emotion detection, call flow analysis, or usage of already available data such as number of transfers or hold time.
  • a partial score is optionally assigned to the value associated with each feature, based on rules determined during training and calibration steps 220 of Fig. 2, and at step 312 one or more total scores are assigned to the interaction, possibly through integration of the partial scores obtained at step 308, and using relevant combination rules, also obtained by training and calibration steps 220 of Fig. 2.
  • analysis engines such as word spotting, emotion detection and others are operated during step 216 and supply indications for the relevant evaluated features. Alternatively, they are operated during step 308.
  • Step 312 can be carried out without partial score determination step 308, if the goal determination is based on raw parameters, such as number of transfers, call length above a predetermined threshold or the like.
  • one or more labels are assigned to the interaction, as an evaluation of one or more goals. The labels are preferably based on the partial scores or on the total scores assigned to the interaction at step 312.
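The evaluation flow above (determining feature values, partial scores, total scores, and labels) can be summarized in one illustrative pipeline; the rule and weight structures, and the labeling threshold, are assumptions about one possible implementation:

```python
def evaluate_interaction(feature_values, scoring_rules, weights):
    """Sketch of the evaluation flow: per-feature partial scores,
    a weighted total score, and a label.  The 8.0 label threshold
    is an assumed value."""
    partials = {f: scoring_rules[f](v) for f, v in feature_values.items()}
    total = sum(weights[f] * s for f, s in partials.items())
    label = ("an exceptional interaction" if total >= 8
             else "a regular interaction")
    return partials, total, label
```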
  • the evaluation results including the partial and total scores, are stored for later retrieval, for example during future evaluation of the same agent, evaluations of other agents, statistics or any other purpose.
  • the results are optionally visualized, using any visualization tool or methodology, such as graphs, tables, connection networks, reports, or the like.
  • such further analysis or another usage is performed. Once a substantial number of interactions is evaluated, their results have statistical significance, and they can be used for deducing organizational parameters such as quality drivers for agents, abnormal behavior of agents, reasons for inefficient service, such as too long interactions, or the like. Tools including data mining and various statistical analyses can be used for designing predictive scoring models, discovering behavior patterns, discovering behavior trends and exceptions relevant for the organization.
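For instance, abnormal agent behavior can be detected once enough total scores accumulate; the sketch below flags agents whose average score deviates strongly from the population. The z-score test is one assumed choice among the statistical tools mentioned:

```python
from statistics import mean, stdev

def abnormal_agents(scores_by_agent, z_threshold=2.0):
    """Flag agents whose average total score is an outlier relative
    to the population of per-agent averages."""
    averages = {a: mean(s) for a, s in scores_by_agent.items()}
    mu, sigma = mean(averages.values()), stdev(averages.values())
    if sigma == 0:
        return []  # all agents score identically; nothing abnormal
    return [a for a, avg in averages.items()
            if abs(avg - mu) / sigma > z_threshold]
```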
  • one or more interactions are transferred to a supervisor, a compliance officer or any other user, including a person or a system for further evaluation.
  • Such interactions are preferably interactions for which one or more partial scores or a total score is exceptionally good or bad.
  • an exceptionally "bad" call, i.e. a call that received a total score significantly under the average, can be sent to the agent, to enable self-learning.
  • interactions are evaluated and indicated to a supervisor on a periodical basis.
  • interactions that receive a total score that is close to the average of the total scores of an agent are transferred to a supervisor for periodical evaluation.
  • a supervisor or another user may indicate parameters for an interaction to be transferred to him. For example, if an evaluator indicated in a previous evaluation to an agent to be more polite, he may indicate that he prefers to receive calls related to the same agent, for which a partial score related to politeness is lower than the average.
  • the system receives from a user the features, such as word spotting, emotional level, work load, agent experience or the like, to be considered in the evaluation.
  • the features may relate to the call itself, to metadata thereof, to data extracted from the call, to the agent or the customer participating in the call, to the environment, or to the organization.
  • the user optionally further supplies the parameters relevant for each feature, such as words to be spotted, and the rules, for example the partial score associated with a predetermined range of emotional level.
  • the user also has to supply the rule for determining one or more total scores for an interaction.
  • the user may supply one or more rules for assigning one or more labels to the interaction, for example a customer satisfaction goal may be associated with an average of the partial score assigned for spotted words, emotional level and number of bursts.
  • Step 236 is an alternative to step 232.
  • the user indicates for a multiplicity of historic interactions the features and the parameters according to which a feature is to be evaluated within an interaction, such as the emotional level, number of bursts, etc., and the way to determine the partial score, for example the maximal acceptable range of emotional level in an interaction.
  • the user evaluates, i.e. provides labels for multiple interactions and assigns one or more total scores to each exemplary interaction.
  • the system evaluates the partial scores for the interaction, and determines, according to the partial scores and the total scores provided by the user, the rules according to which the partial scores are to be composed into goals.
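One way the system could determine such combination rules is to fit weights that reproduce the user-supplied total scores from the computed partial scores. The gradient-descent fit below is a stand-in for the machine-learning techniques the text mentions, not the patented method itself; the learning rate and epoch count are assumed values:

```python
def learn_weights(examples, epochs=5000, lr=0.005):
    """Each example pairs a tuple of partial scores with the total score
    the user assigned; plain stochastic gradient descent on the squared
    error recovers the weights of a linear combination rule."""
    n = len(examples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for partials, target in examples:
            pred = sum(wi * p for wi, p in zip(w, partials))
            err = pred - target
            # step each weight against the error gradient
            w = [wi - lr * err * p for wi, p in zip(w, partials)]
    return w
```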
  • the training phase is also used for training a sub-system, such as the emotion analysis sub-system, and the user also supplies the partial scores assigned to features of the exemplary interactions or some of them, to make the system more robust.
  • the user supplies only the labels for the training interactions or the total scores, and the system deduces the used features, parameters and rules.
  • the features that should be considered for the goals may be determined by feature selection, through identification of which features are more dominant in the goal calculation.
  • the features and rules may be determined based on techniques such as artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, machine learning, or others.
  • the system may omit the use of an evaluator, and use a customer feedback available for the interaction as a proxy to an evaluator-provided total score.
  • Alternative steps 232, 236, and 240 differ in the division of workload between the user training the system and the system itself, in determining the relevant features, parameters and rules. The more details the user provides, the more time-consuming the process is. People skilled in the art will appreciate that other divisions of work between the user and the system may be implemented.
  • a user can provide accurate parameters and scoring rules for some features, such as compliance words which must be pronounced, and less accurate details for other features, such as silence within the interaction. For the less specific details, the system will complete the deficiencies.
  • the determination of partial scores and rules can employ methods and techniques known in the fields of artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, machine learning and others.
  • the features, parameters, rules and other related data such as statistical models, users' voice models or other information are stored for later retrieval during evaluations at step 216.
  • the system may record the values assigned by different evaluators to partial scores or to labels assigned to goals, and compare them. Thus, the system can notify about evaluators who typically assign total scores significantly higher or lower than the average.
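A minimal sketch of such evaluator calibration follows, assuming a 1-point tolerance around the overall average (an illustrative threshold):

```python
from statistics import mean

def biased_evaluators(scores_by_evaluator, tolerance=1.0):
    """Return evaluators whose average assigned total score differs
    from the overall average by more than `tolerance`, together with
    the signed bias, so the system can notify about them."""
    all_scores = [s for scores in scores_by_evaluator.values() for s in scores]
    overall = mean(all_scores)
    return {e: round(mean(s) - overall, 2)
            for e, s in scores_by_evaluator.items()
            if abs(mean(s) - overall) > tolerance}
```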
  • the evaluator performing the evaluation is provided with interactions that were earlier classified by assigning total scores or labels by the system or by another evaluator. The interactions are preferably mixed, so that interactions representing a wide range of labels are presented to the evaluator, preferably without the assigned partial or total scores. This provides for a more typical division of partial or total scores, and more accurate rules. Training and calibration steps can be performed not only for initial training of the system, but also at a later time, for fine-tuning or calibration of the system.
  • the disclosed invention overcomes the problems of manual evaluations, and provides significant advantages: the capacity of quality monitoring increases, and when sufficient computing power is available, total QM (quality monitoring) can be achieved, in which all interactions are evaluated; due to the increased number of evaluated interactions, the overall quality measurement accuracy will increase, too; quality drivers can be identified on one hand, and abnormal behavior of agents can be detected on the other hand; calls can be profiled in an efficient manner; quality measurements can be fine-tuned while avoiding human and statistical bias; the interactions are evaluated using common evaluation methods, which are objective and not biased by a specific evaluator; critical measures for business performance monitoring can be determined, such as acceptable waiting times in the incoming queue; performance quality may be increased while service and quality management costs are decreased; and it is possible to obtain a real-time indication for problematic interactions. It should be appreciated that other methods, which may include one or more of the suggested steps, may be designed to suitably perform the concepts of the present invention in other similar manners. Such alternative methods and modes are also covered by the present invention.
  • a human evaluator can perform the evaluation according to the organization's methodology and procedures, while receiving information from analysis engines, such as indication of spotted words, areas of high emotional level or the like. Such information may save the evaluator time in listening to the whole interaction and direct him or her to relevant areas. Thus, the evaluator will review the interaction, receive the additional information and provide an evaluation for the interaction based on the interaction itself or the auxiliary data.

Abstract

A method and apparatus for automatic quality evaluation of an activity related to an organization, such as an agent of an organization who interacts with a calling party, a product, a campaign or the like, based on any combination of one or more of the following: the interaction itself and particularly its vocal part; meta data related to the call, to the call parties or to the environment; information extracted from the call or general information (Fig. 1). The method may be activated off-line or on-line, in which case an alert can be generated (Fig. 1, alert generation 138) for one or more calls.

Description

METHOD AND SYSTEM FOR AUTOMATIC QUALITY EVALUATION
BACKGROUND OF THE INVENTION
FIELD OF THE INVENTION
The present invention relates to quality evaluation in general, and more specifically to a method and system for automatic quality assessment of performance in an organization.
DISCUSSION OF THE RELATED ART
Quality evaluation tools are intended for obtaining, recording or using productivity, quality or performance measures within an organization. Within organizations or organizations' units that mainly handle customer interactions, such as call centers, customer relations centers, trade floors or the like, a key factor is quality monitoring of various elements, such as the proficiency of personnel member interacting with calling parties, the impact of a campaign, the success of a product sale or a product, especially in relation to the competition, or the like. An agent interacting with a customer represents the organization to that customer, and is responsible for a significant part of the customer experience. A pleasant and professional agent can prove useful in customer service and customer retention as well as in influencing new customers to buy services or goods from the organization. On the other hand, agents are a resource of the organization, and as such their time should be managed as efficiently as possible. Thus, there is great importance in evaluating the agents' performance on a regular basis, for purposes such as identifying and correcting inefficiencies in an agent's conduct, rewarding agents for notable performance, or the like.
Traditionally, evaluations are done by an evaluator using an evaluation tool. In a typical call center service evaluation scenario, a supervisor listens to a randomly selected call of a specific agent, fills in an evaluation form, and attributes to the agent or to the call a quality score or other scores and indications. During employee evaluation processes or if significant deficiencies are detected in the agent's performance, the supervisor may talk to the agent, suggest a training session or take other measures. The scores assigned to a call may be taken into account when evaluating or analyzing a campaign, a product, a product line or the like.
The traditional evaluation scheme described above has multiple deficiencies. First, the evaluation capacity is relatively low due to the dependence of the evaluation process on the human evaluator. Next, the scope of the evaluation may be limited due to the range of factors that can be taken into account when evaluating an interaction, including the captured interaction itself, the agent's workload, the call center workload during the interaction time and its impact on the service quality (e.g. queue time before agent availability), the history of interactions between the agent and the specific customer, the contribution of other agents to an activity involving several agents, the details and behavior profile of the specific customer and the like. Human evaluators may not be aware or capable of considering such factors which may be relevant to the interaction quality and its evaluation. Another limitation is that the overall evaluation may be biased due to the relatively small number of the interactions that can be evaluated using current techniques and methodologies. Thus, the evaluator typically samples a fraction of the interactions made by an agent as a basis for the evaluation, which may be non-representative and may not indicate important issues. Yet another problem is that there is no mechanism that can identify evaluation-worthy interactions and prioritize the interactions for evaluation. In addition, the evaluation may be subjective and biased due to the dependence on the specific agent and evaluator involved, and possibly their relationship. Moreover, the evaluator may not be aware of this bias. Also, the evaluation is executed post activity and by another person. Thus, factors that can influence the quality of the interaction (e.g. a customer has waited a long time on queue before the activity) may be unknown to the evaluator at the time of evaluation.
Yet another problem is that evaluations are based on evaluating the activity itself and do not incorporate external factors, such as the customer's satisfaction, as part of the quality evaluation. Moreover, little or no use is made of parameters that can be drawn from the interactions and can be used for calibrating business processes and policies (e.g. the relation between the interaction's quality and its duration, or the relation between queue time before the interaction and the customer satisfaction when available). Evaluations can be further used for other agent related activities, such as recruitment (e.g. what is the predicted quality of a candidate agent, based on his background and skills profile), promotion and compensation (i.e. the objective quality of the agent) and retention (the relation between the agent's quality trend and the agent's probability to leave). When employing quality monitoring, it is desired that outstanding interactions are notified to a supervisor, or another person within the organization. It is also desired that a real-time or near-real-time alert is generated for such interactions, or for an agent quality trend where there might be room for effective reparative intervention. There is therefore a need in the art for a system and apparatus for automated quality monitoring, which will overcome the problems and disadvantages of prior art systems and of manual evaluation methods. The solution should provide more characteristics, take into account more factors, and make the evaluation results available to additional tools and systems intended for improving the performance of the organization or parts thereof.
SUMMARY OF THE PRESENT INVENTION
It is an object of the present invention to provide a novel method for evaluating interactions, and more particularly vocal interactions, in an organization, which overcomes the disadvantages of the prior art. In accordance with the present invention, there is thus provided a method for automated performance evaluation of a current interaction between a calling party and a personnel member of an organization, the method comprising: a training and calibration step for obtaining one or more rules for determining one or more scores for a historic interaction, said rules depending on one or more features; a feature evaluation step for determining a value of each feature, in association with the current interaction; and a score determination step for integrating the values into one or more score evaluations for the current interaction, using the rules. The method can further comprise a goal determination step for associating one or more labels to one or more goals associated with the current interaction. The current interaction optionally comprises a vocal component. The training step optionally comprises receiving the features and the rules. The training step optionally comprises: receiving one or more historic interactions; receiving one or more labels for one or more goals for each of the historic interactions; and determining the one or more rules. The method can further comprise a step of receiving the features or a step of deducing the features. Within the method, determining the rules is optionally performed using any one or more of the group consisting of: artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, or machine learning. The method optionally comprises a step of visualizing the scores or the goals.
Within the method, deducing the features is optionally performed using any one or more of the group consisting of: artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, or machine learning. The method optionally comprises a partial score determination step for determining according to a second rule one or more partial scores for the current interaction, the partial score associated with the one or more features. The method can further comprise a step of storing the partial scores or visualizing the partial scores. Within the method, the one or more features can be taken from the group consisting of: a word spotted in the interaction, an emotional level detected in the interaction, talk over percentage, number of bursts in the interaction, percentage of silence, number of participants in the interaction, number of transfers in the interaction, hold time in the interaction, abandon from hold time in the interaction, hang-up side of the interaction, abandon from queue time in the interaction, start and end time of the interaction, agent time in the interaction, customer time in the interaction, ring time in the interaction, call wrap up time of the interaction; personnel member name, personnel member status, personnel member hire date, personnel member grade, personnel member skills, personnel member department, personnel member location, personnel member working hours, personnel member workload, personnel member previous evaluations, a screen event on a computing platform operated by the personnel member, information from Customer Relationship Management system, information from billing system, or information relating to the customer. The method optionally comprises a step of capturing the interactions or a step of capturing additional information. 
The additional information optionally relates to any of the group consisting of: the interactions; the personnel member; the calling party; the organization, or a part of the organization. The method optionally comprises a step of indicating the current interaction to an evaluator, or a step of performing further analysis related to the current interaction, or to the goal. Each of the one or more scores may be related to the personnel member, to a product associated with the organization, or to a campaign associated with the organization. Another aspect of the disclosed invention relates to an apparatus for automatically evaluating one or more interactions between a calling party and a personnel member of an organization, the apparatus comprising: a training component for obtaining one or more features and one or more rules for evaluating the interactions; and an automated quality monitoring component for obtaining one or more scores for the current interactions, using the rules. The apparatus can further comprise a component for capturing the interactions or for capturing additional data. Optionally, the apparatus comprises an alert generation component for generating an alert when the score exceeds a predetermined threshold. The apparatus can further comprise a storage device for storing the interactions or the additional data. The apparatus optionally comprises a partial score determination component for determining according to a second rule a partial score for the current interaction, the partial score associated with the feature. The apparatus can further comprise an alert generation component for generating an alert when the partial score exceeds a predetermined threshold. 
Yet another aspect of the disclosed invention relates to a computer readable storage medium containing a set of instructions for a general purpose computer, the set of instructions comprising: a training step for obtaining an at least one rule for determining an at least one score for an at least one historic interaction, said rule depending on one or more features; a feature evaluation step for determining one or more values of the feature, in association with the current interactions; and a score determination step for integrating the values into the score for the current interaction, using the rules.
Yet another aspect of the disclosed invention relates to a method for performance evaluation of an interaction between a calling party and a personnel member of an organization, the method comprising: reviewing the interaction; receiving one or more data items related to the interaction; and evaluating the interaction using the data items.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:
Fig. 1 is a block diagram of the main components in a typical environment in which the disclosed invention is used;
Figure 2 is a flowchart of the automatic quality evaluation method, in accordance with a preferred embodiment of the disclosed invention; and
Figure 3 is a flowchart of the evaluation process itself, in accordance with a preferred embodiment of the disclosed invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The present invention overcomes the disadvantages of the prior art by providing a novel method and a system for automatic quality assessment of activities within an organization, such as agents or other personnel members interacting with customers in call centers or contact centers, the effectiveness of a campaign, the satisfaction level from a product, or the like. In accordance with the present invention, a performance evaluation system is provided that substantially eliminates or reduces disadvantages or problems associated with the previously developed systems and processes. The present invention evaluates one or more partial scores, total scores, or goals for an interaction, and assigns one or more labels to the interaction, wherein the scores, goals, and labels are based on features, formulas, or rules for combining the features. A partial score generally relates to one value associated with a feature regarding an interaction, and a total score generally refers to a combination of feature values combined into a result associated with the interaction. A goal generally refers to a broader point of view of an interaction, wherein a feature generally refers to a specific aspect. A goal, unlike a total score, is optionally named. Thus, a goal may refer to "politeness", "customer satisfaction", or the like, while a feature may be "emotion level", the partial score may be the actual emotion level assigned to a specific interaction, and a total score is a combination of one or more feature values associated with an interaction. A label is generally the result assigned to a certain goal in association with a specific interaction, for example "a polite interaction", "a dissatisfied customer interaction" or the like. The features and rules are deduced by training the system on historic interactions and user-supplied evaluations of those interactions. Alternatively, all or part of the features and the rules can be set manually by a user.
A user in this case can be an evaluator, such as a supervisor or a manager, or a person whose task is to introduce the information into the system. Such a person can be an employee of the organization or belong to a third-party organization responsible for integrating such a system within the organization. In particular, the present invention provides a performance evaluation system that extracts and evaluates one or more measures or features from an interaction and/or from data and metadata related to the interaction or to a personnel member such as an agent involved in the interaction, and automatically creates a total evaluation score by considering the evaluated measures. The features to be evaluated may include metadata related to the call, such as time of day, contact origin, IVR category selected by the calling party, duration, the calling party's hold time, number of call transfers during the interaction or the like; the type of contact media used during the interaction (e.g. voice, video, chat, etc.); data extracted from the interaction itself, such as spotted words, emotion levels, or the like; and additional data, such as data related to the shifts of the agent handling the call; data related to the calling person or the like. The invention optionally extracts from previous interactions and evaluations the features to be extracted and evaluated, and the combination thereof for generating partial and total evaluation scores, thus making the system independent of human definition of the features to be evaluated, the evaluation for different results associated with the features, and the way to integrate all results of all features into a total interaction evaluation score or any other measure. In a preferred embodiment, interactions with one or more notable measures or a notable total evaluation score are notified to a human evaluator or a relevant system, preferably in real-time or near-real-time, i.e.
during the interaction or a short time, in the order of magnitude of minutes, after the interaction ends. For example, it may be desired to use a real-time partial or total score for directing the person to a survey system, or to use a bad-call indication to navigate the calling person differently the next time he calls (the time of which is unknown, so the indication should be available as soon as possible).
Referring now to Fig. 1, which presents a block diagram of the main components in a typical environment in which the disclosed invention is used. The environment, generally referenced as 100, is an interaction-rich organization, typically a financial institute such as a bank, a trading floor or an insurance company, a public safety contact center, a communications service provider contact center, a customer service outsourcing center or the like. Interactions with customers, users, leads, employees, business partners, or other contacts are captured, thus generating input information of various types. Each organization may comprise one or more sites, i.e. geographic locations in which interactions are handled. The information types include vocal interactions, interactions comprising a vocal component, non-vocal interactions, organizational data and additional data. Interactions comprising a vocal component optionally include telephone calls 112, made using any device, such as a landline phone or a cellular phone, and transmitted using any technology, such as analog lines, voice over IP (VoIP) or others. The capturing of voice interactions can employ many forms and technologies, including trunk side, extension side, summed audio, separate audio, and various encoding and decoding protocols such as G729, G726, G723.1, and the like. The voice typically passes through a PABX (not shown), which, in addition to the voice of the two or more sides participating in the interaction, collects additional information discussed below. The interactions can further include face-to-face interactions, such as those recorded in a walk-in center, and additional sources of vocal data, such as a microphone, an intercom, the audio part of a video capturing such as a video conference, vocal input by external systems or any other source. 
Another source of collected information includes multimedia information 116, which comprises interactions or parts thereof, such as video conferences, e-mails, chats, screen events including text entered by the agent, buttons pressed, field value changes, mouse clicks, windows opened or closed, links to additional interactions in which one of the participants in the current interaction participated, or any other information relevant to the interaction or to the participants, which may reside within other applications or databases. In addition, the environment receives Computer Telephony Integration (CTI) and PABX information 120, including start and end time, ring time, hold time, queue time, call wrap-up time, number of participants, stages (i.e. segments of the call during which the speakers do not change), abandon from hold, hang-up side, abandon from queue, number and length of hold periods, transfer events, number called, number called from, DNIS, VDN, ANI, or the like. Yet another source of information is organization information 124, containing information such as customer feedback and partial or total scores collected for example via a customer survey taken after an interaction; agent information such as name, status such as temporary or not, hire date, grade, grade date, job function, job skills, training received, department, location, agent working parameters related to the interaction such as working hours and breaks during the shift, workload, quality of recent interactions, previous agent and evaluator partial or total scores and trends, average monthly agent evaluations, agent trend during the last predetermined period, service attrition indication, agent shift assignments, or the like. 
Organization information 124 can further include relevant information from other systems such as Customer Relationship Management (CRM), billing, Workflow Management (WFM), the corporate Intranet, mail servers, the Internet, relevant information exchanged between the parties before, during or after the interaction, details of the shift the agent worked on that day, the agent's experience, information about previous evaluations of the same agent, documents and the like. Yet another source of information relates to audio analysis information, i.e. results of processing vocal segments such as telephone interactions. The results can include speech-to-text, words extracted from the interaction and their timing within the interaction, for example greetings, bad words, satisfaction or dissatisfaction, fulfillment, or others; talk-over percentage, number of bursts and identification of the bursting side, percentage of silence, percentage of agent/customer speech time, and excitement and emotions on both sides. Additional information 130 can also be introduced into the system for evaluation processes, including for example video analysis of video streams, or capturing of the participants' screen images. Data from all the above-mentioned sources and others is captured and preferably logged by capturing/logging unit 132. Capturing/logging unit 132 comprises a computing platform running one or more computer applications as is detailed below. The captured data is optionally stored in storage 134, which is preferably a mass storage device, for example an optical storage device such as a CD, a DVD, or a laser disk; a magnetic storage device such as a tape or a hard disk; or a semiconductor storage device such as a Flash device, memory stick, or the like. The storage can be common or separate for different types of captured interactions and different types of additional data. 
Alternatively, the storage can be remote from the site of capturing and can serve one or more sites of a multi-site organization. Storage 134 further optionally stores features, parameters and rules 135, describing the features or measures to be extracted or evaluated from an interaction, such as spotted words, length of conversation, number of transfers, customer satisfaction, or others, and the way to combine them into one or more total evaluation scores, or into goals (by assigning labels), referring for example to customer satisfaction, compliance with instructions, or similar goals. Labels, however, may alternatively be unrelated to goals, such as "for follow-up", "reconsider" or the like. Each goal may be associated with a different business need, such as agent evaluation, customer retention or others. The rules can be for example a weighted sum, a logical or algebraic calculation or a mixture thereof, multiplication or other linear and/or non-linear functions connecting the partial scores assigned to features in connection with a certain interaction, and one or more labels assigned to goals in association with the interaction. Features, parameters, rules or labels 135 are either entered by a user such as an evaluator or determined by training module 141. Training module 141 preferably receives historic interactions, evaluations, and/or feedback thereof, and deduces features, parameters, rules, or labels 135. Training module 141 can also extract features or statistical behavior that do not require human evaluation, such as the average and variance of call duration, and provide this information to automated quality monitoring component 136 or other systems. Such information can be used for business insight or for determining out-of-norm behavior that can be used as a basis for setting a partial or total score, evaluation prioritizing, alerting, or the like. 
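By way of a non-limiting illustration, the weighted-sum combination rule and label assignment described above may be sketched as follows. The feature names, weights and thresholds are hypothetical examples, not part of the disclosed invention:

```python
# Illustrative sketch only: feature names, weights and thresholds are
# invented for this example; any weighted or non-linear rule may be used.

def total_score(partial_scores, weights):
    """Combine per-feature partial scores into one total evaluation score."""
    return sum(weights[f] * s for f, s in partial_scores.items())

def assign_labels(total, partial_scores):
    """Attach goal labels based on partial and total scores."""
    labels = []
    if partial_scores.get("emotion", 0) > 70:
        labels.append("emotional interaction")
    if total < 50:
        labels.append("for follow-up")
    return labels

weights = {"compliance_words": 0.5, "emotion": 0.3, "bursts": 0.2}
partials = {"compliance_words": 90, "emotion": 80, "bursts": 60}
total = total_score(partials, weights)  # 0.5*90 + 0.3*80 + 0.2*60 = 81.0
```

Different goals (agent evaluation, customer retention, or the like) would simply use different weight sets and label rules over the same partial scores.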
The data, features, parameters, or rules are transferred from storage 134, or directly from capturing/logging unit 132 without being stored, to automated quality monitoring component 136, which executes the actual evaluation method, detailed in association with Fig. 2 and Fig. 3 below, and obtains one or more partial scores for the interaction, each partial score associated with one or more features, and a total score for the interaction. If one of the partial scores, the total score or their trend exceeds a predetermined threshold, or meets other criteria, such as belonging to the top/bottom predetermined percentage, or being more/less than a predetermined multiple of an average partial or total score, or provides certain information, a command may be sent to alert generation component 138. The alert can take any form, such as transferring a call, providing an on-screen alert, sending an e-mail, fax, SMS, telephone message or others to a person in charge, updating a database or other actions. The person in charge preferably also receives the interaction or the relevant data. If the alert is a real-time alert, a live connection for monitoring or for intervening in the call as long as it continues is preferably sent to the person in charge, so that he or she can listen and take part in the call. If the alert is sent after the call has already finished, the call recording or a link thereto may be sent to the person. In a preferred embodiment, the evaluation results are optionally transferred to inspection component 140, where a human evaluator or inspector preferably monitors the performance of automated quality monitoring component 136. Optionally, input from the human inspection is fed back into training module 141 for updating rules and parameters 135. Alternatively, the evaluation information is transferred for storage purposes to result storage 144. 
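The alerting criteria named above (threshold exceedance, top/bottom percentage membership, deviation from a multiple of the average) may be sketched, in a non-limiting manner, as follows. All numbers are illustrative assumptions:

```python
# Hedged sketch of the alert criteria; thresholds, percentage and
# average factor are hypothetical, not prescribed by the invention.

def needs_alert(score, history, threshold=30, pct=0.1, avg_factor=2.0):
    """Return True if a partial or total score meets any alert criterion."""
    if score < threshold:
        return True                      # below an absolute threshold
    ranked = sorted(history + [score])
    k = max(1, int(len(ranked) * pct))
    if score in ranked[:k]:              # in the bottom predetermined percentage
        return True
    avg = sum(history) / len(history) if history else score
    return score > avg_factor * avg      # more than a multiple of the average

history = [60, 65, 70, 72, 68, 75, 63, 66, 71, 69]
```

A command to alert generation component 138 would then be issued whenever `needs_alert` returns True for a fresh score.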
In addition, the evaluation information can be transferred for any other purpose or component 148 such as reporting, storage in a human resources (HR) system, reward calculation, feedback to the agent himself, a call assignment parameter in Automatic Call Distribution (ACD) systems or other systems and purposes, input to service, marketing, or product departments, or the like. For example, there might be a need to escalate an incoming call even before it is handled; for instance, a customer whose last interaction with the organization was unsatisfactory due to the agent's treatment might be directed to one of the agents handling VIP customers or to a specialist in customer retention. All components of the system, including capturing/logging components 132, automated quality monitoring component 136 and training module 141, preferably comprise one or more computing platforms, such as a personal computer, a mainframe computer, or any other type of computing platform that is provisioned with a memory device (not shown), a Central Processing Unit (CPU) or microprocessor device, and several I/O ports (not shown). Alternatively, each component can be a Digital Signal Processing (DSP) chip, an Application Specific Integrated Circuit (ASIC) device storing the commands and data necessary to execute the methods of the present invention, or the like. Each component can further include a storage device (not shown), storing the relevant applications and data required for processing. Each computing platform runs one or more applications; the applications running on the capturing components, the training component or the quality evaluation component are a set of logically inter-related computer programs, modules, or other units and associated data structures that interact to perform one or more specific tasks. All applications can be co-located and run on the same one or more computing platforms, or on different platforms. 
In yet another alternative, the information sources, capturing platforms, computing platforms, storage devices, or any combination thereof can be located on or in association with one or more sites of a multi-site organization, and one or more evaluation components can be remotely located, evaluate interactions captured at one or more sites and store the evaluation results in a local, central, distributed or any other storage. It will be appreciated that information stored in storage 134 can be utilized for monitoring prior to or simultaneously with the storage operation, such that the captured information is streamed to automated quality monitoring component 136, and evaluation results are available in real-time or near-real-time to the agent, to a supervisor, to a manager, or another person when immediate intervention is required. It will be apparent to a person of ordinary skill in the art that the various data sources and applications used in the evaluation may be divided in a different way. For example, audio analysis data 128 may be categorized under additional data 130, but may also be a product generated by automated quality monitoring component 136, and produced during its activity. The disclosed description is meant to provide one preferred embodiment, wherein other preferred embodiments can be designed without departing from the spirit of the disclosed invention. Referring now to Fig. 2, showing a flow chart of the main steps associated with the method of the disclosed invention. The process starts at capturing interactions step 200, during which interactions, including but not limited to vocal interactions, are captured. The vocal interactions comprise interactions made from or to any type of communication device, including a landline, mobile, cellular, or personal computing device, or other types of vocal interactions, such as the audio part of a video capturing, a capturing of interactions in walk-in centers or the like. 
At step 204 additional information is captured, including CTI information, multimedia information, and data from organizational databases. It will be appreciated by a person skilled in the art that step 200 and step 204 can be performed simultaneously or one after the other, in any order, and step 200 is not necessarily performed before step 204. Optionally, analysis engines such as word spotting, emotion detection and others are operated on one or more interactions at step 204, and supply indications related to the interactions. At storing interactions or data step 208, the interactions captured at step 200 and/or the data captured at step 204 are optionally stored for later examination. This step may be omitted if further analysis related to performance is performed online and not upon stored data. At optional step 212, interactions to be evaluated are selected. If processing power is not limited, it would be desirable to evaluate all interactions, so that all interactions are processed and recommendations are issued to a user for focusing on important interactions or groups of interactions. However, if this is not possible, a selection is performed at step 212 for selecting interactions according to criteria. The criteria can be related to the agent, to the interaction, to the product, to a campaign, to the environment or to the processing capacity of the computing platforms. 
Thus, typical selection rules can be, for example: at least one interaction associated with each agent should be evaluated every predetermined period of time, for example a month; alternatively, an agent should be evaluated every week, for example, during his first month and only then every month; an evaluated interaction should have a minimal and/or a maximal duration, in order to avoid insignificant interactions on one hand, and too long and processing-power-consuming interactions on the other hand; in case of backlog, a newer interaction should be prioritized for evaluation over an older one; interactions related to a predetermined department within an organization should take precedence over interactions related to other departments, or the like. The criteria may also relate to factors found by the analysis engines discussed in association with step 204 above. Thus, the analysis engines may be operated as part of selection step 212 rather than during data capturing step 204. At step 216, one or more interactions selected at step 212 are evaluated. The interactions are evaluated using the interaction itself, together with relevant additional information, including for example: details about the agent, such as the shift he or she was working on the day of the interaction, his or her experience, previous scoring or the like; information about previous interactions of the same customer; and other information. The evaluation is done according to features, parameters and rules gathered through training process 220 detailed below. The training process outputs elements including features, parameters, statistics, or rules according to which an interaction should be evaluated. 
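The selection rules of step 212 listed above may be sketched, as a non-limiting example, as a simple filter. The field names, duration limits and department name are hypothetical:

```python
# Illustrative sketch of step 212 selection rules; field names and
# limits are invented for this example and not mandated by the invention.

def select_for_evaluation(interaction, now_month, last_eval_month,
                          min_dur=30, max_dur=1800):
    """Apply typical selection criteria to a captured interaction."""
    # every agent is evaluated at least once per predetermined period (a month)
    if now_month - last_eval_month >= 1:
        return True
    # skip insignificant or overly long, processing-power-consuming calls
    if not (min_dur <= interaction["duration"] <= max_dur):
        return False
    # a predetermined department takes precedence over others
    return interaction.get("department") == "retention"
```

In practice such a filter would also weigh backlog age and indications supplied by the analysis engines, as the text notes.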
The features preferably relate to aspects of the interaction that should be considered, including for example: call duration, hold time, spotted compliance words, such as "good morning, company X, this is Y speaking, how may I help you", "thank you for calling company X", etc., spotted words that indicate anger, emotion level of the different parties, number of bursts of one party into the other party's speech, crosstalk duration, number of transfers, or the like. The features to be examined optionally depend on the objective of the evaluation, such as evaluating an agent, a campaign, a product or any other factor within the organization. The features may thus include characteristics of the agent, the call, the environment or other factors. The parameters include for example the specific words to be spotted, the emotion level and emotion type that provide the required indication, or the like. The characteristics used may change according to other characteristics; for example, when an indication for a satisfied customer is available for the call, testing the emotion levels may be skipped, and higher importance may be assigned to other characteristics, such as spotted compliance words. The rules preferably relate to the partial scoring that should be assigned to the results of applying each feature to the interaction, for example: if at least a predetermined percentage, such as 80%, of the required compliance words are spotted with high certainty in an interaction, then the interaction receives the maximal scoring for this aspect. The rules may be static or dynamic. A static rule, for example, would assign a fixed partial score to an emotional level. A dynamic rule may take into account the department the agent is working for and assign a differential partial score to the emotional level, depending on the service type provided by the department, the average emotion levels of all calls or the workload on the day of the interaction. 
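The static and dynamic partial-scoring rules just described may be sketched as follows. The 80% compliance threshold comes from the text; the score scale and the department-average adjustment are hypothetical assumptions:

```python
# Hedged sketch: the 80% full-credit ratio mirrors the example above;
# the 0-100 scale and the department-relative emotion rule are invented.

MAX_SCORE = 100

def compliance_partial_score(spotted, required, full_credit_ratio=0.8):
    """Static rule: maximal score when >= 80% of compliance words are spotted."""
    ratio = len(set(spotted) & set(required)) / len(required)
    return MAX_SCORE if ratio >= full_credit_ratio else round(MAX_SCORE * ratio)

def emotion_partial_score(level, department_avg):
    """Dynamic rule: score the emotion level relative to the department's
    average rather than against a fixed scale."""
    excess = max(0.0, level - department_avg)
    return max(0, MAX_SCORE - round(excess))

required = ["good", "morning", "help", "thank", "you"]
```

The same emotion level thus yields different partial scores in, say, a complaints department versus a sales department, which is the essence of a dynamic rule.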
Another rule preferably refers to assigning one or more goals or labels to the interaction, optionally based on the partial scores of the various features. Each of the total scores should preferably combine partial scores assigned to different features. For example, a total score can comprise a weighted sum of the partial scores assigned to spotted compliance words, an emotional level above a certain threshold, and a maximal number of bursts, wherein a label assigned to such a call may be "an emotional interaction", or a similar name. The rule for assigning a label or a total score may also be dynamic or static. For example, a dynamic rule may take into account the experience of an agent, and assign better total scores to a less experienced agent, wherein a more experienced agent would receive an inferior total score for the same performance. In another example, interactions with a customer who called more than a predetermined number of times recently may justify a better total score.
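The dynamic total-score rule described above, where a less experienced agent receives a more lenient total score for the same performance, may be sketched as follows. The leniency formula and all numbers are hypothetical:

```python
# Illustrative sketch of a dynamic total-score rule: a leniency bonus
# that shrinks with agent experience. The adjustment formula is invented.

def dynamic_total(partial_scores, weights, experience_months,
                  leniency_until=12, max_bonus=10):
    """Weighted sum of partial scores, plus a bonus that shrinks with experience."""
    base = sum(weights[f] * s for f, s in partial_scores.items())
    bonus = max(0, max_bonus * (leniency_until - experience_months)
                // leniency_until)
    return min(100, base + bonus)

weights = {"compliance_words": 0.6, "emotion": 0.4}
partials = {"compliance_words": 80, "emotion": 70}
novice = dynamic_total(partials, weights, experience_months=3)
veteran = dynamic_total(partials, weights, experience_months=24)
```

For identical partial scores, the novice here ends with a higher total than the veteran, matching the dynamic rule in the text.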
Referring now to Fig. 3, showing a flow chart of the main steps associated with performance evaluation step 216 of Fig. 2. At feature selection step 300, the relevant features are selected for the interaction to be evaluated. The features may be selected based on factors such as: the quality of the capturing, as some features, for example emotion detection, are more sensitive to voice quality; the computational complexity of features; automatic feature selection; the significance of the specific interaction; or others. Once the features for the specific interaction, as selected by a user in step 232 or 236, or by the system in step 240, are determined, at feature evaluation step 304 the values of the features are determined. This may include performing analyses such as word spotting, emotion detection, call flow analysis, or usage of already available data such as number of transfers or hold time. At optional step 308, a partial score is assigned to the value associated with each feature, based on rules determined during training and calibration steps 220 of Fig. 2, and at step 312 one or more total scores are assigned to the interaction, possibly through integration of the partial scores obtained at step 308, and using relevant combination rules, also obtained by training and calibration steps 220 of Fig. 2. In a preferred embodiment, analysis engines such as word spotting, emotion detection and others are operated during step 216 and supply indications for the relevant evaluated features. Alternatively, they are operated during step 308. Step 312 can be carried out without partial score determination step 308, if the goal determination is based on raw parameters, such as number of transfers, call length above a predetermined threshold or the like. At optional step 316, one or more labels are assigned to the interaction, as an evaluation of one or more goals. 
The labels are preferably based on the partial scores or on the total scores assigned to the interaction at step 312.
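The evaluation flow of steps 300 to 316 may be sketched end-to-end as follows. The feature names, scoring rules and label rule are all hypothetical examples:

```python
# Hedged end-to-end sketch of the Fig. 3 flow: feature evaluation (304),
# optional partial scoring (308), total scoring (312) and labeling (316).
# All feature names and rules here are invented for illustration.

def evaluate_interaction(interaction, features, score_rules, combine, label_rules):
    values = {f: interaction[f] for f in features}                 # step 304
    partials = {f: score_rules[f](v) for f, v in values.items()}   # step 308
    total = combine(partials)                                      # step 312
    labels = [name for name, rule in label_rules.items()
              if rule(partials, total)]                            # step 316
    return partials, total, labels

interaction = {"transfers": 3, "hold_seconds": 120}
score_rules = {"transfers": lambda n: max(0, 100 - 25 * n),
               "hold_seconds": lambda s: max(0, 100 - s // 3)}
combine = lambda p: sum(p.values()) / len(p)
label_rules = {"for follow-up": lambda p, t: t < 50}
partials, total, labels = evaluate_interaction(
    interaction, ["transfers", "hold_seconds"], score_rules, combine, label_rules)
```

As the text notes, step 308 may be skipped entirely when labels are derived from raw parameters such as the number of transfers.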
Referring now back to Fig. 2, at step 224 the evaluation results, including the partial and total scores, are stored for later retrieval, for example during future evaluation of the same agent, evaluations of other agents, statistics or any other purpose. At step 226 the results are optionally visualized, using any visualization tool or methodology, such as graphs, tables, connection networks, reports, or the like. At step 248, such further analysis or other usage is performed. Once a substantial number of interactions are evaluated, their results have statistical significance, and they can be used for deducing organizational parameters such as quality drivers for agents, abnormal behavior of agents, reasons for inefficient service, such as too long interactions, or the like. Tools including data mining and various statistical analyses can be used for designing predictive scoring models, discovering behavior patterns, and discovering behavior trends and exceptions relevant for the organization. At step 228, one or more interactions are transferred to a supervisor, a compliance officer or any other user, including a person or a system, for further evaluation. Such interactions are preferably interactions for which one or more partial scores or a total score is exceptionally good or bad. For example, an exceptionally "bad" call, i.e. a call that received a total score significantly under the average, can be sent to the agent himself, to enable self-learning. Thus, it would be desired to indicate an interaction with a high emotional level to a supervisor or a manager, especially when the evaluation is performed in real-time or in near-real-time, i.e. the vocal stream or a link or reference thereto, and relevant information, is transferred for evaluation or intervention during or shortly after the interaction ended. At that time, intervention in the situation might still be possible and the chances of improvement may be higher than later on. 
In another preferred embodiment, interactions are evaluated and indicated to a supervisor on a periodical basis. In yet another alternative, interactions that receive a total score that is close to the average of the total scores of an agent are transferred to a supervisor for periodical evaluation. In yet another preferred embodiment, a supervisor or another user may indicate parameters for an interaction to be transferred to him. For example, if an evaluator indicated in a previous evaluation to an agent to be more polite, he may indicate that he prefers to receive calls related to the same agent, for which a partial score related to politeness is lower than the average.
Referring now to training steps 220, at step 232 the system receives from a user the features, such as word spotting, emotional level, workload, agent experience or the like, to be considered in the evaluation. The features may relate to the call itself, to metadata thereof, to data extracted from the call, to the agent or the customer participating in the call, to the environment, or to the organization. The user optionally further supplies the parameters relevant for each feature, such as words to be spotted, and the rules, for example the partial score associated with a predetermined range of emotional level. The user also has to supply the rule for determining one or more total scores for an interaction. Further, the user may supply one or more rules for assigning one or more labels to the interaction; for example, a customer satisfaction goal may be associated with an average of the partial scores assigned for spotted words, emotional level and number of bursts. Step 236 is an alternative to step 232. In step 236, the user indicates for a multiplicity of historic interactions the features and the parameters according to which a feature is to be evaluated within an interaction, such as the emotional level, number of bursts, etc., and the way to determine the partial score, for example the maximal acceptable range of emotional level in an interaction. The user then evaluates, i.e. provides labels for, multiple interactions and assigns one or more total scores to each exemplary interaction. The system then evaluates the partial scores for the interaction, and determines, according to the partial scores and the total scores provided by the user, the rules according to which the partial scores are to be composed into goals. 
Alternatively, the training phase is also used for training a sub-system, such as the emotion analysis sub-system, and the user also supplies the partial scores assigned to features of the exemplary interactions, or some of them, to make the system more robust. In yet another alternative, at step 240, the user supplies only the labels for the training interactions or the total scores, and the system deduces the used features, parameters and rules. The features that should be considered for the goals may be determined by feature selection, through identification of which features are more dominant in the goal calculation. The features and rules may be determined based on techniques such as artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, machine learning, or others. In yet another alternative, the system may omit the use of an evaluator, and use customer feedback available for the interaction as a proxy for an evaluator-provided total score. Alternative steps 232, 236, and 240 differ in the division of workload, between the user training the system and the system itself, in determining the relevant features, parameters and rules. The more details provided by the user, the more time-consuming the process. People skilled in the art will appreciate that other divisions of work between the user and the system may be implemented. For example, a user can provide accurate parameters and scoring rules for some features, such as compliance words which must be pronounced, and less accurate details for other features, such as silence within the interaction. For the less specific details, the system will complete the deficiencies. The determination of partial scores and rules can employ methods and techniques known in the fields of artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, machine learning and others. 
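As a hedged illustration of rule deduction in the spirit of step 240, the weights of a weighted-sum rule can be recovered from historic interactions and their evaluator-supplied total scores by ordinary least squares. This is a stand-in for the broader machine-learning techniques the text lists, restricted here to two features and no intercept, solved via the 2x2 normal equations:

```python
# Illustrative sketch: fit weighted-sum rule weights from labeled
# historic interactions. Two features, no intercept; the examples and
# "true" weights are invented for this demonstration.

def fit_weights(examples, totals):
    """examples: list of (f1, f2) partial scores; totals: target total scores."""
    a = sum(f1 * f1 for f1, _ in examples)
    b = sum(f1 * f2 for f1, f2 in examples)
    d = sum(f2 * f2 for _, f2 in examples)
    c1 = sum(f1 * t for (f1, _), t in zip(examples, totals))
    c2 = sum(f2 * t for (_, f2), t in zip(examples, totals))
    det = a * d - b * b                  # solve the 2x2 normal equations
    return ((d * c1 - b * c2) / det, (a * c2 - b * c1) / det)

# Historic interactions scored by an evaluator applying weights (0.7, 0.3):
examples = [(80, 60), (50, 90), (70, 40)]
totals = [0.7 * f1 + 0.3 * f2 for f1, f2 in examples]
w1, w2 = fit_weights(examples, totals)
```

A production system would of course use richer models, regularization and many more features, as the enumerated techniques suggest.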
At step 244 the features, parameters, rules and other related data such as statistical models, users' voice models or other information are stored for later retrieval during evaluations at step 216.
As an optional addition, the system may record the values assigned by different evaluators to partial scores or to labels assigned to goals, and compare them. Thus, the system can issue notifications about evaluators who typically assign total scores significantly higher or lower than the average. In yet another embodiment, the evaluator performing the evaluation is provided with interactions that were earlier classified, by assigning total scores or labels, by the system or by another evaluator. The interactions are preferably mixed, so that interactions representing a wide range of labels are presented to the evaluator, preferably without the assigned partial or total scores. This provides for a more typical division of partial or total scores, and more accurate rules. Training and calibration steps can be performed not only for initial training of the system, but also at a later time, for fine-tuning or calibration of the system.
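The evaluator-comparison idea above may be sketched as follows: flag evaluators whose average total score deviates from the overall average by more than a chosen margin. The margin and the example scores are illustrative assumptions:

```python
# Hedged sketch of evaluator calibration: the margin and evaluator
# scores are invented; a real system might use variance-based tests.

def biased_evaluators(scores_by_evaluator, margin=10):
    """scores_by_evaluator: dict mapping evaluator name -> list of total scores.
    Returns the names whose mean score deviates from the overall mean."""
    all_scores = [s for scores in scores_by_evaluator.values() for s in scores]
    overall = sum(all_scores) / len(all_scores)
    return sorted(name for name, scores in scores_by_evaluator.items()
                  if abs(sum(scores) / len(scores) - overall) > margin)

scores = {"eve": [90, 95, 92], "sam": [70, 68, 72], "kim": [71, 69, 73]}
```

Here "eve" scores well above the pooled average and would be flagged for calibration.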
The disclosed invention overcomes the problems of manual evaluations, and provides significant advantages: the capacity of quality monitoring increases, and when sufficient computing power is available, total QM (quality monitoring) can be achieved, in which all interactions are evaluated; due to the increased number of evaluated interactions, the overall quality measurement accuracy increases, too; quality drivers can be identified on one hand, and abnormal behavior of agents can be detected on the other hand; calls can be profiled in an efficient manner; quality measurements can be fine-tuned while avoiding human and statistical bias; the interactions are evaluated using common evaluation methods, which are objective and not biased by a specific evaluator; critical measures for business performance monitoring can be determined, such as acceptable waiting times in the incoming queue; performance quality may be increased while service and quality management costs are decreased; and it is possible to obtain real-time indication of problematic interactions. It should be appreciated that other methods, which may include one or more of the suggested steps, may be designed to suitably perform the concepts of the present invention in other similar manners. Such alternative methods and modes are also covered by the present invention.
It should further be appreciated that a human evaluator can perform the evaluation according to the organization's methodology and procedures, while receiving information from analysis engines, such as indications of spotted words, areas of high emotional level or the like. Such information may save the evaluator time in listening to the whole interaction and direct him or her to the relevant areas. Thus, the evaluator will review the interaction, receive the additional information and provide an evaluation for the interaction based on the interaction itself or the auxiliary data.
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather the scope of the present invention is defined only by the claims which follow.

Claims

1. A method for automated performance evaluation of an at least one current interaction between a calling party and a personnel member of an organization, the method comprising: a training and calibration step for obtaining an at least one rule for determining an at least one total score for an at least one historic interaction, said rule depending on an at least one feature; a feature evaluation step for determining an at least one value of the at least one feature, in association with the at least one current interaction; and a total score determination step for integrating the at least one value into the at least one total score evaluation for the current interaction, using the at least one rule.
2. The method of claim 1 further comprising a goal determination step for associating an at least one label to an at least one goal associated with the current interaction.
3. The method of claim 1 wherein the at least one current interaction comprises a vocal component.
4. The method of claim 1 wherein the training step comprises receiving the at least one feature and the at least one rule.
5. The method of claim 1 wherein the training step comprises: receiving an at least one historic interaction; receiving an at least one label for an at least one goal for each of the at least one historic interaction; and determining the at least one rule.
6. The method of claim 5 further comprising a step of receiving the at least one feature.
7. The method of claim 5 further comprising a step of deducing the at least one feature.
8. The method of claim 5 wherein determining the at least one rule is performed using any one or more of the group consisting of: artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, or machine learning.
9. The method of claim 1 further comprising a step of visualizing the at least one total score.
10. The method of claim 1 further comprising a step of visualizing the at least one goal.
11. The method of claim 7 wherein deducing the at least one feature is performed using any one or more of the group consisting of: artificial intelligence, fuzzy logic, data mining, statistics, pattern recognition, classification, or machine learning.
12. The method of claim 1 further comprising a partial score determination step for determining according to an at least one second rule an at least one partial score for the at least one current interaction, the at least one partial score associated with the at least one feature.
13. The method of claim 12 further comprising the step of storing the at least one partial score.
14. The method of claim 12 further comprising a step of visualizing the at least one partial score.
15. The method of claim 1 wherein the at least one feature is taken from the group consisting of: an at least one word spotted in the at least one interaction, an emotion level detected in the at least one interaction, talk over percentage, number of bursts in the at least one interaction, percentage of silence, number of participants in the at least one interaction, number of transfers in the at least one interaction, number of holds in the at least one interaction, hold time in the at least one interaction, abandon from hold time in the at least one interaction, hang-up side of the at least one interaction, abandon from queue time in the at least one interaction, start and end time of the at least one interaction, agent time in the at least one interaction, customer time in the at least one interaction, ring time in the at least one interaction, call wrap up time of the at least one interaction, personnel member identification, personnel member status, personnel member hire date, personnel member grade, personnel member skills, personnel member department, personnel member location, personnel member working hours, personnel member workload, personnel member previous evaluations, an at least one screen event on a computing platform operated by the personnel member, information from workflow management system, information from customer relationship management system, information from billing system, or any other information relating to the customer.
16. The method of claim 1 further comprising a step of capturing the at least one interaction.
17. The method of claim 1 further comprising a step of capturing additional information.
18. The method of claim 17 wherein the additional information relates to any of the group consisting of: the at least one interaction; the personnel member; the calling party; the organization; or a part of the organization.
19. The method of claim 1 further comprising a step of indicating the at least one current interaction to an evaluator.
20. The method of claim 1 further comprising a step of performing further analysis on the at least one total score.
21. The method of claim 1 further comprising a step of further analysis related to the at least one current interaction, or to the at least one goal.
22. The method of claim 1 wherein the at least one total score is related to the personnel member.
23. The method of claim 1 wherein the at least one total score is related to an at least one product associated with the organization.
24. The method of claim 1 wherein the at least one total score is related to an at least one campaign associated with the organization.
25. An apparatus for automatically evaluating an at least one current interaction between a calling party and a personnel member of an organization, the apparatus comprising: a training component for obtaining an at least one feature and an at least one rule for evaluating an at least one historic interaction; and an automated quality monitoring component for obtaining an at least one score for the at least one current interaction, using the at least one rule.
26. The apparatus of claim 25 further comprising an at least one capturing component for capturing the at least one interaction.
27. The apparatus of claim 25 further comprising an at least one capturing component for capturing additional data.
28. The apparatus of claim 25 further comprising an alert generation component for generating an alert when the at least one score exceeds a predetermined threshold.
29. The apparatus of claim 25 further comprising a storage device for storing the at least one interaction.
30. The apparatus of claim 27 further comprising a storage device for storing the additional data.
31. The apparatus of claim 25 further comprising a partial score determination component for determining according to an at least one second rule an at least one partial score for the at least one current interaction.
32. The apparatus of claim 31 further comprising an alert generation component for generating an alert when the at least one partial score exceeds a predetermined threshold.
33. A computer readable storage medium containing a set of instructions for a general purpose computer, the set of instructions comprising: a training step for obtaining an at least one rule for determining an at least one score for an at least one historic interaction, said rule depending on an at least one feature; a feature evaluation step for determining an at least one value of the at least one feature, in association with the at least one current interaction; and a score determination step for integrating the at least one value into the at least one score for the current interaction, using the at least one rule.
34. A method for performance evaluation of an at least one interaction between a calling party and a personnel member of an organization, the method comprising: reviewing the at least one interaction; receiving an at least one data item related to the interaction; and evaluating the interaction using the at least one data item.
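The scoring flow recited in claims 1, 5 and 12 (a training/calibration step that derives a rule from historic interactions, a feature evaluation step, and partial/total score determination) can be sketched as follows. This is a minimal, purely illustrative sketch: the feature names, the historic values, and the calibration rule itself (per-feature ranges with "lower is better" partial scores averaged into a total) are assumptions for illustration, not the patent's actual method.

```python
# Illustrative sketch of the claimed flow. All feature names, values and
# the concrete scoring rule are hypothetical assumptions.

# Historic interactions: measured feature values per interaction.
HISTORIC = [
    {"emotion_level": 0.2, "silence_pct": 0.10, "spotted_words": 0},
    {"emotion_level": 0.7, "silence_pct": 0.30, "spotted_words": 3},
    {"emotion_level": 0.4, "silence_pct": 0.15, "spotted_words": 1},
]

def train_rule(historic):
    """Training/calibration step: derive per-feature (min, max) ranges
    from the historic interactions; the ranges serve as the rule."""
    rule = {}
    for name in historic[0]:
        values = [interaction[name] for interaction in historic]
        rule[name] = (min(values), max(values))
    return rule

def partial_scores(rule, features):
    """Partial score per feature (cf. claim 12): 100 at the best
    observed value, 0 at the worst, linear in between. Assumes lower
    feature values are better, which is a simplification."""
    scores = {}
    for name, (lo, hi) in rule.items():
        span = (hi - lo) or 1.0
        v = min(max(features[name], lo), hi)  # clip to calibrated range
        scores[name] = round(100.0 * (1.0 - (v - lo) / span), 1)
    return scores

def total_score(rule, features):
    """Total score determination step: integrate the partial scores
    under the rule (here, an unweighted mean)."""
    scores = partial_scores(rule, features)
    return round(sum(scores.values()) / len(scores), 1)

rule = train_rule(HISTORIC)
current = {"emotion_level": 0.7, "silence_pct": 0.30, "spotted_words": 3}
print(total_score(rule, current))  # worst observed values -> 0.0
```

A threshold comparison on the returned total (or on any partial score) would then drive the alert generation of claims 28 and 32, e.g. raising an alert whenever the score falls below a configured bound.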
PCT/IL2006/001474 2006-12-20 2006-12-21 Method and system for automatic quality evaluation WO2008075329A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/613,203 US7577246B2 (en) 2006-12-20 2006-12-20 Method and system for automatic quality evaluation
US11/613,203 2006-12-20

Publications (2)

Publication Number Publication Date
WO2008075329A2 true WO2008075329A2 (en) 2008-06-26
WO2008075329A3 WO2008075329A3 (en) 2009-04-09

Family

ID=39536814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2006/001474 WO2008075329A2 (en) 2006-12-20 2006-12-21 Method and system for automatic quality evaluation

Country Status (2)

Country Link
US (1) US7577246B2 (en)
WO (1) WO2008075329A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240046191A1 (en) * 2022-07-27 2024-02-08 Nice Ltd. System and method for quality planning data evaluation using target kpis

Families Citing this family (171)

Publication number Priority date Publication date Assignee Title
US7953219B2 (en) * 2001-07-19 2011-05-31 Nice Systems, Ltd. Method apparatus and system for capturing and analyzing interaction based content
US8204884B2 (en) * 2004-07-14 2012-06-19 Nice Systems Ltd. Method, apparatus and system for capturing and analyzing interaction based content
WO2007120985A2 (en) * 2006-02-22 2007-10-25 Federal Signal Corporation Public safety warning network
US9346397B2 (en) 2006-02-22 2016-05-24 Federal Signal Corporation Self-powered light bar
US20070194906A1 (en) * 2006-02-22 2007-08-23 Federal Signal Corporation All hazard residential warning system
US9129290B2 (en) * 2006-02-22 2015-09-08 24/7 Customer, Inc. Apparatus and method for predicting customer behavior
US7476013B2 (en) * 2006-03-31 2009-01-13 Federal Signal Corporation Light bar and method for making
US9002313B2 (en) * 2006-02-22 2015-04-07 Federal Signal Corporation Fully integrated light bar
US7746794B2 (en) * 2006-02-22 2010-06-29 Federal Signal Corporation Integrated municipal management console
US7752043B2 (en) 2006-09-29 2010-07-06 Verint Americas Inc. Multi-pass speech analytics
US7822605B2 (en) * 2006-10-19 2010-10-26 Nice Systems Ltd. Method and apparatus for large population speaker identification in telephone interactions
US8199901B2 (en) * 2007-01-04 2012-06-12 Xora, Inc. Method and apparatus for customer retention
US9686367B2 (en) * 2007-03-15 2017-06-20 Scenera Technologies, Llc Methods, systems, and computer program products for providing predicted likelihood of communication between users
US8107613B2 (en) * 2007-03-23 2012-01-31 Avaya Inc. Context recovery for call center agents
US7707062B2 (en) * 2007-05-17 2010-04-27 Michael Abramowicz Method and system of forecasting customer satisfaction with potential commercial transactions
US10419611B2 (en) * 2007-09-28 2019-09-17 Mattersight Corporation System and methods for determining trends in electronic communications
US8903079B2 (en) * 2008-01-28 2014-12-02 Satmap International Holdings Limited Routing callers from a set of callers based on caller data
US8781100B2 (en) * 2008-01-28 2014-07-15 Satmap International Holdings Limited Probability multiplier process for call center routing
US9787841B2 (en) 2008-01-28 2017-10-10 Afiniti Europe Technologies Limited Techniques for hybrid behavioral pairing in a contact center system
US9692898B1 (en) 2008-01-28 2017-06-27 Afiniti Europe Technologies Limited Techniques for benchmarking paring strategies in a contact center system
US10708430B2 (en) 2008-01-28 2020-07-07 Afiniti Europe Technologies Limited Techniques for benchmarking pairing strategies in a contact center system
US20090190745A1 (en) * 2008-01-28 2009-07-30 The Resource Group International Ltd Pooling callers for a call center routing system
US10750023B2 (en) 2008-01-28 2020-08-18 Afiniti Europe Technologies Limited Techniques for hybrid behavioral pairing in a contact center system
US9712676B1 (en) 2008-01-28 2017-07-18 Afiniti Europe Technologies Limited Techniques for benchmarking pairing strategies in a contact center system
US9654641B1 (en) 2008-01-28 2017-05-16 Afiniti International Holdings, Ltd. Systems and methods for routing callers to an agent in a contact center
US9712679B2 (en) 2008-01-28 2017-07-18 Afiniti International Holdings, Ltd. Systems and methods for routing callers to an agent in a contact center
US9781269B2 (en) 2008-01-28 2017-10-03 Afiniti Europe Technologies Limited Techniques for hybrid behavioral pairing in a contact center system
US10708431B2 (en) 2008-01-28 2020-07-07 Afiniti Europe Technologies Limited Techniques for hybrid behavioral pairing in a contact center system
US10567586B2 (en) * 2008-11-06 2020-02-18 Afiniti Europe Technologies Limited Pooling callers for matching to agents based on pattern matching algorithms
US8824658B2 (en) 2008-11-06 2014-09-02 Satmap International Holdings Limited Selective mapping of callers in a call center routing system
US8670548B2 (en) * 2008-01-28 2014-03-11 Satmap International Holdings Limited Jumping callers held in queue for a call center routing system
US9300802B1 (en) 2008-01-28 2016-03-29 Satmap International Holdings Limited Techniques for behavioral pairing in a contact center system
US8718271B2 (en) * 2008-01-28 2014-05-06 Satmap International Holdings Limited Call routing methods and systems based on multiple variable standardized scoring
US8879715B2 (en) 2012-03-26 2014-11-04 Satmap International Holdings Limited Call mapping systems and methods using variance algorithm (VA) and/or distribution compensation
US9774740B2 (en) 2008-01-28 2017-09-26 Afiniti Europe Technologies Limited Techniques for benchmarking pairing strategies in a contact center system
US8797377B2 (en) 2008-02-14 2014-08-05 Cisco Technology, Inc. Method and system for videoconference configuration
US8316089B2 (en) * 2008-05-06 2012-11-20 Microsoft Corporation Techniques to manage media content for a multimedia conference event
US8199896B2 (en) * 2008-05-22 2012-06-12 Nice Systems Ltd. Session board controller based post call routing for customer feedback application
US20100002864A1 (en) * 2008-07-02 2010-01-07 International Business Machines Corporation Method and System for Discerning Learning Characteristics of Individual Knowledge Worker and Associated Team In Service Delivery
US20100020959A1 (en) * 2008-07-28 2010-01-28 The Resource Group International Ltd Routing callers to agents based on personality data of agents
US8781106B2 (en) * 2008-08-29 2014-07-15 Satmap International Holdings Limited Agent satisfaction data for call routing based on pattern matching algorithm
US8694658B2 (en) 2008-09-19 2014-04-08 Cisco Technology, Inc. System and method for enabling communication sessions in a network environment
US20100111288A1 (en) * 2008-11-06 2010-05-06 Afzal Hassan Time to answer selector and advisor for call routing center
USRE48412E1 (en) * 2008-11-06 2021-01-26 Afiniti, Ltd. Balancing multiple computer models in a call center routing system
US8472611B2 (en) * 2008-11-06 2013-06-25 The Resource Group International Ltd. Balancing multiple computer models in a call center routing system
TWI384423B (en) * 2008-11-26 2013-02-01 Ind Tech Res Inst Alarm method and system based on voice events, and building method on behavior trajectory thereof
US8659637B2 (en) 2009-03-09 2014-02-25 Cisco Technology, Inc. System and method for providing three dimensional video conferencing in a network environment
US20120059687A1 (en) * 2009-03-18 2012-03-08 Allen Ross Keyte Organisational tool
US8719016B1 (en) 2009-04-07 2014-05-06 Verint Americas Inc. Speech analytics system and system and method for determining structured speech
US8659639B2 (en) 2009-05-29 2014-02-25 Cisco Technology, Inc. System and method for extending communications between participants in a conferencing environment
US20100332287A1 (en) * 2009-06-24 2010-12-30 International Business Machines Corporation System and method for real-time prediction of customer satisfaction
US20100332286A1 (en) * 2009-06-24 2010-12-30 At&T Intellectual Property I, L.P., Predicting communication outcome based on a regression model
US9082297B2 (en) 2009-08-11 2015-07-14 Cisco Technology, Inc. System and method for verifying parameters in an audiovisual environment
US20110153642A1 (en) * 2009-12-21 2011-06-23 International Business Machines Corporation Client Relationship Management
US8977584B2 (en) * 2010-01-25 2015-03-10 Newvaluexchange Global Ai Llp Apparatuses, methods and systems for a digital conversation management platform
US8715178B2 (en) * 2010-02-18 2014-05-06 Bank Of America Corporation Wearable badge with sensor
US9138186B2 (en) * 2010-02-18 2015-09-22 Bank Of America Corporation Systems for inducing change in a performance characteristic
US8715179B2 (en) * 2010-02-18 2014-05-06 Bank Of America Corporation Call center quality management tool
US8958541B1 (en) * 2010-03-03 2015-02-17 West Corporation Intelligent network-based voice and data recording
US9225916B2 (en) 2010-03-18 2015-12-29 Cisco Technology, Inc. System and method for enhancing video images in a conferencing environment
US8301475B2 (en) * 2010-05-10 2012-10-30 Microsoft Corporation Organizational behavior monitoring analysis and influence
US8428246B2 (en) * 2010-05-12 2013-04-23 Verizon Patent And Licensing Inc. Unified customer service interactions
US9313452B2 (en) 2010-05-17 2016-04-12 Cisco Technology, Inc. System and method for providing retracting optics in a video conferencing environment
US20110307258A1 (en) * 2010-06-10 2011-12-15 Nice Systems Ltd. Real-time application of interaction anlytics
US8589384B2 (en) 2010-08-25 2013-11-19 International Business Machines Corporation Methods and arrangements for employing descriptors for agent-customer interactions
US8724797B2 (en) 2010-08-26 2014-05-13 Satmap International Holdings Limited Estimating agent performance in a call routing center system
US8699694B2 (en) 2010-08-26 2014-04-15 Satmap International Holdings Limited Precalculated caller-agent pairs for a call center routing system
US8750488B2 (en) 2010-08-31 2014-06-10 Satmap International Holdings Limited Predicted call time as routing variable in a call routing center system
US8896655B2 (en) 2010-08-31 2014-11-25 Cisco Technology, Inc. System and method for providing depth adaptive video conferencing
US8599934B2 (en) 2010-09-08 2013-12-03 Cisco Technology, Inc. System and method for skip coding during video conferencing in a network environment
US8599865B2 (en) 2010-10-26 2013-12-03 Cisco Technology, Inc. System and method for provisioning flows in a mobile network environment
US8699457B2 (en) 2010-11-03 2014-04-15 Cisco Technology, Inc. System and method for managing flows in a mobile network environment
US9338394B2 (en) 2010-11-15 2016-05-10 Cisco Technology, Inc. System and method for providing enhanced audio in a video environment
US8730297B2 (en) 2010-11-15 2014-05-20 Cisco Technology, Inc. System and method for providing camera functions in a video environment
US8902244B2 (en) 2010-11-15 2014-12-02 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US9143725B2 (en) 2010-11-15 2015-09-22 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US8723914B2 (en) 2010-11-19 2014-05-13 Cisco Technology, Inc. System and method for providing enhanced video processing in a network environment
US9111138B2 (en) 2010-11-30 2015-08-18 Cisco Technology, Inc. System and method for gesture interface control
US8692862B2 (en) * 2011-02-28 2014-04-08 Cisco Technology, Inc. System and method for selection of video data in a video conference environment
US8996426B2 (en) 2011-03-02 2015-03-31 Hewlett-Packard Development Company, L. P. Behavior and information model to yield more accurate probability of successful outcome
US8670019B2 (en) 2011-04-28 2014-03-11 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
US8786631B1 (en) 2011-04-30 2014-07-22 Cisco Technology, Inc. System and method for transferring transparency information in a video environment
US8934026B2 (en) 2011-05-12 2015-01-13 Cisco Technology, Inc. System and method for video coding in a dynamic environment
US8527310B1 (en) * 2011-07-22 2013-09-03 Alcatel Lucent Method and apparatus for customer experience management
US20150220857A1 (en) * 2011-10-10 2015-08-06 Syntel, Inc. Store service workbench
US8947493B2 (en) 2011-11-16 2015-02-03 Cisco Technology, Inc. System and method for alerting a participant in a video conference
US8682087B2 (en) 2011-12-19 2014-03-25 Cisco Technology, Inc. System and method for depth-guided image filtering in a video conference environment
US9479642B2 (en) 2012-01-26 2016-10-25 Zoom International S.R.O. Enhanced quality monitoring
US9229684B2 (en) * 2012-01-30 2016-01-05 International Business Machines Corporation Automated corruption analysis of service designs
JP5663514B2 (en) 2012-03-14 2015-02-04 京セラドキュメントソリューションズ株式会社 Image forming apparatus
US9025757B2 (en) 2012-03-26 2015-05-05 Satmap International Holdings Limited Call mapping systems and methods using bayesian mean regression (BMR)
US9270711B1 (en) * 2012-04-10 2016-02-23 Google Inc. System and method for aggregating feedback
US9036888B2 (en) * 2012-04-30 2015-05-19 General Electric Company Systems and methods for performing quality review scoring of biomarkers and image analysis methods for biological tissue
WO2013184667A1 (en) 2012-06-05 2013-12-12 Rank Miner, Inc. System, method and apparatus for voice analytics of recorded audio
US20150149223A1 (en) * 2012-06-19 2015-05-28 Nec Corporation Motivation management device, motivation management method, and computer-readable recording medium
US8917853B2 (en) 2012-06-19 2014-12-23 International Business Machines Corporation Enhanced customer experience through speech detection and analysis
US9386144B2 (en) * 2012-08-07 2016-07-05 Avaya Inc. Real-time customer feedback
US9213781B1 (en) 2012-09-19 2015-12-15 Placemeter LLC System and method for processing image data
US8792630B2 (en) 2012-09-24 2014-07-29 Satmap International Holdings Limited Use of abstracted data in pattern matching system
US8478621B1 (en) 2012-10-08 2013-07-02 State Farm Mutual Automobile Insurance Company Customer satisfaction dashboard
US20150310877A1 (en) * 2012-10-31 2015-10-29 Nec Corporation Conversation analysis device and conversation analysis method
US9087131B1 (en) * 2012-12-18 2015-07-21 Google Inc. Auto-summarization for a multiuser communication session
US20140244762A1 (en) * 2013-02-26 2014-08-28 Facebook, Inc. Application distribution platform for rating and recommending applications
US20140337077A1 (en) * 2013-05-08 2014-11-13 VoloForce, LLC Task assignment and verification system and method
US9843621B2 (en) 2013-05-17 2017-12-12 Cisco Technology, Inc. Calendaring activities based on communication processing
US9830567B2 (en) * 2013-10-25 2017-11-28 Location Labs, Inc. Task management system and method
US9569743B2 (en) * 2014-01-31 2017-02-14 Verint Systems Ltd. Funnel analysis
US10026047B2 (en) 2014-03-04 2018-07-17 International Business Machines Corporation System and method for crowd sourcing
WO2015184440A2 (en) 2014-05-30 2015-12-03 Placemeter Inc. System and method for activity monitoring using video data
US9730085B2 (en) 2014-06-30 2017-08-08 At&T Intellectual Property I, L.P. Method and apparatus for managing wireless probe devices
US9661126B2 (en) 2014-07-11 2017-05-23 Location Labs, Inc. Driving distraction reduction system and method
US9781270B2 (en) * 2014-08-01 2017-10-03 Genesys Telecommunications Laboratories, Inc. System and method for case-based routing for a contact
US9848084B2 (en) 2014-08-01 2017-12-19 Genesys Telecommunications Laboratories, Inc. Adaptable business objective routing for a contact center
US11621932B2 (en) * 2014-10-31 2023-04-04 Avaya Inc. System and method for managing resources of an enterprise
US9118763B1 (en) 2014-12-09 2015-08-25 Five9, Inc. Real time feedback proxy
US10043078B2 (en) 2015-04-21 2018-08-07 Placemeter LLC Virtual turnstile system and method
US10380431B2 (en) 2015-06-01 2019-08-13 Placemeter LLC Systems and methods for processing video streams
US10237767B2 (en) * 2015-06-16 2019-03-19 Telefonaktiebolaget Lm Ericsson (Publ) Method and score management node for supporting evaluation of a delivered service
CN104991968B (en) * 2015-07-24 2018-04-20 成都云堆移动信息技术有限公司 The Internet media user property analysis method based on text mining
EP3506613A1 (en) * 2015-10-14 2019-07-03 Pindrop Security, Inc. Call detail record analysis to identify fraudulent activity and fraud detection in interactive voice response systems
US20170116616A1 (en) * 2015-10-27 2017-04-27 International Business Machines Corporation Predictive tickets management
US9538007B1 (en) * 2015-11-03 2017-01-03 Xerox Corporation Customer relationship management system based on electronic conversations
CN113095662B (en) 2015-12-01 2024-03-19 阿菲尼帝有限公司 Techniques for case distribution
US10438171B2 (en) * 2016-01-28 2019-10-08 Tata Consultancy Services Limited Method and system for real-time human resource activity impact assessment and real-time improvement
US10142473B1 (en) 2016-06-08 2018-11-27 Afiniti Europe Technologies Limited Techniques for benchmarking performance in a contact center system
US9692899B1 (en) 2016-08-30 2017-06-27 Afiniti Europe Technologies Limited Techniques for benchmarking pairing strategies in a contact center system
US10510339B2 (en) 2016-11-07 2019-12-17 Unnanu, LLC Selecting media using weighted key words
US9888121B1 (en) 2016-12-13 2018-02-06 Afiniti Europe Technologies Limited Techniques for behavioral pairing model evaluation in a contact center system
US9955013B1 (en) 2016-12-30 2018-04-24 Afiniti Europe Technologies Limited Techniques for L3 pairing in a contact center system
US10326882B2 (en) 2016-12-30 2019-06-18 Afiniti Europe Technologies Limited Techniques for workforce management in a contact center system
US10320984B2 (en) 2016-12-30 2019-06-11 Afiniti Europe Technologies Limited Techniques for L3 pairing in a contact center system
US11831808B2 (en) 2016-12-30 2023-11-28 Afiniti, Ltd. Contact center system
US10257354B2 (en) 2016-12-30 2019-04-09 Afiniti Europe Technologies Limited Techniques for L3 pairing in a contact center system
US10642889B2 (en) 2017-02-20 2020-05-05 Gong I.O Ltd. Unsupervised automated topic detection, segmentation and labeling of conversations
US10135986B1 (en) 2017-02-21 2018-11-20 Afiniti International Holdings, Ltd. Techniques for behavioral pairing model evaluation in a contact center system
US10970658B2 (en) 2017-04-05 2021-04-06 Afiniti, Ltd. Techniques for behavioral pairing in a dispatch center system
US9930180B1 (en) 2017-04-28 2018-03-27 Afiniti, Ltd. Techniques for behavioral pairing in a contact center system
US10582055B2 (en) * 2017-06-27 2020-03-03 Genesys Telecommunications Laboratories, Inc. System and method for managing contact center system
US10122860B1 (en) 2017-07-10 2018-11-06 Afiniti Europe Technologies Limited Techniques for estimating expected performance in a task assignment system
US10594861B2 (en) * 2017-09-28 2020-03-17 Plantronics, Inc. Forking transmit and receive call audio channels
US10509669B2 (en) 2017-11-08 2019-12-17 Afiniti Europe Technologies Limited Techniques for benchmarking pairing strategies in a task assignment system
US10110746B1 (en) 2017-11-08 2018-10-23 Afiniti Europe Technologies Limited Techniques for benchmarking pairing strategies in a task assignment system
US11399096B2 (en) 2017-11-29 2022-07-26 Afiniti, Ltd. Techniques for data matching in a contact center system
US10509671B2 (en) 2017-12-11 2019-12-17 Afiniti Europe Technologies Limited Techniques for behavioral pairing in a task assignment system
US10003688B1 (en) 2018-02-08 2018-06-19 Capital One Services, Llc Systems and methods for cluster-based voice verification
US10623565B2 (en) 2018-02-09 2020-04-14 Afiniti Europe Technologies Limited Techniques for behavioral pairing in a contact center system
US11276407B2 (en) 2018-04-17 2022-03-15 Gong.Io Ltd. Metadata-based diarization of teleconferences
US10593350B2 (en) 2018-04-21 2020-03-17 International Business Machines Corporation Quantifying customer care utilizing emotional assessments
US20190354935A1 (en) * 2018-05-17 2019-11-21 Microsoft Technology Licensing, Llc Mitigating an Effect of Bias in a Communication System
US11250359B2 (en) 2018-05-30 2022-02-15 Afiniti, Ltd. Techniques for workforce management in a task assignment system
US11288714B2 (en) * 2018-06-29 2022-03-29 Capital One Services, Llc Systems and methods for pre-communicating shoppers communication preferences to retailers
US10496438B1 (en) 2018-09-28 2019-12-03 Afiniti, Ltd. Techniques for adapting behavioral pairing to runtime conditions in a task assignment system
US11295315B2 (en) * 2018-11-16 2022-04-05 T-Mobile Usa, Inc. Active listening using artificial intelligence for communications evaluation
US10867263B2 (en) 2018-12-04 2020-12-15 Afiniti, Ltd. Techniques for behavioral pairing in a multistage task assignment system
US11144344B2 (en) 2019-01-17 2021-10-12 Afiniti, Ltd. Techniques for behavioral pairing in a task assignment system
US11302346B2 (en) * 2019-03-11 2022-04-12 Nice Ltd. System and method for frustration detection
US11349990B1 (en) * 2019-04-02 2022-05-31 United Services Automobile Association (Usaa) Call routing system
US10757261B1 (en) 2019-08-12 2020-08-25 Afiniti, Ltd. Techniques for pairing contacts and agents in a contact center system
US11445062B2 (en) 2019-08-26 2022-09-13 Afiniti, Ltd. Techniques for behavioral pairing in a task assignment system
US10757262B1 (en) 2019-09-19 2020-08-25 Afiniti, Ltd. Techniques for decisioning behavioral pairing in a task assignment system
US11790411B1 (en) 2019-11-29 2023-10-17 Wells Fargo Bank, N.A. Complaint classification in customer communications using machine learning models
WO2021158436A1 (en) 2020-02-03 2021-08-12 Afiniti, Ltd. Techniques for behavioral pairing in a task assignment system
CN115244513A (en) 2020-02-04 2022-10-25 阿菲尼帝有限公司 Techniques for error handling in a task distribution system with an external pairing system
US11050886B1 (en) 2020-02-05 2021-06-29 Afiniti, Ltd. Techniques for sharing control of assigning tasks between an external pairing system and a task assignment system with an internal pairing system
US20220076584A1 (en) * 2020-09-09 2022-03-10 Koninklijke Philips N.V. System and method for personalized healthcare staff training
US20220092512A1 (en) * 2020-09-21 2022-03-24 Nice Ltd System and method for distributing an agent interaction to the evaluator by utilizing hold factor
US11128754B1 (en) * 2020-11-16 2021-09-21 Allstate Insurance Company Machine learning system for routing optimization based on historical performance data
US11645449B1 (en) 2020-12-04 2023-05-09 Wells Fargo Bank, N.A. Computing system for data annotation
EP4040355A1 (en) * 2021-02-08 2022-08-10 Tata Consultancy Services Limited System and method for measuring user experience of information visualizations
US11915205B2 (en) * 2021-10-15 2024-02-27 EMC IP Holding Company LLC Method and system to manage technical support sessions using ranked historical technical support sessions
US11941641B2 (en) 2021-10-15 2024-03-26 EMC IP Holding Company LLC Method and system to manage technical support sessions using historical technical support sessions

Citations (2)

Publication number Priority date Publication date Assignee Title
US6459787B2 (en) * 2000-03-02 2002-10-01 Knowlagent, Inc. Method and system for delivery of individualized training to call center agents
US7023979B1 (en) * 2002-03-07 2006-04-04 Wai Wu Telephony control system with intelligent call routing

Family Cites Families (67)

Publication number Priority date Publication date Assignee Title
US4145715A (en) 1976-12-22 1979-03-20 Electronic Management Support, Inc. Surveillance system
US4527151A (en) 1982-05-03 1985-07-02 Sri International Method and apparatus for intrusion detection
US5353618A (en) 1989-08-24 1994-10-11 Armco Steel Company, L.P. Apparatus and method for forming a tubular frame member
US5051827A (en) 1990-01-29 1991-09-24 The Grass Valley Group, Inc. Television signal encoder/decoder configuration control
US5091780A (en) 1990-05-09 1992-02-25 Carnegie-Mellon University A trainable security system method for the same
EP0484076B1 (en) 1990-10-29 1996-12-18 Kabushiki Kaisha Toshiba Video camera having focusing and image-processing function
DE69124777T2 (en) 1990-11-30 1997-06-26 Canon Kk Device for the detection of the motion vector
GB2259212B (en) 1991-08-27 1995-03-29 Sony Broadcast & Communication Standards conversion of digital video signals
GB2268354B (en) 1992-06-25 1995-10-25 Sony Broadcast & Communication Time base conversion
US5519446A (en) 1993-11-13 1996-05-21 Goldstar Co., Ltd. Apparatus and method for converting an HDTV signal to a non-HDTV signal
US5491511A (en) 1994-02-04 1996-02-13 Odle; James A. Multimedia capture and audit system for a video surveillance network
IL113434A0 (en) 1994-04-25 1995-07-31 Katz Barry Surveillance system and method for asynchronously recording digital data with respect to video data
US6028626A (en) 1995-01-03 2000-02-22 Arc Incorporated Abnormality detection and surveillance system
US5751346A (en) 1995-02-10 1998-05-12 Dozier Financial Corporation Image retention and information security system
US5796439A (en) 1995-12-21 1998-08-18 Siemens Medical Systems, Inc. Video format conversion process and apparatus
US5742349A (en) 1996-05-07 1998-04-21 Chrontel, Inc. Memory efficient video graphics subsystem with vertical filtering and scan rate conversion
US6081606A (en) 1996-06-17 2000-06-27 Sarnoff Corporation Apparatus and a method for detecting motion within an image sequence
US7304662B1 (en) 1996-07-10 2007-12-04 Visilinx Inc. Video surveillance system and method
US5895453A (en) 1996-08-27 1999-04-20 Sts Systems, Ltd. Method and system for the detection, management and prevention of losses in retail and other environments
US5790096A (en) 1996-09-03 1998-08-04 Allus Technology Corporation Automated flat panel display control system for accommodating broad range of video types and formats
US6031573A (en) 1996-10-31 2000-02-29 Sensormatic Electronics Corporation Intelligent video information management system performing multiple functions in parallel
US6037991A (en) 1996-11-26 2000-03-14 Motorola, Inc. Method and apparatus for communicating video information in a communication system
EP0858066A1 (en) 1997-02-03 1998-08-12 Koninklijke Philips Electronics N.V. Method and device for converting the digital image rate
US6295367B1 (en) 1997-06-19 2001-09-25 Emtera Corporation System and method for tracking movement of objects in a scene using correspondence graphs
US6092197A (en) 1997-12-31 2000-07-18 The Customer Logic Company, Llc System and method for the secure discovery, exploitation and publication of information
US6014647A (en) 1997-07-08 2000-01-11 Nizzari; Marcia M. Customer interaction tracking
US6108711A (en) 1998-09-11 2000-08-22 Genesys Telecommunications Laboratories, Inc. Operating system having external media layer, workflow layer, internal media layer, and knowledge base for routing media events between transactions
US6111610A (en) 1997-12-11 2000-08-29 Faroudja Laboratories, Inc. Displaying film-originated video on high frame rate monitors without motion discontinuities
US6704409B1 (en) 1997-12-31 2004-03-09 Aspect Communications Corporation Method and apparatus for processing real-time transactions and non-real-time transactions
US6327343B1 (en) 1998-01-16 2001-12-04 International Business Machines Corporation System and methods for automatic call and data transfer processing
US6138139A (en) 1998-10-29 2000-10-24 Genesys Telecommunications Laboratories, Inc. Method and apparatus for supporting diverse interaction paths within a multimedia communication center
US6170011B1 (en) 1998-09-11 2001-01-02 Genesys Telecommunications Laboratories, Inc. Method and apparatus for determining and initiating interaction directionality within a multimedia communication center
US6212178B1 (en) 1998-09-11 2001-04-03 Genesys Telecommunications Laboratories, Inc. Method and apparatus for selectively presenting media-options to clients of a multimedia call center
US6167395A (en) 1998-09-11 2000-12-26 Genesys Telecommunications Laboratories, Inc. Method and apparatus for creating specialized multimedia threads in a multimedia communication center
US6070142A (en) 1998-04-17 2000-05-30 Andersen Consulting Llp Virtual customer sales and service center and method
US6134530A (en) 1998-04-17 2000-10-17 Andersen Consulting Llp Rule based routing system and method for a virtual sales and service center
US6604108B1 (en) 1998-06-05 2003-08-05 Metasolutions, Inc. Information mart system and information mart browser
US6628835B1 (en) 1998-08-31 2003-09-30 Texas Instruments Incorporated Method and system for defining and recognizing complex events in a video sequence
US6570608B1 (en) 1998-09-30 2003-05-27 Texas Instruments Incorporated System and method for detecting interactions of people and vehicles
US6549613B1 (en) 1998-11-05 2003-04-15 Ulysses Holding Llc Method and apparatus for intercept of wireline communications
US6330025B1 (en) 1999-05-10 2001-12-11 Nice Systems Ltd. Digital video logging system
WO2000073996A1 (en) 1999-05-28 2000-12-07 Glebe Systems Pty Ltd Method and apparatus for tracking a moving object
US7103806B1 (en) 1999-06-04 2006-09-05 Microsoft Corporation System for performing context-sensitive decisions about ideal communication modalities considering information about channel reliability
US6275806B1 (en) 1999-08-31 2001-08-14 Andersen Consulting, Llp System method and article of manufacture for detecting emotion in voice signals by utilizing statistics for voice signal parameters
US6427137B2 (en) 1999-08-31 2002-07-30 Accenture Llp System, method and article of manufacture for a voice analysis system that detects nervousness for preventing fraud
US20010052081A1 (en) 2000-04-07 2001-12-13 Mckibben Bernard R. Communication network with a service agent element and method for providing surveillance services
JP2001357484A (en) 2000-06-14 2001-12-26 Kddi Corp Road abnormality detector
US6981000B2 (en) 2000-06-30 2005-12-27 Lg Electronics Inc. Customer relationship management system and operation method thereof
US20020059283A1 (en) 2000-10-20 2002-05-16 Enteractllc Method and system for managing customer relations
US20020054211A1 (en) 2000-11-06 2002-05-09 Edelson Steven D. Surveillance video camera enhancement system
US20020087385A1 (en) 2000-12-28 2002-07-04 Vincent Perry G. System and method for suggesting interaction strategies to a customer service representative
DE60220047T2 (en) * 2001-05-29 2008-01-10 Koninklijke Philips Electronics N.V. METHOD AND DEVICE FOR HIDING ERRORS
US7953219B2 (en) 2001-07-19 2011-05-31 Nice Systems, Ltd. Method apparatus and system for capturing and analyzing interaction based content
GB0118921D0 (en) 2001-08-02 2001-09-26 Eyretel Telecommunications interaction analysis
US6912272B2 (en) 2001-09-21 2005-06-28 Talkflow Systems, Llc Method and apparatus for managing communications and for creating communication routing rules
EP1472869A4 (en) 2002-02-06 2008-07-30 Nice Systems Ltd System and method for video content analysis-based detection, surveillance and alarm management
AU2003207979A1 (en) 2002-02-06 2003-09-02 Nice Systems Ltd. Method and apparatus for video frame sequence-based object tracking
US7386113B2 (en) 2002-02-25 2008-06-10 Genesys Telecommunications Laboratories, Inc. System and method for integrated resource scheduling and agent work management
US20040016113A1 (en) 2002-06-19 2004-01-29 Gerald Pham-Van-Diep Method and apparatus for supporting a substrate
AU2003263957A1 (en) 2002-08-16 2004-03-03 Nuasis Corporation Contact center architecture
JP4093012B2 (en) * 2002-10-17 2008-05-28 日本電気株式会社 Hypertext inspection apparatus, method, and program
US7076427B2 (en) 2002-10-18 2006-07-11 Ser Solutions, Inc. Methods and apparatus for audio data monitoring and evaluation using speech recognition
US20040098295A1 (en) 2002-11-15 2004-05-20 Iex Corporation Method and system for scheduling workload
US20040166484A1 (en) * 2002-12-20 2004-08-26 Mark Alan Budke System and method for simulating training scenarios
JP2004247812A (en) * 2003-02-12 2004-09-02 Alps Electric Co Ltd Electroacoustic transducer and electronic apparatus employing the same
WO2006045102A2 (en) 2004-10-20 2006-04-27 Seven Networks, Inc. Method and apparatus for intercepting events in a communication system
US20080063178A1 (en) * 2006-08-16 2008-03-13 Sbc Knowledge Ventures, L.P. Agent call flow monitoring and evaluation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6459787B2 (en) * 2000-03-02 2002-10-01 Knowlagent, Inc. Method and system for delivery of individualized training to call center agents
US7023979B1 (en) * 2002-03-07 2006-04-04 Wai Wu Telephony control system with intelligent call routing

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240046191A1 (en) * 2022-07-27 2024-02-08 Nice Ltd. System and method for quality planning data evaluation using target KPIs

Also Published As

Publication number Publication date
US20080152122A1 (en) 2008-06-26
US7577246B2 (en) 2009-08-18
WO2008075329A3 (en) 2009-04-09

Similar Documents

Publication Publication Date Title
US7577246B2 (en) Method and system for automatic quality evaluation
US8615419B2 (en) Method and apparatus for predicting customer churn
US8331549B2 (en) System and method for integrated workforce and quality management
US10306055B1 (en) Reviewing portions of telephone call recordings in a contact center using topic meta-data records
US20080189171A1 (en) Method and apparatus for call categorization
US8112306B2 (en) System and method for facilitating triggers and workflows in workforce optimization
US7949552B2 (en) Systems and methods for context drilling in workforce optimization
US8396732B1 (en) System and method for integrated workforce and analytics
US8108237B2 (en) Systems for integrating contact center monitoring, training and scheduling
US9674358B1 (en) Reviewing call checkpoints in agent call recordings in a contact center
US8117064B2 (en) Systems and methods for workforce optimization and analytics
US8078486B1 (en) Systems and methods for providing workforce optimization to branch and back offices
US10194027B1 (en) Reviewing call checkpoints in agent call recording in a contact center
US10289967B2 (en) Customer-based interaction outcome prediction methods and system
US6724887B1 (en) Method and system for analyzing customer communications with a contact center
Scheidt et al. Making a case for speech analytics to improve customer service quality: Vision, implementation, and evaluation
US20060179064A1 (en) Upgrading performance using aggregated information shared between management systems
CA2989787C (en) System and method for quality management platform
US20160358115A1 (en) Quality assurance analytics systems and methods
CA2564003A1 (en) Systems and methods for workforce optimization and analytics
US20220253771A1 (en) System and method of processing data from multiple sources to project future resource allocation
WO2008093315A2 (en) Method and apparatus for call categorization
Hingst Call centres, recent history - where have they come from and how did they get here?

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06821657

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06821657

Country of ref document: EP

Kind code of ref document: A2