US20030147516A1 - Self-learning real-time prioritization of telecommunication fraud control actions - Google Patents


Info

Publication number
US20030147516A1
US20030147516A1 (application US10/346,636)
Authority
US
United States
Prior art keywords
alert
case
fraud
predictive model
risk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/346,636
Other versions
US6850606B2 (en
Inventor
Justin Lawyer
Alex Barclay
Dirk Englund
Robert Holmes
Dimpy Pathria
Tim Roach
Scott Zoldi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fair Isaac Corp
Original Assignee
Fair Isaac Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/963,358 external-priority patent/US6597775B2/en
Application filed by Fair Isaac Corp filed Critical Fair Isaac Corp
Priority to US10/346,636 priority Critical patent/US6850606B2/en
Publication of US20030147516A1 publication Critical patent/US20030147516A1/en
Assigned to FAIR ISAAC CORPORATION reassignment FAIR ISAAC CORPORATION MERGER (SEE DOCUMENT FOR DETAILS). Assignors: HNC SOFTWARE, INC.
Priority to US10/970,318 priority patent/US7158622B2/en
Application granted granted Critical
Publication of US6850606B2 publication Critical patent/US6850606B2/en
Priority to US11/563,657 priority patent/US7457401B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M15/00Arrangements for metering, time-control or time indication ; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • H04M15/47Fraud detection or prevention means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M15/00Arrangements for metering, time-control or time indication ; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M15/00Arrangements for metering, time-control or time indication ; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • H04M15/41Billing record details, i.e. parameters, identifiers, structure of call data record [CDR]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M15/00Arrangements for metering, time-control or time indication ; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP
    • H04M15/58Arrangements for metering, time-control or time indication ; Metering, charging or billing arrangements for voice wireline or wireless communications, e.g. VoIP based on statistics of usage or network monitoring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2215/00Metering arrangements; Time controlling arrangements; Time indicating arrangements
    • H04M2215/01Details of billing arrangements
    • H04M2215/0148Fraud detection or prevention means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2215/00Metering arrangements; Time controlling arrangements; Time indicating arrangements
    • H04M2215/01Details of billing arrangements
    • H04M2215/0164Billing record, e.g. Call Data Record [CDR], Toll Ticket[TT], Automatic Message Accounting [AMA], Call Line Identifier [CLI], details, i.e. parameters, identifiers, structure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2215/00Metering arrangements; Time controlling arrangements; Time indicating arrangements
    • H04M2215/01Details of billing arrangements
    • H04M2215/0188Network monitoring; statistics on usage on called/calling number

Definitions

  • the present invention relates generally to detecting telecommunications fraud using intelligent predictive modeling systems.
  • the call data record (CDR) for the call is often stripped of identifying information such as the number from where the call was made (“originating number”). This is done so that the long distance company leasing the bandwidth (the lessor) and completing the call on behalf of another carrier will not attempt to solicit business from the caller at the originating number, who is presumably not one of the lessor's subscribers, but the lessee's subscriber. Unfortunately, this frustrates fraud control efforts, since the information that has been stripped from the CDR would normally be used subsequently to detect fraud. As a result, there is substantial opportunity for fraud in these types of bandwidth exchanges. What is needed is a way to use the information stripped from the CDR to predict fraud, without divulging the stripped information to the provider providing the bandwidth.
  • the present invention provides a system that includes a predictive model for detecting fraud in Call Data Records (CDRs).
  • Telephone companies provide CDRs to the system, and the CDRs are then evaluated against Telco-specified rules; each participating Telco may define its own set of rules for evaluating calls made through that Telco. If one or more of a Telco's rules are matched, then the system generates an alert. All pending alerts for a caller (individual, company, or portion thereof) form a case. The case also contains details about the caller's calling history, such as a statistical summary of alert-generating calls. The case, current alert information, and a variety of risk factors serve as inputs to the predictive model.
  • the predictive model outputs a score that is predictive of the likelihood that the call being made is fraudulent. This information is then queued for examination by analysts. The queue is designed so that calls that are more likely to involve fraud are examined earlier. After an analyst has made a determination about whether the call involved fraud, or alternatively, if no decision is made on the case within a pre-specified time, the case is saved in a case database. The fraud/no-fraud decision is used to update the risk factors and the predictive model to improve predictions about future alerts.
  • FIG. 1 is a block diagram of an overall embodiment of the present invention.
  • FIG. 2 is a graph illustrating how a time-decayed rate changes over time.
  • FIG. 3 is a flow chart of steps taken in an embodiment of the present invention.
  • FIG. 4 is a flow chart of steps taken by the case engine in one embodiment of the present invention.
  • FIG. 5 is a diagram illustrating the determination of fraud scores when bandwidth is leased between telephone companies, in accordance with an embodiment of the present invention.
  • the present invention provides a threshold-based fraud prevention product aimed at detecting fraudulent calls using predictive modeling.
  • FIG. 1 there is shown an illustration of one embodiment of a system 100 in accordance with the present invention.
  • the system 100 comprises a communications server 101 , rule engine 104 , a rule database 106 , a case database 108 , a case engine 118 , a customer alert profile database 110 , a predictive model 112 , risk tables 114 , a queuing engine 120 , and various queues 116 .
  • analyst workstations 118 are also shown in FIG. 1 .
  • a Call Data Record (CDR) 102 is a record of a telephone call, and typically contains identification information for the originating telephone number, the terminating telephone number, the billing telephone number (which in some cases may be neither the originating nor terminating number) or credit card number, the time of the call, and the length of the call.
  • the CDR may contain additional data such as information specific to the telephone company billing the call, routing information, etc.
  • a CDR is an instance of a transaction summary. In other embodiments, a transaction summary will have corresponding content (e.g., details of a credit card transaction).
  • the communications server 101 receives CDRs from the telephone companies (Telcos), and passes them to the rule engine 104 .
  • the CDRs are consolidated from all switches, mediation devices and SS7 surveillance platforms. As is known by those skilled in the art, mediation devices and SS7 surveillance platforms are designed to detect abnormal system behavior. It should be noted here that one of the advantages of the present invention is that it operates with any number of Telcos and can provide fraud detection for either a single Telco, or a large number of Telcos.
  • the rule engine 104 determines whether an incoming CDR 102 should be further examined for evidence of fraud. This decision is made on the basis of rules stored in the rule database 106 .
  • the rule database 106 contains rule sets specified by various telephone companies using system 100 . Based on the Telco providing the CDR, the rule engine 104 applies a set of that Telco's specific rules to the CDR. This feature also allows the system to simultaneously evaluate CDRs for different Telcos, thereby providing a fraud detection service to the Telcos.
  • If the CDR satisfies the corresponding Telco's rules, then the rule engine 104 generates an alert, and the alert is sent to the case engine 118 .
  • the case engine 118 uses information stored in the case database 108 to update an existing case, or create a new case, as required.
  • the case database 108 contains records of cases examined for fraud, including the dispositions of those cases. Alerts generated by the rule engine 104 are also stored in the customer alert profile database 110 .
  • the predictive model 112 receives cases and scores alerts, and generates a score indicative of the likelihood of fraud (more generally, indicative of a level of risk). More specifically, the predictive model 112 receives input from the customer alert profile database 110 , the case engine 118 , and risk tables 114 . The case engine also has access to certain information about the CDRs that generated the alerts.
  • the predictive model 112 sends cases containing scored alerts back to the case engine 118 along with the score.
  • the case engine 118 then sends the case to the queuing engine 120 for assignment to one of the queues 116 according to a priority.
  • Analysts use analyst workstations 118 to examine cases from the queues 116 , preferably in order of highest-to-lowest priorities.
  • An analyst makes a disposition of a case by deciding whether the case is indeed fraudulent. If fraudulent, then the analyst applies fraud control actions as defined by the provider Telco. Again, this allows each Telco to determine specific control actions to calls that it services.
  • the dispositions (e.g., fraud, no fraud, unconfirmed fraud, unworked) are recorded with the case.
  • the risk tables and predictive model are updated in light of the disposition (e.g., recalculating the risk rates in the risk tables, and updating the predictive model parameters).
  • the case engine 118 then closes the case and stores it in the case database 108 .
  • a telephone company (Telco) using system 100 preferably stores its CDRs in a specified manner for easier processing.
  • the Telco establishes a connection to the server 101 .
  • This server may be a dedicated system reserved for the Telco's use, or it may be a shared system. The shared system is described in further detail below.
  • the Telco encrypts a file containing batched CDRs, and sends the batch via FTP to the server 101 of the system 100 , where it is decrypted.
  • other methods may be employed to transmit the CDRs to the system 100 in real time, such as through dial-up connections, wireless connections, etc.
  • the records are encrypted in a preferred embodiment, other embodiments may involve the transmission of unencrypted data.
  • Each Telco's CDRs are evaluated according to alert rules defined by that Telco and stored in the rule database 106 .
  • alert rules include call collisions, volume of calls, aggregate duration, geographic velocity, single call duration, hot numbers, and exclusion rules.
  • call collision detection detects true call overlap.
  • Call overlap occurs when calls charged to the same billing number have start and end time ranges that overlap.
  • the call collision function generates an alert.
  • a call volume rule sets the maximum number of calls that can be made in a fixed time period.
  • an aggregate duration rule determines the total amount of billing time that can be spent on the telephone.
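Taken together, the collision, call-volume, and aggregate-duration rules above amount to simple threshold checks over a customer's recent calls. The Python sketch below illustrates them; the field names and threshold values are hypothetical, since actual thresholds are Telco-defined and stored in the rule database 106.

```python
from dataclasses import dataclass

@dataclass
class Call:
    billing_number: str
    start: float  # epoch seconds
    end: float    # epoch seconds

# Hypothetical thresholds; in the system these are Telco-defined rules.
MAX_CALLS_PER_WINDOW = 50
MAX_AGG_DURATION_SECS = 4 * 3600

def collisions(calls):
    """Return overlapping pairs of consecutive calls (sorted by start time)
    charged to the same billing number -- the 'true call overlap' condition."""
    hits, by_bn = [], {}
    for c in calls:
        by_bn.setdefault(c.billing_number, []).append(c)
    for group in by_bn.values():
        group.sort(key=lambda c: c.start)
        for a, b in zip(group, group[1:]):
            if b.start < a.end:  # next call starts before the previous one ends
                hits.append((a, b))
    return hits

def volume_alert(calls, window_start, window_end):
    """Call-volume rule: too many calls started in a fixed time period."""
    n = sum(1 for c in calls if window_start <= c.start < window_end)
    return n > MAX_CALLS_PER_WINDOW

def aggregate_duration_alert(calls):
    """Aggregate-duration rule: total billed time exceeds a threshold."""
    return sum(c.end - c.start for c in calls) > MAX_AGG_DURATION_SECS
```

Each check returns a boolean; in the system, a tripped rule would produce an alert record rather than a bare flag.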
  • Calls can be analyzed to determine distances between call origin points within specific time intervals. The originating number of the last call made is compared to that of the current call charged to the same number. Using vertical and horizontal coordinate data, the system compares the distance and time between calls against Telco-defined thresholds. If the thresholds are exceeded, an alert is tripped.
  • Miles-per-hour designations determine the impracticality or impossibility of successive calls by the same person. Exceptions are made, for example, for the case when more than one person is granted use of a calling or charge card.
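A geographic-velocity check of the kind just described can be sketched as follows. The text describes distances computed from vertical and horizontal (V&H) coordinate data; this sketch substitutes latitude/longitude with the haversine formula as an illustrative stand-in, and the function names are hypothetical.

```python
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two points (haversine formula,
    used here as a stand-in for the V&H coordinate computation)."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geo_velocity_alert(prev_call, curr_call, max_mph):
    """Trip an alert if the implied speed between the origin points of two
    successive calls charged to the same number exceeds a Telco-defined
    miles-per-hour threshold. Calls are (time_secs, lat, lon) tuples."""
    (t1, lat1, lon1), (t2, lat2, lon2) = prev_call, curr_call
    hours = abs(t2 - t1) / 3600.0
    if hours == 0:
        return True  # simultaneous calls from different origins are impossible
    return miles_between(lat1, lon1, lat2, lon2) / hours > max_mph
```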
  • Alerts may also be generated for calls or groups of calls of excessive duration. Telco-defined thresholds are established based on specific billing types, destination and/or origination to generate alerts for lengthy calling activity. Alerts also are generated for hot numbers and excluded numbers, as specified by the Telco.
  • alerts store only those variables that were tripped, while in other embodiments, all variables are stored in the alert.
  • an alert contains information including originating telephone number, terminating telephone number, billing account number, time of call, call type, dial type, and type of alert. Other data may also be present according to the needs of the particular Telco.
  • a header is prepended to an alert to allow the case engine 118 to identify which case to attach the alert to, as well as the status of that case. (One embodiment has a 26-byte header and a 642-byte payload.)
  • the case engine 118 attempts to associate the alert with an existing case.
  • a case contains a billing account number or other identification code, an alert table of one or more alerts, a customer profile, and a score field.
  • each customer has an account code that can be used as the identification code for a case.
  • the billing number or other identifying data may be used.
  • a case may contain many alerts, and if one or more alerts are already present, the new alert will simply be added to those pending. Each alert in a case is identified in the alert table for the particular case. If an alert is generated and no active cases exist for the identification key, then a new case is created containing the new alert.
  • the case engine 118 determines to which case each incoming alert belongs based on a key. If a case does not yet exist for that key, a case is created and initialized. As noted, the key will typically be a customer account code or the billing ANI (Automatic Number Identification). In a preferred embodiment, either a billing account number or a billing ANI should always exist. However, as a precaution, alerts with neither a customer account number nor a billing ANI are assigned to an “unknown” case.
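The key-selection logic above reduces to a simple fallback chain, sketched here with hypothetical field names:

```python
def case_key(alert):
    """Choose the key used to attach an alert to a case: prefer the customer
    account code, fall back to the billing ANI, and as a precaution assign
    alerts with neither to the catch-all 'unknown' case."""
    return alert.get("account_code") or alert.get("billing_ani") or "unknown"
```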
  • the alerts are joined with the case data into a BLOB (binary large object), with new alerts being appended to the end of the BLOB.
  • Each alert is uniquely identified by an alert ID, which is a unique incrementing number that is generated by the rule engine 104 . This facilitates retrieval, as only one query is needed into the database to gather all of the case and alert information.
  • the BLOBs are uniquely identified by a key that is associated with the case number.
  • When an alert is created (tripped), it is stored in the customer alert profile database 110 , indexed by billing number or other identification key that corresponds to a billing hierarchy of the provider. Billing hierarchies are discussed in more detail below.
  • a customer alert profile tracks the alert behavior and case outcome of the corresponding subscriber.
  • the customer alert profile contains historical alert data about the customer.
  • the profile stores historical data in complete form, including all information associated with the alert.
  • only statistical summaries of the alert history are maintained in the customer profile.
  • data that is stored in the customer profile includes alert rates, risk, typical activity and unusual activity. Typical activity, or low-risk activity, is activity which is generally seen on non-fraud alerts. Unusual activity is activity more commonly associated with fraudulent alerts, and is of high risk. The decision as to what types of activity are low and high risk is made in a preferred embodiment by analyzing the activity patterns of dispositioned alerts. This is done automatically in a preferred embodiment, though in alternative embodiments it is done manually. Historical alert data in customer profiles is updated each time an analyst makes a determination about whether a pending alert is fraudulent or not.
  • the customer alert profile is also an input to the predictive model 112 .
  • Customer profiles are collections of variables that capture past and present alert behavior for that customer over time. This profile is updated each time an alert is received, in a preferred embodiment prior to scoring the alert. Variables may also be updated each time an alert is dispositioned. In a preferred embodiment, short-term behavior is captured on a 1.5-day time scale while long-term behavior is on a 9-day time scale.
  • the customer profile variables can be segmented into nine categories, as listed in Table 1 below.
  • a table of customer profile fields is included in Appendix 1.
  • TABLE 1

    Profile Variable or Category — Description

    Time of last alert — The timestamp (in seconds) of the last alert that was processed for this subscriber.

    Time of last alert disposition — The timestamp (in seconds) of the last alert disposition that was processed for this subscriber.

    Short term rates of each of the 11 alert types (see Table 2) — Decayed average rates (counts per unit time) for the 11 alert types for this subscriber. The time constant used is the short term constant of 1.5 days.

    Average short term risk weighted rates of the 11 alert types — Decayed average risk weighted rates (risk multiplied by rate) for the 11 alert fields for this subscriber. The time constant used is the short term constant of 1.5 days.

    Long term rates and risk weighted rates of the 11 alert types — As above, but the time constant used is the long term constant of 9 days. (The short term time constant is 1.5 days, while the long term constant is 9 days.)

    Short term rate of each of the 4 dispositions — Decayed average rates (counts per unit time) for the 4 alert dispositions for this subscriber. The time constant used is the short term constant of 1.5 days.

    Short term average risk of the customer — Decayed average risk of the subscriber, where the rates of the four dispositions are the short term rates of the four dispositions. The time constant used is the short term time constant of 1.5 days.

    Combinations of variables and raw risk variables — See Appendix 1.
  • the time stamps refer to the end time of the call generating the last alert and are forward ratcheting only. This means that the most recent time stamp is used when comparing alerts out of order. For instance, suppose an alert with an end time of 10 AM is received and processed, and subsequently an alert with a 9 AM end time is received. The time-of-last-alert variable remains at 10 AM. An 11 AM end time would then cause the time of last alert to ratchet forward to 11 AM. This ratcheting is used to ensure that the profile variables do not grow exponentially if alerts arrive out of order.
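The forward-ratcheting behavior can be combined with a decayed-rate update in a few lines. This is an illustrative sketch rather than the system's actual implementation; the 1.5-day constant is the short-term value given in the text.

```python
import math

SHORT_TERM_DAYS = 1.5  # short-term decay constant from the text
SECS_PER_DAY = 86400.0

def update_rate(rate, last_ts, alert_end_ts, t_days=SHORT_TERM_DAYS):
    """Decay an alert-rate profile variable and count the new alert.

    The time-of-last-alert stamp ratchets forward only: if an alert arrives
    out of order (end time earlier than the stamp), the elapsed time is
    treated as zero, so the variable is never inflated by a negative decay
    interval. Returns (new_rate, new_timestamp)."""
    dt_days = max(0.0, alert_end_ts - last_ts) / SECS_PER_DAY
    decayed = rate * math.exp(-dt_days / t_days)
    new_ts = max(last_ts, alert_end_ts)  # forward ratchet
    return decayed + 1.0, new_ts
```

Replaying the example in the text: after a 10 AM alert, a 9 AM straggler leaves the stamp at 10 AM, and an 11 AM alert ratchets it forward.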
  • the customer profile is updated.
  • the appropriate customer profile variables are updated using the time of the last alert as the incumbent time stamp, and the ENDTIME of the current alert to calculate time differences for all profile variable decays.
  • the updates are performed in the following order:
  • short term decay constants are 1.5 days, while long term decay constants are 9 days. Other decay constants may also be used.
  • the model then scores the updated profile. It does this by generating an input vector of all profile variables, minus the time stamps.
  • the profile variables and a list of which variables are used as model inputs are included in Appendix 1.
  • Risk tables 114 evaluate the relative risk of different alert fields, and adapt over time based on dispositions. For instance, a bill type risk table monitors how the level of risk of each possible bill type value changes over time.
  • the risk tables 114 are dynamic; if there is an entry in an alert that does not have a corresponding value in the risk table, a new entry is added and initialized in the risk table, and updated with the current alert's information. Thus, as new values are added, the risk tables 114 track statistics for those values. For instance, a telephone company might have 50 switches. Each of those switches might have very different risks, depending on the regions and clients served. The risk tables 114 would track the risk of each of those 50 switches, based upon the alerts and dispositions seen. If a new switch were to be installed, it would be added to the tables, which would then track 51 switches.
  • the risk tables 114 learn by example, so that each time an analyst makes a decision as to whether an alert is fraudulent or not, the risk tables 114 are updated to reflect that decision. This is done for each of the major variables in the alert that is decisioned, including alert rate, properties, alert type, bill type, call type, dial type, originating trunk group, source name, source type name, qualifier, and the velocity screening number type.
  • the risk tables 114 adapt to changes in the network and fraud traffic by learning from analysts' decisions.
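A risk table of this kind can be sketched as a self-extending map from categorical values (e.g., switch names) to decayed disposition rates, with risk computed as the ratio given later in the text: (fraud rate + unconfirmed rate) divided by (fraud rate + unconfirmed rate + non-fraud rate). The class below is an illustrative sketch; its names and decay constant are assumptions.

```python
import math

class RiskTable:
    """Per-field risk table: for each categorical value, tracks decayed
    rates of the four dispositions and the time of last update, adding new
    values (e.g., a newly installed 51st switch) on first sight."""

    def __init__(self, decay_days=9.0):  # decay constant is an assumption
        self.decay_days = decay_days
        self.rows = {}

    def update(self, value, disposition, ts):
        """Decay the row for `value`, then count one more `disposition`."""
        row = self.rows.setdefault(value, {
            "last_ts": ts,
            "rates": {"fraud": 0.0, "nonfraud": 0.0,
                      "unconfirmed": 0.0, "unworked": 0.0},
        })
        dt_days = max(0.0, ts - row["last_ts"]) / 86400.0
        decay = math.exp(-dt_days / self.decay_days)
        for d in row["rates"]:
            row["rates"][d] *= decay
        row["rates"][disposition] += 1.0
        row["last_ts"] = max(row["last_ts"], ts)

    def risk(self, value):
        """Risk = (fraud + unconfirmed) / (fraud + unconfirmed + nonfraud)."""
        row = self.rows.get(value)
        if row is None:
            return 0.0
        risky = row["rates"]["fraud"] + row["rates"]["unconfirmed"]
        denom = risky + row["rates"]["nonfraud"]
        return risky / denom if denom else 0.0
```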
  • a profiling technique is used to allow transactions to be summarized effectively.
  • the technique uses profiling filters, which are computed from a set of parameters specific to a customer. These parameters are weighted averages of customer properties.
  • Let t_i denote the time when the ith alert (with value v_i) is processed. The decayed weighted average after the ith alert can be written as

    v̄_i = ( Σ_{j ≤ i} v_j · e^{−(t_i − t_j)/T} ) / ( Σ_{j ≤ i} e^{−(t_i − t_j)/T} )

    where e^{−(t_i − t_j)/T} is the weight applied to the jth alert (equal to the initial weight of 1 for the newest alert), the denominator is the normalization factor (the sum of the weights), and T is a decay constant.
  • a larger T results in slower decay, and hence a larger continued influence of older alerts.
  • a faster decay can make the system more sensitive to new fraud schemes.
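Under these definitions, the time-decayed weighted average can be computed directly: each alert value is weighted by its exponential decay factor, and the result is divided by the normalization factor (the sum of the weights). An illustrative sketch:

```python
import math

def decayed_average(alerts, t_const):
    """Time-decayed weighted average of alert values, evaluated at the time
    of the most recent alert. `alerts` is a list of (t_j, v_j) pairs and
    `t_const` is the decay constant T, in the same time units; older alerts
    receive weight e^{-(t_now - t_j)/T}."""
    t_now = max(t for t, _ in alerts)
    weights = [math.exp(-(t_now - t) / t_const) for t, _ in alerts]
    return sum(w * v for w, (_, v) in zip(weights, alerts)) / sum(weights)
```

With a very large T the result approaches the plain mean; with a very small T only the newest alert matters, mirroring the trade-off described above.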
  • Risk table variables are decayed in the same manner as profile variables in a preferred embodiment. But whereas the profile variables are specific to a customer, the risk tables are global, and are thus updated at every dispositioned alert.
  • the system 100 uses eleven different risk tables 114 :
  • the first ten risk tables track the rates of occurrence and the time of last update for the four dispositions (fraud, non-fraud, unconfirmed fraud, unworked) for each of the unique categorical values of the alert field in question.
  • One embodiment of an unpopulated alert type risk table is illustrated below in Table 2. Each row also carries a Time of Last Update and alert-rate columns for the four dispositions (Fraud, Non-fraud, Unconfirmed Fraud, Unworked):

    TABLE 2 — Alert Type Risk Table

    Alert Type Code — Alert Type Description
    c — Low level collision
    C — High level collision
    g — Low level geo velocity
    G — High level geo velocity
    b — Low level volumetric
    B — High level volumetric
    S — Single call duration
    d — Low level aggregate duration
    D — High level aggregate duration
    H — Hot number
    R — Exclusion rule
  • the case-alert rate risk table is slightly different in that the key is a rate that is a numerical value, rather than a categorical value.
  • the key that is used is the actual case-alert rate as found in the account profile (Short Term Rate of All Alerts).
  • this risk table tracks the rates of occurrence and the time of last update for the four dispositions for ranges of the case alert rate.
  • the values in the second row of Table 3 below would be used to calculate the alert rate risk, since a case-alert rate of 3 falls within the 2 to 5 range.
  • Risk = (fraud rate + unconfirmed rate) / (fraud rate + unconfirmed rate + non-fraud rate)
  • the predictive model 112 receives input from the customer alert profile database 110 , the case engine 118 , and risk tables 114 .
  • the case engine also has access to certain information about the CDRs that generated the alerts.
  • values preserved from the CDRs include the following:

    SWPropertyName — switch that the call was received from
    LogicalQIDName — the alert type
    VCScrnNumber — number the alert was generated on (billing number, originating number, or terminating number)
    VCScrnNumTypeName — full-text number type the alert was generated on ("Billing", "Originating", "Terminating")
    BillingNumber — the billing number of the call
    ReceivedTimeSecs — time the system received the CDR
    FirstElemSecs — time the call began
  • Fields that are generated in a preferred embodiment by the rule engine 104 from the CDRs that created the alert are:

    Alert ID — a unique number to identify the alert
    GVCRate — the rate used in calculating geo-velocity collisions
    ThreshValExceed — the threshold value exceeded
    VCQuantity — the actual value that exceeded the threshold
    PeriodName — name of period, if used (i.e., name associated with holidays or multipliers)
    PropertyName — property in the hierarchy used to define the threshold for the alert
    SourceName — the name of the property or class that generated the alert
    SourceTypeName — source of threshold property or class
  • Fields that are added to the alert by the case engine in a preferred embodiment are:

    AlertScore — score of the alert
    Disposition — analyst-given disposition of the alert
  • the predictive model 112 in a preferred embodiment is a neural-network-based statistical tool that learns how various inputs can be correlated to predict a certain target variable, and that can be incrementally trained by example, as new alerts are decisioned by an analyst. This means that historical alert and disposition information can be used to pre-train the predictive model 112 and risk tables 114 before the model is put online, so that the system 100 can have a running start at install time.
  • the predictive model is an Extended Kalman filter. Each time a case is closed, the tagged alerts are sent to the Kalman filter and the model weights are updated. In this way, the predictive model 112 builds and updates a knowledge base to help the analyst manage workflow by predicting the relative degree of risk in each case.
  • the output of the predictive model 112 is a fraud score indicative of the likelihood that the call that generated the alert was fraudulent.
  • the score is attached to the case and returned by the predictive model 112 to the case engine 118 .
  • the score is preferably on a scale from 1 to 999, though other scales may be used. An alert with a score of 800 would therefore be much riskier than an alert with a score of 200, and should be worked first.
  • the Extended Kalman filter output is a number on the unit interval (0,1).
  • the values 0.9 and 0.1 used as target values can be adjusted to change the score distribution in some embodiments, in a preferred embodiment the target values are fixed. Because unworked alerts are excluded from training the network, their scores are similar to the most common disposition, which is non-fraud. Thus, fraudulent and unconfirmed fraud alerts will tend to give raw scores closer to 0.9, while non-fraudulent and unworked alerts will tend to score closer to 0.1.
  • raw score is the output of the Extended Kalman filter.
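The mapping from the Kalman filter's raw output on (0, 1) to the 1-999 display scale is not spelled out in the text; a simple linear scaling such as the following is one plausible reading, labeled here as an assumption.

```python
def scale_score(raw):
    """Map a raw score on (0, 1) to the 1-999 display scale.
    Linear scaling with clamping is an assumption; the exact transform
    is not specified in the text."""
    return min(999, max(1, round(raw * 999)))
```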
  • the summary case scores are updated. These summary scores are designed so they can be used to prioritize cases.
  • the scores are as follows:
  • Creation score is the score of the profile as soon as the first alert was processed.
  • High score is the maximum score of any alert in the case.
  • the predictive model 112 sends cases containing scored alerts back to the case engine 118 along with the score.
  • the case engine 118 then sends the case to the queuing engine 120 for assignment to one of the queues 116 according to a priority.
  • Analysts use analyst workstations 118 to examine cases from the queues 116 , preferably in order of highest-to-lowest priorities.
  • each case is assigned a case score. While they may be related, a case score is distinct from an alert score.
  • Score types that a case may be assigned in a preferred embodiment include creation score, current score, and high score, as detailed above.
  • fields that are used to determine priority also include the current number of alerts in the case; the last time the case was updated; and the time that the case was created.
  • cases may be queued for disposition for reasons other than risk. For example, it may be more efficient to have newly trained analysts receive cases that are fairly simple, while more experienced analysts study the more difficult cases.
  • cases additionally include the following fields that may be used by the queuing engine to determine queuing priority:
  • case number a unique incrementing number, where the higher the number, the more recently the case was created.
  • case worked status whether the case is unworked, pending, open, or closed.
  • case disposition whether the case has been marked as fraud, nonfraud, unconfirmed.
  • cic carrier information code
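The queuing priority described above can be sketched as a composite sort key over these case fields: highest score first, then more alerts, then most recently updated. The field names and tie-breaking order below are illustrative assumptions; each deployment could weight these differently.

```python
def priority_key(case):
    """Sort key for the work queue: negate each field so that Python's
    ascending sort yields highest score, most alerts, most recent first."""
    return (-case["high_score"], -case["num_alerts"], -case["last_updated"])

cases = [
    {"id": 1, "high_score": 200, "num_alerts": 5, "last_updated": 50},
    {"id": 2, "high_score": 800, "num_alerts": 1, "last_updated": 10},
    {"id": 3, "high_score": 800, "num_alerts": 3, "last_updated": 20},
]
worklist = sorted(cases, key=priority_key)
# the two score-800 cases are worked before the score-200 case
```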
  • Analysts are assigned to one or more queues 116 . To maximize efficiency, and minimize the risk of loss resulting from fraudulent activity, analysts first work those cases that have higher fraud scores. As analysts examine alerts within a case, they assign disposition values to the alerts. In a preferred embodiment, there are four possible values:
  • Fraud When an analyst confirms with a customer that an alert is fraud-related.
  • Non-fraud When an analyst confirms with a customer that an alert is not fraud-related.
  • Unconfirmed fraud When an analyst is confident that an alert is fraud-related without confirmation from the customer (i.e. when the analyst is willing to take action against the account).
  • Unworked When no disposition has yet been assigned to the alert.
  • Fraud: a case is considered fraudulent if it contains at least one fraudulent alert.
  • Unconfirmed fraud: a case is considered unconfirmed fraud if it contains at least one unconfirmed fraud alert and no fraud alerts (i.e., fraud takes precedence over unconfirmed fraud).
  • Non-fraudulent: a case is considered non-fraudulent if it contains at least one non-fraudulent alert and no fraud or unconfirmed fraud alerts.
  • Unworked: a case is considered unworked if it contains only unworked alerts.
  • ANIs: individual phone lines.
  • BTNs: billing telephone numbers.
  • TelCo Inc. has a three-tiered billing system that handles primarily business customers. At the bottom level of the billing system is the ANI. At the next level up is a billing account number (BAN) that is location or department specific. Above that is a customer code that aggregates all billing account numbers for a given customer. For instance, TelCo Inc. may have 10 buildings, each with 2000 phone lines. Therefore, they would have 20,000 phone lines (or ANI's). Each of those 10 buildings might have a unique billing account number, in order to distinguish them for billing purposes. In addition, there would be one customer code to distinguish the company from an account of another company.
  • BAN: billing account number.
  • Telephone companies using system 100 may choose to perform case management at the billing account number level (the middle tier in TelCo Inc.'s hierarchy). This prevents the analyst from becoming swamped with 20,000 different cases from the same large company, one for each ANI, and yet it does not clump all buildings or departments together. Typically, different buildings or departments in a company may use their telephone services quite differently. Consider the usage patterns of corporate offices, marketing, sales, customer support, or engineering; each would be quite different. Modeling at the middle tier in the billing hierarchy captures those differences. It will be noted, however, that modeling could take place at any of the other levels in a similar manner. In each instance, CDRs will still be evaluated against Telco-defined rules, and when one or more rules are matched, an alert will be generated.
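The middle-tier grouping described above can be sketched as follows. The dictionary-based hierarchy and all field names here are illustrative assumptions, not structures specified in the text; the sketch only shows per-ANI alerts rolling up into one working case per billing account number.

```python
from collections import defaultdict

# Hypothetical hierarchy: ANI -> (customer_code, billing_account_number),
# mirroring the TelCo Inc. example (one customer code, one BAN per building).
HIERARCHY = {
    "6195550100": ("TELCO_INC", "BAN-BLDG-1"),
    "6195550101": ("TELCO_INC", "BAN-BLDG-1"),
    "6195550200": ("TELCO_INC", "BAN-BLDG-2"),
}

def group_alerts_by_ban(alerts, hierarchy):
    """Group per-ANI alerts into one bucket per billing account number
    (the middle tier), rather than one case per ANI."""
    cases = defaultdict(list)
    for alert in alerts:
        _, ban = hierarchy[alert["ani"]]
        cases[ban].append(alert)
    return dict(cases)
```

With the example hierarchy, alerts from two lines in building 1 and one line in building 2 yield two cases rather than three, which is the point of working at the BAN level.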
  • The decisioned case is sent by the queuing engine 120 back to the case engine 118 .
  • Data is also sent to the risk tables 114 .
  • The risk tables 114 update their variable weights to improve fraud detection performance, as described below.
  • When the case engine 118 receives the decisioned case back from the queuing engine, it sends the decision to the predictive model 112 .
  • The predictive model uses this decision information to learn, as described below, thus improving its fraud-predicting abilities.
  • The case engine 118 then marks the case as closed and sends it to the case database 108 for storage.
  • The model creates the input vector once again from the customer profile.
  • This input vector is then presented to the Extended Kalman filter along with the risk tag, and the Extended Kalman filter weights and intermediate matrices are updated.
  • The risk tables 114 are updated.
  • First, the Extended Kalman filter weights are updated using the profile as it then appears. (Note that, as described above, the profile as it existed at the time of scoring is irretrievably lost in a preferred embodiment.)
  • Each of the 11 risk tables is then updated. The updates are done in this order so that the predictive model can learn to better predict using the state of the profile prior to receiving the disposition information.
  • For each risk table, only the row matching the case alert rate or alert field in question is updated. For instance, for a low-level call collision alert, only the row corresponding to low-level call collisions would be updated. For that row, the column matching the alert disposition is decayed and then incremented by 1; the other three disposition columns are simply decayed.
  • For an alert dispositioned as unconfirmed fraud, for example, the unconfirmed fraud rate would be decayed and then incremented by 1, while the other three rates (fraud, non-fraud, and unworked) are decayed without being incremented.
  • The decay constant is the same as the short-term decay constant of the profile variables, or 1.5 days in a preferred embodiment.
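A minimal sketch of that row update, assuming an exponential decay factor of e^(-Δt/τ) with the 1.5-day short-term constant; the disposition names are illustrative:

```python
import math

SHORT_TERM_DAYS = 1.5  # short-term decay constant from the text

def update_risk_row(row, disposition, elapsed_days):
    """Decay all four disposition counts in a risk-table row, then
    increment the one matching the observed disposition by 1."""
    decay = math.exp(-elapsed_days / SHORT_TERM_DAYS)
    for key in row:
        row[key] *= decay
    row[disposition] += 1.0
    return row
```

For example, an unconfirmed-fraud disposition followed 1.5 days later by a fraud disposition leaves the unconfirmed rate at e^(-1) ≈ 0.368 while the fraud rate becomes 1.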
  • A CDR 102 is received 302 from the Telco by the communications server 101 .
  • The rule engine 104 checks 304 the CDR 102 against the Telco rules stored in the rule database 106 . If an alert is generated, the rule engine sends 306 the alert to both the case engine 118 and the customer alert profile database 110 .
  • The case engine 118 attaches 308 the alert to a case. The operation of the case engine 118 is further described below with respect to FIG. 4.
  • The case engine sends 310 the case to the predictive model 112 , and the predictive model 112 scores 312 the alerts in the case using the risk tables 114 , the customer alert profile found in the customer alert profile database 110 , and case information.
  • The predictive model sends 314 the score back to the case engine, which then sends 316 the case to the queuing engine 120 .
  • The queuing engine assigns 318 the case to a position in the queue 116 based on the fraud score of the alerts in the case.
  • An analyst examining the case in the queue decides 320 whether fraud in fact exists in that case.
  • The queuing engine then sends 322 the decision made by the analyst back to the risk tables 114 and to the case engine 118 .
  • The case engine additionally sends 324 the alerts associated with a closed case, and their corresponding dispositions, to the predictive model 112 .
  • The case engine next closes 326 the case and stores it in the case database 108 .
  • The predictive model learns from the decision made by the analyst and performs 328 an update. Likewise, the risk table variables are updated 330 based on the analyst's decision.
  • The case engine receives 402 an alert from the rule engine 104 .
  • The case engine attempts to locate 404 a case to which the alert can be added by examining cases stored in the case database 108 . If a case is located in the database 108 , the alert is added to that case. If no case can be located, the case engine creates a new case and adds the alert to the new case. Once the alert is attached to the case, the case engine sends 406 the alert to the predictive model to be scored. The predictive model assigns a score to the alert and sends it back to the case engine.
  • The case engine compares the score with the previous high score of the case and determines whether the new score should become the high score.
  • The case engine also uses the score to update the “current score” value in the case, and if it is the first alert in the case, it also updates the “creation score” value. Either of these fields may be used in preferred embodiments for queuing purposes.
  • An analyst then determines whether the case is fraudulent, and the case engine receives 412 the decisioned case from the queuing engine 120 .
  • The case engine sends 414 the alerts associated with the case, and their corresponding dispositions, to the predictive model 112 for training, and then stores 416 the case in the case database 108 .
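The creation/current/high score bookkeeping described above can be sketched as follows; the dictionary field names are illustrative stand-ins for the case fields named in the text:

```python
def apply_alert_score(case, score):
    """Update a case's score fields when a new alert is scored:
    the first score becomes the creation score, every score becomes
    the current score, and the high score ratchets upward only."""
    if case.get("creation_score") is None:  # first alert in the case
        case["creation_score"] = score
    case["current_score"] = score
    if score > case.get("high_score", float("-inf")):
        case["high_score"] = score
    return case
```

A case that scores 600 and then 400 keeps a creation and high score of 600 while its current score drops to 400, which is why the queuing engine can prioritize on any of the three.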
  • The present invention may be implemented in a plurality of embodiments.
  • In one embodiment, the system 100 is located at the same location as the Telco and is connected to the Telco CDR-generating system via a local area network (LAN) or other network-type system.
  • Alternatively, the system 100 may exist in a location remote from the Telco's own billing system.
  • In that case, the Telco may connect to the system 100 via a network such as the Internet, using a protocol such as FTP, telnet, or HTTP.
  • In one embodiment, the analysts who determine whether or not scored cases are fraudulent are located at the same location as the system 100 .
  • Alternatively, the analysts may be located at the Telco, and scored cases may be downloaded from the system 100 .
  • For example, analysts may be at the Telco site and use a World Wide Web connection to the system 100 to view cases and make fraud/no-fraud decisions.
  • One Telco may be leasing bandwidth to another Telco. This often occurs because telephone call volume changes rapidly, and one Telco may find its bandwidth suddenly underutilized, while another Telco finds it has no bandwidth to spare.
  • For the Telco providing the bandwidth (the lessor) to do successful fraud detection, it should have access to the complete CDRs for all calls it carries, including those carried over leased-out bandwidth.
  • For the Telco buying the bandwidth (the lessee), however, providing complete CDR information, including identifying information for the originating telephone number, is not desirable, because the lessor may choose to use that information to solicit telephone customers away from the lessee.
  • The present invention overcomes this stalemate by providing an intermediary.
  • System 100 is outside of the control of either Telco and is managed by a third (trusted) party.
  • The CDR 102 containing complete information is sent to the system 100 , and the case is scored by the predictive model 112 .
  • The stripped CDR is sent from the lessee Telco to the lessor Telco.
  • A score indicative of the likelihood of fraud is then sent to the Telco providing the bandwidth. That lessor Telco has an analyst evaluate the scored cases and make fraud determinations. In this way, both the confidentiality of CDR records is maintained and more accurate fraud/no-fraud determinations can be made.
  • Referring now to FIG. 5, there is shown a diagram illustrating how Telcos leasing bandwidth can still receive fraud scores.
  • The lessor Telco 502 provides bandwidth to the lessee Telco 504 .
  • A call made by a customer of the lessee Telco is carried over the lessor's lines.
  • The full CDR 102 containing sensitive information is sent to system 100 for scoring.
  • System 100 determines the fraud score 508 , and sends the score 508 to both the lessor 502 and the lessee 504 Telcos, though in other embodiments, the score may be sent only to the lessor Telco 502 .
  • The system 100 also provides the lessor Telco 502 with a stripped CDR 506 , which does not contain sensitive information such as the billing number.
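The stripping step can be sketched as below. The text only says identifying fields such as the billing number are removed, so the exact field names used here are assumptions:

```python
# Illustrative field names; the patent does not enumerate CDR fields.
SENSITIVE_FIELDS = ("originating_number", "billing_number")

def strip_cdr(cdr):
    """Return a copy of a CDR with sensitive identifying fields removed,
    leaving non-identifying data (times, durations, routing) intact."""
    return {k: v for k, v in cdr.items() if k not in SENSITIVE_FIELDS}
```

The lessor then receives only the stripped copy plus the fraud score, while the trusted system 100 scored the complete record.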
  • Analysts and queues may be at the system 100 , or may be at the Telco site.
  • For example, the lessor Telco 502 may have analysts at analyst workstations 118 at the Telco 502 site.
  • The queues 116 may be at the system 100 location and accessed, e.g., via HTTP, or they may be at the Telco 502 site.
  • System 100 also maintains system report tables.
  • The system report tables keep track of system and analyst performance.
  • a fraud manager can generate daily, weekly, monthly, or yearly reports of alerts assigned to each of the four disposition types. Similar reports can be generated for alert type or the average time taken for analysts to open or close cases.
  • Another report shows the histogram of various dispositions for different score ranges. This report is a good measure of how well the model is doing at prioritizing cases; higher score ranges on average will contain a higher percentage of fraudulent cases. Reports also exist for showing queues, the cases assigned to those queues, the analysts working the cases, and the status of each case. Another report monitors the evolution of the fraction of fraudulent alerts processed. This report is useful for understanding how fraud trends are changing, as well as how effective the threshold may be at capturing fraud.
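The disposition-by-score-range report described above can be sketched as a simple banded histogram; the band width and field names are illustrative, not specified in the text:

```python
def disposition_histogram(cases, band_width=100):
    """Count dispositions per score band, e.g. band 900 covers
    scores 900-999. A well-prioritizing model shows a higher fraud
    fraction in the upper bands."""
    hist = {}
    for case in cases:
        band = (case["score"] // band_width) * band_width
        row = hist.setdefault(band, {})
        row[case["disposition"]] = row.get(case["disposition"], 0) + 1
    return hist
```

A fraud manager could then compare the fraud fraction per band to verify that higher score ranges contain a higher percentage of fraudulent cases.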
  • The system 100 helps analysts work cases by billing account, rather than at the ANI level.
  • The system 100 provides a valuable interface that makes frequently needed billing information available with a single keystroke.
  • The predictive model helps adaptively prioritize those cases based upon learned risk, rather than heuristics.
  • System reporting helps fraud managers better understand both the fraud and case trends, as well as the workload and efficiency of their analysts. All of these tools provide fraud managers and analysts with a competitive advantage in fighting fraud.

Abstract

A predictive model system is used to detect telecommunications fraud. Call records (CDRs) provided by telephone companies are evaluated against specified rules. If one or more rules are matched, the system generates an alert. Pending alerts for a customer form a case, describing the caller's calling patterns. A predictive model determines a score that is predictive of the likelihood that the call involved fraud. Cases are queued for examination by analysts.

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS
  • This application is a Continuation of U.S. patent application Ser. No. 09/963,358, filed Sep. 25, 2001.[0001]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0002]
  • The present invention relates generally to detecting telecommunications fraud using intelligent predictive modeling systems. [0003]
  • 2. Description of the Related Art [0004]
  • The rapid growth of the telecommunications industry has been accompanied by a correlative increase in telecommunications fraud. In some situations, however, a telecommunications service may be accessed or obtained in an undesirable fashion, e.g., by fraud, theft, or other nefarious activity, and unauthorized use may ensue. Providers take control actions to stop the provision of service when it is used in an undesirable fashion, e.g., by blocking compromised calling card numbers before service is fraudulently obtained. Unfortunately, by the time fraudulent use is detected and control actions can be taken, there has often already been a significant unauthorized use of the co-opted service, resulting in expense to the service provider. Accordingly, there is a need for a way to identify undesirable and unauthorized use of a service at an early juncture, in order to minimize the amount of loss resulting from that use. [0005]
  • Additionally, long distance carriers regularly lease bandwidth from other carriers. On such occasions, the call data record (CDR) for the call is often stripped of identifying information such as the number from where the call was made (“originating number”). This is done so that the long distance company leasing out the bandwidth (the lessor) and completing the call on behalf of another carrier will not attempt to solicit business from the caller at the originating number, who is presumably not one of the lessor's subscribers, but the lessee's. Unfortunately, this frustrates fraud control efforts, since the information that has been stripped from the CDR would normally be used to detect fraud. As a result, there is substantial opportunity for fraud in these types of bandwidth exchanges. What is needed is a way to use the information stripped from the CDR to predict fraud, without divulging the stripped information to the provider supplying the bandwidth. [0006]
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention provides a system that includes a predictive model for detecting fraud in Call Data Records (CDRs). Telephone companies (Telcos) provide CDRs to the system, and the CDRs are then evaluated against Telco-specified rules; each participating Telco may define it own set of rules for evaluating calls made through that Telco. If one or more of a Telco's rules are matched, then the system generates an alert. All pending alerts for a caller (individual or company or portion thereof) form a case. The case also contains details—such as a statistical summary of alert-generating calls—about the caller's calling history. The case, current alert information, and a variety of risk factors serve as inputs to the predictive model. The predictive model outputs a score that is predictive of the likelihood that the call being made is fraudulent. This information is then queued for examination by analysts. The queue is designed so that calls that are more likely to involve fraud are examined earlier. After an analyst has made a determination about whether the call involved fraud, or alternatively, if no decision is made on the case within a pre-specified time, the case is saved in a case database. The fraud/no-fraud decision is used to update the risk factors and the predictive model to improve predictions about future alerts. [0007]
  • The features and advantages described in this summary and the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the art. Moreover, it should be noted that the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.[0008]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an overall embodiment of the present invention. [0009]
  • FIG. 2 is a graph illustrating how a time-decayed rate changes over time. [0010]
  • FIG. 3 is a flow chart of steps taken in an embodiment of the present invention. [0011]
  • FIG. 4 is a flow chart of steps taken by the case engine in one embodiment of the present invention. [0012]
  • FIG. 5 is a diagram illustrating the determination of fraud scores when bandwidth is leased between telephone companies, in accordance with an embodiment of the present invention.[0013]
  • The figures depict a preferred embodiment of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. [0014]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Overall Architecture [0015]
  • The present invention provides a threshold-based fraud prevention product aimed at detecting fraudulent calls using predictive modeling. Referring now to FIG. 1 there is shown an illustration of one embodiment of a [0016] system 100 in accordance with the present invention. The system 100 comprises a communications server 101, rule engine 104, a rule database 106, a case database 108, a case engine 118, a customer alert profile database 110, a predictive model 112, risk tables 114, a queuing engine 120, and various queues 116. Also shown in FIG. 1 are a call data record (CDR) 102 and an analyst workstation 118.
  • A Call Data Record (CDR) [0017] 102, as is known in the art, is a record of a telephone call, and typically contains identification information for the originating telephone number, the terminating telephone number, the billing telephone number (which in some cases may be neither the originating nor terminating number) or credit card number, the time of the call, and the length of the call. The CDR may contain additional data such as information specific to the telephone company billing the call, routing information, etc. More generally, a CDR is an instance of a transaction summary. In other embodiments, a transaction summary will have corresponding content (e.g., details of a credit card transaction).
  • The [0018] communications server 101 receives CDRs from the telephone companies (Telcos), and passes them to the rule engine 104. The CDRs are consolidated from all switches, mediation devices and SS7 surveillance platforms. As is known by those skilled in the art, mediation devices and SS7 surveillance platforms are designed to detect abnormal system behavior. It should be noted here that one of the advantages of the present invention is that it operates with any number of Telcos and can provide fraud detection for either a single Telco, or a large number of Telcos.
  • The [0019] rule engine 104 determines whether an incoming CDR 102 should be further examined for evidence of fraud. This decision is made on the basis of rules stored in the rule database 106. The rule database 106 contains rule sets specified by various telephone companies using system 100. Based on the Telco providing the CDR, the rule engine 104 applies a set of that Telco's specific rules to the CDR. This feature also allows the system to simultaneously evaluate CDRs for different Telcos, thereby providing a fraud detection service to the Telcos.
  • If the CDR satisfies the corresponding Telco's rules, then [0020] rule engine 104 generates an alert, and the alert is sent to the case engine 118. The case engine 118 uses information stored in the case database 108 to update an existing case, or create a new case, as required. The case database 108 contains records of cases examined for fraud, including the dispositions of those cases. Alerts generated by the rule engine 104 are also stored in the customer alert profile database 110.
  • The [0021] predictive model 112 receives cases and scores alerts, and generates a score indicative of the likelihood of fraud (more generally indicative of a level of risk). More specifically, the predictive model 112 receives input from the customer alert profile database 110, the case engine 118, and risk tables 114. The case engine also has access to certain CDR information about the CDRs that generated the alerts.
  • The [0022] predictive model 112 sends cases containing scored alerts back to the case engine 118 along with the score. The case engine 118 then sends the case to the queuing engine 120 for assignment to one of the queues 116 according to a priority. Analysts use analyst workstations 118 to examine cases from the queues 116, preferably in order of highest-to-lowest priorities.
  • An analyst makes a disposition of a case by deciding whether the case is indeed fraudulent. If fraudulent, then the analyst applies fraud control actions as defined by the provider Telco. Again, this allows each Telco to determine specific control actions to calls that it services. The dispositions (e.g., fraud, no fraud, unconfirmed fraud, unworked) made by analysts are then communicated back to the [0023] queuing engine 120, which in turn reports the results to the case engine 118, risk tables 114, and predictive model 112. The risk tables and predictive model are updated in light of the disposition (e.g., recalculating the risk rates in the risk tables, and updating the predictive model parameters). The case engine 118 then closes the case and stores it in the case database 108.
  • Call Data Records [0024]
  • A telephone company (Telco) using [0025] system 100 preferably stores its CDRs in a specified manner for easier processing. At regular intervals, the Telco establishes a connection to the server 101. This server may be a dedicated system reserved for the Telco's use, or it may be a shared system. The shared system is described in further detail below. In a preferred embodiment, the Telco encrypts a file containing batched CDRs, and sends the batch via FTP to the server 101 of the system 100, where it is decrypted. In other embodiments, other methods may be employed to transmit the CDRs to the system 100 in real time, such as through dial-up connections, wireless connections, etc. Additionally, although the records are encrypted in a preferred embodiment, other embodiments may involve the transmission of unencrypted data.
  • CDR Evaluation and Alert Generation [0026]
  • Each Telco's CDRs are evaluated according to alert rules defined by that Telco and stored in the [0027] rule database 106. When any rule is satisfied, the rule engine 104 generates an alert. Typical rules include call collisions, volume of calls, aggregate duration, geographic velocity, single call duration, hot numbers, and exclusion rules.
  • For example, call collision detection detects true call overlap. Call overlap occurs when calls charged to the same billing number have start and end time ranges that overlap. When the number of overlapping calls meets a predefined threshold, the call collision function generates an alert. [0028]
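The overlap test can be sketched as follows, assuming the calls have already been restricted to a single billing number; an alert would fire when the overlap count meets the Telco-defined threshold. Field names and the pairwise approach are illustrative:

```python
def count_overlapping_pairs(calls):
    """Count pairs of calls whose [start, end) time ranges overlap.
    Each call is a dict with numeric "start" and "end" times."""
    overlaps = 0
    for i, a in enumerate(calls):
        for b in calls[i + 1:]:
            if a["start"] < b["end"] and b["start"] < a["end"]:
                overlaps += 1
    return overlaps

def call_collision_alert(calls, threshold):
    """True when the number of colliding call pairs on one billing
    number meets the predefined threshold."""
    return count_overlapping_pairs(calls) >= threshold
```

Two calls charged to the same billing number at 9:00-9:10 and 9:05-9:15 collide; a third at 9:20-9:25 does not.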
  • A call volume rule sets the maximum number of calls that can be made in a fixed time period. Similarly, an aggregate duration rule determines the total amount of billing time that can be spent on the telephone. [0029]
  • Calls can be analyzed to determine distances between call origin points within specific time intervals. The originating number of the last call made is compared to that of the current call charged to the same number. Using vertical and horizontal coordinate data, the system compares the distance and time between calls against Telco-defined thresholds. If the thresholds are exceeded, an alert is tripped. [0030]
  • Miles-per-hour designations determine the impracticality or impossibility of successive calls by the same person. Exceptions are made, for example, for the case when more than one person is granted use of a calling or charge card. [0031]
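The geographic velocity test can be sketched as below. The 500 mph default threshold is purely illustrative (thresholds are Telco-defined), and plain distance/time is used in place of the vertical and horizontal coordinate computation:

```python
import math

def implied_speed_mph(dist_miles, hours_between):
    """Speed a single caller would need to originate both calls."""
    if hours_between <= 0:
        return math.inf  # simultaneous calls from distant origins
    return dist_miles / hours_between

def geographic_velocity_alert(dist_miles, hours_between, max_mph=500.0):
    """True when successive calls charged to the same number imply an
    impractical or impossible travel speed."""
    return implied_speed_mph(dist_miles, hours_between) > max_mph
```

Calls an hour apart from origins 3,000 miles apart trip the alert; 30 miles apart does not. The shared-card exception mentioned above would be handled upstream, before this check runs.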
  • Alerts may also be generated for calls or groups of calls of excessive duration. Telco-defined thresholds are established based on specific billing types, destination and/or origination to generate alerts for lengthy calling activity. Alerts also are generated for hot numbers and excluded numbers, as specified by the Telco. [0032]
  • Additionally, other fields that appear in the CDR may be configured as threshold-sensitive. User-defined fields (qualifiers) can be created to define alert types according to desired parameters. In one embodiment, alerts store only those variables that were tripped, while in other embodiments, all variables are stored in the alert. [0033]
  • In a preferred embodiment, an alert contains information including originating telephone number, terminating telephone number, billing account number, time of call, call type, dial type, and type of alert. Other data may also be present according to the needs of the particular Telco. In a preferred embodiment, a header is prepended to an alert to allow the [0034] case engine 118 to identify which case to attach the alert to, as well as the status of that case. (One embodiment has a 26 byte header and a 642 byte payload.)
  • Case Engine [0035]
  • The [0036] case engine 118 attempts to associate the alert with an existing case. A case contains a billing account number or other identification code, an alert table of one or more alerts, a customer profile, and a score field. In one embodiment, each customer has an account code that can be used as the identification code for a case. In other embodiments, the billing number or other identifying data may be used. A case may contain many alerts, and if one or more alerts are already present, the new alert will simply be added to those pending. Each alert in a case is identified in the alert table for the particular case. If an alert is generated and no active cases exist for the identification key, then a new case is created containing the new alert.
  • The [0037] case engine 118 determines to which case each incoming alert belongs based on a key. If a case does not yet exist for that key, a case is created and initialized. As noted, the key will typically be a customer account code or the billing ANI (Automatic Number Identification). In a preferred embodiment, either a billing account number or a billing ANI should always exist. However, as a precaution, alerts with neither a customer account number nor a billing ANI are assigned to an “unknown” case.
  • In a preferred implementation, the alerts are joined with the case data into a BLOB (binary large object), with new alerts being appended to the end of the BLOB. Each alert is uniquely identified by an alert ID, which is a unique incrementing number that is generated by the [0038] rule engine 104. This facilitates retrieval, as only one query is needed into the database to gather all of the case and alert information. The BLOBs are uniquely identified by a key that is associated with the case number.
  • Customer Alert Profile Database [0039]
  • When an alert is created (tripped), it is stored in the customer alert profile database [0040] 110, indexed by billing number or other identification key that corresponds to a billing hierarchy of the provider. Billing hierarchies are discussed in more detail below. A customer alert profile tracks the alert behavior and case outcome of the corresponding subscriber.
  • The customer alert profile contains historical alert data about the customer. In one embodiment, the profile stores historical data in complete form, including all information associated with the alert. In another embodiment, only statistical summaries of the alert history are maintained in the customer profile. In a preferred embodiment, data that is stored in the customer profile includes alert rates, risk, typical activity and unusual activity. Typical activity, or low-risk activity, is activity which is generally seen on non-fraud alerts. Unusual activity is activity more commonly associated with fraudulent alerts, and is of high risk. The decision as to what types of activity are low and high risk is made in a preferred embodiment by analyzing the activity patterns of dispositioned alerts. This is done automatically in a preferred embodiment, though in alternative embodiments it is done manually. Historical alert data in customer profiles is updated each time an analyst makes a determination about whether a pending alert is fraudulent or not. [0041]
  • The customer alert profile is also an input to the [0042] predictive model 112. Customer profiles are collections of variables that capture past and present alert behavior for that customer over time. This profile is updated each time an alert is received, in a preferred embodiment prior to scoring the alert. Variables may also be updated each time an alert is dispositioned. In a preferred embodiment, short-term behavior is captured on a 1½-day time scale while long-term behavior is on a 9-day time scale.
  • The customer profile variables can be segmented into nine categories, as listed in Table 1 below. A table of customer profile fields is included in Appendix 1. [0043]
    TABLE 1
    Profile Variable or Category: Description
    Time of last alert: The timestamp (in seconds) of the last alert that was processed for this subscriber.
    Time of last alert disposition: The timestamp (in seconds) of the last alert disposition that was processed for this subscriber.
    Short term rates of each of the 11 alert types (see Table 2): Decayed average rates (counts per unit time) for the 11 alert types for this subscriber. The time constant used is the short term constant of 1.5 days.
    Average short term risk weighted rates of the 11 alert types: Decayed average risk weighted rates (risk multiplied by rate) for the 11 alert fields for this subscriber. The time constant used is the short term constant of 1.5 days.
    Average long term risks of 11 alert fields: Decayed average risk for the 11 alert fields for this subscriber. The time constant used is the long term constant of 9 days.
    Ratio of short term to long term average risks of the 11 alert fields: Decayed average value of the ratio of the short term to long term average risks for the 11 alert fields for this subscriber. The short term time constant is 1.5 days, while the long term constant is 9 days.
    Short term rate of each of the 4 dispositions: Decayed average rates (counts per unit time) for the 4 alert dispositions for this subscriber. The time constant used is the short term constant of 1.5 days.
    Short term average risk of the customer: Decayed average risk of the subscriber, where the rates of the four dispositions are the short term rates of the four dispositions. The time constant used is the short term time constant of 1.5 days.
    Combinations of variables above and raw risk variables: See Appendix 1.
  • The time stamps refer to the end time of the call generating the last alert and are forward ratcheting only. This means that the most recent time stamp is used when comparing alerts out of order. For instance, suppose an alert with an end time of 10 AM is received and processed, and subsequently an alert with a 9 AM end time is received. The time-of-last-alert variable remains at 10 AM. An 11 AM end time would then cause the time of last alert to ratchet forward to 11 AM. This ratcheting is used to ensure that the profile variables do not grow exponentially if alerts arrive out of order. [0044]
  • When new alerts arrive, the customer profile is updated. The appropriate customer profile variables are updated using the time of the last alert as the incumbent time stamp, and the ENDTIME of the current alert to calculate time differences for all profile variable decays. In one preferred embodiment, the updates are performed in the following order: [0045]
  • Case alert rate [0046]
  • Short term rates of each of the 11 alert types [0047]
  • Long term average case risks of case alert rate and various alert fields [0048]
  • Ratios of short term to long term average case risks of case alert rate and various alert fields [0049]
  • Risk weighted rates of case alert rate and various alert fields [0050]
  • Time of last alert [0051]
  • To perform the decays, the time difference is taken to be: [0052]

        Δt_i = t_i − t_{i−1}   if t_i > t_{i−1}
        Δt_i = 0               otherwise

  • where t_i = ENDTIME of the current alert and t_{i−1} = ENDTIME of the last alert. [0053]
  • In a preferred embodiment, short term decay constants are 1.5 days, while long term decay constants are 9 days. Other decay constants may also be used. [0054]
  • The model then scores the updated profile. It does this by generating an input vector of all profile variables, minus the time stamps. The profile variables and a list of which variables are used as model inputs are included in Appendix 1. [0055]
  • Risk Tables [0056]
  • Risk tables [0057] 114 evaluate the relative risk of different alert fields, and adapt over time based on dispositions. For instance, a bill type risk table monitors how the level of risk of each possible bill type value changes over time. The risk tables 114 are dynamic; if there is an entry in an alert that does not have a corresponding value in the risk table, a new entry is added and initialized in the risk table, and updated with the current alert's information. Thus, as new values are added, the risk tables 114 track statistics for those values. For instance, a telephone company might have 50 switches. Each of those switches might have very different risks, depending on the regions and clients served. The risk tables 114 would track the risk of each of those 50 switches, based upon the alerts and dispositions seen. If a new switch were to be installed, it would be added to the tables, which would then track 51 switches.
  • The risk tables [0058] 114 learn by example, so that each time an analyst makes a decision as to whether an alert is fraudulent or not, the risk tables 114 are updated to reflect that decision. This is done for each of the major variables in the alert that is decisioned, including alert rate, properties, alert type, bill type, call type, dial type, originating trunk group, source name, source type name, qualifier, and the velocity screening number type. Thus, the risk tables 114 adapt to changes in the network and fraud traffic by learning from analysts' decisions.
  • In a preferred embodiment, a profiling technique is used to allow transactions to be summarized effectively. The technique uses profiling filters, which are computed from a set of parameters specific to a customer. These parameters are weighted averages of customer properties. Consider property X, and let τ_i denote the time when the ith alert (with value v_i) is processed. The time-weighted average for X is then [0059]

        x(τ_i) = Σ_{j=1..i} c_j^i · v_j

  • where [0060]

        c_j^i = [∫_{τ_{j−1}}^{τ_j} e^{−(τ_i−t)/T} dt] / [∫_0^{τ_i} e^{−(τ_i−t)/T} dt]
              = [∫_0^1 e^{−t/T} dt] / [∫_0^{τ_i} e^{−t/T} dt] · e^{−(τ_i−τ_j)/T}
              = [(1 − e^{−1/T}) / (1 − e^{−τ_i/T})] · e^{−(τ_i−τ_j)/T}

  • Here (1 − e^{−1/T}) represents the initial weight, (1 − e^{−τ_i/T}) represents the normalization factor, and e^{−(τ_i−τ_j)/T} is the decay factor. [0061]-[0064]
  • A recursive formula is commonly used. Suppose the profile was last updated at time τ_i, and a new alert with its disposition is processed at time τ_{i+1}. Then a recursive equation for x(τ_{i+1}) is: [0065]

        x(τ_{i+1}) = [(1 − e^{−1/T}) / (1 − e^{−τ_{i+1}/T})] · v_{i+1}
                   + [(1 − e^{−τ_i/T}) / (1 − e^{−τ_{i+1}/T})] · e^{−Δτ/T} · x(τ_i)

  • where Δτ = τ_{i+1} − τ_i and T is a decay constant. A larger T results in slower decay, and hence a larger continued influence of older alerts. A faster decay can make the system more sensitive to new fraud schemes. [0066]
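The recursive update can be written in a few lines. A minimal sketch, with function and variable names chosen here rather than taken from the patent, and with times expressed in the same units as the decay constant T:

```python
import math

def update_decayed_average(x_prev, tau_prev, v_new, tau_new, T):
    """One step of the recursive time-weighted average: the value v_new
    observed at time tau_new is blended with the previous average x_prev
    (last updated at tau_prev), with older alerts decaying under constant T."""
    delta = tau_new - tau_prev
    norm = 1.0 - math.exp(-tau_new / T)
    w_new = (1.0 - math.exp(-1.0 / T)) / norm        # weight of the new value
    w_old = (1.0 - math.exp(-tau_prev / T)) / norm   # renormalized old weight
    return w_new * v_new + w_old * math.exp(-delta / T) * x_prev
```

Unrolling the recursion reproduces the closed-form weighted sum over all past values, each weight decayed by e^{−(τ_i−τ_j)/T}.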
  • Risk table variables are decayed in the same manner as profile variables in a preferred embodiment. But whereas the profile variables are specific to a customer, the risk tables are global, and are thus updated at every dispositioned alert. [0067]
  • Risk tables [0068] 114 measure the level of risk associated with individual values of various alert fields and quantities. The risk of a certain value is defined as the ratio of the rate of risky alerts to the rate of all worked alerts, where the rates are calculated as per the decayed rate description given above:

        Risk = (fraud rate + unconfirmed rate) / (fraud rate + unconfirmed rate + nonfraud rate)
  • The rate of unworked alerts is not included for this calculation, as no information is known as to their true disposition. [0069]
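As a sketch, the risk ratio can be computed directly from the three worked-alert rates. The function name is illustrative, and the zero-denominator convention defined later in this document is folded in:

```python
def risk(fraud_rate, unconfirmed_rate, nonfraud_rate):
    """Risk of an alert-field value: the rate of risky (fraud plus
    unconfirmed-fraud) alerts over the rate of all worked alerts.
    Unworked alerts are excluded; a zero denominator yields zero risk."""
    denominator = fraud_rate + unconfirmed_rate + nonfraud_rate
    if denominator == 0:
        return 0.0
    return (fraud_rate + unconfirmed_rate) / denominator
```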
  • In a preferred embodiment, the [0070] system 100 uses eleven different risk tables 114:
  • 1. Alert Type [0071]
  • 2. Bill Type [0072]
  • 3. Call Type [0073]
  • 4. Dial Type [0074]
  • 5. Originating Trunk Group [0075]
  • 6. Property Name [0076]
  • 7. Source Name [0077]
  • 8. Source Type Name [0078]
  • 9. TACS Qualifier [0079]
  • 10. Velocity Screening Number Type [0080]
  • 11. Case Alert Rate [0081]
  • The first ten risk tables track the rates of occurrence and the time of last update for the four dispositions (fraud, non-fraud, unconfirmed fraud, unworked) for each of the unique categorical values of the alert field in question. One embodiment of an unpopulated alert type risk table is illustrated below in Table 2: [0082]
    TABLE 2
    Alert Type Risk Table
    (Remaining columns, shown unpopulated: Time of Last Update; Alert Rate
    for each of Fraud, Nonfraud, Unconfirmed Fraud, and Unworked.)

    Alert Code   Alert Type Description
    c            Low level collision
    C            High level collision
    g            Low level geo velocity
    G            High level geo velocity
    b            Low level volumetric
    B            High level volumetric
    S            Single call duration
    d            Low level aggregate duration
    D            High level aggregate duration
    H            Hot number
    R            Exclusion rule
  • The case-alert rate risk table, Table 3, is slightly different in that the key is a rate that is a numerical value, rather than a categorical value. The key that is used is the actual case-alert rate as found in the account profile (Short Term Rate of All Alerts). This risk table therefore tracks the rates of occurrence and the time of last update for the four dispositions for ranges of the case alert rate. For example, if a case had an alert rate of 3, the values in the second row of Table 3 below would be used to calculate the alert rate risk, since 2 ≤ 3 < 5. [0083]
    TABLE 3
    Case Alert Risk Table
    (Remaining columns, shown unpopulated: Time of Last Update; Alert Rate
    for each of Fraud, Nonfraud, Unconfirmed Fraud, and Unworked.)

    Case Alert Rate [Min, Max)
    [0, 2)
    [2, 5)
    [5, 10)
    [10, 20)
    [20, 50)
    [50, 100)
    [100, 200)
    [200, 500)
    [500, 1000)
    [1000, Inf)
  • Once the correct row is selected for a given value of an alert field or case alert rate, the risk associated with that value is calculated as the ratio of the rates of risky alerts to the rates of all worked alerts: [0084]

        Risk = (fraud rate + unconfirmed rate) / (fraud rate + unconfirmed rate + nonfraud rate)
  • Note that if the denominator is zero, risk is defined to be zero. [0085]
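Row selection for the case-alert-rate table can be sketched as a simple bucket lookup. The bucket boundaries come from Table 3; the names are illustrative:

```python
import bisect

# Lower bounds of the [Min, Max) rows in the case-alert-rate risk table (Table 3).
BUCKET_MINS = [0, 2, 5, 10, 20, 50, 100, 200, 500, 1000]

def bucket_row(case_alert_rate):
    """Index of the Table 3 row whose [Min, Max) range contains the rate;
    e.g. a case alert rate of 3 falls in [2, 5), the second row."""
    return bisect.bisect_right(BUCKET_MINS, case_alert_rate) - 1
```

The selected row's four disposition rates then feed the Risk ratio above.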
  • Predictive Model [0086]
  • The [0087] predictive model 112 receives input from the customer alert profile database 110, the case engine 118, and risk tables 114. The case engine also has access to certain CDR information about the CDRs that generated the alerts. In a preferred embodiment, values preserved from the CDRs include the following:
    Field Name          Field Description
    SWPropertyName      switch that the call was received from
    LogicalQIDName      the alert type
    VCScrnNumber        number the alert was generated on (billing number,
                        originating number, terminating number)
    VCScrnNumTypeName   full-text number type the alert was generated on
                        ("Billing", "Originating", "Terminating")
    BillingNumber       the billing number of the call
    ReceivedTimeSecs    time the system received the CDR
    FirstElemSecs       time the call began
    PostTimeSecs        time the rules engine processed the call
    EndTimeSecs         time the call was completed
    BillType            the bill type of the call
    CallType            the type of call made
    BillNumberPin       PIN used, if the call was made with a calling card
    DialType            dial type
    CDR_ID              the unique ID of the CDR that generated the alert
    CardTypeName        card type, if a calling card is used
    OrigTrunkGroup      originating trunk group
    CIC                 carrier identification code
    CustomerCode        the account number for the customer
  • Fields that are generated in a preferred embodiment by the [0088] rule engine 104 from the CDRs that created the alert are:
    Field Name        Field Description
    Alert ID          a unique number to identify the alert
    GVCRate           the rate used in calculating geo-velocity collisions
    ThreshValExceed   the threshold value exceeded
    VCQuantity        the actual value that exceeded the threshold
    PeriodName        name of period, if used (i.e. name associated with
                      holidays or multipliers)
    PropertyName      property in the hierarchy used to define the threshold
                      for the alert
    SourceName        the name of the property or class that generated the
                      alert
    SourceTypeName    source of the threshold (property or class)
  • Fields that are added to the alert by the case engine in a preferred embodiment are: [0089]
    Field Name    Field Description
    AlertScore    score of the alert
    Disposition   analyst-given disposition of the alert
  • The [0090] predictive model 112 in a preferred embodiment is a neural-network-based statistical tool that learns how various inputs can be correlated to predict a certain target variable, and that can be incrementally trained by example, as new alerts are decisioned by an analyst. This means that historical alert and disposition information can be used to pre-train the predictive model 112 and risk tables 114 before the model is put online, so that the system 100 can have a running start at install time. In a preferred embodiment, the predictive model is an Extended Kalman filter. Each time a case is closed, the tagged alerts are sent to the Kalman filter and the model weights are updated. In this way, the predictive model 112 builds and updates a knowledge base to help the analyst manage workflow by predicting the relative degree of risk in each case.
  • The output of the [0091] predictive model 112, determined in a manner described below, is a fraud score indicative of the likelihood that the call that generated the alert was fraudulent. The score is attached to the case and returned by the predictive model 112 to the case engine 118. The score is preferably on a scale from 1-999, though other scales may be used. An alert with a score of 800 would therefore be much riskier than an alert with a score of 200, and should be worked first.
  • In a preferred embodiment of the predictive model, the Extended Kalman filter output is a number on the unit interval (0,1). The Extended Kalman filter output attempts to predict the Boolean risky alert tag: [0092]

        risk tag = 0.9   for fraud or unconfirmed fraud
        risk tag = 0.1   for nonfraud
  • While the values 0.9 and 0.1 used as target values can be adjusted to change the score distribution in some embodiments, in a preferred embodiment the target values are fixed. Because unworked alerts are excluded from training the network, their scores are similar to the most common disposition, which is non-fraud. Thus, fraudulent and unconfirmed fraud alerts will tend to give raw scores closer to 0.9, while non-fraudulent and unworked alerts will tend to score closer to 0.1. [0093]
  • The raw score is mapped onto a score range of [1,999] by using a linear function: [0094]
  • scaled score = floor(999 × raw score) + 1
  • where raw score is the output of the Extended Kalman filter. [0095]
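The linear mapping reduces to a one-liner. A small sketch (the function name is illustrative); for raw scores strictly inside (0, 1) the result lands in the documented [1, 999] range:

```python
import math

def scaled_score(raw_score):
    """Map the Extended Kalman filter output on (0, 1) to the 1-999 scale."""
    return math.floor(999 * raw_score) + 1
```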
  • Once a scaled score has been computed for an alert, the summary case scores are updated. These summary scores are designed so they can be used to prioritize cases. In a preferred embodiment, the scores are as follows: [0096]
  • Creation score is the score of the profile as soon as the first alert was processed. [0097]
  • Current score is the score of the profile at the time the most recent alert was processed. [0098]
  • High score is the maximum score of any alert in the case. [0099]
  • Queuing [0100]
  • The [0101] predictive model 112 sends cases containing scored alerts back to the case engine 118 along with the score. The case engine 118 then sends the case to the queuing engine 120 for assignment to one of the queues 116 according to a priority. Analysts use analyst workstations 118 to examine cases from the queues 116, preferably in order of highest-to-lowest priorities. To facilitate the prioritization of cases, in a preferred embodiment, each case is assigned a case score. While they may be related, a case score is distinct from an alert score. Score types that a case may be assigned in a preferred embodiment include creation score, current score, and high score, as detailed above.
  • In alternative embodiments, fields that are used to determine priority also include the current number of alerts in the case; the last time the case was updated; and the time that the case was created. [0102]
  • In addition, cases may be queued for disposition for reasons other than risk. For example, it may be more efficient to have newly trained analysts receive cases that are fairly simple, while more experienced analysts study the more difficult cases. Thus, in a preferred embodiment, cases additionally include the following fields that may be used by the queuing engine to determine queuing priority: [0103]
  • case number—a unique incrementing number, where the higher the number, the more recently the case was created. [0104]
  • case worked status—whether the case is unworked, pending, open, or closed. [0105]
  • case disposition—whether the case has been marked as fraud, nonfraud, unconfirmed. [0106]
  • cic (carrier information code)—used for segmenting customer traffic. [0107]
  • callback—whether another analyst has deemed this case is worth looking into again. [0108]
  • Analysts [0109]
  • Analysts are assigned to one or [0110] more queues 116. To maximize efficiency, and minimize the risk of loss resulting from fraudulent activity, analysts first work those cases that have higher fraud scores. As analysts examine alerts within a case, they assign disposition values to the alerts. In a preferred embodiment, there are four possible values:
  • Fraud: When an analyst confirms with a customer that an alert is fraud-related. [0111]
  • Non-fraud: When an analyst confirms with a customer that an alert is not fraud-related. [0112]
  • Unconfirmed fraud: When an analyst is confident that an alert is fraud-related without confirmation from the customer (i.e. when the analyst is willing to take action against the account). [0113]
  • Unworked: When an analyst is unsure whether the alert is fraud or not, or when the alert has not been examined by the analyst. [0114]
  • Cases as a whole may also be assigned values, as follows: [0115]
  • Fraud: A case is considered fraudulent if it contains at least one fraudulent alert. [0116]
  • Unconfirmed fraud: A case is considered unconfirmed fraud if it contains at least one unconfirmed fraud alert, and no fraud alerts (i.e. fraud takes precedence over unconfirmed fraud). [0117]
  • Non-fraudulent: A case is considered non-fraudulent if it contains at least one non-fraudulent alert and no fraud or unconfirmed fraud alerts. [0118]
  • Unworked: A case is considered unworked if it contains only unworked alerts. [0119]
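The case-level roll-up rules above amount to a precedence order over the alert dispositions, as in this sketch (the string labels are illustrative):

```python
def case_disposition(alert_dispositions):
    """Roll alert dispositions up to the case level: fraud takes precedence
    over unconfirmed fraud, which takes precedence over non-fraud; a case
    containing only unworked alerts is unworked."""
    for disposition in ("fraud", "unconfirmed fraud", "nonfraud"):
        if disposition in alert_dispositions:
            return disposition
    return "unworked"
```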
  • It should be noted that typically, service providers organize billing information based upon a tiered structure, though the specifics of that tiered structure may vary from carrier to carrier. Individual phone lines (ANI's) lie at the lowest tier. Higher tiers may group ANI's or services by Billing Telephone Numbers (BTN's), by building, location, or department, and ultimately by the customer. [0120]
  • As an example of a billing system, consider a fictional telephone company TelCo Inc. TelCo Inc. has a three-tiered billing system that handles primarily business customers. At the bottom level of the billing system is the ANI. At the next level up is a billing account number (BAN) that is location or department specific. Above that is a customer code that aggregates all billing account numbers for a given customer. For instance, TelCo Inc. may have 10 buildings, each with 2000 phone lines. Therefore, they would have 20,000 phone lines (or ANI's). Each of those 10 buildings might have a unique billing account number, in order to distinguish them for billing purposes. In addition, there would be one customer code to distinguish the company from an account of another company. [0121]
  • Thus, telephone [0122] companies using system 100 may choose to perform case management at the billing account number level (the middle tier in TelCo Inc's hierarchy). This prevents the analyst from becoming swamped with 20,000 different cases from the same large company, one for each ANI, and yet it does not clump all buildings or departments together. Typically, different buildings or departments in a company may use their telephone services quite differently. Consider the usage patterns of corporate offices, marketing, sales, customer support, or engineering; each would be quite different. Modeling at the middle tier in the billing hierarchy captures those differences. It will be noted, however, that modeling could take place at any of the other levels in a similar manner. In each instance, CDRs will still be evaluated against Telco defined rules, and when one or more rules are matched, an alert will be generated.
  • Updating [0123]
  • When an analyst works a case and all alerts are assigned a disposition, the decisioned case is sent by the queuing [0124] engine 120 back to the case engine 118. Data is also sent to the risk tables 114. Using this data, the risk tables 114 update their variable weights to improve fraud detection performance, as described below. When the case engine 118 receives the decisioned case back from the queuing engine, it sends the decision to the predictive model 112. The predictive model uses this decision information to learn, as described below, thus improving its fraud predicting abilities. The case engine 118 then marks the case as closed, and sends it to the case database 108 for storage.
  • In practice, cases are not dispositioned immediately because of the delay between alert generation and analyst availability. While a case is accumulating alerts, the profile for that customer may be updated as other alerts are generated and scored. When the case is finally dispositioned, the profile may differ from the profile as it existed during scoring of the alert, due to intervening alerts and updated risk tables. Similarly, when the predictive model is updated, there is a time lag between scoring and model update, during which the customer profile may be affected. Therefore, the customer profile used in conjunction with the fraud tag (disposition) may be out of phase by the time it is received by the predictive model. [0125]
  • The model creates the input vector once again from the customer profile. The risk tag is then created from the fraud tag: [0126]

        risk tag = 1   for fraud or unconfirmed fraud
        risk tag = 0   for nonfraud
  • This input vector is then presented to the Kalman filter along with the risk tag, and the Extended Kalman filter weights and intermediate matrices are updated. [0127]
  • Next, the risk tables [0128] 114 are updated. The Extended Kalman filter weights are updated using the profile as it then appears. (Note that, as described above, the profile as it existed at the time of scoring is irretrievably lost in a preferred embodiment.) Each of the 11 risk tables is then updated. The updates are done in this order so that the predictive model can learn to better predict using the state of the profile prior to receiving the disposition information. For each risk table, only the row matching the case alert rate or alert field in question is updated. For instance, for a low-level call collision alert, only the row corresponding to low level call collisions would be updated. For that row, the column matching the alert disposition is decayed and then incremented by 1. The other three disposition columns are simply decayed. For instance, if the alert was unconfirmed fraud, then the unconfirmed fraud rate would be decayed and then incremented by 1, while the other three rates (fraud, non-fraud, and unworked) are decayed without being incremented. The time difference by which the rates are decayed is:

        Δt_i = t_i − t_{i−1}   if t_i > t_{i−1}
        Δt_i = 0               otherwise

  • where t_i = ENDTIME of the current alert and t_{i−1} = ENDTIME of the last disposition. The decay constant is the same as the short-term decay constant of the profile variables, or 1.5 days in a preferred embodiment. [0129]
  • The time of last disposition of that row is then ratcheted up, if the time of the current alert is greater than the time of last disposition. [0130]
  • Those variables of the customer profile that are concerned with the rates of the four dispositions and the average risk of the case are updated after the risk tables are updated, using the time of last disposition and time of the current alert to determine the time difference for decay purposes. Just as was done for the risk tables, the four disposition rates are either decayed and incremented by one or simply decayed, according to the alert disposition. The short-term average case risk is decayed and then incremented by the case risk, as per the four short-term case disposition rates. [0131]
  • The time of last disposition for the profile is then ratcheted up, only if the time of the current alert is greater than the time of last disposition. [0132]
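The decay-and-increment update for a risk-table row can be sketched as follows. The dictionary layout and names are assumptions; the 1.5-day short-term constant is the one given above:

```python
import math

SHORT_TERM_DECAY_DAYS = 1.5  # short-term decay constant from the text

def update_risk_row(row, disposition, delta_days, T=SHORT_TERM_DECAY_DAYS):
    """Decay all four disposition rates in the row by the elapsed time,
    then increment by 1 only the rate matching the alert's disposition."""
    decay = math.exp(-delta_days / T)
    for key in row:
        row[key] *= decay
    row[disposition] += 1.0
    return row
```

For an unconfirmed-fraud alert, all four rates decay but only the unconfirmed column is incremented, matching the worked example in the text.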
  • Architecture and Flow [0133]
  • The steps taken in a preferred embodiment include the following, with reference being had to FIG. 3. A [0134] CDR 102 is received 302 from the Telco by the communications server 101. The rule engine 104 checks 304 the CDR 102 against the Telco rules stored in the rule database 106. If an alert is generated, the rule engine sends 306 the alert to both the case engine 118 and the customer alert profile database 110. The case engine 118 attaches 308 the alert to a case. The operation of the case engine 118 is further described below with respect to FIG. 4. The case engine sends 310 the case to the predictive model 112, and the predictive model 112 scores 312 the alerts in the case using the risk tables 114, the customer alert profile found in the customer alert profile database 110, and case information. The predictive model sends 314 the score back to the case engine, which then sends 316 the case to the queuing engine 120. The queuing engine assigns 318 the case to a position in the queue 116 based on the fraud score of the alerts in the case. An analyst examining the case in the queue decides 320 whether fraud in fact exists in that case. The queuing engine then sends 322 the decision made by the analyst back to the risk tables 114 and to the case engine 118. The case engine additionally sends 324 the alerts associated with a closed case, and their corresponding dispositions to the predictive model 112. The case engine next closes 326 the case, and stores it in the case database 108. The predictive model learns from the decision made by the analyst and performs 328 an update. Likewise, the risk tables variables are updated 330 based on the analyst's decision.
  • The steps taken by the [0135] case engine 118 in a preferred embodiment are as follows, reference being had to FIG. 4: the case engine receives 402 an alert from the rule engine 104. The case manager attempts to locate 404 a case to which the alert can be added by examining cases stored in the case database 108. If a case is located in the database 108, the alert is added to that case. If no case can be located, the case engine then creates a new case, and adds the alert to the new case. Once the alert is attached to the case, the case engine then sends 406 the alert to the predictive model to be scored. The predictive model assigns a score to the alert and sends it back to the case engine. At this point, the case engine compares the score with the previous high score of the case and determines whether the new score should be the high score. The case engine also uses the score to update the “current score” value in the case, and if it is the first alert in the case, it also updates the “creation score” value. Either of these fields is used in preferred embodiments for queuing purposes.
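The summary-score bookkeeping described above (creation, current, and high scores) reduces to a few lines. A sketch with an assumed dict-based case record:

```python
def update_case_scores(case, alert_score):
    """Fold a newly scored alert into the case summary scores: the first
    alert sets the creation score, the latest alert sets the current score,
    and the high score keeps the maximum alert score seen so far."""
    if "creation_score" not in case:   # first alert in the case
        case["creation_score"] = alert_score
    case["current_score"] = alert_score
    case["high_score"] = max(case.get("high_score", 0), alert_score)
    return case
```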
  • The queuing engine determines whether the case is fraudulent, and the case engine receives [0136] 412 the decisioned case from the queuing engine 120. The case engine sends 414 the alerts associated with the case, and their corresponding dispositions, to the predictive model 112 for training, and then stores 416 the case in the case database 108.
  • The present invention may be implemented in a plurality of embodiments. In one embodiment, the [0137] system 100 is located at the same location as the Telco, and is connected to the Telco CDR generating system via a local area network (LAN) or other network-type system. In another embodiment, the system 100 may exist in a location remote from the Telco's own billing system. The Telco may connect to the system 100 via a network such as the Internet, using a protocol such as FTP, telnet, HTTP, etc.
  • Also, in some embodiments, the analysts who determine whether or not scored cases are fraudulent are located at the same location as the [0138] system 100. In other embodiments, the analysts may be located at the Telco, and scored cases may be downloaded from the system 100. In one embodiment, for example, analysts may be at the Telco site and use a World Wide Web connection to the system 100 to view cases and make fraud/no-fraud decisions.
  • Bandwidth Leasing [0139]
  • In one embodiment, one Telco may be leasing bandwidth to another Telco. This often occurs because telephone call volume changes rapidly, and one Telco may find its bandwidth suddenly underutilized, while another Telco finds it has no bandwidth to spare. In order for the Telco providing the bandwidth (the lessor) to do successful fraud detection, it should have access to the complete CDRs for all calls it carries, including those carried over leased-out bandwidth. For the Telco buying the bandwidth (the lessee), however, providing complete CDR information, including identifying information for the originating telephone number, is not desirable, because the lessor may choose to use that information to solicit telephone customers away from the other Telco. The present invention overcomes this stalemate by providing an intermediary. In this embodiment, [0140] system 100 is outside of the control of either Telco, and is managed by a third (trusted) party. The CDR 102 containing complete information is sent to the system 100, and the case is scored by the predictive model 112. The stripped CDR is sent from the lessee Telco to the lessor Telco. A score indicative of the likelihood of fraud is then sent to the Telco providing the bandwidth. That lessor Telco has an analyst to evaluate the scored cases and make fraud determinations. In this way, both the confidentiality of CDR records is maintained, and more accurate fraud/no-fraud determinations can be made.
  • Referring now to FIG. 5, there is shown a diagram illustrating how Telcos leasing bandwidth can still receive fraud scores. The [0141] lessor Telco 502 provides bandwidth to the lessee Telco 504. A call is made by a customer of the lessee Telco, which is carried over the lessor's lines. The full CDR 102 containing sensitive information is sent to system 100 for scoring. System 100 determines the fraud score 508, and sends the score 508 to both the lessor 502 and the lessee 504 Telcos, though in other embodiments, the score may be sent only to the lessor Telco 502. The system 100 also provides the lessor Telco 502 with a stripped CDR 506, which does not contain sensitive information such as the billing number.
  • Note once again that analysts and queues may be at the [0142] system 100, or may be at the Telco site. For example, in the case of Telcos that share bandwidth, the lessor Telco 502 may have analysts at analyst workstations 118 at the Telco 502 site. The queues 116 may be at the system 100 location and accessed, e.g., via HTTP, or they may be at the Telco 502 site. In some embodiments, there is a system 100 provided for each Telco.
  • In one embodiment, [0143] system 100 also maintains system report tables. The system report tables keep track of system and analyst performance. A fraud manager can generate daily, weekly, monthly, or yearly reports of alerts assigned to each of the four disposition types. Similar reports can be generated for alert type or the average time taken for analysts to open or close cases. Another report shows the histogram of various dispositions for different score ranges. This report is a good measure of how well the model is doing at prioritizing cases; higher score ranges on average will contain a higher percentage of fraudulent cases. Reports also exist for showing queues, the cases assigned to those queues, the analysts working the cases, and the status of each case. Another report monitors the evolution of the fraction of fraudulent alerts processed. This report is useful for understanding how fraud trends are changing, as well as how effective the threshold may be at capturing fraud.
  • By using the present invention, fraud managers and analysts will have effective tools to make them more efficient in working cases. The [0144] system 100 helps analysts work cases by billing accounts, rather than at the ANI level. The system 100 provides a valuable interface to provide frequently necessary billing information at one keystroke. The predictive model helps adaptively prioritize those cases based upon learned risk, rather than heuristics. System reporting helps fraud managers better understand both the fraud and case trends, as well as the workload and efficiency of their analysts. All of these tools provide fraud managers and analysts with a competitive advantage in fighting fraud.
  • As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof, and the mechanisms that implement the invention or its features may have different names or formats. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention. [0145]

Claims (12)

1. A method for detecting telecommunications fraud, comprising:
receiving at least one call data record (CDR) of a telephone call from a telephone company (TelCo);
checking the CDR against a plurality of rules;
responsive to a determination that at least one of the rules has been satisfied by the CDR, generating an alert; and
scoring the alert by a predictive model to determine a likelihood that the telephone call is fraudulent.
2. The method of claim 1, further comprising associating the alert with a case, the case containing additional data related to the caller.
3. The method of claim 2, further comprising:
queuing the case according to the scored alerts within the case;
determining whether fraud exists within the case; and
updating the predictive model based on the determination of whether fraud exists.
4. The method of claim 2, wherein the additional data related to the caller includes an alert profile, containing historical alert data about the customer.
5. The method of claim 1 wherein scoring alerts by a predictive model further comprises:
inputting into the predictive model data extracted from the alert, related instances of unusual activity, and risk data.
6. A method for detecting telecommunications fraud, comprising:
receiving a record of telecommunications activity by a caller;
responsive to a determination that the record includes unusual activity of the caller with respect to either the caller's prior calls or rules defining unusual activity:
using a predictive model to determine a likelihood that the unusual activity is associated with telecommunications fraud.
7. The method of claim 6, wherein the telecommunications activity is a telephone call.
8. The method of claim 6, further comprising:
receiving confirming information indicating whether the unusual activity is actually associated with telecommunications fraud; and
updating the predictive model using the confirming information.
9. The method of claim 8 wherein the confirming information is received from a fraud analyst.
10. The method of claim 6 wherein a determination that the record includes unusual activity includes a determination that a detail of the record exceeds a preset threshold.
11. The method of claim 6 wherein using a predictive model further comprises:
providing as input to the predictive model the unusual activity found in the record, related instances of unusual activity, and risk data; and
obtaining as output from the predictive model a score indicative of the likelihood that the unusual activity in the record is the result of fraud.
12. The method of claim 11, further comprising:
transmitting the unusual activity and the score to an analyst; and
receiving from the analyst a decision indicating whether the unusual activity is the result of fraud.
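The pipeline of claim 1 — receive a CDR, check it against rules, raise an alert on any rule hit, then score the alert with a predictive model — can be sketched as follows. Everything here is a hypothetical illustration: the rule names, thresholds, CDR fields, and the stub scoring function are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the claimed pipeline: a CDR is checked against a
# plurality of rules; any rule hit generates an alert, which a (stubbed)
# predictive model then scores for fraud likelihood.

RULES = [
    ("long_duration", lambda cdr: cdr["duration_min"] > 120),
    ("premium_dest", lambda cdr: cdr["dest"].startswith("900")),
]

def process_cdr(cdr, score_fn):
    fired = [name for name, test in RULES if test(cdr)]
    if not fired:
        return None  # no rule satisfied, so no alert is generated
    alert = {"cdr": cdr, "rules": fired}
    alert["score"] = score_fn(alert)  # likelihood the call is fraudulent
    return alert

# Stub model: more rule hits -> higher score. A real predictive model
# would be trained on historical alert dispositions instead.
alert = process_cdr({"duration_min": 240, "dest": "9005551234"},
                    score_fn=lambda a: min(999, 400 * len(a["rules"])))
```

In the claimed system the score then drives prioritization: the alert is associated with a case, and cases are queued by their scored alerts for analyst review.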
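Claims 8 and 12 describe the self-learning loop: the analyst's confirming decision (fraud or not) is fed back to update the predictive model. A minimal sketch of such a loop is below; the per-rule weighting scheme, update rule, and learning rate are illustrative assumptions, not the patent's actual model.

```python
# Minimal sketch of the feedback loop in claims 8 and 12: the analyst's
# fraud / not-fraud decision nudges the model toward that outcome, so
# future alerts on the same rules score higher or lower accordingly.

class AdaptiveScorer:
    def __init__(self, lr=0.1):
        self.weights = {}  # per-rule fraud weight, learned from feedback
        self.lr = lr

    def score(self, rules):
        # Average weight over the fired rules; unseen rules default to 0.5.
        return sum(self.weights.get(r, 0.5) for r in rules) / max(len(rules), 1)

    def update(self, rules, is_fraud):
        target = 1.0 if is_fraud else 0.0
        for r in rules:
            w = self.weights.get(r, 0.5)
            self.weights[r] = w + self.lr * (target - w)  # move toward target

model = AdaptiveScorer()
for _ in range(20):                      # repeated analyst confirmations
    model.update(["premium_dest"], is_fraud=True)
# "premium_dest" alerts now score well above the 0.5 default
```

This captures the adaptive character of the claims: risk is learned from confirmed dispositions rather than fixed by heuristics, so prioritization tracks changing fraud patterns.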
US10/346,636 2000-09-29 2003-01-17 Self-learning real-time prioritization of telecommunication fraud control actions Expired - Lifetime US6850606B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/346,636 US6850606B2 (en) 2001-09-25 2003-01-17 Self-learning real-time prioritization of telecommunication fraud control actions
US10/970,318 US7158622B2 (en) 2000-09-29 2004-10-20 Self-learning real-time prioritization of telecommunication fraud control actions
US11/563,657 US7457401B2 (en) 2000-09-29 2006-11-27 Self-learning real-time prioritization of fraud control actions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/963,358 US6597775B2 (en) 2000-09-29 2001-09-25 Self-learning real-time prioritization of telecommunication fraud control actions
US10/346,636 US6850606B2 (en) 2001-09-25 2003-01-17 Self-learning real-time prioritization of telecommunication fraud control actions

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/963,358 Continuation US6597775B2 (en) 2000-09-29 2001-09-25 Self-learning real-time prioritization of telecommunication fraud control actions

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/970,318 Continuation US7158622B2 (en) 2000-09-29 2004-10-20 Self-learning real-time prioritization of telecommunication fraud control actions

Publications (2)

Publication Number Publication Date
US20030147516A1 true US20030147516A1 (en) 2003-08-07
US6850606B2 US6850606B2 (en) 2005-02-01

Family

ID=27663753

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/346,636 Expired - Lifetime US6850606B2 (en) 2000-09-29 2003-01-17 Self-learning real-time prioritization of telecommunication fraud control actions
US10/970,318 Expired - Lifetime US7158622B2 (en) 2000-09-29 2004-10-20 Self-learning real-time prioritization of telecommunication fraud control actions
US11/563,657 Expired - Fee Related US7457401B2 (en) 2000-09-29 2006-11-27 Self-learning real-time prioritization of fraud control actions

Family Applications After (2)

Application Number Title Priority Date Filing Date
US10/970,318 Expired - Lifetime US7158622B2 (en) 2000-09-29 2004-10-20 Self-learning real-time prioritization of telecommunication fraud control actions
US11/563,657 Expired - Fee Related US7457401B2 (en) 2000-09-29 2006-11-27 Self-learning real-time prioritization of fraud control actions

Country Status (1)

Country Link
US (3) US6850606B2 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050267788A1 (en) * 2004-05-13 2005-12-01 International Business Machines Corporation Workflow decision management with derived scenarios and workflow tolerances
GB2417641A (en) * 2004-08-25 2006-03-01 Agilent Technologies Inc Call detail record correlation to prevent arbitrage
US20060155847A1 (en) * 2005-01-10 2006-07-13 Brown William A Deriving scenarios for workflow decision management
US20070098013A1 (en) * 2005-11-01 2007-05-03 Brown William A Intermediate message invalidation
US20070100990A1 (en) * 2005-11-01 2007-05-03 Brown William A Workflow decision management with workflow administration capacities
US20070101007A1 (en) * 2005-11-01 2007-05-03 Brown William A Workflow decision management with intermediate message validation
US20070100884A1 (en) * 2005-11-01 2007-05-03 Brown William A Workflow decision management with message logging
US20070116013A1 (en) * 2005-11-01 2007-05-24 Brown William A Workflow decision management with workflow modification in dependence upon user reactions
US20070233688A1 (en) * 2006-04-04 2007-10-04 Karla Weekes Smolen Online system for exchanging fraud investigation information
US20080178193A1 (en) * 2005-01-10 2008-07-24 International Business Machines Corporation Workflow Decision Management Including Identifying User Reaction To Workflows
US20080235706A1 (en) * 2005-01-10 2008-09-25 International Business Machines Corporation Workflow Decision Management With Heuristics
US20100303211A1 (en) * 2005-04-21 2010-12-02 Victrio Method and system for generating a fraud risk score using telephony channel based audio and non-audio data
US20120263285A1 (en) * 2005-04-21 2012-10-18 Anthony Rajakumar Systems, methods, and media for disambiguating call data to determine fraud
US8793131B2 (en) 2005-04-21 2014-07-29 Verint Americas Inc. Systems, methods, and media for determining fraud patterns and creating fraud behavioral models
US9460722B2 (en) 2013-07-17 2016-10-04 Verint Systems Ltd. Blind diarization of recorded calls with arbitrary number of speakers
US9503571B2 (en) 2005-04-21 2016-11-22 Verint Americas Inc. Systems, methods, and media for determining fraud patterns and creating fraud behavioral models
US9571652B1 (en) 2005-04-21 2017-02-14 Verint Americas Inc. Enhanced diarization systems, media and methods of use
US9875742B2 (en) 2015-01-26 2018-01-23 Verint Systems Ltd. Word-level blind diarization of recorded calls with arbitrary number of speakers
US9875739B2 (en) 2012-09-07 2018-01-23 Verint Systems Ltd. Speaker separation in diarization
US9984706B2 (en) 2013-08-01 2018-05-29 Verint Systems Ltd. Voice activity detection using a soft decision mechanism
US10134401B2 (en) 2012-11-21 2018-11-20 Verint Systems Ltd. Diarization using linguistic labeling
CN110248322A (en) * 2019-06-28 2019-09-17 国家计算机网络与信息安全管理中心 A kind of swindling gang identifying system and recognition methods based on fraud text message
US10567402B1 (en) * 2017-04-13 2020-02-18 United Services Automobile Association (Usaa) Systems and methods of detecting and mitigating malicious network activity
CN111385420A (en) * 2018-12-29 2020-07-07 中兴通讯股份有限公司 User identification method and device
US10878428B1 (en) * 2017-05-09 2020-12-29 United Services Automobile Association (Usaa) Systems and methods for generation of alerts based on fraudulent network activity
US10887452B2 (en) 2018-10-25 2021-01-05 Verint Americas Inc. System architecture for fraud detection
US20210176354A1 (en) * 2019-10-11 2021-06-10 Alipay (Hangzhou) Information Technology Co., Ltd. Decentralized automatic phone fraud risk management
US11115521B2 (en) 2019-06-20 2021-09-07 Verint Americas Inc. Systems and methods for authentication and fraud detection
US11310360B2 (en) * 2019-12-20 2022-04-19 Clear Labs Israel Ltd. System and methods thereof for real-time fraud detection of a telephone call transaction
US11470194B2 (en) * 2019-08-19 2022-10-11 Pindrop Security, Inc. Caller verification via carrier metadata
US20220366430A1 (en) * 2021-05-14 2022-11-17 At&T Intellectual Property I, L.P. Data stream based event sequence anomaly detection for mobility customer fraud analysis
US11538128B2 (en) 2018-05-14 2022-12-27 Verint Americas Inc. User interface for fraud alert management
US11868453B2 (en) 2019-11-07 2024-01-09 Verint Americas Inc. Systems and methods for customer authentication based on audio-of-interest

Families Citing this family (120)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6850606B2 (en) * 2001-09-25 2005-02-01 Fair Isaac Corporation Self-learning real-time prioritization of telecommunication fraud control actions
GB0207392D0 (en) * 2002-03-28 2002-05-08 Neural Technologies Ltd A configurable data profiling system
US10521857B1 (en) * 2003-05-12 2019-12-31 Symantec Corporation System and method for identity-based fraud detection
US9412123B2 (en) 2003-07-01 2016-08-09 The 41St Parameter, Inc. Keystroke analysis
US7706574B1 (en) 2003-11-06 2010-04-27 Admitone Security, Inc. Identifying and protecting composed and transmitted messages utilizing keystroke dynamics
US10999298B2 (en) 2004-03-02 2021-05-04 The 41St Parameter, Inc. Method and system for identifying users and detecting fraud by use of the internet
US8732004B1 (en) 2004-09-22 2014-05-20 Experian Information Solutions, Inc. Automated analysis of data to generate prospect notifications based on trigger events
US7620819B2 (en) * 2004-10-04 2009-11-17 The Penn State Research Foundation System and method for classifying regions of keystroke density with a neural network
US20060179063A1 (en) * 2005-02-08 2006-08-10 Rose Alan B Method and system for reducing dependent eligibility fraud in healthcare programs
US20070043577A1 (en) * 2005-08-16 2007-02-22 Sheldon Kasower Apparatus and method of enabling a victim of identity theft to resolve and prevent fraud
US20080243680A1 (en) * 2005-10-24 2008-10-02 Megdal Myles G Method and apparatus for rating asset-backed securities
US20080221971A1 (en) * 2005-10-24 2008-09-11 Megdal Myles G Using commercial share of wallet to rate business prospects
US20080221973A1 (en) * 2005-10-24 2008-09-11 Megdal Myles G Using commercial share of wallet to rate investments
US20080033852A1 (en) * 2005-10-24 2008-02-07 Megdal Myles G Computer-based modeling of spending behaviors of entities
US20080228541A1 (en) * 2005-10-24 2008-09-18 Megdal Myles G Using commercial share of wallet in private equity investments
US20080228540A1 (en) * 2005-10-24 2008-09-18 Megdal Myles G Using commercial share of wallet to compile marketing company lists
US8175939B2 (en) * 2005-10-28 2012-05-08 Microsoft Corporation Merchant powered click-to-call method
US8938671B2 (en) 2005-12-16 2015-01-20 The 41St Parameter, Inc. Methods and apparatus for securely displaying digital images
US11301585B2 (en) 2005-12-16 2022-04-12 The 41St Parameter, Inc. Methods and apparatus for securely displaying digital images
US8020005B2 (en) * 2005-12-23 2011-09-13 Scout Analytics, Inc. Method and apparatus for multi-model hybrid comparison system
EP1816595A1 (en) * 2006-02-06 2007-08-08 MediaKey Ltd. A method and a system for identifying potentially fraudulent customers in relation to network based commerce activities, in particular involving payment, and a computer program for performing said method
US20070198712A1 (en) * 2006-02-07 2007-08-23 Biopassword, Inc. Method and apparatus for biometric security over a distributed network
US10127554B2 (en) * 2006-02-15 2018-11-13 Citibank, N.A. Fraud early warning system and method
US7711636B2 (en) 2006-03-10 2010-05-04 Experian Information Solutions, Inc. Systems and methods for analyzing data
US7526412B2 (en) * 2006-03-31 2009-04-28 Biopassword, Inc. Method and apparatus for multi-distant weighted scoring system
US8151327B2 (en) 2006-03-31 2012-04-03 The 41St Parameter, Inc. Systems and methods for detection of session tampering and fraud prevention
US20070233667A1 (en) * 2006-04-01 2007-10-04 Biopassword, Llc Method and apparatus for sample categorization
US20070300077A1 (en) * 2006-06-26 2007-12-27 Seshadri Mani Method and apparatus for biometric verification of secondary authentications
US8411833B2 (en) * 2006-10-03 2013-04-02 Microsoft Corporation Call abuse prevention for pay-per-call services
US8036979B1 (en) 2006-10-05 2011-10-11 Experian Information Solutions, Inc. System and method for generating a finance attribute from tradeline data
US7657497B2 (en) * 2006-11-07 2010-02-02 Ebay Inc. Online fraud prevention using genetic algorithm solution
US7657569B1 (en) 2006-11-28 2010-02-02 Lower My Bills, Inc. System and method of removing duplicate leads
US7778885B1 (en) 2006-12-04 2010-08-17 Lower My Bills, Inc. System and method of enhancing leads
DE102006062210A1 (en) * 2006-12-22 2008-06-26 Deutsche Telekom Ag Method for detecting a woman in roaming connections in mobile communication networks
US8606626B1 (en) 2007-01-31 2013-12-10 Experian Information Solutions, Inc. Systems and methods for providing a direct marketing campaign planning environment
US8606666B1 (en) 2007-01-31 2013-12-10 Experian Information Solutions, Inc. System and method for providing an aggregation tool
US7975299B1 (en) 2007-04-05 2011-07-05 Consumerinfo.Com, Inc. Child identity monitor
US7742982B2 (en) * 2007-04-12 2010-06-22 Experian Marketing Solutions, Inc. Systems and methods for determining thin-file records and determining thin-file risk levels
WO2008147918A2 (en) 2007-05-25 2008-12-04 Experian Information Solutions, Inc. System and method for automated detection of never-pay data sets
US8301574B2 (en) * 2007-09-17 2012-10-30 Experian Marketing Solutions, Inc. Multimedia engagement study
US20090089190A1 (en) * 2007-09-27 2009-04-02 Girulat Jr Rollin M Systems and methods for monitoring financial activities of consumers
US9690820B1 (en) 2007-09-27 2017-06-27 Experian Information Solutions, Inc. Database system for triggering event notifications based on updates to database records
US7996521B2 (en) * 2007-11-19 2011-08-09 Experian Marketing Solutions, Inc. Service for mapping IP addresses to user segments
US8332932B2 (en) * 2007-12-07 2012-12-11 Scout Analytics, Inc. Keystroke dynamics authentication techniques
US10373198B1 (en) 2008-06-13 2019-08-06 Lmb Mortgage Services, Inc. System and method of generating existing customer leads
US7991689B1 (en) 2008-07-23 2011-08-02 Experian Information Solutions, Inc. Systems and methods for detecting bust out fraud using credit data
US8805836B2 (en) * 2008-08-29 2014-08-12 Fair Isaac Corporation Fuzzy tagging method and apparatus
US8275899B2 (en) * 2008-12-29 2012-09-25 At&T Intellectual Property I, L.P. Methods, devices and computer program products for regulating network activity using a subscriber scoring system
US20100174638A1 (en) 2009-01-06 2010-07-08 ConsumerInfo.com Report existence monitoring
US20100235908A1 (en) * 2009-03-13 2010-09-16 Silver Tail Systems System and Method for Detection of a Change in Behavior in the Use of a Website Through Vector Analysis
US20100235909A1 (en) * 2009-03-13 2010-09-16 Silver Tail Systems System and Method for Detection of a Change in Behavior in the Use of a Website Through Vector Velocity Analysis
US9112850B1 (en) 2009-03-25 2015-08-18 The 41St Parameter, Inc. Systems and methods of sharing information through a tag-based consortium
US8924438B2 (en) * 2009-11-12 2014-12-30 Verizon Patent And Licensing Inc. Usage record enhancement and analysis
US8868728B2 (en) * 2010-03-11 2014-10-21 Accenture Global Services Limited Systems and methods for detecting and investigating insider fraud
US9652802B1 (en) 2010-03-24 2017-05-16 Consumerinfo.Com, Inc. Indirect monitoring and reporting of a user's credit data
US10453093B1 (en) 2010-04-30 2019-10-22 Lmb Mortgage Services, Inc. System and method of optimizing matching of leads
US8412563B2 (en) * 2010-07-02 2013-04-02 Fis Financial Compliance Solutions, Llc Method and system for analyzing and optimizing distribution of work from a plurality of queues
WO2012054646A2 (en) 2010-10-19 2012-04-26 The 41St Parameter, Inc. Variable risk engine
US20120109821A1 (en) * 2010-10-29 2012-05-03 Jesse Barbour System, method and computer program product for real-time online transaction risk and fraud analytics and management
US8359006B1 (en) * 2010-11-05 2013-01-22 Sprint Communications Company L.P. Using communications records to detect unauthorized use of telecommunication services
EP2676197B1 (en) 2011-02-18 2018-11-28 CSidentity Corporation System and methods for identifying compromised personally identifiable information on the internet
US8458069B2 (en) * 2011-03-04 2013-06-04 Brighterion, Inc. Systems and methods for adaptive identification of sources of fraud
US11030562B1 (en) 2011-10-31 2021-06-08 Consumerinfo.Com, Inc. Pre-data breach monitoring
US10754913B2 (en) 2011-11-15 2020-08-25 Tapad, Inc. System and method for analyzing user device information
US9633201B1 (en) 2012-03-01 2017-04-25 The 41St Parameter, Inc. Methods and systems for fraud containment
US9521551B2 (en) 2012-03-22 2016-12-13 The 41St Parameter, Inc. Methods and systems for persistent cross-application mobile device identification
US20130282523A1 (en) * 2012-04-20 2013-10-24 Howard Pfeffer Network service provider assisted payment fraud detection and mitigation methods and apparatus
EP2880619A1 (en) 2012-08-02 2015-06-10 The 41st Parameter, Inc. Systems and methods for accessing records via derivative locators
WO2014078569A1 (en) 2012-11-14 2014-05-22 The 41St Parameter, Inc. Systems and methods of global identification
US10255598B1 (en) 2012-12-06 2019-04-09 Consumerinfo.Com, Inc. Credit card account data extraction
US8812387B1 (en) 2013-03-14 2014-08-19 Csidentity Corporation System and method for identifying related credit inquiries
US9633322B1 (en) 2013-03-15 2017-04-25 Consumerinfo.Com, Inc. Adjustment of knowledge-based authentication
US10354251B1 (en) 2013-07-26 2019-07-16 Sprint Communications Company L.P. Assigning risk levels to electronic commerce transactions
US10902327B1 (en) 2013-08-30 2021-01-26 The 41St Parameter, Inc. System and method for device identification and uniqueness
US9832646B2 (en) * 2013-09-13 2017-11-28 Network Kinetix, LLC System and method for an automated system for continuous observation, audit and control of user activities as they occur within a mobile network
US9779407B2 (en) 2014-08-08 2017-10-03 Brighterion, Inc. Healthcare fraud preemption
US10262362B1 (en) 2014-02-14 2019-04-16 Experian Information Solutions, Inc. Automatic generation of code for attributes
US10896421B2 (en) 2014-04-02 2021-01-19 Brighterion, Inc. Smart retail analytics and commercial messaging
US9280661B2 (en) 2014-08-08 2016-03-08 Brighterion, Inc. System administrator behavior analysis
US20150032589A1 (en) 2014-08-08 2015-01-29 Brighterion, Inc. Artificial intelligence fraud management solution
US20150339673A1 (en) 2014-10-28 2015-11-26 Brighterion, Inc. Method for detecting merchant data breaches with a computer network server
US20150066771A1 (en) 2014-08-08 2015-03-05 Brighterion, Inc. Fast access vectors in real-time behavioral profiling
US20160055427A1 (en) 2014-10-15 2016-02-25 Brighterion, Inc. Method for providing data science, artificial intelligence and machine learning as-a-service
US10091312B1 (en) 2014-10-14 2018-10-02 The 41St Parameter, Inc. Data structures for intelligently resolving deterministic and probabilistic device identifiers to device profiles and/or groups
US20160078367A1 (en) 2014-10-15 2016-03-17 Brighterion, Inc. Data clean-up method for improving predictive model training
US20160071017A1 (en) 2014-10-15 2016-03-10 Brighterion, Inc. Method of operating artificial intelligence machines to improve predictive model training and performance
US20160063502A1 (en) 2014-10-15 2016-03-03 Brighterion, Inc. Method for improving operating profits with better automated decision making with artificial intelligence
US11080709B2 (en) 2014-10-15 2021-08-03 Brighterion, Inc. Method of reducing financial losses in multiple payment channels upon a recognition of fraud first appearing in any one payment channel
US10546099B2 (en) 2014-10-15 2020-01-28 Brighterion, Inc. Method of personalizing, individualizing, and automating the management of healthcare fraud-waste-abuse to unique individual healthcare providers
US10290001B2 (en) 2014-10-28 2019-05-14 Brighterion, Inc. Data breach detection
US10339527B1 (en) 2014-10-31 2019-07-02 Experian Information Solutions, Inc. System and architecture for electronic fraud detection
US10445152B1 (en) 2014-12-19 2019-10-15 Experian Information Solutions, Inc. Systems and methods for dynamic report generation based on automatic modeling of complex data structures
US9900333B2 (en) * 2015-02-05 2018-02-20 Qualys, Inc. System and method for detecting vulnerability state deltas
US11151468B1 (en) 2015-07-02 2021-10-19 Experian Information Solutions, Inc. Behavior analysis using distributed representations of event data
US10671915B2 (en) 2015-07-31 2020-06-02 Brighterion, Inc. Method for calling for preemptive maintenance and for equipment failure prevention
US20170053291A1 (en) * 2015-08-17 2017-02-23 International Business Machines Corporation Optimal time scale and data volume for real-time fraud analytics
US9948733B2 (en) * 2016-05-09 2018-04-17 Dstillery, Inc. Evaluating authenticity of geographic data associated with media requests
US9729727B1 (en) * 2016-11-18 2017-08-08 Ibasis, Inc. Fraud detection on a communication network
US9774726B1 (en) * 2016-12-22 2017-09-26 Microsoft Technology Licensing, Llc Detecting and preventing fraud and abuse in real time
US20180350006A1 (en) * 2017-06-02 2018-12-06 Visa International Service Association System, Method, and Apparatus for Self-Adaptive Scoring to Detect Misuse or Abuse of Commercial Cards
US10091349B1 (en) * 2017-07-11 2018-10-02 Vail Systems, Inc. Fraud detection system and method
US10623581B2 (en) 2017-07-25 2020-04-14 Vail Systems, Inc. Adaptive, multi-modal fraud detection system
CA3014377A1 (en) * 2017-08-16 2019-02-16 Royal Bank Of Canada Systems and methods for early fraud detection
EP3680845A4 (en) * 2017-09-05 2021-01-13 Rakuten, Inc. Estimation system, estimation method, and program
US10699028B1 (en) 2017-09-28 2020-06-30 Csidentity Corporation Identity security architecture systems and methods
US10896472B1 (en) 2017-11-14 2021-01-19 Csidentity Corporation Security and identity verification system and architecture
WO2019190438A2 (en) * 2017-12-29 2019-10-03 Netaş Telekomüni̇kasyon Anoni̇m Şi̇rketi̇ Ott bypass fraud detection by using call detail record and voice quality analytics
US11379855B1 (en) * 2018-03-06 2022-07-05 Wells Fargo Bank, N.A. Systems and methods for prioritizing fraud cases using artificial intelligence
US20190342297A1 (en) * 2018-05-01 2019-11-07 Brighterion, Inc. Securing internet-of-things with smart-agent technology
US10972472B2 (en) 2018-06-01 2021-04-06 Bank Of America Corporation Alternate user communication routing utilizing a unique user identification
US10855666B2 (en) 2018-06-01 2020-12-01 Bank Of America Corporation Alternate user communication handling based on user identification
US10785220B2 (en) 2018-06-01 2020-09-22 Bank Of America Corporation Alternate user communication routing
US10785214B2 (en) 2018-06-01 2020-09-22 Bank Of America Corporation Alternate user communication routing for a one-time credential
US10798126B2 (en) 2018-06-01 2020-10-06 Bank Of America Corporation Alternate display generation based on user identification
US10805459B1 (en) * 2018-08-07 2020-10-13 First Orion Corp. Call screening service for communication devices
US10484532B1 (en) * 2018-10-23 2019-11-19 Capital One Services, Llc System and method detecting fraud using machine-learning and recorded voice clips
US11164206B2 (en) * 2018-11-16 2021-11-02 Comenity Llc Automatically aggregating, evaluating, and providing a contextually relevant offer
US11343376B1 (en) 2021-04-30 2022-05-24 Verizon Patent And Licensing Inc. Computerized system and method for robocall steering
US20230113752A1 (en) * 2021-10-13 2023-04-13 The Toronto-Dominion Bank Dynamic behavioral profiling using trained machine-learning and artificial-intelligence processes
US20230216968A1 (en) * 2021-12-31 2023-07-06 At&T Intellectual Property I, L.P. Call graphs for telecommunication network activity detection

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6597775B2 (en) * 2000-09-29 2003-07-22 Fair Isaac Corporation Self-learning real-time prioritization of telecommunication fraud control actions

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5666481A (en) * 1993-02-26 1997-09-09 Cabletron Systems, Inc. Method and apparatus for resolving faults in communications networks
TW225623B (en) * 1993-03-31 1994-06-21 American Telephone & Telegraph Real-time fraud monitoring system
US5566234A (en) * 1993-08-16 1996-10-15 Mci Communications Corporation Method for controlling fraudulent telephone calls
US5438570A (en) * 1993-12-29 1995-08-01 Tekno Industries, Inc. Service observing equipment for signalling System Seven telephone network
US5627886A (en) * 1994-09-22 1997-05-06 Electronic Data Systems Corporation System and method for detecting fraudulent network usage patterns using real-time network monitoring
US5768354A (en) * 1995-02-02 1998-06-16 Mci Communications Corporation Fraud evaluation and reporting system and method thereof
US5907602A (en) * 1995-03-30 1999-05-25 British Telecommunications Public Limited Company Detecting possible fraudulent communication usage
US5802145A (en) * 1995-08-03 1998-09-01 Bell Atlantic Network Services, Inc. Common channel signaling event detection and control
US5875236A (en) * 1995-11-21 1999-02-23 At&T Corp Call handling method for credit and fraud management
US5805686A (en) * 1995-12-22 1998-09-08 Mci Corporation Telephone fraud detection system
US5963625A (en) * 1996-09-30 1999-10-05 At&T Corp Method for providing called service provider control of caller access to pay services
US6119103A (en) * 1997-05-27 2000-09-12 Visa International Service Association Financial risk prediction systems and methods therefor
US6163604A (en) * 1998-04-03 2000-12-19 Lucent Technologies Automated fraud management in transaction-based networks
US6535728B1 (en) * 1998-11-18 2003-03-18 Lightbridge, Inc. Event manager for use in fraud detection
US6850606B2 (en) * 2001-09-25 2005-02-01 Fair Isaac Corporation Self-learning real-time prioritization of telecommunication fraud control actions

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6597775B2 (en) * 2000-09-29 2003-07-22 Fair Isaac Corporation Self-learning real-time prioritization of telecommunication fraud control actions

Cited By (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050267788A1 (en) * 2004-05-13 2005-12-01 International Business Machines Corporation Workflow decision management with derived scenarios and workflow tolerances
US9489645B2 (en) * 2004-05-13 2016-11-08 International Business Machines Corporation Workflow decision management with derived scenarios and workflow tolerances
GB2417641A (en) * 2004-08-25 2006-03-01 Agilent Technologies Inc Call detail record correlation to prevent arbitrage
US20060045248A1 (en) * 2004-08-25 2006-03-02 Kernohan William P Method of telecommunications call record correlation providing a basis for quantitative analysis of telecommunications call traffic routing
US7424103B2 (en) 2004-08-25 2008-09-09 Agilent Technologies, Inc. Method of telecommunications call record correlation providing a basis for quantitative analysis of telecommunications call traffic routing
US20060155847A1 (en) * 2005-01-10 2006-07-13 Brown William A Deriving scenarios for workflow decision management
US20080178193A1 (en) * 2005-01-10 2008-07-24 International Business Machines Corporation Workflow Decision Management Including Identifying User Reaction To Workflows
US20080235706A1 (en) * 2005-01-10 2008-09-25 International Business Machines Corporation Workflow Decision Management With Heuristics
US8046734B2 (en) 2005-01-10 2011-10-25 International Business Machines Corporation Workflow decision management with heuristics
US8793131B2 (en) 2005-04-21 2014-07-29 Verint Americas Inc. Systems, methods, and media for determining fraud patterns and creating fraud behavioral models
US20120263285A1 (en) * 2005-04-21 2012-10-18 Anthony Rajakumar Systems, methods, and media for disambiguating call data to determine fraud
US9571652B1 (en) 2005-04-21 2017-02-14 Verint Americas Inc. Enhanced diarization systems, media and methods of use
US9503571B2 (en) 2005-04-21 2016-11-22 Verint Americas Inc. Systems, methods, and media for determining fraud patterns and creating fraud behavioral models
US20100303211A1 (en) * 2005-04-21 2010-12-02 Victrio Method and system for generating a fraud risk score using telephony channel based audio and non-audio data
US9113001B2 (en) * 2005-04-21 2015-08-18 Verint Americas Inc. Systems, methods, and media for disambiguating call data to determine fraud
US8930261B2 (en) 2005-04-21 2015-01-06 Verint Americas Inc. Method and system for generating a fraud risk score using telephony channel based audio and non-audio data
US20070098013A1 (en) * 2005-11-01 2007-05-03 Brown William A Intermediate message invalidation
US8155119B2 (en) 2005-11-01 2012-04-10 International Business Machines Corporation Intermediate message invalidation
US8010700B2 (en) 2005-11-01 2011-08-30 International Business Machines Corporation Workflow decision management with workflow modification in dependence upon user reactions
US7657636B2 (en) 2005-11-01 2010-02-02 International Business Machines Corporation Workflow decision management with intermediate message validation
US20070100990A1 (en) * 2005-11-01 2007-05-03 Brown William A Workflow decision management with workflow administration capacities
US20070116013A1 (en) * 2005-11-01 2007-05-24 Brown William A Workflow decision management with workflow modification in dependence upon user reactions
US20070100884A1 (en) * 2005-11-01 2007-05-03 Brown William A Workflow decision management with message logging
US20070101007A1 (en) * 2005-11-01 2007-05-03 Brown William A Workflow decision management with intermediate message validation
US9594587B2 (en) 2005-11-01 2017-03-14 International Business Machines Corporation Workflow decision management with workflow administration capacities
US20070233688A1 (en) * 2006-04-04 2007-10-04 Karla Weekes Smolen Online system for exchanging fraud investigation information
US9875739B2 (en) 2012-09-07 2018-01-23 Verint Systems Ltd. Speaker separation in diarization
US10446156B2 (en) 2012-11-21 2019-10-15 Verint Systems Ltd. Diarization using textual and audio speaker labeling
US11367450B2 (en) 2012-11-21 2022-06-21 Verint Systems Inc. System and method of diarization and labeling of audio data
US11776547B2 (en) 2012-11-21 2023-10-03 Verint Systems Inc. System and method of video capture and search optimization for creating an acoustic voiceprint
US11380333B2 (en) 2012-11-21 2022-07-05 Verint Systems Inc. System and method of diarization and labeling of audio data
US11322154B2 (en) 2012-11-21 2022-05-03 Verint Systems Inc. Diarization using linguistic labeling
US11227603B2 (en) 2012-11-21 2022-01-18 Verint Systems Ltd. System and method of video capture and search optimization for creating an acoustic voiceprint
US10134401B2 (en) 2012-11-21 2018-11-20 Verint Systems Ltd. Diarization using linguistic labeling
US10134400B2 (en) 2012-11-21 2018-11-20 Verint Systems Ltd. Diarization using acoustic labeling
US10950241B2 (en) 2012-11-21 2021-03-16 Verint Systems Ltd. Diarization using linguistic labeling with segmented and clustered diarized textual transcripts
US10950242B2 (en) 2012-11-21 2021-03-16 Verint Systems Ltd. System and method of diarization and labeling of audio data
US10438592B2 (en) 2012-11-21 2019-10-08 Verint Systems Ltd. Diarization using speech segment labeling
US10902856B2 (en) 2012-11-21 2021-01-26 Verint Systems Ltd. System and method of diarization and labeling of audio data
US10522152B2 (en) 2012-11-21 2019-12-31 Verint Systems Ltd. Diarization using linguistic labeling
US10522153B2 (en) 2012-11-21 2019-12-31 Verint Systems Ltd. Diarization using linguistic labeling
US10720164B2 (en) 2012-11-21 2020-07-21 Verint Systems Ltd. System and method of diarization and labeling of audio data
US10692501B2 (en) 2012-11-21 2020-06-23 Verint Systems Ltd. Diarization using acoustic labeling to create an acoustic voiceprint
US10650826B2 (en) 2012-11-21 2020-05-12 Verint Systems Ltd. Diarization using acoustic labeling
US10692500B2 (en) 2012-11-21 2020-06-23 Verint Systems Ltd. Diarization using linguistic labeling to create and apply a linguistic model
US9460722B2 (en) 2013-07-17 2016-10-04 Verint Systems Ltd. Blind diarization of recorded calls with arbitrary number of speakers
US9881617B2 (en) 2013-07-17 2018-01-30 Verint Systems Ltd. Blind diarization of recorded calls with arbitrary number of speakers
US10109280B2 (en) 2013-07-17 2018-10-23 Verint Systems Ltd. Blind diarization of recorded calls with arbitrary number of speakers
US11670325B2 (en) 2013-08-01 2023-06-06 Verint Systems Ltd. Voice activity detection using a soft decision mechanism
US10665253B2 (en) 2013-08-01 2020-05-26 Verint Systems Ltd. Voice activity detection using a soft decision mechanism
US9984706B2 (en) 2013-08-01 2018-05-29 Verint Systems Ltd. Voice activity detection using a soft decision mechanism
US11636860B2 (en) 2015-01-26 2023-04-25 Verint Systems Ltd. Word-level blind diarization of recorded calls with arbitrary number of speakers
US10726848B2 (en) 2015-01-26 2020-07-28 Verint Systems Ltd. Word-level blind diarization of recorded calls with arbitrary number of speakers
US9875742B2 (en) 2015-01-26 2018-01-23 Verint Systems Ltd. Word-level blind diarization of recorded calls with arbitrary number of speakers
US9875743B2 (en) 2015-01-26 2018-01-23 Verint Systems Ltd. Acoustic signature building for a speaker from multiple sessions
US10366693B2 (en) 2015-01-26 2019-07-30 Verint Systems Ltd. Acoustic signature building for a speaker from multiple sessions
US10992692B1 (en) 2017-04-13 2021-04-27 United Services Automobile Association (Usaa) Systems and methods of detecting and mitigating malicious network activity
US11005862B1 (en) 2017-04-13 2021-05-11 United Services Automobile Association (Usaa) Systems and methods of detecting and mitigating malicious network activity
US11722502B1 (en) 2017-04-13 2023-08-08 United Services Automobile Association (Usaa) Systems and methods of detecting and mitigating malicious network activity
US10812503B1 (en) 2017-04-13 2020-10-20 United Services Automobile Association (Usaa) Systems and methods of detecting and mitigating malicious network activity
US10572884B1 (en) 2017-04-13 2020-02-25 United Services Automobile Association (Usaa) Systems and methods of detecting and mitigating malicious network activity
US10567402B1 (en) * 2017-04-13 2020-02-18 United Services Automobile Association (Usaa) Systems and methods of detecting and mitigating malicious network activity
US11669844B1 (en) * 2017-05-09 2023-06-06 United Services Automobile Association (Usaa) Systems and methods for generation of alerts based on fraudulent network activity
US10878428B1 (en) * 2017-05-09 2020-12-29 United Services Automobile Association (Usaa) Systems and methods for generation of alerts based on fraudulent network activity
US11538128B2 (en) 2018-05-14 2022-12-27 Verint Americas Inc. User interface for fraud alert management
US11240372B2 (en) 2018-10-25 2022-02-01 Verint Americas Inc. System architecture for fraud detection
US10887452B2 (en) 2018-10-25 2021-01-05 Verint Americas Inc. System architecture for fraud detection
CN111385420A (en) * 2018-12-29 2020-07-07 中兴通讯股份有限公司 User identification method and device
US11652917B2 (en) 2019-06-20 2023-05-16 Verint Americas Inc. Systems and methods for authentication and fraud detection
US11115521B2 (en) 2019-06-20 2021-09-07 Verint Americas Inc. Systems and methods for authentication and fraud detection
CN110248322A (en) * 2019-06-28 2019-09-17 国家计算机网络与信息安全管理中心 Fraud gang identification system and method based on fraudulent text messages
US11470194B2 (en) * 2019-08-19 2022-10-11 Pindrop Security, Inc. Caller verification via carrier metadata
US11889024B2 (en) 2019-08-19 2024-01-30 Pindrop Security, Inc. Caller verification via carrier metadata
US20210176354A1 (en) * 2019-10-11 2021-06-10 Alipay (Hangzhou) Information Technology Co., Ltd. Decentralized automatic phone fraud risk management
US11868453B2 (en) 2019-11-07 2024-01-09 Verint Americas Inc. Systems and methods for customer authentication based on audio-of-interest
US11310360B2 (en) * 2019-12-20 2022-04-19 Clear Labs Israel Ltd. System and methods thereof for real-time fraud detection of a telephone call transaction
US20220366430A1 (en) * 2021-05-14 2022-11-17 At&T Intellectual Property I, L.P. Data stream based event sequence anomaly detection for mobility customer fraud analysis

Also Published As

Publication number Publication date
US20070124246A1 (en) 2007-05-31
US7457401B2 (en) 2008-11-25
US6850606B2 (en) 2005-02-01
US7158622B2 (en) 2007-01-02
US20050084083A1 (en) 2005-04-21

Similar Documents

Publication Publication Date Title
US6850606B2 (en) Self-learning real-time prioritization of telecommunication fraud control actions
US6597775B2 (en) Self-learning real-time prioritization of telecommunication fraud control actions
EP3324607B1 (en) Fraud detection on a communication network
US5602906A (en) Toll fraud detection system
US5805686A (en) Telephone fraud detection system
US6594481B1 (en) Apparatus and method for detecting potentially fraudulent telecommunication
US7117191B2 (en) System, method and computer program product for processing event records
US7466672B2 (en) System, tool and method for network monitoring and corresponding network
US20050222806A1 (en) Detection of outliers in communication networks
US5596632A (en) Message-based interface for phone fraud system
JP2002510942A (en) Automatic handling of fraudulent means in processing-based networks
US6570968B1 (en) Alert suppression in a telecommunications fraud control system
US6636592B2 (en) Method and system for using bad billed number records to prevent fraud in a telecommunication system
KR102200253B1 (en) System and method for detecting fraud usage of message
US20060269050A1 (en) Adaptive fraud management systems and methods for telecommunications
CA2371730A1 (en) Account fraud scoring
US7079634B2 (en) Apparatus for tracking connection of service provider customers via customer use patterns
EP1365565B1 (en) System and method for network monitoring and corresponding network
EP1404090A1 (en) System, tool and method for network monitoring and corresponding network

Legal Events

Date Code Title Description
AS Assignment

Owner name: FAIR ISAAC CORPORATION, CALIFORNIA

Free format text: MERGER;ASSIGNOR:HNC SOFTWARE, INC.;REEL/FRAME:014506/0405

Effective date: 20021031

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12