US6138115A - Method and system for generating a decision-tree classifier in parallel in a multi-processor system - Google Patents

Method and system for generating a decision-tree classifier in parallel in a multi-processor system Download PDF

Info

Publication number
US6138115A
Authority
US
United States
Prior art keywords
node
attribute
records
decision tree
recited
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/245,765
Inventor
Rakesh Agrawal
Manish Mehta
John Christopher Shafer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US09/245,765 priority Critical patent/US6138115A/en
Application granted granted Critical
Publication of US6138115A publication Critical patent/US6138115A/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G06N5/01 Computing arrangements using knowledge-based models: Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G06F16/322 Information retrieval; Indexing structures: Trees
    • G06F16/35 Information retrieval: Clustering; Classification
    • G06N20/00 Machine learning
    • Y10S707/953 Data processing (database and file management or data structures): Organization of data
    • Y10S707/962 Entity-attribute-value
    • Y10S707/964 Database arrangement
    • Y10S707/966 Distributed
    • Y10S707/967 Peer-to-peer
    • Y10S707/968 Partitioning
    • Y10S707/99931 Database or file accessing
    • Y10S707/99933 Query processing, i.e. searching
    • Y10S707/99936 Pattern matching access
    • Y10S707/99937 Sorting
    • Y10S707/99941 Database schema or data structure
    • Y10S707/99944 Object-oriented database structure

Definitions

  • the invention relates in general to computer databases, and in particular to data mining.
  • the invention specifically relates to an efficient method and system for generating a decision tree classifier from data records in parallel by the processors of a multi-processor system.
  • Data mining is an emerging application of computer databases that involves the development of tools for analyzing large databases to extract useful information from them.
  • customer purchasing patterns may be derived from a large customer transaction database by analyzing its transaction records. Such purchasing habits can provide valuable marketing information to retailers in displaying their merchandise or controlling the store inventory.
  • Other applications of data mining include fraud detection, store location search, and medical diagnosis.
  • Classification of data records according to certain classes of the records is an important part of data mining.
  • a set of example records referred to as a training set or input data, is provided from which a record classifier will be built.
  • Each record of the training set consists of several attributes where the attributes can be either numeric or categorical. Numeric (or continuous) attributes are those from an ordered domain, such as employee age or employee salary. Categorical attributes are those from an unordered domain such as marital status or gender.
  • One of these attributes, called the classifying attribute, indicates the class to which the record belongs.
  • the objective of classification is to build a model of the classifying attribute, or classifier, based upon the other attributes. Once the classifier is built, it can be used to determine the classes of future records.
  • Another desirable property of classifiers is their short training time, i.e., the time required to generate a classifier from a set of training records.
  • Some prior art methods address both the execution time and memory constraint problems by partitioning the data into subsets that fit in the system memory and developing classifiers for the subsets in parallel. The output of these classifiers is then combined using various algorithms to obtain the final classification. Although this approach reduces running time significantly, studies have shown that the multiple classifiers do not achieve the same level of accuracy as a single classifier built using all the data. See, for example, "Experiments on Multistrategy Learning by Meta-Learning," by P. K. Chan and S. J. Stolfo, Proc. Second Intl. Conf. on Information and Knowledge Management, pp. 314-323, 1993.
  • a decision tree is a class discriminator that recursively partitions the training set until each partition consists entirely or dominantly of examples from the same class.
  • the tree generally has a root node, interior nodes, and multiple leaf nodes where each leaf node is associated with the records belonging to a record class.
  • Each non-leaf node of the tree contains a split point which is a test on one or more attributes to determine how the data records are partitioned at that node.
  • Decision trees are compact, easy to understand, and readily converted to classification rules or to Structured Query Language (SQL) statements for accessing databases.
  • FIG. 1 shows a training set where each record represents a car insurance applicant and includes three attributes: Age, Car Type, and Risk level.
  • FIG. 2 shows a prior art decision tree classifier created from the training records of FIG. 1.
  • Nodes 2 and 3 are two split points that partition the records based on the split tests (Age < 25) and (Car Type in {Sports}), respectively.
  • the records of applicants whose age is less than 25 years belong to the High Risk class associated with node 4.
  • the records of those older than 25 years who have a sports car belong to the High Risk class associated with node 5.
  • Other applicants fall into the Low Risk class of node 6.
  • the decision tree then can be used to screen future applicants by classifying them into the High or Low Risk categories.
  • the method described in the '694 application still has some drawbacks.
  • it requires some data per record to stay memory-resident all the time, e.g., a class list containing the class labels and node IDs. Since the size of this data structure grows in direct proportion to the number of input records, this places a limit on the amount of data that can be classified.
  • the method does not take advantage of the parallelism of the multi-processor system to build the decision tree classifier more efficiently across the processors. Such parallel generation of the classifier would lead to both shorter training times and reduced system memory requirements.
  • Another object of the present invention is to obtain a decision-tree classifier that is compact, accurate, and has short training times.
  • Still another object of the present invention is a method for generating a classifier that is scalable on large disk-resident training sets, without restricting the size of the training set to the system memory limit.
  • the present invention achieves the foregoing and other objects by providing a method for generating a decision tree classifier in parallel in a multi-processor system, from a training set of records.
  • Each record includes one or more attributes, a class label to which the record belongs, and a record ID.
  • the method partitions the training records generally evenly among the processors of the multi-processor system.
  • Each processor generates in parallel with other processors an attribute list for each attribute of the records.
  • the list includes the values for that attribute, class labels and record IDs of the records from which the attribute values are obtained.
  • the processors then cooperatively generate a decision tree by repeatedly partitioning the records according to record classes, using the attribute lists.
  • the final decision tree becomes the desired classifier in which the records associated with each leaf node are of the same class.
  • the step of generating attribute lists preferably includes the processors sorting in parallel the attribute lists for numeric attributes based on the attribute values, and distributing the sorted attribute lists among the processors.
  • the processors cooperatively create the decision tree by splitting the records at each examined node, starting with the root node.
  • Each processor first determines a split test to best separate the records by record classes, using the attribute lists available in the processor.
  • the processor shares its best split test with other processors to determine the best overall split test for the examined node.
  • the processor then partitions the records of the examined node that are assigned to it, according to the best split test for the examined node.
  • the partitions of records form the child nodes of the examined node and also become new leaf nodes of the tree.
  • the records of the new leaf nodes are then similarly split.
  • the split tests are determined based on a splitting index corresponding to the criterion used in splitting the records.
  • each processor maintains for each attribute one or more variables, such as histograms, representing the distribution of the records at each leaf node.
  • the splitting index used is preferably a gini-index based on the relative frequency of records from each class present in the training set.
  • various subsets of the values of A are considered as possible split points. If the number of values for A is less than a certain threshold, then all subsets of a set S of all values of A are evaluated to find one with the highest splitting index for the examined node. If the number of values is equal to or more than the threshold, each value from set S is added, one at a time, to an initially empty set S' to find a split with the highest splitting index.
  • the partitioning of records at a node by each processor includes, for an attribute B used in the split test, dividing the attribute list for B at the processor into new attribute lists corresponding respectively to the child nodes of the examined node.
  • the method traverses the list to apply the split test to each entry in the list and puts the entry into a respective new list according to the test.
  • the processor also builds a hash table with the record IDs obtained from the attribute list as it is being divided and shares the hash table with other processors.
  • the processor partitions the remaining attribute lists of the examined node among its child nodes according to the shared hash tables.
  • the processor updates the histograms of each new leaf node with the distributions of records at these nodes, and shares the updated histograms with the other processors.
  • the originally created decision tree is pruned based on the MDL principle to obtain a more compact classifier.
  • the original tree and split tests are first encoded in a MDL-based code.
  • the code length for each node of the tree is calculated.
  • the method determines whether to prune the node, and if so, how to prune it.
  • each node of the decision tree is encoded using one bit. If the code length when the node has no child nodes is less than the code length when it has both child nodes, then both of its child nodes are pruned and the node is converted to a leaf node. Otherwise, the node is left intact.
  • two bits are used to encode each node of the tree.
  • the code length is evaluated for the cases where the node is a leaf node, has a left child, has a right child, and has both child nodes.
  • a pruning option is selected from these cases that would result in the shortest code length for the node.
  • a smaller tree is first obtained using the steps of the first embodiment.
  • the smaller tree is further pruned by examining the code length of each node for the cases where the node has only a left child, only a right child, and both child nodes.
  • a pruning option is selected so that the shortest code length for the node is obtained.
  • FIG. 1 shows an example of a prior art training set of records.
  • FIG. 2 illustrates a prior art decision tree corresponding to the training set of FIG. 1 in which each leaf node represents a class of records.
  • FIG. 3 is a simplified block diagram of a computer system having multiple processors upon which the present invention may be practiced.
  • FIG. 4 is a flow chart showing the overall operation of the method of the present invention.
  • FIG. 5 illustrates an exemplary training set of records for use with the method of the invention.
  • FIG. 6 illustrates a typical partitioning of the records between the two processors of a multi-processor system, according to block 15 of FIG. 4.
  • FIG. 7 illustrates the attribute lists built by the processors of the multi-processor system, according to block 16 of FIG. 4.
  • FIG. 8 is a flow chart showing further details for the step of creating the decision tree, from block 17 of FIG. 4.
  • FIG. 9 is a flow chart showing further details for the step of determining a split test at each examined node, from block 29 of FIG. 8.
  • FIGS. 10a and 10b illustrate the numeric attribute lists in the processors and the respective histograms of the processors, according to block 38 of FIG. 9.
  • FIGS. 11a and 11b illustrate the categorical attribute lists in the processors and the respective histograms of the processors, according to block 44, FIG. 9.
  • FIG. 12 is a flow chart showing further details for the step of determining a subset of the attribute values with the highest splitting index, from block 45, FIG. 9.
  • FIG. 13 is a flow chart showing further details for the step of splitting the records at a node to create child nodes, from block 31, FIG. 8.
  • FIG. 14a illustrates a part of the decision tree as the records at node 67 are split to create child nodes, according to block 31 of FIG. 8.
  • FIGS. 14b and 14c show how the attribute lists of the node 67 are partitioned into new attribute lists for the child nodes of node 67, from block 63 of FIG. 13.
  • FIG. 15 is a flow chart showing the steps for pruning the decision tree based on the Minimum Description Length principle to obtain the decision-tree classifier.
  • FIG. 16 is a flow chart showing the Full pruning embodiment for the pruning steps of FIG. 15.
  • FIG. 17 is a flow chart showing the Partial pruning embodiment for the pruning steps of FIG. 15.
  • FIG. 18 is a flow chart showing the Hybrid pruning embodiment for the pruning steps of FIG. 15.
  • the invention is primarily described as a method for generating a decision-tree classifier in parallel in a multi-processor system.
  • an apparatus such as a data processing system, including a CPU, memory, I/O, program storage, a connecting bus, and other appropriate components, could be programmed or otherwise designed to facilitate the practice of the method of the invention.
  • Such a system would include appropriate program means for executing the method of the invention.
  • an article of manufacture such as a pre-recorded disk or other similar computer program product, for use with a data processing system, could include a storage medium and program means recorded thereon for directing the data processing system to facilitate the practice of the method of the invention.
  • Such apparatus and articles of manufacture also fall within the spirit and scope of the invention.
  • FIG. 3 is a simplified block diagram of a multi-processor system with which the method of the invention may be practiced.
  • the system includes several processors 10 that communicate with each other by a link 11.
  • Each processor 10 may be implemented in hardware, software, or a combination thereof.
  • the processors 10 may be nodes within an IBM SP2 multi-processor computer, or software tasks of a multi-task program running on a single computer. They may also be IBM RISC System/6000 workstations or currently available microprocessors interconnected by the link 11.
  • the link 11 may be implemented in hardware, software, or a combination thereof. For example, it may be a data bus, network, or software layer based on the Message Passing Interface (MPI) standard.
  • FIG. 4 illustrates a high-level flow chart of the method for generating a decision-tree classifier in parallel by the processors 10 in accordance with the invention, from a training set of records.
  • Each record has one or more data attribute values, a class label of the class to which the record belongs, and a record ID.
  • An attribute may be numeric (or continuous) such as Age, or categorical such as Car Type.
  • the records are partitioned among the processors 10. Generally, the records are divided evenly among the processors 10 to maintain a balanced workload in the system. However, an unequal partitioning of the records may be necessary to balance the workload of the processors when they do not have the same computing power.
  • each processor 10 generates an attribute list for each attribute of the records at that processor.
  • the processors 10 generate their attribute lists in parallel.
  • Each record of an attribute list has an attribute value, class label, and record ID of the record from which the attribute value is obtained.
  • the attribute list generation is described in more detail below in accordance with FIG. 7.
  • the processors 10 cooperatively generate a decision tree by repeatedly partitioning the records using the attribute lists.
  • the decision tree generation by the processors is described further below in reference to FIGS. 8 through 15. The resulting decision tree after all record classes are identified becomes the decision-tree classifier.
  • FIG. 5 illustrates an exemplary training set of records before they are partitioned according to block 15 of FIG. 4.
  • Each record represents a car insurance applicant with the values of two attributes Age and Car Type, and a class label indicating the Risk level for the applicant.
  • Age is a numeric attribute indicating the applicant's age
  • Car type is a categorical attribute indicating the type of car the applicant owns.
  • FIG. 6 shows a typical partitioning of the records of FIG. 5 between processors P1 and P2 in a two-processor system, per block 15 of FIG. 4.
  • the records of processors P1 and P2 are in tables 20 and 21, respectively.
  • attribute lists generated by each of the processors P1 and P2 for the attributes Age and Car Type, according to block 16 of FIG. 4, are shown.
  • attribute lists 23 and 25 can be generated from the records at processors P1 and P2, respectively, without further processing.
  • attribute lists 22 and 24 are preferably generated by processors P1 and P2, respectively, after the processors cooperatively sort their attribute lists based on attribute values and distribute the sorted lists among each other. Each processor thus has a contiguous sorted portion of the global attribute list for each attribute.
  • a parallel sorting algorithm like the one described by D. J.
  • FIG. 8 shows the preferred embodiment for the step of generating the decision tree cooperatively by the processors 10, from block 17 of FIG. 4.
  • each processor examines each current leaf node and separates its records by record class to create new nodes. This process continues until all classes are identified. Note that initially, the tree is viewed as having a single leaf node that is also the root node.
  • each processor 10 working in parallel with other processors, examines each leaf node of the decision tree.
  • Each processor determines a split test to best separate the records at the examined node, using the attribute lists of that processor, as shown by block 29. The processor shares its best split test with other processors so that the best overall split of the records at the examined node can be determined, at block 30.
  • the goal at each node is to determine the split point that best divides the training records belonging to that node.
  • the value of a split point depends on how well it separates the classes.
  • a splitting index corresponding to a criterion used for splitting the records may be used to help determine the split test at each leaf node.
  • the splitting index is a gini-index as described, for example, by Breiman et al. in "Classification and Regression Trees," Wadsworth, 1984.
  • the advantage of the gini-index is that its calculation requires only the distribution of the class values in each record partition. For instance, to find the best split point for a node, the node's attribute lists are scanned to evaluate the splits for the attributes. The attribute containing the split point with the lowest value for the gini-index is used to split the node's records. The evaluation of the split points is described further below in reference to FIG. 9.
  • the processor 10 splits the records at the examined node, that are assigned to the processor, according to the best overall split test for the examined node. Each group of records forms a new leaf node of the tree and is also a child node of the examined node.
  • the processor checks to see if each leaf node now contains records from only one class. If this condition has not been achieved, the processor repeats the process starting with block 28 for each leaf node.
  • FIG. 9 shows further details for the step of determining a split test from block 29 of FIG. 8.
  • a variable showing the distribution of records by record class at each leaf node may be used.
  • each processor may have a histogram for each categorical attribute showing the class distribution of the records at that node.
  • the processor typically maintains two histograms, C_below and C_above. They are initialized to reflect, respectively, the class distribution of the records that precede the processor's portion of the sorted attribute list, and the distribution of the processor's own records together with all the records that follow them.
  • the processor traverses the attribute list for A at the examined node in block 36. For each value v of the attribute list, the processor updates the class histograms for A at the examined node with the class label corresponding to v and the value v, as shown by block 38. If A is determined in block 39 to be numeric, the splitting index for the splitting criterion (A <= v) at the examined node is computed at block 40. Another attribute value v is then examined, at block 41, until the complete list is traversed, at block 42.
  • one of the processors 10 collects all the class histograms for A from other processors (block 44) to determine a subset of the attribute A that results in the highest splitting index for the examined node, at block 45. The determination of this subset will be further described below in reference to FIG. 12.
  • FIGS. 10a and 10b illustrate how the histograms for the numeric attribute Age are updated by the processors P1 and P2.
  • the attribute lists for Age in processors P1 and P2, from FIG. 7, are shown respectively as tables 48 and 49.
  • FIG. 10b represents the initial state and final state of the C_below and C_above histograms for attribute Age, according to the steps of FIG. 9.
  • the initial state of the histograms in processors P1 and P2 (tables 50 and 51, respectively) reflects the class distribution in each processor before the Age attribute lists are traversed, according to block 36, FIG. 9.
  • the final state of the histograms in processors P1 and P2 (tables 52 and 53, respectively) reflects the distribution in each processor after the histograms are updated according to block 38 of FIG. 9.
  • FIGS. 11a and 11b illustrate the attribute lists for the categorical attribute Car Type and the histograms for this attribute in the processors, respectively.
  • the attribute lists for Car Type for processors P1 and P2 are reproduced from FIG. 7.
  • the histograms for attribute Car Type maintained by P1 and P2 are shown as tables 54 and 55, respectively, in FIG. 11b.
  • a preferred embodiment for block 45 of FIG. 9, for determining a subset of a categorical attribute A with the highest splitting index is shown as a flow chart.
  • if the cardinality of A, i.e., the number of elements in the set S of all the values of A, is less than a predetermined threshold, all subsets of S are evaluated to find the best split, at block 59. Otherwise, a greedy algorithm may be used for subsetting. For instance, starting with an empty set S' at block 60, each element of set S is added to S', one at a time, and a corresponding splitting index is computed at block 61. This incremental addition to S' continues until there is no further improvement in the splitting index, as determined by block 62.
  • FIG. 13 shows further details for the step of splitting the records per block 31 of FIG. 8.
  • the attribute list for an attribute B used in the split test is partitioned into new attribute lists, one for each child node of the examined node.
  • the processor typically traverses the original attribute list, applies the split test to each entry in the list, and puts the entry into the respective new list according to the test.
  • the processor also builds a hash table with the record IDs from the entries of the attribute list for B as the entries are distributed among the new attribute lists.
  • the processor then shares its hash table with other processors, at block 65, and partitions the remaining attribute lists among the child nodes of the examined node, according to the collected hash tables, at block 66.
  • FIGS. 14a through 14c illustrate how the attribute lists of FIG. 7 are partitioned into new attribute lists according to block 63, FIG. 13.
  • FIG. 14a shows a part of the decision tree being generated with a node 67 and its child nodes 68 and 69.
  • the split test at node 67 is whether the insurance applicant's car is a sports type, i.e., (Car Type in {Sports}).
  • FIG. 14b illustrates attribute lists 70 and 71 in processor P1 for child nodes 68 and 69, respectively. Attribute lists 70 and 71 are created when processor P1 partitions its attribute lists for node 67 (blocks 22 and 23 of FIG. 7) according to step 63, FIG. 13.
  • FIG. 14c shows attribute lists 72 and 73 in processor P2 for child nodes 68 and 69, respectively. They are created when processor P2 partitions its attribute lists for node 67 (blocks 24 and 25 of FIG. 7).
  • the decision tree as created may further be pruned to remove extraneous nodes.
  • the pruning algorithm is based on the Minimum Description Length (MDL) principle so that a subset of the child nodes at each node may be discarded without over-pruning the tree.
  • the MDL principle generally states that the best model for encoding data is one that minimizes the sum of the cost of describing the data in terms of the model and the cost of describing the model. If M is a model that encodes data D, the total cost of encoding, cost(M, D), is defined as:
  cost(M, D) = cost(D|M) + cost(M)
  • the cost of encoding X is defined as the number of bits required to encode X.
  • the models are the set of trees obtained by pruning the original decision tree T, and the data is the training set S. Since the cost of encoding the data is relatively low, the objective of MDL pruning will be to find a subtree of T that best describes the training set S.
  • a typical pruning of the decision tree based on the MDL principle is shown. It consists of two main phases: (a) encoding the tree and (b) determining whether to prune the tree and how it is pruned, based on the cost of encoding.
  • the tree is encoded in a MDL-based code.
  • the preferred encoding methods are described below in reference to FIGS. 16, 17, and 18.
  • the split tests for the leaf nodes are also encoded with the MDL-based code, as shown by block 81.
  • a code length C(n) for the node is computed in block 82 for each pruning option, and evaluated in block 83 to determine whether to convert the node into a leaf node, to prune its left or right child node, or to leave node n intact.
  • the code length C(t) for the test options at a node t is calculated as follows:
  C_leaf(t) = L(t) + Errors_t (the node is pruned to a leaf)
  C_both(t) = L(t) + L_test + C(t_1) + C(t_2) (both child nodes are kept)
  C_left(t) = L(t) + L_test + C(t_1) + C'(t_2) (only the left child is kept)
  C_right(t) = L(t) + L_test + C'(t_1) + C(t_2) (only the right child is kept)
  where L_test is the cost of encoding any test at an internal node, L(t) is the cost of encoding the node itself, Errors_t represents the misclassification errors at the node, C(t_i) is the cost of encoding the i-th subtree, and C'(t_i) is the cost of encoding a child node's records using the parent node's statistics.
  • FIGS. 16 through 18 the flow charts of the preferred embodiments of the step of pruning of FIG. 15 are shown.
  • the embodiment in FIG. 16 is referred to as Full pruning and is used when a node may have zero or two child nodes (options 1 and 2). Accordingly, only one bit is needed to encode each node of the tree, shown by block 86.
  • the code length C_leaf(t) when the node has no child nodes is compared to the code length C_both(t) when it has both child nodes. If C_leaf(t) is less than C_both(t), both child nodes of the test node are pruned and the node is converted into a leaf node, as shown by blocks 88 and 89.
  • FIG. 17 shows another embodiment of the step of pruning from FIG. 15, referred to as Partial pruning. Partial pruning is desirable where all four options are applicable to each tree node, i.e., the node is a leaf node, has only a left child node or a right child node, or has both child nodes.
  • two bits are used to encode each node n of the decision tree.
  • the code lengths for the four options are evaluated at block 93 and the option with the shortest code length for node n is selected at block 94.
  • FIG. 18 shows a third preferred embodiment of the pruning step from FIG. 15 that combines Full pruning and Partial pruning, and is appropriately referred to as Hybrid pruning.
  • the Hybrid method prunes the decision tree in two phases. At block 95, it first uses Full pruning to obtain a smaller tree from the originally generated tree. It then considers only options 2, 3, and 4, i.e., where the node has a left child, a right child, or both, to further prune the smaller tree. For these three options, log(3) bits are used for encoding each node. At blocks 96 and 97, for each node of the smaller tree, the code lengths corresponding to the three options are evaluated to select a pruning option that results in the shortest code length for the node, as shown by block 98.
  • the SLIQ method ("SLIQ: A Fast Scalable Classifier For Data Mining," Proc. of the EDBT '96 Conf., Avignon, France, 1996) may be parallelized by replicating the class list in each processor of a multi-processor system or distributing the class list among the processors.
  • the SLIQ method uses a class list in which each entry contains a class label and node ID corresponding to a leaf node.
  • the class list for the entire training set is replicated in the local memory of every processor.
  • the split tests are evaluated in the same manner as described above in reference to FIGS. 8, 9, and 12.
  • the partitioning of the attribute lists according to a chosen split test (block 63 of FIG. 13) is different as the execution of the split points requires updating the class list for each record. Since every processor must maintain a consistent copy of the entire class list, every class-list update must be communicated to and applied by every processor.
  • each processor of the system contains a portion of the class list for all the records.
  • the partitioning of the class list has no correlation with the partitioning of the numeric attribute lists.
  • the class label corresponding to an attribute value in one processor may reside in another processor.
  • the two processors communicate when it is necessary to find a non-local class label in the case of a numeric attribute. This inter-processor communication is not necessary for categorical attributes since the class list is created from the original partitioned training set and perfectly correlated with the categorical attribute lists.
  • split tests are evaluated in the same manner as described above in reference to FIGS. 8, 9, and 12.
  • a processor may request another processor to look up a corresponding class label. It may also have to service look-up requests from other processors. This inter-processor communication, however, may be minimized by batching the look-ups to the distributed class lists, as illustrated in the sketch following this list.
  • the invention may be implemented using standard programming or engineering techniques including computer programming software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable program code means, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the invention.
  • the computer readable media may be, for instance, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM), etc., or any transmitting/receiving medium such as the Internet or other communication network or link.
  • the article of manufacture containing the computer programming code may be made and/or used by executing the code directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network.
  • An apparatus for making, using, or selling the invention may be one or more processing systems including, but not limited to, a central processing unit (CPU), memory, storage devices, communication links, communication devices, servers, I/O devices, or any sub-components or individual parts of one or more processing systems, including software, firmware, hardware or any combination or subset thereof, which embody the invention as set forth in the claims.
  • User input may be received from the keyboard, mouse, pen, voice, touch screen, or any other means by which a human can input data to a computer, including through other programs such as application programs.
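As an illustration of the batched class-list look-ups mentioned above for the distributed-class-list parallelization of SLIQ, the sketch below groups all non-local requests by the processor that owns each entry and resolves them with two collective exchanges. It is an illustrative outline only: the mpi4py calls, the owner_of helper, and the dict-based class list are assumptions for the sketch, not the patent's implementation.

```python
# Illustrative sketch of batching look-ups to a distributed class list.
# Assumptions: mpi4py is available, owner_of(rid) gives the owning rank, and
# each processor's class-list portion is a dict keyed by record ID.
from mpi4py import MPI

comm = MPI.COMM_WORLD
n_procs = comm.Get_size()

def batched_class_lookup(needed_rids, local_class_list, owner_of):
    """needed_rids: record IDs whose class-list entries are held remotely.
    local_class_list: rid -> (class label, node ID) for this processor's portion.
    owner_of(rid): rank of the processor holding the class-list entry for rid."""
    requests = [[] for _ in range(n_procs)]
    for rid in needed_rids:
        requests[owner_of(rid)].append(rid)          # group requests by owner
    incoming = comm.alltoall(requests)               # exchange the request batches
    replies = [{rid: local_class_list[rid] for rid in batch} for batch in incoming]
    answers = comm.alltoall(replies)                 # exchange the answer batches
    found = {}
    for batch in answers:
        found.update(batch)
    return found
```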

Abstract

A method and system are disclosed for generating a decision-tree classifier in parallel in a multi-processor system, from a training set of records. The method comprises the steps of: partitioning the records among the processors, each processor generating an attribute list for each attribute, and the processors cooperatively generating a decision tree by repeatedly partitioning the records using the attribute lists. For each node, each processor determines its best split test and, along with other processors, selects the best overall split for the records at that node. Preferably, the gini-index and class histograms are used in determining the best splits. Also, each processor builds a hash table using the attribute list of the split attribute and shares it with other processors. The hash tables are used for splitting the remaining attribute lists. The created tree is then pruned based on the MDL principle, which encodes the tree and split tests in an MDL-based code, and determines whether to prune and how to prune each node based on the code length of the node.

Description

This is a divisional of application Ser. No. 08/641,404, filed on May 1, 1996, now U.S. Pat. No. 5,870,735.
FIELD OF THE INVENTION
The invention relates in general to computer databases, and in particular to data mining. The invention specifically relates to an efficient method and system for generating a decision tree classifier from data records in parallel by the processors of a multi-processor system.
BACKGROUND OF THE INVENTION
Data mining is an emerging application of computer databases that involves the development of tools for analyzing large databases to extract useful information from them. As an example of data mining, customer purchasing patterns may be derived from a large customer transaction database by analyzing its transaction records. Such purchasing habits can provide valuable marketing information to retailers in displaying their merchandise or controlling the store inventory. Other applications of data mining include fraud detection, store location search, and medical diagnosis.
Classification of data records according to certain classes of the records is an important part of data mining. In classification, a set of example records, referred to as a training set or input data, is provided from which a record classifier will be built. Each record of the training set consists of several attributes where the attributes can be either numeric or categorical. Numeric (or continuous) attributes are those from an ordered domain, such as employee age or employee salary. Categorical attributes are those from an unordered domain such as marital status or gender. One of these attributes, called the classifying attribute, indicates the class to which the record belongs. The objective of classification is to build a model of the classifying attribute, or classifier, based upon the other attributes. Once the classifier is built, it can be used to determine the classes of future records.
Classification models have been studied extensively in the fields of statistics, neural networks, and machine learning. They are described, for example, in "Computer Systems that Learn: Classification and Prediction Methods from Statistics," S. M. Weiss and C. A. Kulikowski, 1991. Prior art classification methods, however, lack scalability and usually break down on large training datasets. They commonly require the training set to be small enough to fit in the memory of the computer performing the classification. This restriction stems partly from the relatively small training sets of the applications for which the prior art methods were designed, in contrast to the large datasets typical of data mining. Early classifiers thus do not work well in data mining applications.
In the paper "An Interval Classifier For Database Mining Applications," Proc. of the Very Large Database Conference, August 1992, Agrawal et al. described a classifier specially designed for database applications. However, the focus there was on a classifier that can use database indices to improve retrieval efficiency, and not on the size of the training set. The described classifier is therefore not suitable for most data mining applications, where the training sets are large.
Another desirable property of classifiers is their short training time, i.e., the time required to generate a classifier from a set of training records. Some prior art methods address both the execution time and memory constraint problems by partitioning the data into subsets that fit in the system memory and developing classifiers for the subsets in parallel. The output of these classifiers is then combined using various algorithms to obtain the final classification. Although this approach reduces running time significantly, studies have shown that the multiple classifiers do not achieve the same level of accuracy as a single classifier built using all the data. See, for example, "Experiments on Multistrategy Learning by Meta-Learning," by P. K. Chan and S. J. Stolfo, Proc. Second Intl. Conf. on Information and Knowledge Management, pp. 314-323, 1993.
Other prior art methods classify data in batches. Such incremental learning methods have the disadvantage that the cumulative cost of classifying data incrementally can sometimes exceed the cost of classifying all of the training set once. See, for example, "Megainduction: Machine Learning on Very Large Databases," Ph.D. Thesis by J. Catlett, Univ. of Sydney, 1991.
Still other prior art classification methods, including those discussed above, achieve short training times by creating the classifiers based on decision trees. A decision tree is a class discriminator that recursively partitions the training set until each partition consists entirely or dominantly of examples from the same class. The tree generally has a root node, interior nodes, and multiple leaf nodes where each leaf node is associated with the records belonging to a record class. Each non-leaf node of the tree contains a split point, which is a test on one or more attributes that determines how the data records are partitioned at that node. Decision trees are compact, easy to understand, and readily converted to classification rules or to Structured Query Language (SQL) statements for accessing databases.
For example, FIG. 1 shows a training set where each record represents a car insurance applicant and includes three attributes: Age, Car Type, and Risk level. FIG. 2 shows a prior art decision tree classifier created from the training records of FIG. 1. Nodes 2 and 3 are two split points that partition the records based on the split tests (Age < 25) and (Car Type in {Sports}), respectively. The records of applicants whose age is less than 25 years belong to the High Risk class associated with node 4. The records of those older than 25 years who have a sports car belong to the High Risk class associated with node 5. Other applicants fall into the Low Risk class of node 6. The decision tree then can be used to screen future applicants by classifying them into the High or Low Risk categories.
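For illustration, the split tests of FIG. 2 translate directly into a sequence of attribute tests. The following minimal sketch (not from the patent; the dictionary keys are illustrative) mirrors the tests at nodes 2 and 3 and the three leaf classes:

```python
# A minimal sketch of applying the decision tree of FIG. 2 to new records.
def classify(applicant):
    """Classify a car-insurance applicant as 'High' or 'Low' risk."""
    if applicant["Age"] < 25:                  # split test at node 2
        return "High"                          # leaf node 4
    if applicant["CarType"] in {"Sports"}:     # split test at node 3
        return "High"                          # leaf node 5
    return "Low"                               # leaf node 6

print(classify({"Age": 23, "CarType": "Family"}))   # High
print(classify({"Age": 40, "CarType": "Sports"}))   # High
print(classify({"Age": 35, "CarType": "Truck"}))    # Low
```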
As another example of decision-tree classifiers, an efficient method for constructing a scalable, fast, and accurate decision-tree classifier is described in the assignee's pending application "Method and System For Generating a Decision-Tree Classifier For Data Records," Ser. No. 08/564,694, now U.S. Pat. No. 5,787,274 (hereinafter the '694 application). The method described there effectively handles disk-resident data that is too large to fit in the system memory by presorting the records, building the tree branches in parallel, and pruning the tree using the Minimum Description Length (MDL) principle. Further, it forms a single decision tree using the entire training set, instead of combining multiple classifiers or partitioning the data. For more details on MDL pruning, see for example, "MDL-based Decision Tree Pruning," Intl. Conf. on Knowledge Discovery in Databases and Data Mining, pp. 216-221, 1995.
Nevertheless, the method described in the '694 application still has some drawbacks. First, it requires some data per record to stay memory-resident all the time, e.g., a class list containing the class labels and node IDs. Since the size of this data structure grows in direct proportion to the number of input records, this places a limit on the amount of data that can be classified. Secondly, in a parallel processing environment such as a multi-processor system, the method does not take advantage of the parallelism of the multi-processor system to build the decision tree classifier more efficiently across the processors. Such parallel generation of the classifier would lead to both shorter training times and reduced system memory requirements.
Therefore, there remains a need for an efficient method for generating a decision tree classifier in parallel by the processors of a multi-processor system that is fast, compact, and scalable on large training sets.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide an efficient method for generating a decision-tree classifier in parallel by the processors of a multi-processor system, from a training set of records for classifying other records.
Another object of the present invention is to obtain a decision-tree classifier that is compact, accurate, and has short training times.
Still another object of the present invention is a method for generating a classifier that is scalable on large disk-resident training sets, without restricting the size of the training set to the system memory limit.
The present invention achieves the foregoing and other objects by providing a method for generating a decision tree classifier in parallel in a multi-processor system, from a training set of records. Each record includes one or more attributes, a class label to which the record belongs, and a record ID. In accordance with the invention, the method partitions the training records generally evenly among the processors of the multi-processor system. Each processor generates in parallel with other processors an attribute list for each attribute of the records. The list includes the values for that attribute, class labels and record IDs of the records from which the attribute values are obtained. The processors then cooperatively generate a decision tree by repeatedly partitioning the records according to record classes, using the attribute lists. The final decision tree becomes the desired classifier in which the records associated with each leaf node are of the same class.
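As a rough sketch of the attribute lists just described, each processor could hold its portion of every attribute list as a sequence of (attribute value, class label, record ID) entries. The namedtuple, field names, and helper below are illustrative, not taken from the patent:

```python
# A sketch of one processor's attribute lists; each entry carries an attribute
# value, a class label, and a record ID as described above.
from collections import namedtuple

Entry = namedtuple("Entry", ["value", "class_label", "rid"])

def build_attribute_lists(records, attributes):
    """records: dicts holding attribute values plus 'class' and 'rid' keys."""
    lists = {a: [] for a in attributes}
    for rec in records:
        for a in attributes:
            lists[a].append(Entry(rec[a], rec["class"], rec["rid"]))
    return lists

# The portion of the training set assigned to this processor (illustrative).
local_records = [
    {"rid": 0, "Age": 23, "CarType": "Family", "class": "High"},
    {"rid": 1, "Age": 40, "CarType": "Sports", "class": "High"},
    {"rid": 2, "Age": 35, "CarType": "Truck",  "class": "Low"},
]
attr_lists = build_attribute_lists(local_records, ["Age", "CarType"])
attr_lists["Age"].sort(key=lambda e: e.value)   # numeric lists are kept sorted
```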
The step of generating attribute lists preferably includes the processors sorting in parallel the attribute lists for numeric attributes based on the attribute values, and distributing the sorted attribute lists among the processors.
The processors cooperatively create the decision tree by splitting the records at each examined node, starting with the root node. Each processor first determines a split test to best separate the records by record classes, using the attribute lists available in the processor. The processor shares its best split test with other processors to determine the best overall split test for the examined node. The processor then partitions the records of the examined node that are assigned to it, according to the best split test for the examined node. The partitions of records form the child nodes of the examined node and also become new leaf nodes of the tree. The records of the new leaf nodes are then similarly split. Preferably, the split tests are determined based on a splitting index corresponding to the criterion used in splitting the records.
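The exchange of locally best split tests can be sketched as follows. This outline assumes an MPI-style message layer (here mpi4py), an illustrative (gini, attribute, split point) tuple with a lower gini value treated as a better split, and a hypothetical evaluate_local_splits helper; it is not the patent's implementation:

```python
# A hedged sketch of agreeing on the best overall split test across processors.
from mpi4py import MPI

comm = MPI.COMM_WORLD

def choose_global_best(local_best):
    """local_best: (gini_value, attribute, split_point) found by this processor."""
    candidates = comm.allgather(local_best)     # every processor sees every candidate
    return min(candidates, key=lambda c: c[0])  # lowest gini wins

# Usage on each processor (evaluate_local_splits is a hypothetical helper):
# local_best = evaluate_local_splits(leaf_node)
# best_gini, best_attr, best_point = choose_global_best(local_best)
```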
In addition, each processor maintains for each attribute one or more variables, such as histograms, representing the distribution of the records at each leaf node. In determining a split test, the processor would traverse the attribute list for each attribute A. For each value v of A in the attribute list, the class histograms for A at the examined node are updated using the class label corresponding to v and the value v. If A is a numeric attribute, then the splitting index for the splitting criterion (A<=v) for the examined node is calculated. If A is categorical, then one of the processors collects all the class histograms for A from other processors after the scan and determines a subset of the attribute A that results in the highest splitting index for the examined node. The splitting index used is preferably a gini-index based on the relative frequency of records from each class present in the training set.
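For the gini-index, gini(S) = 1 - sum_j p_j^2, where p_j is the relative frequency of class j in the set S of records, and a candidate split of S into S1 and S2 can be scored by the record-weighted value (n1/n)*gini(S1) + (n2/n)*gini(S2), a lower value indicating a purer split. The sketch below shows a single-processor view of scanning a sorted numeric attribute list while updating the C_below and C_above histograms; in the parallel method the histograms are first initialized to account for the portions of the globally sorted list held by the other processors. Names are illustrative:

```python
# A single-processor sketch of evaluating numeric split points (A <= v) with
# the C_below / C_above class histograms.
from collections import Counter

def gini(hist):
    """gini(S) = 1 - sum over classes of (relative frequency)**2."""
    n = sum(hist.values())
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in hist.values())

def best_numeric_split(sorted_entries):
    """sorted_entries: (value, class_label, rid) tuples sorted by value.
    Returns (weighted gini, split value) for the best test A <= value."""
    c_below = Counter()                                  # classes of records with A <= v
    c_above = Counter(label for _, label, _ in sorted_entries)
    n = len(sorted_entries)
    best = (float("inf"), None)
    for value, label, _rid in sorted_entries:
        c_below[label] += 1
        c_above[label] -= 1
        n_below = sum(c_below.values())
        g = (n_below / n) * gini(c_below) + ((n - n_below) / n) * gini(c_above)
        if g < best[0]:                                  # lower weighted gini = better split
            best = (g, value)
    # For brevity, ties between equal attribute values are not treated specially.
    return best
```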
Also in the case where the attribute A is categorical, various subsets of the values of A are considered as possible split points. If the number of values for A is less than a certain threshold, then all subsets of a set S of all values of A are evaluated to find one with the highest splitting index for the examined node. If the number of values is equal to or more than the threshold, each value from set S is added, one at a time, to an initially empty set S' to find a split with the highest splitting index.
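The greedy subset search for a categorical attribute with many values can be sketched as follows; split_gini(subset) is an assumed helper that returns the weighted gini value for the test "attribute value in subset", and, consistent with the sketch above, a lower value is treated as a better split:

```python
# A sketch of the greedy subsetting of a categorical attribute's value set S.
def greedy_subset(values, split_gini):
    """values: the set S of all values of the categorical attribute."""
    s_prime, best_g = set(), float("inf")
    improved = True
    while improved:
        improved = False
        best_v = None
        for v in values - s_prime:
            g = split_gini(s_prime | {v})
            if g < best_g:
                best_g, best_v, improved = g, v, True
        if improved:
            s_prime.add(best_v)      # keep the single best addition this round
    return s_prime, best_g
```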
In accordance with the invention, the partitioning of records at a node by each processor includes, for an attribute B used in the split test, dividing the attribute list for B at the processor into new attribute lists corresponding respectively to the child nodes of the examined node. In dividing the attribute list, the method traverses the list, applies the split test to each entry in the list, and puts the entry into a respective new list according to the test. The processor also builds a hash table with the record IDs obtained from the attribute list as it is being divided and shares the hash table with other processors. The processor partitions the remaining attribute lists of the examined node among its child nodes according to the shared hash tables.
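A single-processor sketch of this hash-table-driven partitioning follows. Entries are assumed to be the (value, class_label, rid) namedtuples of the earlier sketch, passes_test is the chosen split test on the split attribute B, and the inter-processor exchange of the hash tables is only noted in a comment; names are illustrative:

```python
# A sketch of splitting a node's attribute lists using a rid -> child table.
def split_attribute_lists(attr_lists, split_attr, passes_test):
    left, right, rid_to_child = {}, {}, {}

    # Divide the split attribute's own list and remember each record's child.
    left[split_attr], right[split_attr] = [], []
    for entry in attr_lists[split_attr]:
        child = "L" if passes_test(entry.value) else "R"
        rid_to_child[entry.rid] = child                 # the "hash table"
        (left if child == "L" else right)[split_attr].append(entry)

    # In the parallel method the rid_to_child tables of all processors would be
    # exchanged here before the remaining lists are partitioned.

    # Partition every remaining attribute list by probing the hash table.
    for attr, entries in attr_lists.items():
        if attr == split_attr:
            continue
        left[attr], right[attr] = [], []
        for entry in entries:
            (left if rid_to_child[entry.rid] == "L" else right)[attr].append(entry)
    return left, right
```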
In addition, the processor updates the histograms of each new leaf node with the distributions of records at these nodes, and shares the updated histograms with the other processors.
In another aspect of the invention, the originally created decision tree is pruned based on the MDL principle to obtain a more compact classifier. The original tree and split tests are first encoded in a MDL-based code. The code length for each node of the tree is calculated. Depending on the code lengths resulting from different pruning options at the node, the method determines whether to prune the node, and if so, how to prune it.
In a first embodiment of the pruning step, each node of the decision tree is encoded using one bit. If the code length when the node has no child nodes is less than the code length when it has both child nodes, then both of its child nodes are pruned and the node is converted to a leaf node. Otherwise, the node is left intact.
In a second embodiment, two bits are used to encode each node of the tree. The code length is evaluated for the cases where the node is a leaf node, has a left child, has a right child, and has both child nodes. A pruning option is selected from these cases that would result in the shortest code length for the node.
In a third embodiment of the pruning step, a smaller tree is first obtained using the steps of the first embodiment. The smaller tree is further pruned by examining the code length of each node for the cases where the node has only a left child, only a right child, and both child nodes. A pruning option is selected so that the shortest code length for the node is obtained.
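All three pruning embodiments reduce, at each node, to comparing per-option code lengths. The sketch below is illustrative only: L, errors, C, and C_prime are placeholder functions standing for the cost terms of the MDL scheme (the cost of encoding the node, its misclassification errors, a kept subtree, and a child's records encoded with the parent's statistics), and L_test is a constant standing for the cost of encoding a split test; it is not the patent's code.

```python
# An illustrative sketch of the per-node pruning decision.
def code_length_options(node, L, L_test, errors, C, C_prime):
    return {
        "leaf":  L(node) + errors(node),                                 # prune both children
        "both":  L(node) + L_test + C(node.left) + C(node.right),        # keep both children
        "left":  L(node) + L_test + C(node.left) + C_prime(node.right),  # keep only the left child
        "right": L(node) + L_test + C_prime(node.left) + C(node.right),  # keep only the right child
    }

def full_pruning_choice(options):
    # Full pruning: one bit per node, only the leaf/both cases are compared.
    return "leaf" if options["leaf"] < options["both"] else "both"

def partial_pruning_choice(options):
    # Partial pruning: two bits per node, all four cases are compared.
    return min(options, key=options.get)

# Hybrid pruning would first apply full_pruning_choice to every node and then
# reconsider the surviving nodes with only the left/right/both options.
```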
Additional objects and advantages of the present invention will be set forth in the description which follows, and in part will be obvious from the description and with the accompanying drawing, or may be learned from the practice of this invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an example of a prior art training set of records.
FIG. 2 illustrates a prior art decision tree corresponding to the training set of FIG. 1 in which each leaf node represents a class of records.
FIG. 3 is a simplified block diagram of a computer system having multiple processors upon which the present invention may be practiced.
FIG. 4 is a flow chart showing the overall operation of the method of the present invention.
FIG. 5 illustrates an exemplary training set of records for use with the method of the invention.
FIG. 6 illustrates a typical partitioning of the records between the two processors of a multi-processor system, according to block 15 of FIG. 4.
FIG. 7 illustrates the attribute lists built by the processors of the multi-processor system, according to block 16 of FIG. 4.
FIG. 8 is a flow chart showing further details for the step of creating the decision tree, from block 17 of FIG. 4.
FIG. 9 is a flow chart showing further details for the step of determining a split test at each examined node, from block 29 of FIG. 8.
FIGS. 10a and 10b illustrate the numeric attribute lists in the processors and the respective histograms of the processors, according to block 38 of FIG. 9.
FIGS. 11a and 11b illustrate the categorical attribute lists in the processors and the respective histograms of the processors, according to block 44, FIG. 9.
FIG. 12 is a flow chart showing further details for the step of determining a subset of the attribute values with the highest splitting index, from block 45, FIG. 9.
FIG. 13 is a flow chart showing further details for the step of splitting the records at a node to create child nodes, from block 31, FIG. 8.
FIG. 14a illustrates a part of the decision tree as the records at node 67 are split to create child nodes, according to block 31 of FIG. 8.
FIGS. 14b and 14c show how the attribute lists of the node 67 are partitioned into new attribute lists for the child nodes of node 67, from block 63 of FIG. 13.
FIG. 15 is a flow chart showing the steps for pruning the decision tree based on the Minimum Description Length principle to obtain the decision-tree classifier.
FIG. 16 is a flow chart showing the Full pruning embodiment for the pruning steps of FIG. 15.
FIG. 17 is a flow chart showing the Partial pruning embodiment for the pruning steps of FIG. 15.
FIG. 18 is a flow chart showing the Hybrid pruning embodiment for the pruning steps of FIG. 15.
DESCRIPTION OF THE PREFERRED EMBODIMENT
The invention is primarily described as a method for generating a decision-tree classifier in parallel in a multi-processor system. However, persons skilled in the art will recognize that an apparatus, such as a data processing system, including a CPU, memory, I/O, program storage, a connecting bus, and other appropriate components, could be programmed or otherwise designed to facilitate the practice of the method of the invention. Such a system would include appropriate program means for executing the method of the invention.
Also, an article of manufacture, such as a pre-recorded disk or other similar computer program product, for use with a data processing system, could include a storage medium and program means recorded thereon for directing the data processing system to facilitate the practice of the method of the invention. Such apparatus and articles of manufacture also fall within the spirit and scope of the invention.
FIG. 3 is a simplified block diagram of a multi-processor system with which the method of the invention may be practiced. The system includes several processors 10 that communicate with each other over a link 11. Each processor 10 may be implemented in hardware, software, or a combination thereof. For instance, the processors 10 may be nodes within an IBM SP2 multi-processor computer, or software tasks of a multi-task program running on a single computer. They may also be IBM RISC System/6000 workstations or currently available microprocessors interconnected by the link 11. Similarly, the link 11 may be implemented in hardware, software, or a combination thereof. For example, it may be a data bus, a network, or a software layer based on the Message Passing Interface (MPI) standard.
FIG. 4 illustrates a high-level flow chart of the method for generating a decision-tree classifier in parallel by the processors 10 in accordance with the invention, from a training set of records. Each record has one or more data attribute values, a class label of the class to which the record belongs, and a record ID. An attribute may be numeric (or continuous) such as Age, or categorical such as Car Type. Beginning with block 15, the records are partitioned among the processors 10. Generally, the records are divided evenly among the processors 10 to maintain a balanced workload in the system. However, an unequal partitioning of the records may be necessary to balance the workload of the processors when they do not have the same computing power.
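For purposes of illustration only, the following Python sketch shows one way the record counts could be apportioned among processors, including the unequal case; the function name and the computing-power weights are assumptions and do not appear in the drawings.

```python
# Illustrative sketch only: apportioning n_records among processors in
# proportion to assumed relative computing-power weights.
def partition_counts(n_records, weights):
    """Return the number of records assigned to each processor; any remainder
    is spread over the first processors so the counts sum to n_records."""
    total = sum(weights)
    counts = [int(n_records * w / total) for w in weights]
    for i in range(n_records - sum(counts)):
        counts[i % len(counts)] += 1
    return counts

print(partition_counts(10, [1, 1]))   # [5, 5] -- two equal processors
print(partition_counts(10, [2, 1]))   # [7, 3] -- a faster first processor
```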
At block 16, each processor 10 generates an attribute list for each attribute of the records at that processor. The processors 10 generate their attribute lists in parallel. Each entry of an attribute list has the attribute value, class label, and record ID of the record from which the attribute value is obtained. The attribute list generation is described in more detail below in reference to FIG. 7. At block 17, the processors 10 cooperatively generate a decision tree by repeatedly partitioning the records using the attribute lists. The decision tree generation by the processors is described further below in reference to FIGS. 8 through 15. The resulting decision tree, after all record classes are identified, becomes the decision-tree classifier.
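By way of illustration only, the following Python sketch shows one possible in-memory representation of the per-processor attribute lists described above; the field and function names, and the sample record values, are illustrative assumptions.

```python
# Illustrative sketch only: building per-processor attribute lists, one entry
# per record and attribute, holding (value, class label, record ID).
from collections import namedtuple

Entry = namedtuple("Entry", ["value", "class_label", "rid"])

def build_attribute_lists(records, attribute_names):
    """records: dicts carrying the attribute values plus 'class' and 'rid' fields."""
    lists = {name: [] for name in attribute_names}
    for rec in records:
        for name in attribute_names:
            lists[name].append(Entry(rec[name], rec["class"], rec["rid"]))
    return lists

# two records in the Age / Car Type form of FIG. 5 (values are illustrative)
records = [
    {"rid": 0, "Age": 23, "Car Type": "family", "class": "High"},
    {"rid": 1, "Age": 43, "Car Type": "sports", "class": "High"},
]
attribute_lists = build_attribute_lists(records, ["Age", "Car Type"])
```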
FIG. 5 illustrates an exemplary training set of records before they are partitioned according to block 15 of FIG. 4. Each record represents a car insurance applicant with the values of two attributes Age and Car Type, and a class label indicating the Risk level for the applicant. In this case, Age is a numeric attribute indicating the applicant's age, while Car type is a categorical attribute indicating the type of car the applicant owns. FIG. 6 shows a typical partitioning of the records of FIG. 5 between processors P1 and P2 in a two-processor system, per block 15 of FIG. 4. The records of processors P1 and P2 are in tables 20 and 21, respectively.
Referring to FIG. 7, the attribute lists generated by each of the processors P1 and P2 for the attributes Age and Car Type, according to block 16 of FIG. 4, are shown. For a categorical attribute, such as Car Type, attribute lists 23 and 25 can be generated from the records at processors P1 and P2, respectively, without further processing. For a numeric attribute such as Age, attribute lists 22 and 24 are preferably generated by processors P1 and P2, respectively, after the processors cooperatively sort their attribute lists based on attribute values and redistribute the sorted lists among themselves. Each processor thus has a contiguous sorted portion of the global attribute list for each attribute. A parallel sorting algorithm like the one described by D. J. DeWitt et al., "Parallel Sorting on a Shared-Nothing Architecture Using Probabilistic Splitting," Proc. of the First Intl. Conf. on Parallel and Distributed Information Systems, pp. 280-291, 1991, may be used for this purpose.
GENERATING THE DECISION TREE
FIG. 8 shows the preferred embodiment for the step of generating the decision tree cooperatively by the processors 10, from block 17 of FIG. 4. Generally, each processor examines each current leaf node and separates its records by record class to create new nodes. This process continues until all classes are identified. Note that initially, the tree is viewed as having a single leaf node that is also the root node. Starting with block 28, each processor 10, working in parallel with other processors, examines each leaf node of the decision tree. Each processor determines a split test to best separate the records at the examined node, using the attribute lists of that processor, as shown by block 29. The processor shares its best split test with other processors so that the best overall split of the records at the examined node can be determined, at block 30.
While growing the decision tree, the goal at each node is to determine the split point that best divides the training records belonging to that node. The value of a split point depends on how well it separates the classes. Thus, a splitting index corresponding to a criterion used for splitting the records may be used to help determine the split test at each leaf node. Preferably, the splitting index is a gini-index as described, for example, by Breiman et al. in "Classification and Regression Trees," Wadsworth, 1984. The advantage of the gini-index is that its calculation requires only the distribution of the class values in each record partition. For instance, to find the best split point for a node, the node's attribute lists are scanned to evaluate the splits for the attributes. The attribute containing the split point with the lowest value for the gini-index is then used to split the node's records. The evaluation of the split points is described further below in reference to FIG. 9.
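As an illustration of how such a splitting index could be computed from class distributions, the following minimal Python sketch evaluates a gini-index for a candidate binary split; the function names and example counts are assumptions, not part of the patented method.

```python
# Illustrative sketch only: gini-index of a class distribution and of a
# candidate binary split; lower values indicate purer partitions.
def gini(counts):
    n = sum(counts)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(left_counts, right_counts):
    """Weighted gini-index of splitting a node's records into two partitions."""
    n_left, n_right = sum(left_counts), sum(right_counts)
    total = n_left + n_right
    return (n_left * gini(left_counts) + n_right * gini(right_counts)) / total

# e.g. splitting six records into (3 High, 0 Low) and (1 High, 2 Low)
print(gini_split([3, 0], [1, 2]))   # about 0.22
```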
At block 31, the processor 10 splits the records at the examined node, that are assigned to the processor, according to the best overall split test for the examined node. Each group of records forms a new leaf node of the tree and is also a child node of the examined node. At block 32, the processor checks to see if each leaf node now contains records from only one class. If this condition has not been achieved, the processor repeats the process starting with block 28 for each leaf node.
FIG. 9 shows further details for the step of determining a split test from block 29 of FIG. 8. To help evaluate the split tests, a variable showing the distribution of records by record class at each leaf node may be used. For example, for each leaf node, each processor may have a histogram for each categorical attribute showing the class distribution of the records at that node. For each numeric attribute, the processor typically maintains two histograms, Cbelow and Cabove. They are initialized to reflect, respectively, the distributions of the records preceding those assigned to the processor and the records following the first record assigned to the processor, including this first record.
Starting with block 35 of FIG. 9, for each attribute A, the processor traverses the attribute list for A at the examined node in block 36. For each value v of the attribute list, the processor updates the class histograms for A at the examined node with the class label corresponding to v and the value v, as shown by block 38. If A is determined in block 39 to be numeric, the splitting index for the splitting criterion (A≦v) at the examined node is computed at block 40. Another attribute value v is then examined, at block 41, until the complete list is traversed, at block 42. If A is a categorical attribute, one of the processors 10 collects all the class histograms for A from other processors (block 44) to determine a subset of the attribute A that results in the highest splitting index for the examined node, at block 45. The determination of this subset will be further described below in reference to FIG. 12.
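For illustration only, the following Python sketch shows one way a processor might scan its sorted numeric attribute list while maintaining Cbelow and Cabove histograms and scoring each candidate split of the form (A≦v); the data shapes and helper names are assumptions.

```python
# Illustrative sketch only: scanning a processor's sorted numeric attribute
# list while maintaining Cbelow / Cabove class histograms and scoring each
# candidate split (A <= v) with a weighted gini-index.
from collections import Counter

def best_numeric_split(local_entries, c_below_init, c_above_init, classes):
    """local_entries: (value, class_label) pairs, sorted by value, held locally.
    c_below_init: class counts of records preceding this processor's portion.
    c_above_init: class counts from this processor's first record onward."""
    c_below = Counter(c_below_init)
    c_above = Counter(c_above_init)

    def gini(counter):
        n = sum(counter[c] for c in classes)
        if n == 0:
            return 0.0
        return 1.0 - sum((counter[c] / n) ** 2 for c in classes)

    best_value, best_index = None, float("inf")
    for value, label in local_entries:
        # the record with this value moves from the "above" side to the "below" side
        c_below[label] += 1
        c_above[label] -= 1
        n_b, n_a = sum(c_below.values()), sum(c_above.values())
        index = (n_b * gini(c_below) + n_a * gini(c_above)) / (n_b + n_a)
        if index < best_index:
            best_value, best_index = value, index
    return best_value, best_index
```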
FIGS. 10a and 10b illustrate how the histograms for the numeric attribute Age are updated by the processors P1 and P2. In FIG. 10a, the attribute lists for Age in processors P1 and P2, from FIG. 7, are shown respectively as tables 48 and 49. FIG. 10b represents the initial state and final state of the Cbelow and Cabove histograms for attribute Age, according to the steps of FIG. 9. The initial state of the histograms in processors P1 and P2 (tables 50 and 51, respectively) reflects the class distribution in each processor before the Age attribute lists are traversed, according to block 36, FIG. 9. The final state of the histograms in processors P1 and P2 (tables 52 and 53, respectively) reflects the distribution in each processor after the histograms are updated according to block 38 of FIG. 9.
Similarly, FIGS. 11a and 11b illustrate the attribute lists for the categorical attribute Car Type and the histograms for this attribute in the processors, respectively. In FIG. 11a, the attribute lists for Car Type for processors P1 and P2 are reproduced from FIG. 7. The histograms for attribute Car Type maintained by P1 and P2 are shown as tables 54 and 55, respectively, in FIG. 11b.
Referring now to FIG. 12, a preferred embodiment for block 45 of FIG. 9, for determining a subset of a categorical attribute A with the highest splitting index, is shown as a flow chart. At block 58, the cardinality of A, i.e., the number of elements in the set S of all the values of A, is compared to a predetermined threshold. If the cardinality is less than the threshold, all subsets of S are evaluated to find the best split, at block 59. Otherwise, a greedy algorithm may be used for subsetting. For instance, starting with an empty set S' at block 60, each element of set S is added to S', one at a time, and a corresponding splitting index is computed at block 61. This incremental addition to S' continues until there is no further improvement in the splitting index, as determined by block 62.
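The greedy subsetting of blocks 60 through 62 may be sketched, for illustration only, as follows; the split_quality callable, assumed to return the splitting index of a candidate subset, stands in for the gini-based evaluation described above.

```python
# Illustrative sketch only: greedy subsetting for a high-cardinality
# categorical attribute A; split_quality is assumed to return the splitting
# index of the candidate test (A in subset), higher being better here.
def greedy_subset(values, split_quality):
    chosen, remaining = set(), set(values)
    best_quality = float("-inf")
    while remaining:
        # try each remaining value and keep the single best addition
        candidate, quality = max(
            ((v, split_quality(chosen | {v})) for v in remaining),
            key=lambda pair: pair[1],
        )
        if quality <= best_quality:
            break                       # no further improvement: stop
        chosen.add(candidate)
        remaining.remove(candidate)
        best_quality = quality
    return chosen, best_quality
```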
FIG. 13 shows further details for the step of splitting the records per block 31 of FIG. 8. At block 63, the attribute list for an attribute B used in the split test is partitioned into new attribute lists, one for each child node of the examined node. The processor typically traverses the original attribute list, applies the split test to each entry in the list, and puts the entry into the respective new list according to the test. At block 64, the processor also builds a hash table with the record IDs from the entries of the attribute list for B as the entries are distributed among the new attribute lists. The processor then shares its hash table with other processors, at block 65, and partitions the remaining attribute lists among the child nodes of the examined node, according to the collected hash tables, at block 66.
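For illustration only, the following Python sketch shows one way the hash table of block 64 could be built while the list for attribute B is divided, and how a remaining attribute list could then be partitioned from it; the tuple layout and function names are assumptions.

```python
# Illustrative sketch only: dividing the list of the split attribute B while
# recording, in a hash table keyed by record ID, which child each record goes
# to; the remaining attribute lists are then partitioned from the hash table.
def split_on_attribute(entries_for_B, passes_test):
    """entries are (value, class_label, rid) tuples; passes_test applies the split test."""
    left, right, rid_to_child = [], [], {}
    for value, class_label, rid in entries_for_B:
        if passes_test(value):
            left.append((value, class_label, rid))
            rid_to_child[rid] = "left"
        else:
            right.append((value, class_label, rid))
            rid_to_child[rid] = "right"
    return left, right, rid_to_child

def split_other_list(entries, rid_to_child):
    """Partition any other attribute list of the node using the shared hash tables."""
    left = [e for e in entries if rid_to_child[e[2]] == "left"]
    right = [e for e in entries if rid_to_child[e[2]] == "right"]
    return left, right
```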
FIGS. 14a through 14c illustrate how the attribute lists of FIG. 7 are partitioned into new attribute lists according to block 63, FIG. 13. FIG. 14a shows a part of the decision tree being generated with a node 67 and its child nodes 68 and 69. Suppose the split test at node 67 is whether the insurance applicant's car is of a sport type, i.e., Car Type ∈ {Sports}. FIG. 14b illustrates attribute lists 70 and 71 in processor P1 for child nodes 68 and 69, respectively. Attribute lists 70 and 71 are created when processor P1 partitions its attribute lists for node 67 (blocks 22 and 23 of FIG. 7) according to step 63, FIG. 13. Similarly, FIG. 14c shows attribute lists 72 and 73 in processor P2 for child nodes 68 and 69, respectively. They are created when processor P2 partitions its attribute lists for node 67 (blocks 24 and 25 of FIG. 7).
PRUNING THE DECISION TREE
In order to obtain a compact classifier, the decision tree as created may further be pruned to remove extraneous nodes. Preferably, the pruning algorithm is based on the Minimum Description Length (MDL) principle so that a subset of the child nodes at each node may be discarded without over-pruning the tree. The pruning step is illustrated in more detail in FIGS. 15 through 18.
The MDL principle generally states that the best model for encoding data is one that minimizes the sum of the cost of describing the data in terms of the model and the cost of describing the model. If M is a model that encodes data D, the total cost of encoding, cost(M, D), is defined as:
cost(M,D)=cost(D|M)+cost(M)
where the cost of encoding X, cost(X), is defined as the number of bits required to encode X. Here, the models are the set of trees obtained by pruning the original decision tree T, and the data is the training set S. Since the cost of encoding the data is relatively low, the objective of MDL pruning will be to find a subtree of T that best describes the training set S.
Referring to FIG. 15, a typical pruning of the decision tree based on the MDL principle is shown. It consists of two main phases: (a) encoding the tree and (b) determining whether to prune the tree and how it is pruned, based on the cost of encoding. First, at block 80, the tree is encoded in a MDL-based code. The preferred encoding methods are described below in reference to FIGS. 16, 17, and 18. The split tests for the leaf nodes are also encoded with the MDL-based code, as shown by block 81. Next, for each node n of the tree, a code length C(n) for the node is computed in block 82 for each pruning option, and evaluated in block 83 to determine whether to convert the node into a leaf node, to prune its left or right child node, or to leave node n intact.
The code length C(t) for the pruning options at a node t is calculated as follows:
Cleaf(t) = L(t) + Errors(t), if t is encoded as a leaf node;
Cboth(t) = L(t) + Ltest + C(t1) + C(t2), if t retains both child nodes;
Cleft(t) = L(t) + Ltest + C(t1) + C'(t2), if t retains only its left child node; and
Cright(t) = L(t) + Ltest + C'(t1) + C(t2), if t retains only its right child node,
where Ltest is the cost of encoding any test at an internal node, L(t) is the cost of encoding the node itself, Errors(t) represents the misclassification errors at the node, C(ti) is the cost of encoding the ith subtree, and C'(ti) is the cost of encoding a child node's records using the parent node's statistics.
In FIGS. 16 through 18, flow charts of the preferred embodiments of the pruning step of FIG. 15 are shown. The embodiment in FIG. 16 is referred to as Full pruning and is used when a node may have zero or two child nodes (options 1 and 2). Accordingly, only one bit is needed to encode each node of the tree, as shown by block 86. At block 87, the code length Cleaf(t) when the node has no child nodes is compared to the code length Cboth(t) when it has both child nodes. If Cleaf(t) is less than Cboth(t), both child nodes of the test node are pruned and the node is converted into a leaf node, as shown by blocks 88 and 89.
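A minimal, illustrative Python sketch of this Full pruning decision follows; it assumes a simple binary node object and externally supplied cost functions corresponding to Cleaf(t) and Cboth(t), and the names are illustrative only.

```python
# Illustrative sketch only: the Full pruning comparison applied bottom-up,
# assuming a simple binary node object and externally supplied cost functions
# code_leaf(node) and code_both(node) computed as described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None
    # per-node statistics needed by the cost functions would be stored here

def full_prune(node, code_leaf, code_both):
    if node.left is None and node.right is None:
        return node                               # already a leaf
    node.left = full_prune(node.left, code_leaf, code_both)
    node.right = full_prune(node.right, code_leaf, code_both)
    if code_leaf(node) < code_both(node):
        node.left = node.right = None             # prune both children
    return node
```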
FIG. 17 shows another embodiment of the step of pruning from FIG. 15, referred to as Partial pruning. Partial pruning is desirable where all four options are applicable to each tree node, i.e., the node is a leaf node, has only a left child node or a right child node, or has both child nodes. At block 92, two bits are used to encode each node n of the decision tree. The code lengths for the four options are evaluated at block 93 and the option with the shortest code length for node n is selected at block 94.
Finally, FIG. 18 shows a third preferred embodiment of the pruning step from FIG. 15 that combines Full pruning and Partial pruning, and is appropriately referred to as Hybrid pruning. The Hybrid method prunes the decision tree in two phases. At block 95, it first uses Full pruning to obtain a smaller tree from the originally generated tree. It then considers only options 2, 3, and 4, i.e., where the node has a left child, a right child, or both, to further prune the smaller tree. For these three options, log(3) bits are used for encoding each node. At blocks 96 and 97, for each node of the smaller tree, the code lengths corresponding to the three options are evaluated to select a pruning option that results in the shortest code length for the node, as shown by block 98.
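For illustration only, the option-selection step shared by the Partial and Hybrid embodiments may be sketched as follows; the option names and the example code lengths are assumptions.

```python
# Illustrative sketch only: the option-selection step shared by Partial and
# Hybrid pruning; the option names and example code lengths are assumptions.
def select_pruning_option(code_lengths, allowed_options):
    """Pick the option with the shortest code length C(n) for this node."""
    return min(allowed_options, key=lambda opt: code_lengths[opt])

# Hybrid second phase at one node: only the child-retaining options are allowed
choice = select_pruning_option(
    {"left_only": 12.7, "right_only": 11.9, "both": 13.4},
    ("left_only", "right_only", "both"),
)
print(choice)   # right_only
```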
PARALLELIZING OTHER CLASSIFICATION METHODS
Existing classification methods may be similarly parallelized in a multi-processing environment as described above. For instance, the method for generating a classifier in the assignee's pending application '694 (also described in "SLIQ: A Fast Scalable Classifier For Data Mining," Proc. of the EDBT '96 Conf., Avignon, France, 1996) may be parallelized by replicating the class list in each processor of a multi-processor system or distributing the class list among the processors. The SLIQ method uses a class list in which each entry contains a class label and node ID corresponding to a leaf node.
In the replication method, the class list for the entire training set is replicated in the local memory of every processor. The split tests are evaluated in the same manner as described above in reference to FIGS. 8, 9, and 12. However, the partitioning of the attribute lists according to a chosen split test (block 63 of FIG. 13) differs, because executing a split requires updating the class-list entry for each record. Since every processor must maintain a consistent copy of the entire class list, every class-list update must be communicated to and applied by every processor.
To minimize communications among the processors, a technique similar to the one described in reference to FIGS. 9, 10a-b, and 11a-b may be used where only the smaller half of each split is communicated and updated by the processors. As a result, updates to the replicated class lists can be exchanged in small batches or in a single communication.
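One way this smaller-half technique could be realized is sketched below for illustration; it assumes every processor first reassigns all of the node's class-list entries to the larger child and then applies the received batch for the smaller child. The dict layout is an assumption.

```python
# Illustrative sketch only: applying class-list updates when a node is split,
# communicating only the record IDs of the smaller child; the dict layout
# (rid -> (class label, leaf node id)) is an assumption.
def relabel_then_apply(class_list, node_rids, larger_child, smaller_child, smaller_rids):
    """node_rids: record IDs currently at the node being split.
    smaller_rids: the batched IDs of the smaller child, gathered from all processors."""
    for rid in node_rids:                 # first assume everything goes to the larger child
        label, _ = class_list[rid]
        class_list[rid] = (label, larger_child)
    for rid in smaller_rids:              # then correct the entries of the smaller half
        label, _ = class_list[rid]
        class_list[rid] = (label, smaller_child)
```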
In the distribution method, each processor of the system holds a portion of the class list for all the records. The partitioning of the class list has no correlation with the partitioning of the numeric attribute lists; the class label corresponding to an attribute value in one processor may reside in another processor. Thus, processors must communicate whenever a non-local class label needs to be found for a numeric attribute. This inter-processor communication is not necessary for categorical attributes, since the class list is created from the original partitioned training set and is therefore perfectly correlated with the categorical attribute lists.
The split tests are evaluated in the same manner as described above in reference to FIGS. 8, 9, and 12. In traversing the attribute list of a numeric attribute, a processor may request another processor to look up a corresponding class label, and it may also have to service look-up requests from other processors. This inter-processor communication, however, may be minimized by batching the look-ups to the distributed class lists.
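For illustration only, the batching of look-ups could proceed as sketched below; the owner_of_rid mapping and the request/response transport (e.g., MPI messaging) are assumptions and are elided here.

```python
# Illustrative sketch only: batching non-local class-label look-ups while a
# numeric attribute list is traversed; owner_of_rid and the actual message
# exchange are assumptions.
def collect_lookup_requests(attribute_entries, owner_of_rid, my_rank):
    """attribute_entries: (value, rid) pairs; group remote rids by owning processor."""
    requests = {}
    for _value, rid in attribute_entries:
        owner = owner_of_rid(rid)
        if owner != my_rank:
            requests.setdefault(owner, []).append(rid)
    return requests                      # one batched request per remote processor

def serve_lookups(local_class_list, requested_rids):
    """Answer a batch of look-ups against the locally held portion of the class list."""
    return {rid: local_class_list[rid] for rid in requested_rids}
```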
Using the foregoing specification, the invention may be implemented using standard programming or engineering techniques including computer programming software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable program code means, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the invention. The computer readable media may be, for instance, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM), etc., or any transmitting/receiving medium such as the Internet or other communication network or link. The article of manufacture containing the computer programming code may be made and/or used by executing the code directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network.
An apparatus for making, using, or selling the invention may be one or more processing systems including, but not limited to, a central processing unit (CPU), memory, storage devices, communication links, communication devices, servers, I/O devices, or any sub-components or individual parts of one or more processing systems, including software, firmware, hardware or any combination or subset thereof, which embody the invention as set forth in the claims.
User input may be received from the keyboard, mouse, pen, voice, touch screen, or any other means by which a human can input data to a computer, including through other programs such as application programs.
One skilled in the art of computer science will easily be able to combine the software created as described with appropriate general purpose or special purpose computer hardware to create a computer system or computer sub-component embodying the invention and to create a computer system or computer sub-component for carrying out the method of the invention.
While several preferred embodiments of the invention have been described, it should be apparent that modifications and adaptations to those embodiments may occur to persons skilled in the art without departing from the scope and the spirit of the present invention as set forth in the following claims.

Claims (36)

What is claimed is:
1. A method for generating a decision-tree classifier in parallel in a system having a plurality of processors, from a training set of records, each record having: (i) at least one attribute, each attribute having a value, (ii) a class label of the class to which the record belongs, and (iii) a record ID, the method comprising the steps of:
partitioning the records among the processors of the system;
generating in parallel by each processor an attribute list for each attribute of the records, each entry in the attribute lists having the attribute value, class label, and record ID of the record from which the attribute value is obtained; and
creating a decision tree cooperatively by the processors, the decision tree being formed by repeatedly partitioning the records using the attribute lists, the resulting decision tree becoming the decision-tree classifier.
2. The method as recited in claim 1, wherein:
the attributes include numeric attributes; and
the step of generating an attribute list includes the steps of:
sorting the attribute lists for the numeric attributes based on the attribute values; and
distributing the sorted attribute lists among the processors.
3. The method as recited in claim 1, wherein the attributes include categorical attributes.
4. The method as recited in claim 1, wherein:
the decision tree includes a root node, a plurality of interior nodes, and a plurality of leaf nodes, all of the records initially belonging to the root node; and
the step of creating a decision tree includes the steps, executed by each processor for each node being examined until each leaf node of the decision tree contains only one class of records, of:
a) determining a split test to best separate the records at the examined node by record classes, using the attribute lists of the processor;
b) sharing the split test with the other processors to determine a best overall split test for the examined node; and
c) splitting the records of the examined node that are assigned to the processor, according to the best overall split test for the examined node, to create child nodes for the examined node, the child nodes becoming new leaf nodes.
5. The method as recited in claim 4, wherein the step of determining a split test is based on a splitting index corresponding to a criterion for splitting the records.
6. The method as recited in claim 5, wherein the splitting index includes a gini-index based on relative frequencies of records from each record class present in the training set.
7. The method as recited in claim 5, wherein:
each processor includes, for each leaf node, a plurality of histograms for each attribute of the records at the leaf node, the histograms representing the class distribution of the records at the leaf node; and
the step of determining a split test includes the steps of:
a) for each attribute A, traversing the attribute list for A at the examined node;
b) for each value v of A in the attribute list for A:
i) updating the class histograms for A, at the examined node, with the class label corresponding to v and the value v, and
ii) if the attribute A is numeric, then computing the splitting index corresponding to splitting criterion (A<=v) for the examined node; and
c) if the attribute A is categorical, then:
i) a first processor collecting all the class histograms for A from all the processors, and
ii) the first processor determining a subset of the attribute A that results in the highest splitting index for the examined node.
8. The method as recited in claim 7, wherein the histograms for each numeric attribute include a Cbelow histogram and a Cabove histogram, the Cbelow histogram corresponding to the class distribution of the records preceding those assigned to the processor, and the Cabove histogram corresponding to the class distribution of the records following a first record assigned to the processor, including the first record.
9. The method as recited in claim 7, wherein the step of determining a subset of the attribute A includes the steps of:
if a number of elements in a set S of all values of A is less than a predetermined threshold, then evaluating all subsets of the set S to find one with the highest splitting index; and
if the number of elements in S is equal to or more than the predetermined threshold, then:
a) adding an element of S to an initially empty subset S' of S such that the splitting index for the splitting criterion at the examined node is maximized; and
b) repeating the step of adding until there is no improvement in the splitting index.
10. The method as recited in claim 4, wherein the step of splitting the records includes the steps of:
partitioning the attribute list for an attribute B used in the split test into new attribute lists corresponding, respectively, to the child nodes of the examined node;
building a hash table with the record IDs from the entries of the attribute list for B as the entries are partitioned among the new attribute lists;
sharing the hash table for attribute B with other processors; and
partitioning the remaining attribute lists of the examined node among the newly created child nodes according to the hash tables shared by the processors.
11. The method as recited in claim 10, wherein the step of partitioning the attribute list includes the steps of:
traversing the attribute list for attribute B;
applying the split test to each entry of the attribute list for B; and
entering the entry into a respective new attribute list according to the split test.
12. The method as recited in claim 10, wherein the step of creating a decision tree further comprises the steps of:
updating the histograms for each newly created child node with the distribution of records at the child node; and
sharing the updated histograms with other processors so that all the histograms remain updated.
13. The method recited in claim 4 further comprising the step of pruning the decision-tree classifier to obtain a more compact classifier.
14. The method as recited in claim 13, wherein:
the step of pruning is based on a Minimum Description Length (MDL) principle that encodes the decision tree as a model such that an encoding cost for describing the decision tree and the training set is minimized;
the step of pruning includes the steps of:
encoding the decision tree in an MDL-based code;
encoding the split tests for the leaf nodes in the MDL-based code;
calculating a code length C(n) for each node n of the decision tree; and
determining whether to prune the child nodes of node n, convert n into a leaf node, or leave n intact, depending on the encoding cost; and
the encoding cost is based on the code length C(n).
15. The method as recited in claim 14, wherein:
a) the step of encoding the decision tree includes:
(i) encoding each node of the decision tree using one bit, if the node has two or no child nodes;
(ii) encoding each node of the decision tree using two bits, if the node has one, two, or no child nodes; and
(iii) encoding each internal node of the decision tree using log(3) bits; and
b) the encoding cost includes:
(i) a cost for encoding an attribute value v of an attribute A, where a split test is of the form (A≦v) and A is numeric; and
(ii) a cost related to ln(nA) where nA is a number of times the split test is used in the tree and A is a categorical attribute.
16. The method as recited in claim 14, wherein:
each node n of the decision tree is encoded using one bit; and
if the code length C(n) in the case n has both child nodes is more than C(n) in the case n is a leaf node, then the step of determining whether to prune includes the steps of:
pruning both child nodes of the node n; and
converting the node n into a leaf node.
17. The method as recited in claim 16 further comprising the steps of: for each node n of the pruned decision tree, evaluating the code length C(n) when n has only a left child node, n has only a right child node, and n has both child nodes; and
selecting a pruning option that results in a shortest code length C(n).
18. The method as recited in claim 14, wherein:
each node n of the decision tree is encoded using two bits; and
the step of determining whether to prune includes the steps of:
evaluating the code length C(n) when n is a leaf node, n has only a left child node, n has only a right child node, and n has both child nodes; and
selecting a pruning option that results in a shortest code length C(n).
19. A database system for generating a decision-tree classifier in parallel from a training set of records, the system having a plurality of processors, each record having: (i) at least one attribute, each attribute having a value, (ii) a class label of the class to which the record belongs, and (iii) a record ID, the system comprising:
means for partitioning the records among the processors of the system;
means for generating in parallel by each processor an attribute list for each attribute of the records, each entry in the attribute lists having the attribute value, class label, and record ID of the record from which the attribute value is obtained; and
means for creating a decision tree cooperatively by the processors, the decision tree being formed by repeatedly partitioning the records using the attribute lists, the resulting decision tree becoming the decision-tree classifier.
20. The system as recited in claim 19, wherein:
the attributes include numeric attributes; and
the means for generating an attribute list includes:
means for sorting the attribute lists for the numeric attributes based on the attribute values; and
means for distributing the sorted attribute lists among the processors.
21. The system as recited in claim 19, wherein the attributes include categorical attributes.
22. The system as recited in claim 19, wherein:
the decision tree includes a root node, a plurality of interior nodes, and a plurality of leaf nodes, all of the records initially belonging to the root node; and
the means for creating a decision tree includes, for each processor and for each node being examined until each leaf node of the decision tree contains only one class of records:
a) means for determining a split test to best separate the records at the examined node by record classes, using the attribute lists of the processor;
b) means for sharing the split test with the other processors to determine a best overall split test for the examined node; and
c) means for splitting the records of the examined node that are assigned to the processor, according to the best overall split test for the examined node, to create child nodes for the examined node, the child nodes becoming new leaf nodes.
23. The system as recited in claim 22, wherein the means for determining a split test is based on a splitting index corresponding to a criterion for splitting the records.
24. The system as recited in claim 23, wherein the splitting index includes a gini-index based on relative frequencies of records from each record class present in the training set.
25. The system as recited in claim 23, wherein:
each processor includes, for each leaf node, a plurality of histograms for each attribute of the records at the leaf node, the histograms representing the class distribution of the records at the leaf node; and
the means for determining a split test includes:
a) for each attribute A, means for traversing the attribute list for A at the examined node;
b) for each value v of A in the attribute list for A:
i) means for updating the class histograms for A, at the examined node, with the class label corresponding to v and the value v; and
ii) if the attribute A is numeric, then means for computing the splitting index corresponding to splitting criterion (A<=v) for the examined node; and
c) if the attribute A is categorical, then:
i) means for collecting all the class histograms for A from all the processors; and
ii) means for determining a subset of the attribute A that results in the highest splitting index for the examined node.
26. The system as recited in claim 25, wherein the histograms for each numeric attribute include a Cbelow histogram and a Cabove histogram, the Cbelow histogram corresponding to the class distribution of the records preceding those assigned to the processor, and the Cabove histogram corresponding to the class distribution of the records following a first record assigned to the processor, including the first record.
27. The system as recited in claim 25, wherein the means for determining a subset of the attribute A includes:
if a number of elements in a set S of all values of A is less than a predetermined threshold, then means for evaluating all subsets of the set S to find one with the highest splitting index; and
if the number of elements in S is equal to or more than the predetermined threshold, then:
a) means for adding an element of S to an initially empty subset S' of S such that the splitting index for the splitting criterion at the examined node is maximized; and
b) means for repeating the adding until there is no improvement in the splitting index.
28. The system as recited in claim 22, wherein the means for splitting the records includes:
means for partitioning the attribute list for an attribute B used in the split test into new attribute lists corresponding, respectively, to the child nodes of the examined node;
means for building a hash table with the record IDs from the entries of the attribute list for B as the entries are partitioned among the new attribute lists;
means for sharing the hash table for attribute B with other processors; and
means for partitioning the remaining attribute lists of the examined node among the newly created child nodes according to the hash tables shared by the processors.
29. The system as recited in claim 28, wherein the means for partitioning the attribute list includes:
means for traversing the attribute list for attribute B;
means for applying the split test to each entry of the attribute list for B; and
means for entering the entry into a respective new attribute list according to the split test.
30. The system as recited in claim 28, wherein the means for creating a decision tree further comprises:
means for updating the histograms for each newly created child node with the distribution of records at the child node; and
means for sharing the updated histograms with other processors so that all the histograms remain updated.
31. The system recited in claim 22 further comprising means for pruning the decision-tree classifier to obtain a more compact classifier.
32. The system as recited in claim 31, wherein:
the means for pruning is based on a Minimum Description Length (MDL) principle that encodes the decision tree as a model such that an encoding cost for describing the decision tree and the training set is minimized;
the means for pruning includes:
means for encoding the decision tree in an MDL-based code;
means for encoding the split tests for the leaf nodes in the MDL-based code;
means for calculating a code length C(n) for each node n of the decision tree; and
means for determining whether to prune the child nodes of node n, convert n into a leaf node, or leave n intact, depending on the encoding cost; and
the encoding cost is based on the code length C(n).
33. The system as recited in claim 32, wherein:
a) the means for encoding the decision tree includes:
(i) means for encoding each node of the decision tree using one bit, if the node has two or no child nodes;
(ii) means for encoding each node of the decision tree using two bits, if the node has one, two, or no child nodes; and
(iii) means for encoding each internal node of the decision tree using log(3) bits; and
b) the encoding cost includes:
(i) a cost for encoding an attribute value of an attribute A, where a split test is of the form (A≦v) and A is numeric; and
(ii) a cost related to ln(nA) where nA is a number of times the split test is used in the tree and A is a categorical attribute.
34. The system as recited in claim 32, wherein:
each node n of the decision tree is encoded using one bit; and
if the code length C(n) in the case n has both child nodes is more than C(n) in the case n is a leaf node, then the means for determining whether to prune includes:
means for pruning both child nodes of the node n; and
means for converting the node n into a leaf node.
35. The system as recited in claim 34 further comprising:
for each node n of the pruned decision tree, means for evaluating the code length C(n) when n has only a left child node, n has only a right child node, and n has both child nodes; and
means for selecting a pruning option that results in a shortest code length C(n).
36. The system as recited in claim 32, wherein:
each node n of the decision tree is encoded using two bits; and
the means for determining whether to prune includes:
means for evaluating the code length C(n) when n is a leaf node, n has only a left child node, n has only a right child node, and n has both child nodes; and
means for selecting a pruning option that results in a shortest code length C(n).
US09/245,765 1996-05-01 1999-02-05 Method and system for generating a decision-tree classifier in parallel in a multi-processor system Expired - Lifetime US6138115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/245,765 US6138115A (en) 1996-05-01 1999-02-05 Method and system for generating a decision-tree classifier in parallel in a multi-processor system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/641,404 US5870735A (en) 1996-05-01 1996-05-01 Method and system for generating a decision-tree classifier in parallel in a multi-processor system
US09/245,765 US6138115A (en) 1996-05-01 1999-02-05 Method and system for generating a decision-tree classifier in parallel in a multi-processor system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/641,404 Division US5870735A (en) 1996-05-01 1996-05-01 Method and system for generating a decision-tree classifier in parallel in a multi-processor system

Publications (1)

Publication Number Publication Date
US6138115A true US6138115A (en) 2000-10-24

Family

ID=24572233

Family Applications (2)

Application Number Title Priority Date Filing Date
US08/641,404 Expired - Lifetime US5870735A (en) 1996-05-01 1996-05-01 Method and system for generating a decision-tree classifier in parallel in a multi-processor system
US09/245,765 Expired - Lifetime US6138115A (en) 1996-05-01 1999-02-05 Method and system for generating a decision-tree classifier in parallel in a multi-processor system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US08/641,404 Expired - Lifetime US5870735A (en) 1996-05-01 1996-05-01 Method and system for generating a decision-tree classifier in parallel in a multi-processor system

Country Status (1)

Country Link
US (2) US5870735A (en)

Cited By (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6243695B1 (en) * 1998-03-18 2001-06-05 Motorola, Inc. Access control system and method therefor
US6363375B1 (en) * 1998-09-30 2002-03-26 Nippon Telegraph And Telephone Company Classification tree based information retrieval scheme
US20020174429A1 (en) * 2001-03-29 2002-11-21 Srinivas Gutta Methods and apparatus for generating recommendation scores
US6546389B1 (en) * 2000-01-19 2003-04-08 International Business Machines Corporation Method and system for building a decision-tree classifier from privacy-preserving data
US20030126103A1 (en) * 2001-11-14 2003-07-03 Ye Chen Agent using detailed predictive model
US6601050B1 (en) * 1999-07-26 2003-07-29 Large Scale Biology Corporation Trainable adaptive focused replicator network for analyzing data
US6687691B1 (en) * 2000-01-19 2004-02-03 International Business Machines Corporation Method and system for reconstructing original distributions from randomized numeric data
US20040049504A1 (en) * 2002-09-06 2004-03-11 International Business Machines Corporation System and method for exploring mining spaces with multiple attributes
US20040225631A1 (en) * 2003-04-23 2004-11-11 International Business Machines Corporation System and method for identifying a workload type for a given workload of database requests
US20040243530A1 (en) * 2001-07-04 2004-12-02 Akeel Al-Attar Process-related systems and methods
US20050027593A1 (en) * 2003-08-01 2005-02-03 Wilson Joseph G. System and method for segmenting and targeting audience members
US20050125290A1 (en) * 2003-08-01 2005-06-09 Gil Beyda Audience targeting system with profile synchronization
US20050125289A1 (en) * 2003-08-01 2005-06-09 Gil Beyda Audience targeting system with segment management
US20050166233A1 (en) * 2003-08-01 2005-07-28 Gil Beyda Network for matching an audience with deliverable content
US20050165643A1 (en) * 2003-08-01 2005-07-28 Wilson Joseph G. Audience targeting with universal profile synchronization
US20050165644A1 (en) * 2003-08-01 2005-07-28 Gil Beyda Audience matching network with performance factoring and revenue allocation
US20050246736A1 (en) * 2003-08-01 2005-11-03 Gil Beyda Audience server
US20050256956A1 (en) * 2004-05-14 2005-11-17 Battelle Memorial Institute Analyzing user-activity data using a heuristic-based approach
US20060004741A1 (en) * 2001-03-02 2006-01-05 Cooke Jonathan G G Polyarchical data indexing and automatically generated hierarchical data indexing paths
US7016887B2 (en) 2001-01-03 2006-03-21 Accelrys Software Inc. Methods and systems of classifying multiple properties simultaneously using a decision tree
US20060074947A1 (en) * 2003-03-10 2006-04-06 Mazzagatti Jane C System and method for storing and accessing data in an interlocking trees datastore
US20060280511A1 (en) * 2005-06-14 2006-12-14 Ryutaro Futami Optical receiver having bias circuit for avalanche photodiode with wide dynamic range
US20070038654A1 (en) * 2004-11-08 2007-02-15 Mazzagatti Jane C API to KStore interlocking trees datastore
US20070044564A1 (en) * 2005-08-26 2007-03-01 Integrated Curved Linear Ultrasonic Transducer Inspection Apparatus, Systems, And Methods Integrated curved linear ultrasonic transducer inspection apparatus, systems, and methods
US7213041B2 (en) 2004-10-05 2007-05-01 Unisys Corporation Saving and restoring an interlocking trees datastore
US20070150353A1 (en) * 2005-12-24 2007-06-28 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US20070156615A1 (en) * 2005-12-29 2007-07-05 Ali Davar Method for training a classifier
US20070162508A1 (en) * 2004-11-08 2007-07-12 Mazzagatti Jane C Updating information in an interlocking trees datastore
US20070220070A1 (en) * 2006-03-20 2007-09-20 Mazzagatti Jane C Method for processing sensor data within a particle stream by a KStore
US20070219975A1 (en) * 2003-09-19 2007-09-20 Mazzagatti Jane C Method for processing K node count fields using an intensity variable
US20070226160A1 (en) * 2006-03-22 2007-09-27 Sony Corporation Method and system for transitioning from a case-based classifier system to a rule-based classifier system
US20070233723A1 (en) * 2006-04-04 2007-10-04 Mazzagatti Jane C Method for determining a most probable K location
US7340471B2 (en) 2004-01-16 2008-03-04 Unisys Corporation Saving and restoring an interlocking trees datastore
US7348980B2 (en) 2004-11-08 2008-03-25 Unisys Corporation Method and apparatus for interface for graphic display of data from a Kstore
US7389301B1 (en) 2005-06-10 2008-06-17 Unisys Corporation Data aggregation user interface and analytic adapted for a KStore
US20080168011A1 (en) * 2007-01-04 2008-07-10 Health Care Productivity, Inc. Methods and systems for automatic selection of classification and regression trees
US20080172375A1 (en) * 2007-01-11 2008-07-17 Microsoft Corporation Ranking items by optimizing ranking cost function
US7409380B1 (en) 2005-04-07 2008-08-05 Unisys Corporation Facilitated reuse of K locations in a knowledge store
US7418445B1 (en) 2004-11-08 2008-08-26 Unisys Corporation Method for reducing the scope of the K node construction lock
US20080275842A1 (en) * 2006-03-20 2008-11-06 Jane Campbell Mazzagatti Method for processing counts when an end node is encountered
US20080319753A1 (en) * 2007-06-25 2008-12-25 International Business Machines Corporation Technique for training a phonetic decision tree with limited phonetic exceptional terms
US20090133500A1 (en) * 2004-09-24 2009-05-28 The Boeing Company Integrated ultrasonic inspection probes, systems, and methods for inspection of composite assemblies
US7593923B1 (en) 2004-06-29 2009-09-22 Unisys Corporation Functional operations for accessing and/or building interlocking trees datastores to enable their use with applications software
US20090265243A1 (en) * 2005-12-24 2009-10-22 Brad Karassner System and method for creation, distribution and tracking of advertising via electronic networks
US7676330B1 (en) 2006-05-16 2010-03-09 Unisys Corporation Method for processing a particle using a sensor structure
US7676477B1 (en) 2005-10-24 2010-03-09 Unisys Corporation Utilities for deriving values and information from within an interlocking trees data store
US7689571B1 (en) 2006-03-24 2010-03-30 Unisys Corporation Optimizing the size of an interlocking tree datastore structure for KStore
US7716241B1 (en) 2004-10-27 2010-05-11 Unisys Corporation Storing the repository origin of data inputs within a knowledge store
US20100153544A1 (en) * 2008-12-16 2010-06-17 Brad Krassner Content rendering control system and method
US20100153836A1 (en) * 2008-12-16 2010-06-17 Rich Media Club, Llc Content rendering control system and method
US7774329B1 (en) 2006-12-22 2010-08-10 Amazon Technologies, Inc. Cross-region data access in partitioned framework
US7908240B1 (en) 2004-10-28 2011-03-15 Unisys Corporation Facilitated use of column and field data for field record universe in a knowledge store
US8150870B1 (en) * 2006-12-22 2012-04-03 Amazon Technologies, Inc. Scalable partitioning in a multilayered data service framework
US8402157B2 (en) 2003-08-14 2013-03-19 Rich Media Worldwide, Llc Internet-based system and method for distributing interstitial advertisements
US20140324756A1 (en) * 2013-04-30 2014-10-30 Wal-Mart Stores, Inc. Decision tree with set-based nodal comparisons
US8924654B1 (en) * 2003-08-18 2014-12-30 Cray Inc. Multistreamed processor vector packing method and apparatus
JP2015026372A (en) * 2013-07-25 2015-02-05 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Computer-implemented method, storage medium and computer system for parallel tree based prediction
US9292599B2 (en) 2013-04-30 2016-03-22 Wal-Mart Stores, Inc. Decision-tree based quantitative and qualitative record classification
US9336249B2 (en) 2013-04-30 2016-05-10 Wal-Mart Stores, Inc. Decision tree with just-in-time nodal computations
US9355369B2 (en) 2013-04-30 2016-05-31 Wal-Mart Stores, Inc. Decision tree with compensation for previously unseen data
US20170019675A1 (en) * 2015-07-17 2017-01-19 Freescale Semiconductor, Inc. Parallel decoder with inter-prediction of video pictures
US10380602B2 (en) 2005-12-24 2019-08-13 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US20200143284A1 (en) * 2018-11-05 2020-05-07 Takuya Tanaka Learning device and learning method
US11195210B2 (en) 2019-08-06 2021-12-07 Duration Media LLC Technologies for content presentation
US11443329B2 (en) 2005-12-24 2022-09-13 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks

Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5870735A (en) * 1996-05-01 1999-02-09 International Business Machines Corporation Method and system for generating a decision-tree classifier in parallel in a multi-processor system
US6014656A (en) * 1996-06-21 2000-01-11 Oracle Corporation Using overlapping partitions of data for query optimization
US6772103B1 (en) * 1996-12-19 2004-08-03 Dow Global Technologies Inc. Method for selecting a parts kit detail
US5978795A (en) * 1997-01-14 1999-11-02 Microsoft Corporation Temporally ordered binary search method and system
US6249761B1 (en) * 1997-09-30 2001-06-19 At&T Corp. Assigning and processing states and arcs of a speech recognition model in parallel processors
US6212526B1 (en) * 1997-12-02 2001-04-03 Microsoft Corporation Method for apparatus for efficient mining of classification models from databases
JP2001527244A (en) * 1997-12-22 2001-12-25 リンダ ジー デミシェル Method and apparatus for efficiently partitioning query execution in object-relational mapping between client and server
CA2260336A1 (en) 1999-02-15 2000-08-15 Robert Inkol Modulation recognition system
US6393427B1 (en) * 1999-03-22 2002-05-21 Nec Usa, Inc. Personalized navigation trees
US6175830B1 (en) 1999-05-20 2001-01-16 Evresearch, Ltd. Information management, retrieval and display system and associated method
US7013300B1 (en) 1999-08-03 2006-03-14 Taylor David C Locating, filtering, matching macro-context from indexed database for searching context where micro-context relevant to textual input by user
US7219073B1 (en) * 1999-08-03 2007-05-15 Brandnamestores.Com Method for extracting information utilizing a user-context-based search engine
US7424439B1 (en) 1999-09-22 2008-09-09 Microsoft Corporation Data mining for managing marketing resources
US6727914B1 (en) 1999-12-17 2004-04-27 Koninklijke Philips Electronics N.V. Method and apparatus for recommending television programming using decision trees
US6883168B1 (en) * 2000-06-21 2005-04-19 Microsoft Corporation Methods, systems, architectures and data structures for delivering software via a network
US7000230B1 (en) 2000-06-21 2006-02-14 Microsoft Corporation Network-based software extensions
US6757678B2 (en) 2001-04-12 2004-06-29 International Business Machines Corporation Generalized method and system of merging and pruning of data trees
US6850920B2 (en) * 2001-05-01 2005-02-01 The Regents Of The University Of California Performance analysis of distributed applications using automatic classification of communication inefficiencies
US20020174088A1 (en) * 2001-05-07 2002-11-21 Tongwei Liu Segmenting information records with missing values using multiple partition trees
US7007035B2 (en) * 2001-06-08 2006-02-28 The Regents Of The University Of California Parallel object-oriented decision tree system
US20040002981A1 (en) * 2002-06-28 2004-01-01 Microsoft Corporation System and method for handling a high-cardinality attribute in decision trees
US7158996B2 (en) * 2003-01-27 2007-01-02 International Business Machines Corporation Method, system, and program for managing database operations with respect to a database table
US7370066B1 (en) 2003-03-24 2008-05-06 Microsoft Corporation System and method for offline editing of data files
US7275216B2 (en) * 2003-03-24 2007-09-25 Microsoft Corporation System and method for designing electronic forms and hierarchical schemas
US7415672B1 (en) * 2003-03-24 2008-08-19 Microsoft Corporation System and method for designing electronic forms
US7913159B2 (en) 2003-03-28 2011-03-22 Microsoft Corporation System and method for real-time validation of structured data files
US7406660B1 (en) * 2003-08-01 2008-07-29 Microsoft Corporation Mapping between structured data and a visual surface
US7334187B1 (en) 2003-08-06 2008-02-19 Microsoft Corporation Electronic form aggregation
US7430711B2 (en) * 2004-02-17 2008-09-30 Microsoft Corporation Systems and methods for editing XML documents
US8487879B2 (en) * 2004-10-29 2013-07-16 Microsoft Corporation Systems and methods for interacting with a computer through handwriting to a screen
US7937651B2 (en) * 2005-01-14 2011-05-03 Microsoft Corporation Structural editing operations for network forms
US20060158023A1 (en) * 2005-01-14 2006-07-20 The Boler Company Continuous radius axle and fabricated spindle assembly
US8010515B2 (en) 2005-04-15 2011-08-30 Microsoft Corporation Query to an electronic form
US8200975B2 (en) * 2005-06-29 2012-06-12 Microsoft Corporation Digital signatures for network forms
US8429167B2 (en) 2005-08-08 2013-04-23 Google Inc. User-context-based search engine
US8027876B2 (en) 2005-08-08 2011-09-27 Yoogli, Inc. Online advertising valuation apparatus and method
US8001459B2 (en) * 2005-12-05 2011-08-16 Microsoft Corporation Enabling electronic documents for limited-capability computing devices
US7809723B2 (en) * 2006-06-26 2010-10-05 Microsoft Corporation Distributed hierarchical text classification framework
US7873583B2 (en) * 2007-01-19 2011-01-18 Microsoft Corporation Combining resilient classifiers
US8364617B2 (en) * 2007-01-19 2013-01-29 Microsoft Corporation Resilient classification of data
US8250008B1 (en) 2009-09-22 2012-08-21 Google Inc. Decision tree refinement
US9171044B2 (en) * 2010-02-16 2015-10-27 Oracle International Corporation Method and system for parallelizing database requests
US8538897B2 (en) * 2010-12-03 2013-09-17 Microsoft Corporation Cross-trace scalable issue detection and clustering
CN102523241B (en) * 2012-01-09 2014-11-19 北京邮电大学 Method and device for classifying network traffic on line based on decision tree high-speed parallel processing
US9324034B2 (en) 2012-05-14 2016-04-26 Qualcomm Incorporated On-device real-time behavior analyzer
US9690635B2 (en) 2012-05-14 2017-06-27 Qualcomm Incorporated Communicating behavior information in a mobile computing device
US9202047B2 (en) 2012-05-14 2015-12-01 Qualcomm Incorporated System, apparatus, and method for adaptive observation of mobile device behavior
US9609456B2 (en) 2012-05-14 2017-03-28 Qualcomm Incorporated Methods, devices, and systems for communicating behavioral analysis information
US9298494B2 (en) 2012-05-14 2016-03-29 Qualcomm Incorporated Collaborative learning for efficient behavioral analysis in networked mobile device
US9495537B2 (en) 2012-08-15 2016-11-15 Qualcomm Incorporated Adaptive observation of behavioral features on a mobile device
US9747440B2 (en) 2012-08-15 2017-08-29 Qualcomm Incorporated On-line behavioral analysis engine in mobile device with multiple analyzer model providers
US9330257B2 (en) 2012-08-15 2016-05-03 Qualcomm Incorporated Adaptive observation of behavioral features on a mobile device
US9319897B2 (en) 2012-08-15 2016-04-19 Qualcomm Incorporated Secure behavior analysis over trusted execution environment
US9684870B2 (en) 2013-01-02 2017-06-20 Qualcomm Incorporated Methods and systems of using boosted decision stumps and joint feature selection and culling algorithms for the efficient classification of mobile device behaviors
US9686023B2 (en) 2013-01-02 2017-06-20 Qualcomm Incorporated Methods and systems of dynamically generating and using device-specific and device-state-specific classifier models for the efficient classification of mobile device behaviors
US10089582B2 (en) 2013-01-02 2018-10-02 Qualcomm Incorporated Using normalized confidence values for classifying mobile device behaviors
US9742559B2 (en) 2013-01-22 2017-08-22 Qualcomm Incorporated Inter-module authentication for securing application execution integrity within a computing device
US9491187B2 (en) 2013-02-15 2016-11-08 Qualcomm Incorporated APIs for obtaining device-specific behavior classifier models from the cloud
US10055691B2 (en) 2014-09-08 2018-08-21 Pivotal Software, Inc. Stream processing with dynamic event routing
US9946958B1 (en) * 2016-10-14 2018-04-17 Cloudera, Inc. Image processing system and method
CN108334951B (en) * 2017-01-20 2023-04-25 微软技术许可有限责任公司 Pre-statistics of data for nodes of a decision tree
CN108346178A (en) 2017-01-23 2018-07-31 微软技术许可有限责任公司 Mixed reality object is presented
JP6800901B2 (en) * 2018-03-06 2020-12-16 株式会社東芝 Object area identification device, object area identification method and program
WO2020185556A1 (en) * 2019-03-08 2020-09-17 Musara Mubayiwa Cornelious Adaptive interactive medical training program with virtual patients

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5870735A (en) * 1996-05-01 1999-02-09 International Business Machines Corporation Method and system for generating a decision-tree classifier in parallel in a multi-processor system

Non-Patent Citations (20)

* Cited by examiner, † Cited by third party
Title
D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Chapter 6, Intro. to Genetics Based Machine Learning, pp. 218-257, (Book), 1989. *
D. J. DeWitt, J. F. Naughton and D. A. Schneider, Parallel Sorting on Shared-Nothing Architecture Using Probabilistic Splitting, In Proc. of the 1st Int'l Conf. on Parallel and Distributed Information Systems, pp. 280-291, Dec. 1991. *
D. J. DeWitt, S. Ghandeharizadeh, D. A. Schneider, A. Bricker, H. Hsiao & R. Rasmussen, The Gamma Database Machine Project, IEEE Transactions on Knowledge and Data Eng., vol. 2, No. 1, pp. 44-62, Mar. 1990. *
J. Catlett, Megainduction: Machine Learning on Very Large Databases, PhD thesis, Univ. of Sydney, Jun./Dec. 1991. *
J. R. Quinlan et al., Inferring Decision Trees Using Minimum Description Length Principle, Information and Computation 80, pp. 227-248, 1989. (0890-5401/89 Academic Press, Inc.). *
L. Breiman (Univ. of CA-Berkeley) et al., Classification and Regression Trees (Book), Chapter 2, Introduction to Tree Classification, pp. 18-58, Wadsworth International Group, Belmont, CA, 1984. *
M. James, Classification Algorithms (Book), Chapters 1-3, QA278.65, J281, Wiley-Interscience Pub., 1985. *
M. Mehta et al., MDL-based Decision Tree Pruning, Int'l Conference on Knowledge Discovery in Databases and Data Mining (KDD-95), Montreal, Canada, pp. 216-221, Aug. 1995. *
M. Mehta, R. Agrawal & J. Rissanen, SLIQ: A Fast Scalable Classifier for Data Mining, In EDBT 96, Avignon, France, Mar. 1996. *
MPI: A Message-Passing Interface Standard, Message Passing Interface Forum, May 5, 1994. *
No. 08/500,717, filed Jul. 11, 1995, for System and Method for Parallel Mining of Association Rules in Databases, Pat. No. 5,842,200. *
No. 08/541,665, filed Oct. 10, 1995, for Method and System for Mining Generalized Sequential Patterns in a Large Database, Pat. No. 5,742,811. *
No. 08/564,694, filed Nov. 29, 1995, for Method and System for Generating a Decision-tree Classifier for Data Records, Pat. No. 5,787,274. *
P. K. Chan et al., Experiments on Multistrategy Learning by Meta-learning, In Proc. Second Intl. Conf. on Info. and Knowledge Mgmt., pp. 314-323, 1993. *
R. Agrawal et al., An Interval Classifier for Database Mining Applications, Proceedings of the 18th VLDB Conference, Vancouver, British Columbia, Aug. 1992. *
R. Agrawal et al., Database Mining: A Performance Perspective, IEEE Transactions on Knowledge and Data Engineering, vol. 5, No. 6, pp. 914-925, Special Issue on Learning and Discovery in Knowledge-Based Databases, Dec. 1993. *
R. P. Lippmann, An Introduction to Computing with Neural Nets, IEEE ASSP Magazine, pp. 4-22, 0740-7467/87/0400, Apr. 1987. *
S. M. Weiss et al., Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems, pp. 113-143, 1991, Q325.5, W432, C2, Morgan Kaufmann Pub. Inc., San Mateo, CA. *
U. Fayyad et al., The Attribute Selection Problem in Decision Tree Generation, In 10th Nat'l Conf. on AI (AAAI-92), Learning: Inductive, 1992. *
Wallace et al., Coding Decision Trees, Machine Learning, 11, pp. 7-22, 1993. (Kluwer Academic Pub., Boston. Mfg. in the Netherlands.). *

Cited By (120)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6243695B1 (en) * 1998-03-18 2001-06-05 Motorola, Inc. Access control system and method therefor
US6363375B1 (en) * 1998-09-30 2002-03-26 Nippon Telegraph And Telephone Company Classification tree based information retrieval scheme
US6601050B1 (en) * 1999-07-26 2003-07-29 Large Scale Biology Corporation Trainable adaptive focused replicator network for analyzing data
US6687691B1 (en) * 2000-01-19 2004-02-03 International Business Machines Corporation Method and system for reconstructing original distributions from randomized numeric data
US6546389B1 (en) * 2000-01-19 2003-04-08 International Business Machines Corporation Method and system for building a decision-tree classifier from privacy-preserving data
US7016887B2 (en) 2001-01-03 2006-03-21 Accelrys Software Inc. Methods and systems of classifying multiple properties simultaneously using a decision tree
US7475083B2 (en) * 2001-03-02 2009-01-06 Factiva, Inc. Polyarchical data indexing and automatically generated hierarchical data indexing paths
US20060004741A1 (en) * 2001-03-02 2006-01-05 Cooke Jonathan G G Polyarchical data indexing and automatically generated hierarchical data indexing paths
US20090144223A1 (en) * 2001-03-02 2009-06-04 Jonathan Guy Grenside Cooke Polyarchical data indexing and automatically generated hierarchical data indexing paths
US7908253B2 (en) 2001-03-02 2011-03-15 Factiva, Inc. Polyarchical data indexing and automatically generated hierarchical data indexing paths
US20020174429A1 (en) * 2001-03-29 2002-11-21 Srinivas Gutta Methods and apparatus for generating recommendation scores
US20040243530A1 (en) * 2001-07-04 2004-12-02 Akeel Al-Attar Process-related systems and methods
US7644863B2 (en) * 2001-11-14 2010-01-12 Sap Aktiengesellschaft Agent using detailed predictive model
US20030126103A1 (en) * 2001-11-14 2003-07-03 Ye Chen Agent using detailed predictive model
US20040049504A1 (en) * 2002-09-06 2004-03-11 International Business Machines Corporation System and method for exploring mining spaces with multiple attributes
US7424480B2 (en) 2003-03-10 2008-09-09 Unisys Corporation System and method for storing and accessing data in an interlocking trees datastore
US20060074947A1 (en) * 2003-03-10 2006-04-06 Mazzagatti Jane C System and method for storing and accessing data in an interlocking trees datastore
US7788287B2 (en) 2003-03-10 2010-08-31 Unisys Corporation System and method for storing and accessing data in an interlocking trees datastore
US7499908B2 (en) 2003-04-23 2009-03-03 International Business Machines Corporation Method for identifying a workload type for a given workload of database requests
US20040225631A1 (en) * 2003-04-23 2004-11-11 International Business Machines Corporation System and method for identifying a workload type for a given workload of database requests
US9117217B2 (en) 2003-08-01 2015-08-25 Advertising.Com Llc Audience targeting with universal profile synchronization
US10552865B2 (en) 2003-08-01 2020-02-04 Oath (Americas) Inc. System and method for segmenting and targeting audience members
US10134047B2 (en) 2003-08-01 2018-11-20 Oath (Americas) Inc. Audience targeting with universal profile synchronization
US11200596B2 (en) 2003-08-01 2021-12-14 Verizon Media Inc. System and method for segmenting and targeting audience members
US20050027593A1 (en) * 2003-08-01 2005-02-03 Wilson Joseph G. System and method for segmenting and targeting audience members
US20050125290A1 (en) * 2003-08-01 2005-06-09 Gil Beyda Audience targeting system with profile synchronization
US10229430B2 (en) 2003-08-01 2019-03-12 Oath (Americas) Inc. Audience matching network with performance factoring and revenue allocation
US9118812B2 (en) 2003-08-01 2015-08-25 Advertising.Com Llc Audience server
US7805332B2 (en) 2003-08-01 2010-09-28 AOL, Inc. System and method for segmenting and targeting audience members
US20050125289A1 (en) * 2003-08-01 2005-06-09 Gil Beyda Audience targeting system with segment management
US20050166233A1 (en) * 2003-08-01 2005-07-28 Gil Beyda Network for matching an audience with deliverable content
US10991003B2 (en) 2003-08-01 2021-04-27 Verizon Media Inc. Audience matching network with performance factoring and revenue allocation
US20050165643A1 (en) * 2003-08-01 2005-07-28 Wilson Joseph G. Audience targeting with universal profile synchronization
US20050165644A1 (en) * 2003-08-01 2005-07-28 Gil Beyda Audience matching network with performance factoring and revenue allocation
US9691079B2 (en) 2003-08-01 2017-06-27 Advertising.Com Llc Audience server
US20110066705A1 (en) * 2003-08-01 2011-03-17 Tacoda Llc System and method for segmenting and targeting audience members
US8464290B2 (en) 2003-08-01 2013-06-11 Tacoda, Inc. Network for matching an audience with deliverable content
US9928522B2 (en) 2003-08-01 2018-03-27 Oath (Americas) Inc. Audience matching network with performance factoring and revenue allocation
US11587114B2 (en) 2003-08-01 2023-02-21 Yahoo Ad Tech Llc System and method for segmenting and targeting audience members
US10846709B2 (en) 2003-08-01 2020-11-24 Verizon Media Inc. Audience targeting with universal profile synchronization
US8150732B2 (en) 2003-08-01 2012-04-03 Tacoda Llc Audience targeting system with segment management
US20050246736A1 (en) * 2003-08-01 2005-11-03 Gil Beyda Audience server
US8402157B2 (en) 2003-08-14 2013-03-19 Rich Media Worldwide, Llc Internet-based system and method for distributing interstitial advertisements
US8738796B2 (en) 2003-08-14 2014-05-27 Rich Media Worldwide, Llc Internet-based system and method for distributing interstitial advertisements
US8924654B1 (en) * 2003-08-18 2014-12-30 Cray Inc. Multistreamed processor vector packing method and apparatus
US20070219975A1 (en) * 2003-09-19 2007-09-20 Mazzagatti Jane C Method for processing K node count fields using an intensity variable
US8516004B2 (en) 2003-09-19 2013-08-20 Unisys Corporation Method for processing K node count fields using an intensity variable
US7340471B2 (en) 2004-01-16 2008-03-04 Unisys Corporation Saving and restoring an interlocking trees datastore
US20080065661A1 (en) * 2004-01-16 2008-03-13 Mazzagatti Jane C Saving and restoring an interlocking trees datastore
US20050256956A1 (en) * 2004-05-14 2005-11-17 Battelle Memorial Institute Analyzing user-activity data using a heuristic-based approach
US7593923B1 (en) 2004-06-29 2009-09-22 Unisys Corporation Functional operations for accessing and/or building interlocking trees datastores to enable their use with applications software
US20090133500A1 (en) * 2004-09-24 2009-05-28 The Boeing Company Integrated ultrasonic inspection probes, systems, and methods for inspection of composite assemblies
US7690259B2 (en) * 2004-09-24 2010-04-06 The Boeing Company Integrated ultrasonic inspection probes, systems, and methods for inspection of composite assemblies
US20070143527A1 (en) * 2004-10-05 2007-06-21 Mazzagatti Jane C Saving and restoring an interlocking trees datastore
US7213041B2 (en) 2004-10-05 2007-05-01 Unisys Corporation Saving and restoring an interlocking trees datastore
US7716241B1 (en) 2004-10-27 2010-05-11 Unisys Corporation Storing the repository origin of data inputs within a knowledge store
US7908240B1 (en) 2004-10-28 2011-03-15 Unisys Corporation Facilitated use of column and field data for field record universe in a knowledge store
US7499932B2 (en) 2004-11-08 2009-03-03 Unisys Corporation Accessing data in an interlocking trees data structure using an application programming interface
US7348980B2 (en) 2004-11-08 2008-03-25 Unisys Corporation Method and apparatus for interface for graphic display of data from a Kstore
US20070038654A1 (en) * 2004-11-08 2007-02-15 Mazzagatti Jane C API to KStore interlocking trees datastore
US20070162508A1 (en) * 2004-11-08 2007-07-12 Mazzagatti Jane C Updating information in an interlocking trees datastore
US7418445B1 (en) 2004-11-08 2008-08-26 Unisys Corporation Method for reducing the scope of the K node construction lock
US7409380B1 (en) 2005-04-07 2008-08-05 Unisys Corporation Facilitated reuse of K locations in a knowledge store
US7389301B1 (en) 2005-06-10 2008-06-17 Unisys Corporation Data aggregation user interface and analytic adapted for a KStore
US20060280511A1 (en) * 2005-06-14 2006-12-14 Ryutaro Futami Optical receiver having bias circuit for avalanche photodiode with wide dynamic range
US7617732B2 (en) 2005-08-26 2009-11-17 The Boeing Company Integrated curved linear ultrasonic transducer inspection apparatus, systems, and methods
US20070044564A1 (en) * 2005-08-26 2007-03-01 Integrated curved linear ultrasonic transducer inspection apparatus, systems, and methods
US7676477B1 (en) 2005-10-24 2010-03-09 Unisys Corporation Utilities for deriving values and information from within an interlocking trees data store
US11741482B2 (en) 2005-12-24 2023-08-29 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US11468453B2 (en) 2005-12-24 2022-10-11 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US20070150353A1 (en) * 2005-12-24 2007-06-28 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US11004090B2 (en) 2005-12-24 2021-05-11 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US10380602B2 (en) 2005-12-24 2019-08-13 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US20090265243A1 (en) * 2005-12-24 2009-10-22 Brad Karassner System and method for creation, distribution and tracking of advertising via electronic networks
US10380597B2 (en) 2005-12-24 2019-08-13 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US11443329B2 (en) 2005-12-24 2022-09-13 Rich Media Club, Llc System and method for creation, distribution and tracking of advertising via electronic networks
US20080262986A1 (en) * 2005-12-29 2008-10-23 Ali Davar Method for training a classifier
US20070156615A1 (en) * 2005-12-29 2007-07-05 Ali Davar Method for training a classifier
US20080275842A1 (en) * 2006-03-20 2008-11-06 Jane Campbell Mazzagatti Method for processing counts when an end node is encountered
US7734571B2 (en) 2006-03-20 2010-06-08 Unisys Corporation Method for processing sensor data within a particle stream by a KStore
US20070220070A1 (en) * 2006-03-20 2007-09-20 Mazzagatti Jane C Method for processing sensor data within a particle stream by a KStore
US20110010322A1 (en) * 2006-03-22 2011-01-13 Sony Corporation, A Japanese Corporation Method and system for transitioning from a case-based classifier system to a rule-based classifier system
US20070226160A1 (en) * 2006-03-22 2007-09-27 Sony Corporation Method and system for transitioning from a case-based classifier system to a rule-based classifier system
US8051027B2 (en) 2006-03-22 2011-11-01 Sony Corporation Method and system for transitioning from a case-based classifier system to a rule-based classifier system
US7809665B2 (en) * 2006-03-22 2010-10-05 Sony Corporation Method and system for transitioning from a case-based classifier system to a rule-based classifier system
US7689571B1 (en) 2006-03-24 2010-03-30 Unisys Corporation Optimizing the size of an interlocking tree datastore structure for KStore
US20070233723A1 (en) * 2006-04-04 2007-10-04 Mazzagatti Jane C Method for determining a most probable K location
US8238351B2 (en) 2006-04-04 2012-08-07 Unisys Corporation Method for determining a most probable K location
US7676330B1 (en) 2006-05-16 2010-03-09 Unisys Corporation Method for processing a particle using a sensor structure
US8898105B1 (en) 2006-12-22 2014-11-25 Amazon Technologies, Inc. Scalable partitioning in a multilayered data service framework
US8271468B1 (en) 2006-12-22 2012-09-18 Amazon Technologies, Inc. Cross-region data access in partitioned framework
US8150870B1 (en) * 2006-12-22 2012-04-03 Amazon Technologies, Inc. Scalable partitioning in a multilayered data service framework
US9239838B1 (en) 2006-12-22 2016-01-19 Amazon Technologies, Inc. Scalable partitioning in a multilayered data service framework
US7774329B1 (en) 2006-12-22 2010-08-10 Amazon Technologies, Inc. Cross-region data access in partitioned framework
US9330127B2 (en) * 2007-01-04 2016-05-03 Health Care Productivity, Inc. Methods and systems for automatic selection of classification and regression trees
US9524476B2 (en) * 2007-01-04 2016-12-20 Health Care Productivity, Inc. Methods and systems for automatic selection of preferred size classification and regression trees
US20080168011A1 (en) * 2007-01-04 2008-07-10 Health Care Productivity, Inc. Methods and systems for automatic selection of classification and regression trees
US20170061331A1 (en) * 2007-01-04 2017-03-02 Health Care Productivity, Inc Methods and systems for automatic selection of classification and regression trees having preferred consistency and accuracy
US20160210561A1 (en) * 2007-01-04 2016-07-21 Dan Steinberg Methods and systems for automatic selection of preferred size classification and regression trees
US9760656B2 (en) * 2007-01-04 2017-09-12 Minitab, Inc. Methods and systems for automatic selection of classification and regression trees having preferred consistency and accuracy
US20150370849A1 (en) * 2007-01-04 2015-12-24 Dan Steinberg Methods and systems for automatic selection of classification and regression trees
US9842175B2 (en) * 2007-01-04 2017-12-12 Minitab, Inc. Methods and systems for automatic selection of classification and regression trees
US20080172375A1 (en) * 2007-01-11 2008-07-17 Microsoft Corporation Ranking items by optimizing ranking cost function
US7925651B2 (en) * 2007-01-11 2011-04-12 Microsoft Corporation Ranking items by optimizing ranking cost function
US20080319753A1 (en) * 2007-06-25 2008-12-25 International Business Machines Corporation Technique for training a phonetic decision tree with limited phonetic exceptional terms
US8027834B2 (en) 2007-06-25 2011-09-27 Nuance Communications, Inc. Technique for training a phonetic decision tree with limited phonetic exceptional terms
US9824074B2 (en) 2008-12-16 2017-11-21 Rich Media Club, Llc Content rendering control system for a pre-defined area of a content page
US20100153836A1 (en) * 2008-12-16 2010-06-17 Rich Media Club, Llc Content rendering control system and method
US8356247B2 (en) 2008-12-16 2013-01-15 Rich Media Worldwide, Llc Content rendering control system and method
US20100153544A1 (en) * 2008-12-16 2010-06-17 Brad Krassner Content rendering control system and method
US9355369B2 (en) 2013-04-30 2016-05-31 Wal-Mart Stores, Inc. Decision tree with compensation for previously unseen data
US9336249B2 (en) 2013-04-30 2016-05-10 Wal-Mart Stores, Inc. Decision tree with just-in-time nodal computations
US9292599B2 (en) 2013-04-30 2016-03-22 Wal-Mart Stores, Inc. Decision-tree based quantitative and qualitative record classification
US20140324756A1 (en) * 2013-04-30 2014-10-30 Wal-Mart Stores, Inc. Decision tree with set-based nodal comparisons
JP2015026372A (en) * 2013-07-25 2015-02-05 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Computer-implemented method, storage medium and computer system for parallel tree based prediction
US10027969B2 (en) * 2015-07-17 2018-07-17 Nxp Usa, Inc. Parallel decoder with inter-prediction of video pictures
US20170019675A1 (en) * 2015-07-17 2017-01-19 Freescale Semiconductor, Inc. Parallel decoder with inter-prediction of video pictures
US20200143284A1 (en) * 2018-11-05 2020-05-07 Takuya Tanaka Learning device and learning method
US11195210B2 (en) 2019-08-06 2021-12-07 Duration Media LLC Technologies for content presentation
US11587126B2 (en) 2019-08-06 2023-02-21 Duration Media LLC Technologies for content presentation

Also Published As

Publication number Publication date
US5870735A (en) 1999-02-09

Similar Documents

Publication Publication Date Title
US6138115A (en) Method and system for generating a decision-tree classifier in parallel in a multi-processor system
US5799311A (en) Method and system for generating a decision-tree classifier independent of system memory size
US5787274A (en) Data mining method and system for generating a decision tree classifier for data records based on a minimum description length (MDL) and presorting of records
US6055539A (en) Method to reduce I/O for hierarchical data partitioning methods
US6230151B1 (en) Parallel classification for data mining in a shared-memory multiprocessor system
US5899992A (en) Scalable set oriented classifier
US6212526B1 (en) Method for apparatus for efficient mining of classification models from databases
US6519580B1 (en) Decision-tree-based symbolic rule induction system for text categorization
Chen et al. Addressing diverse user preferences in sql-query-result navigation
US7310624B1 (en) Methods and apparatus for generating decision trees with discriminants and employing same in data classification
US20070185896A1 (en) Binning predictors using per-predictor trees and MDL pruning
US7571159B2 (en) System and method for building decision tree classifiers using bitmap techniques
Zhao et al. Constrained cascade generalization of decision trees
US6563952B1 (en) Method and apparatus for classification of high dimensional data
Ibrahim et al. Compact weighted class association rule mining using information gain
Al Aghbari et al. Geosimmr: A mapreduce algorithm for detecting communities based on distance and interest in social networks
Wedashwara et al. Combination of genetic network programming and knapsack problem to support record clustering on distributed databases
Sassi et al. About database summarization
AU2020104033A4 (en) CDM- Separating Items Device: Separating Items into their Corresponding Class using Iris Dataset Machine Learning Classification Device
Klösgen Knowledge discovery in databases and data mining
Pravallika et al. Analysis on Medical Data sets using Apriori Algorithm Based on Association Rules
Abd El-Ghafar et al. An Efficient Multi-Phase Blocking Strategy for Entity Resolution in Big Data
ul Ain et al. Discovery of Jumping Emerging Patterns Using Genetic Algorithm
Fiolet et al. Intelligent database distribution on a grid using clustering
Zaki et al. Parallel classification on SMP systems

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12