US6134344A - Method and apparatus for improving the efficiency of support vector machines

Method and apparatus for improving the efficiency of support vector machines

Info

Publication number
US6134344A
Authority
US
United States
Prior art keywords
vectors
reduced set
support vector
training
svm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/883,193
Inventor
Christopher John Burges
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia of America Corp
Original Assignee
Lucent Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Lucent Technologies Inc filed Critical Lucent Technologies Inc
Assigned to LUCENT TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BURGES, CHRISTOPHER JOHN
Priority to US08/883,193 (US6134344A)
Priority to CA002238164A (CA2238164A1)
Priority to JP10169787A (JPH1173406A)
Priority to EP98304770A (EP0887761A3)
Publication of US6134344A
Application granted
Assigned to THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT. CONDITIONAL ASSIGNMENT OF AND SECURITY INTEREST IN PATENT RIGHTS. Assignors: LUCENT TECHNOLOGIES INC. (DE CORPORATION)
Assigned to LUCENT TECHNOLOGIES INC. TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS. Assignors: JPMORGAN CHASE BANK, N.A. (FORMERLY KNOWN AS THE CHASE MANHATTAN BANK), AS ADMINISTRATIVE AGENT
Assigned to ALCATEL-LUCENT USA INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: LUCENT TECHNOLOGIES INC.

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines


Abstract

A method and apparatus is described for improving the efficiency of any machine that uses an algorithm that maps to a higher dimensional space in which a given set of vectors is used in a test phase. In particular, reduced set vectors are used. These reduced set vectors are different from the vectors in the set and are determined pursuant to an optimization approach other than the eigenvalue computation used for homogeneous quadratic kernels. An illustrative embodiment is described in the context of a support vector machine (SVM).

Description

FIELD OF THE INVENTION
This invention relates generally to universal learning machines, and, in particular, to support vector machines.
BACKGROUND OF THE INVENTION
A Support Vector Machine (SVM) is a universal learning machine whose decision surface is parameterized by a set of support vectors, and by a set of corresponding weights. An SVM is also characterized by a kernel function. Choice of the kernel determines whether the resulting SVM is a polynomial classifier, a two-layer neural network, a radial basis function machine, or some other learning machine. A decision rule for an SVM is a function of the corresponding kernel function and support vectors.
An SVM generally operates in two phases: a training phase and a testing phase. During the training phase, the set of support vectors is generated for use in the decision rule. During the testing phase, decisions are made using the particular decision rule. Unfortunately, in this latter phase, the complexity of computation for an SVM decision rule scales with the number of support vectors, NS, in the support vector set.
SUMMARY OF THE INVENTION
We have realized a method and apparatus for improving the efficiency of any machine that uses an algorithm that maps to a higher dimensional space in which a given set of vectors is used in a test phase. In particular, and in accordance with the principles of the invention, reduced set vectors are used. The number of reduced set vectors is smaller than the number of vectors in the set. These reduced set vectors are different from the vectors in the set and are determined pursuant to an optimization approach other than the eigenvalue computation used for homogeneous quadratic kernels.
In an embodiment of the invention, an SVM, for use in pattern recognition, utilizes reduced set vectors, which improves the efficiency of this SVM by a user-chosen factor. These reduced set vectors are determined pursuant to an unconstrained optimization approach.
In accordance with a feature of the invention, the selection of the reduced set vectors allows direct control of performance/complexity trade-offs.
In addition, the inventive concept is not specific to pattern recognition and is applicable to any problem where the Support Vector algorithm is used (e.g., regression estimation).
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow chart depicting the operation of a prior art SVM;
FIG. 2 is a general representation of the separation of training data into two classes with representative support vectors;
FIG. 3 shows an illustrative method for training an SVM system in accordance with the principles of the invention;
FIG. 4 shows an illustrative method for operating an SVM system in accordance with the principles of the invention; and
FIG. 5 shows a block diagram of a portion of a recognition system embodying the principles of the invention.
DETAILED DESCRIPTION
Before describing an illustrative embodiment of the invention, a brief background is provided on support vector machines, followed by a description of the inventive concept itself. Other than the inventive concept, it is assumed that the reader is familiar with mathematical notation used to generally represent kernel-based methods as known in the art. Also, the inventive concept is illustratively described in the context of pattern recognition. However, the inventive concept is applicable to any problem where the Support Vector algorithm is used (e.g., regression estimation).
In the description below, it should be noted that test data was used from two optical character recognition (OCR) data sets containing grey level images of the ten digits: a set of 7,291 training and 2,007 test patterns, which is referred to herein as the "postal set" (e.g., see L. Bottou, C. Cortes, H. Drucker, L. D. Jackel, Y. LeCun, U. A. Muller, E. Sackinger, P. Simard, and V. Vapnik, Comparison of Classifier Methods: A Case Study in Handwritten Digit Recognition, Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 2, IEEE Computer Society Press, Los Alamos, Calif., pp. 77-83, 1994; and Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel, Backpropagation Applied to Handwritten ZIP Code Recognition, Neural Computation, 1, 1989, pp. 541-551), and a set of 60,000 training and 10,000 test patterns from NIST Special Database 3 and NIST Test Data 1, which is referred to herein as the "NIST set" (e.g., see, R. A. Wilkinson, J. Geist, S. Janet, P. J. Grother, C. J. C. Burges, R. Creecy, R. Hammond, J. J. Hull, N. J. Larsen, T. P. Vogl and C. L. Wilson, The First Census Optical Character Recognition System Conference, U.S. Department of Commerce, NIST, August 1992). Postal images were 16×16 pixels and NIST images were 28×28 pixels.
BACKGROUND--SUPPORT VECTOR MACHINES
In the following, bold face is used for vector and matrix quantities, and light face for their components.
Consider a two-class classifier for which the decision rule takes the form

$$f(\mathbf{x}) = \Theta\left(\sum_{i=1}^{N_S} \alpha_i K(\mathbf{x}, \mathbf{s}_i) + b\right), \qquad (1)$$

where x, si ∈ R^d, αi, b ∈ R, and Θ is the step function; R^d is the d-dimensional Euclidean space and R is the real line; αi, si, NS and b are parameters and x is the vector to be classified. The decision rule for a large family of classifiers can be cast in this functional form: for example, K = (x·si)^p implements a polynomial classifier; K = exp(-∥x - si∥²/σ²) implements a radial basis function machine; and K = tanh(γ(x·si) + δ) implements a two-layer neural network (e.g., see V. Vapnik, Estimation of Dependencies Based on Empirical Data, Springer Verlag, 1982; V. Vapnik, The Nature of Statistical Learning Theory, Springer Verlag, 1995; Boser, B. E., Guyon, I. M., and Vapnik, V., A training algorithm for optimal margin classifiers, Fifth Annual Workshop on Computational Learning Theory, Pittsburgh ACM 144-152, 1992; and B. Scholkopf, C. J. C. Burges, and V. Vapnik, Extracting Support Data for a Given Task, Proceedings of the First International Conference on Knowledge Discovery and Data Mining, AAAI Press, Menlo Park, Calif., 1995).
The support vector algorithm is a principled method for training any learning machine whose decision rule takes the form of Equation (1): the only condition required is that the kernel K satisfy a general positivity constraint (e.g., see The Nature of Statistical Learning Theory, and A training algorithm for optimal margin classifiers, cited above). In contrast to other techniques, the SVM training process determines the entire parameter set {αi, si, Ns and b}; the resulting si, i=1, . . . , Ns are a subset of the training set and are called support vectors.
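To make the form of Equation (1) concrete, the following minimal NumPy sketch evaluates the kernel sum over the support vectors and applies the step function for two of the kernel choices listed above. The helper names and default parameter values (p, sigma) are illustrative assumptions, not part of the patent.

```python
import numpy as np

def polynomial_kernel(x, s, p=3):
    # K(x, s) = (x . s)^p, a polynomial classifier kernel
    return (x @ s) ** p

def rbf_kernel(x, s, sigma=1.0):
    # K(x, s) = exp(-||x - s||^2 / sigma^2), a radial basis function kernel
    return np.exp(-np.linalg.norm(x - s) ** 2 / sigma ** 2)

def decision(x, support_vectors, alphas, b, kernel):
    """Evaluate Theta(sum_i alpha_i K(x, s_i) + b), Equation (1)."""
    total = sum(a * kernel(x, s) for a, s in zip(alphas, support_vectors)) + b
    return 1 if total > 0 else -1   # step function mapped to class labels {+1, -1}

# example usage: decision(x, S, alphas, b, rbf_kernel)
```

Swapping the kernel function swaps the resulting machine (polynomial classifier, radial basis function machine, and so on) without changing the decision-rule code itself.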
Support Vector Machines have a number of striking properties. The training procedure amounts to solving a constrained quadratic optimization problem, and the solution found is thus guaranteed to be the unique global minimum of the objective function. SVMs can be used to directly implement Structural Risk Minimization, in which the capacity of the learning machine can be controlled so as to minimize a bound on the generalization error (e.g., see The Nature of Statistical Learning Theory, and Extracting Support Data for a Given Task, cited above). A support vector decision surface is actually a linear separating hyperplane in a high dimensional space; similarly, SVMs can be used to construct a regression, which is linear in some high dimensional space (e.g., see The Nature of Statistical Learning Theory, cited above).
Support Vector Learning Machines have been successfully applied to pattern recognition problems such as optical character recognition (OCR) (e.g., see The Nature of Statistical Learning Theory, and Extracting Support Data for a Given Task, cited above, and C. Cortes and V. Vapnik, Support Vector Networks, Machine Learning, Vol 20, pp 1-25, 1995), and object recognition.
FIG. 1 is a flow chart depicting the operation of a prior art SVM. This operation comprises two phases: a training phase and a testing phase. In the training phase, the SVM receives elements of a training set with pre-assigned classes in step 52. In step 54, the input data vectors from the training set are transformed into a multi-dimensional space. In step 56, parameters (i.e., support vectors and associated weights) are determined for an optimal multi-dimensional hyperplane.
FIG. 2 shows an example where the training data elements are separated into two classes, one class represented by circles and the other class represented by boxes. This is typical of a 2-class pattern recognition problem: for example, an SVM which is trained to separate patterns of "cars" from those patterns that are "not cars." An optimal hyperplane is the linear decision function with maximal margin between the vectors of the two classes. That is, the optimal hyperplane is the unique decision surface which separates the training data with a maximal margin. As illustrated in FIG. 2, the optimal hyperplane is defined by the area where the separation between the two classes is maximum. As observed in FIG. 2, to construct an optimal hyperplane, one only has to take into account the small subset of the training data elements which determines this maximal margin. This subset of training elements that determines the parameters of an optimal hyperplane is known as the set of support vectors. In FIG. 2, the support vectors are indicated by shading.
The optimal hyperplane parameters are represented as linear combinations of the mapped support vectors in the high dimensional space. The SVM algorithm ensures that errors on a set of vectors are minimized by assigning weights to all of the support vectors. These weights are used in computing the decision surface in terms of the support vectors. The algorithm also allows for these weights to adapt in order to minimize the error rate on the training data belonging to a particular problem. These weights are calculated during the training phase of the SVM.
Constructing an optimal hyperplane therefore becomes a constrained quadratic optimization programming problem determined by the elements of the training set and functions determining the dot products in the mapped space. The solution to the optimization problem is found using conventional intermediate optimization techniques.
Typically, the optimal hyperplane involves separating the training data without any errors. However, in some cases, training data cannot be separated without errors. In these cases, the SVM attempts to separate the training data with a minimal number of errors and separates the rest of the elements with maximal margin. These hyperplanes are generally known as soft margin hyperplanes.
In the testing phase, the SVM receives elements of a testing set to be classified in step 62. The SVM then transforms the input data vectors of the testing set by mapping them into a multi-dimensional space using support vectors as parameters in the Kernel (step 64). The mapping function is determined by the choice of a kernel which is preloaded in the SVM. The mapping involves taking a single vector and transforming it to a high-dimensional feature space so that a linear decision function can be created in this high dimensional feature space. Although the flow chart of FIG. 1 shows implicit mapping, this mapping may be performed explicitly as well. In step 66, the SVM generates a classification signal from the decision surface to indicate the membership status of each input data vector. The final result is the creation of an output classification signal, e.g., as illustrated in FIG. 2, a (+1) for a circle and an (-1) for a box.
Unfortunately, the complexity of the computation for Equation (1) scales with the number of support vectors NS. The expectation of the number of support vectors is bounded below by (l-1)E(P), where P is the probability of error on a test vector using a given SVM trained on l training samples, and E[P] is the expectation of P over all choices of the l samples (e.g., see The Nature of Statistical Learning Theory, cited above). Thus NS can be expected to approximately scale with l. For practical pattern recognition problems, this results in a machine which is considerably slower in test phase than other systems with similar generalization performance (e.g., see Comparison of Classifier Methods: A Case Study in Handwritten Digit Recognition, cited above; and Y. LeCun, L. Jackel, L. Bottou, A. Brunot, C. Cortes, J. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, and V. Vapnik, Comparison of Learning Algorithms for Handwritten Digit Recognition, International Conference on Artificial Neural Networks, Ed. F. Fogelman, P. Gallinari, pp. 53-60, 1995).
Reduced Set Vectors
Therefore, and in accordance with the principles of the invention, we present a method and apparatus to approximate the SVM decision rule with a much smaller number of reduced set vectors. The reduced set vectors have the following properties:
They appear in the approximate SVM decision rule in the same way that the support vectors appear in the full SVM decision rule;
They are not support vectors; they do not necessarily lie on the separating margin, and unlike support vectors, they are not training samples;
They are computed for a given, trained SVM;
The number of reduced set vectors (and hence the speed of the resulting SVM in test phase) is chosen a priori;
The reduced set method is applicable wherever the support vector method is used (for example, regression estimation).
The Reduced Set
Let the training data be elements x ∈ L, where L (for "low dimensional") is defined to be the dL-dimensional Euclidean space R^{d_L}. An SVM performs an implicit mapping Φ: x → x̄, x̄ ∈ H (for "high dimensional"), where similarly H = R^{d_H}, with dH possibly infinite. In the following, vectors in H will be denoted with a bar. The mapping Φ is determined by the choice of kernel K. In fact, for any K which satisfies Mercer's positivity constraint (e.g., see, The Nature of Statistical Learning Theory, and A training algorithm for optimal margin classifiers, cited above), there exists a pair {Φ, H} for which K(xi, xj) = x̄i · x̄j. Thus in H, the SVM decision rule is simply a linear separating hyperplane (as noted above). The mapping Φ is usually not explicitly computed, and the dimension dH of H is usually large: for example, for the homogeneous map K(xi, xj) = (xi · xj)^p,

$$d_H = \binom{p + d_L - 1}{p},$$

the number of ways of choosing p objects from p + dL - 1 objects; thus for degree 4 polynomials and for dL = 256, dH is approximately 180 million.
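As a quick check of the quoted figure, the binomial count above can be evaluated directly for the stated example (degree p = 4, dL = 256):

$$d_H = \binom{259}{4} = \frac{259 \cdot 258 \cdot 257 \cdot 256}{4!} = 183{,}181{,}376 \approx 1.8 \times 10^8,$$

i.e., roughly 180 million, as stated.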
The basic SVM pattern recognition algorithm solves a two-class problem (e.g., see Estimation of Dependencies Based on Empirical Data, The Nature of Statistical Learning Theory, A training algorithm for optimal margin classifiers, cited above). Given training data xi ∈ L and corresponding class labels yi ∈ {-1, +1}, the SVM algorithm constructs a decision surface Ψ ∈ H which separates the x̄i into two classes (i = 1, . . . , l):

$$\bar{\Psi} \cdot \bar{\mathbf{x}}_i + b \geq k_0 - \xi_i, \quad y_i = +1, \qquad (2)$$

$$\bar{\Psi} \cdot \bar{\mathbf{x}}_i + b \leq k_1 + \xi_i, \quad y_i = -1, \qquad (3)$$

where the ξi are positive slack variables, introduced to handle the non-separable case (e.g., see Support Vector Networks, cited above). In the separable case, the SVM algorithm constructs that separating hyperplane for which the margin between the positive and negative examples in H is maximized. A test vector x ∈ L is then assigned a class label {+1, -1} depending on whether Ψ·Φ(x) + b is greater or less than (k0 + k1)/2. A support vector s ∈ L is defined as any training sample for which one of the equations (2) or (3) is an equality. (The support vectors are named s to distinguish them from the rest of the training data.) Ψ is then given by

$$\bar{\Psi} = \sum_{a=1}^{N_S} \alpha_a y_a \bar{\mathbf{s}}_a, \qquad (4)$$

where αa ≥ 0 are the weights, determined during training, ya ∈ {+1, -1} are the class labels of the sa, and NS is the number of support vectors. Thus in order to classify a test point x one computes

$$\sum_{a=1}^{N_S} \alpha_a y_a K(\mathbf{s}_a, \mathbf{x}) + b. \qquad (5)$$
However, and in accordance with the inventive concept, consider now a set za ∈ L, a = 1, . . . , NZ, and corresponding weights γa ∈ R for which

$$\bar{\Psi}' = \sum_{a=1}^{N_Z} \gamma_a \bar{\mathbf{z}}_a \qquad (6)$$

minimizes (for fixed NZ) the distance measure

$$\rho = \|\bar{\Psi} - \bar{\Psi}'\|. \qquad (7)$$

As defined herein, the {γa, za}, a = 1, . . . , NZ, are called the reduced set. To classify a test point x, the expansion in Equation (5) is replaced by the approximation

$$\sum_{a=1}^{N_Z} \gamma_a K(\mathbf{z}_a, \mathbf{x}) + b. \qquad (8)$$
The goal is then to choose the smallest NZ <<NS, and corresponding reduced set, such that any resulting loss in generalization performance remains acceptable. Clearly, by allowing NZ =NS, ρ can be made zero; there are non-trivial cases where NZ <NS, and ρ=0 (described below). In those cases the reduced set leads to a reduction in the decision rule complexity with no loss in generalization performance. If for each NZ one computes the corresponding reduced set, ρ may be viewed as a monotonic decreasing function of NZ, and the generalization performance also becomes a function of NZ. In this description, only empirical results are provided regarding the dependence of the generalization performance on NZ.
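Although Ψ and Ψ' live in the high dimensional space H, the distance measure of Equation (7) can be evaluated entirely through kernel evaluations, since ∥Ψ - Ψ'∥² expands into three kernel sums. The following NumPy sketch computes ρ² this way; the function and argument names are illustrative assumptions, not the patent's own code.

```python
import numpy as np

def rho_squared(S, alpha_y, Z, gamma, kernel):
    """rho^2 = ||Psi - Psi'||^2 expressed purely via kernel evaluations.

    S: (N_S, d_L) support vectors; alpha_y: the products alpha_a * y_a;
    Z: (N_Z, d_L) reduced set vectors; gamma: their weights."""
    K_ss = np.array([[kernel(s1, s2) for s2 in S] for s1 in S])
    K_sz = np.array([[kernel(s, z) for z in Z] for s in S])
    K_zz = np.array([[kernel(z1, z2) for z2 in Z] for z1 in Z])
    return (alpha_y @ K_ss @ alpha_y
            - 2.0 * alpha_y @ K_sz @ gamma
            + gamma @ K_zz @ gamma)
```

Because only kernel values appear, the same routine applies to any kernel satisfying Mercer's condition, without ever forming the high dimensional images of the vectors.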
The following should be noted about the mapping Φ. The image of Φ will not in general be a linear space. Φ will also in general not be surjective, and may not be one-to-one (for example, when K is a homogeneous polynomial of even degree). Further, Φ can map linearly dependent vectors in L onto linearly independent vectors in H (for example, when K is an inhomogeneous polynomial). In general one cannot scale the coefficients γa to unity by scaling za, even when K is a homogeneous polynomial (for example, if K is homogeneous of even degree, the γa can be scaled to {+1,-1}, but not to unity).
Exact Solutions
In this Section, the problem of computing the minimum of ρ analytically is considered. A simple, but non-trivial, case is first described.
Homogeneous Quadratic Polynomials
For homogeneous degree two polynomials, choosing a normalization of one:
$$K(\mathbf{x}_i, \mathbf{x}_j) = (\mathbf{x}_i \cdot \mathbf{x}_j)^2. \qquad (9)$$

To simplify the exposition, the first order approximation, NZ = 1, is computed. Introducing the symmetric tensor

$$S_{\mu\nu} \equiv \sum_{a=1}^{N_S} \alpha_a y_a s_{a\mu} s_{a\nu}, \qquad (10)$$

it can be found that ρ = ∥Ψ - γz̄∥ is minimized for {γ, z} satisfying

$$S_{\mu\nu} z_\nu = \gamma \|\mathbf{z}\|^2 z_\mu \qquad (11)$$

(repeated indices are assumed summed). With this choice of {γ, z}, ρ² becomes

$$\rho^2 = S_{\mu\nu} S^{\mu\nu} - \gamma^2 \|\mathbf{z}\|^4. \qquad (12)$$

The largest drop in ρ is thus achieved when {γ, z} is chosen such that z is that eigenvector of S whose eigenvalue λ = γ∥z∥² has the largest absolute size. Note that γ can be chosen so that γ = sign(λ), and z scaled so that ∥z∥² = |λ|.

Extending to order NZ, it can similarly be shown that the zi in the set {γi, zi} that minimize

$$\rho = \left\| \bar{\Psi} - \sum_{i=1}^{N_Z} \gamma_i \bar{\mathbf{z}}_i \right\| \qquad (13)$$

are eigenvectors of S, each with eigenvalue γi∥zi∥². This gives

$$\rho^2 = \mathrm{trace}(S^2) - \sum_{i=1}^{N_Z} \gamma_i^2 \|\mathbf{z}_i\|^4, \qquad (14)$$

and the drop in ρ is maximized if the za are chosen to be the first NZ eigenvectors of S, where the eigenvectors are ordered by absolute size of their eigenvalues. Note that, since trace(S²) is the sum of the squared eigenvalues of S, by choosing NZ = dL (the dimension of the data) the approximation becomes exact, i.e., ρ = 0. Since the number of support vectors NS is often larger than dL, this shows that the size of the reduced set can be smaller than the number of support vectors, with no loss in generalization performance.
In the general case, in order to compute the reduced set, ρ must be minimized over all {γa, za }, a=1, . . . , NZ simultaneously. It is convenient to consider an incremental approach in which on the ith step, {γj, zj }, j<i are held fixed while {γi, zi } is computed. In the case of quadratic polynomials, the series of minima generated by the incremental approach also generates a minimum for the full problem. This result is particular to second degree polynomials and is a consequence of the fact that the zi are orthogonal (or can be so chosen).
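A compact sketch of the exact construction for the homogeneous quadratic kernel follows: it forms the tensor S of Equation (10), takes its eigendecomposition, and scales the leading eigenvectors as described above (γa = sign(λa), ∥za∥² = |λa|). The variable names are illustrative; this is an outline of the eigenvalue computation discussed above, not the patent's own code.

```python
import numpy as np

def quadratic_reduced_set(S_vec, alpha_y, n_z):
    """S_vec: (N_S, d_L) support vectors; alpha_y: alpha_a * y_a; n_z: reduced set size."""
    # Symmetric tensor S_{mu nu} = sum_a alpha_a y_a s_{a mu} s_{a nu}   (Eq. 10)
    S = (S_vec.T * alpha_y) @ S_vec
    eigvals, eigvecs = np.linalg.eigh(S)           # S is symmetric
    order = np.argsort(-np.abs(eigvals))[:n_z]     # largest |eigenvalue| first
    gammas = np.sign(eigvals[order])               # gamma_a = sign(lambda_a)
    # scale each unit eigenvector so that ||z_a||^2 = |lambda_a|
    Z = (eigvecs[:, order] * np.sqrt(np.abs(eigvals[order]))).T
    return gammas, Z
```

Choosing n_z equal to dL reproduces Ψ exactly (ρ = 0), consistent with the trace argument above.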
Table 1, below, shows the reduced set size NZ necessary to attain a number of errors EZ on the test set, where EZ differs from the number of errors ES found using the full set of support vectors by at most one error, for a quadratic polynomial SVM trained on the postal set. Clearly, in the quadratic case, the reduced set can offer a significant reduction in complexity with little loss in accuracy. Note also that many digits have numbers of support vectors larger than dL =256, presenting in this case the opportunity for a speed up with no loss in accuracy.
              TABLE 1
______________________________________
           Support Vectors      Reduced Set
Digit      N_S        E_S       N_Z        E_Z
______________________________________
0          292        15        10         16
1           95         9         6          9
2          415        28        22         29
3          403        26        14         27
4          375        35        14         34
5          421        26        18         27
6          261        13        12         14
7          228        18        10         19
8          446        33        24         33
9          330        20        20         21
______________________________________
General Kernels
To apply the reduced set method to an arbitrary support vector machine, the above analysis must be extended for a general kernel. For example, for the homogeneous polynomial K(x1, x2) = N(x1 · x2)^n, setting ∂ρ/∂z_{1μ_1} = 0 to find the first pair {γ1, z1} in the incremental approach gives an equation analogous to Equation (11):

$$S_{\mu_1\mu_2\cdots\mu_n}\, z_{1\mu_2} z_{1\mu_3} \cdots z_{1\mu_n} = \gamma_1 \|\mathbf{z}_1\|^{2n-2}\, z_{1\mu_1}, \qquad (15)$$

where

$$S_{\mu_1\mu_2\cdots\mu_n} \equiv \sum_{a=1}^{N_S} \alpha_a y_a\, s_{a\mu_1} s_{a\mu_2} \cdots s_{a\mu_n}. \qquad (16)$$

In this case, varying ρ with respect to γ gives no new conditions. Having solved Equation (15) for the first order solution {γ1, z1}, ρ² becomes

$$\rho^2 = S_{\mu_1\mu_2\cdots\mu_n} S^{\mu_1\mu_2\cdots\mu_n} - \gamma_1^2 \|\mathbf{z}_1\|^{2n}. \qquad (17)$$

One can then define

$$\bar{S}_{\mu_1\mu_2\cdots\mu_n} \equiv S_{\mu_1\mu_2\cdots\mu_n} - \gamma_1\, z_{1\mu_1} z_{1\mu_2} \cdots z_{1\mu_n}, \qquad (18)$$

in terms of which the incremental equation for the second order solution z2 takes the form of Equation (15), with S, z1 and γ1 replaced by S̄, z2 and γ2, respectively. (Note that for polynomials of degree greater than 2, the za will not in general be orthogonal.) However, these are only the incremental solutions: one still needs to solve the coupled equations where all {γa, za} are allowed to vary simultaneously. Moreover, these equations will have multiple solutions, most of which will lead to local minima in ρ. Furthermore, other choices of K will lead to other fixed point equations. While solutions to Equation (15) could be found by iterating (i.e., by starting with arbitrary z, computing a new z using Equation (15), and repeating), the method described in the next Section proves more flexible and powerful.
Unconstrained Optimization Approach
Provided the kernel K has first derivatives defined, the gradients of the objective function F ≡ ρ²/2 with respect to the unknowns {γi, zi} can be computed. For example, assuming that K(sm, sn) is a function of the scalar sm · sn:

$$\frac{\partial F}{\partial \gamma_k} = \sum_{j=1}^{N_Z} \gamma_j K(\mathbf{z}_j, \mathbf{z}_k) - \sum_{a=1}^{N_S} \alpha_a y_a K(\mathbf{s}_a, \mathbf{z}_k), \qquad (19)$$

$$\frac{\partial F}{\partial \mathbf{z}_k} = \gamma_k \left( \sum_{j=1}^{N_Z} \gamma_j K'(\mathbf{z}_j \cdot \mathbf{z}_k)\, \mathbf{z}_j - \sum_{a=1}^{N_S} \alpha_a y_a K'(\mathbf{s}_a \cdot \mathbf{z}_k)\, \mathbf{s}_a \right), \qquad (20)$$

where K'(u) denotes the derivative of K with respect to its scalar argument u.
Therefore, and in accordance with the principles of the invention, a (possibly local) minimum can then be found using unconstrained optimization techniques.
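The following is a vectorized sketch of these gradients, assuming the kernel depends only on the dot product so that K and its derivative K' can be applied elementwise to a matrix of dot products (for example, K(u) = u^n and K'(u) = n·u^(n-1) for the homogeneous polynomial). The helper names are assumptions made for illustration.

```python
import numpy as np

def gradients(S_vec, alpha_y, Z, gamma, K, Kp):
    """Return (dF/dgamma, dF/dZ) for F = ||Psi - Psi'||^2 / 2."""
    dot_sz = S_vec @ Z.T          # (N_S, N_Z) dot products s_a . z_k
    dot_zz = Z @ Z.T              # (N_Z, N_Z) dot products z_j . z_k
    # Eq. (19): dF/dgamma_k = sum_j gamma_j K(z_j.z_k) - sum_a alpha_a y_a K(s_a.z_k)
    grad_gamma = K(dot_zz) @ gamma - K(dot_sz).T @ alpha_y
    # Eq. (20): dF/dz_k = gamma_k [ sum_j gamma_j K'(z_j.z_k) z_j
    #                               - sum_a alpha_a y_a K'(s_a.z_k) s_a ]
    grad_Z = gamma[:, None] * ((Kp(dot_zz) * gamma) @ Z
                               - (Kp(dot_sz) * alpha_y[:, None]).T @ S_vec)
    return grad_gamma, grad_Z
```

These gradients, together with an objective such as the rho_squared sketch above, are all that a first order unconstrained optimizer needs.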
The Algorithm
First, the desired order of approximation, Nz, is chosen. Let Xi ≡ {γi, zi}. A two-phase approach is used. In phase 1 (described below), the Xi are computed incrementally, keeping all zj, j<i, fixed.
In phase 2 (described below), all Xi are allowed to vary.
It should be noted that the gradient in Equation (20) is zero if γk is zero. This fact can lead to severe numerical instabilities. In order to circumvent this problem, phase 1 relies on a simple "level crossing" theorem. The algorithm is as follows. First, γi is initialized to +1 or -1; zi is initialized with random values. zi is then allowed to vary, while keeping γi fixed. The optimal value for γi, given that zi and the Xj, j<i, are fixed, is then computed analytically. F is then minimized with respect to both zi and γi simultaneously. Finally, the optimal γj for all j ≦ i are computed analytically, and are given by Γ = Z⁻¹Δ, where the vectors Γ, Δ and the matrix Z are given by (see Equation (19)):

$$\Gamma_j \equiv \gamma_j, \qquad (21)$$

$$\Delta_j \equiv \sum_{a=1}^{N_S} \alpha_a y_a K(\mathbf{s}_a, \mathbf{z}_j), \qquad (22)$$

$$Z_{jk} \equiv K(\mathbf{z}_j, \mathbf{z}_k). \qquad (23)$$
Since Z is positive definite and symmetric, it can be inverted efficiently using the well-known Choleski decomposition.
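A sketch of the analytic weight update of Equations (21)-(23), using a Cholesky factorization of the matrix Z as suggested above; the helper names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def optimal_gamma(S_vec, alpha_y, Z_vec, kernel):
    # Delta_j = sum_a alpha_a y_a K(s_a, z_j)                     (Eq. 22)
    Delta = np.array([sum(ay * kernel(s, z) for ay, s in zip(alpha_y, S_vec))
                      for z in Z_vec])
    # Z_jk = K(z_j, z_k)                                          (Eq. 23)
    Zmat = np.array([[kernel(zj, zk) for zk in Z_vec] for zj in Z_vec])
    # Gamma = Z^{-1} Delta (Eq. 21), solved via Cholesky since Z is SPD
    return cho_solve(cho_factor(Zmat), Delta)
```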
Thus, the first phase of the algorithm proceeds as follows:
[1] choose γ1 = +1 or -1 randomly, set z1 to a selection of random values;
[2] vary z1 to minimize F;
[3] compute the γ1, keeping z1 fixed, that maximally further reduces F;
[4] allow z1, γ1 to vary together to further reduce F;
[5] repeat steps [1] through [4] T times keeping the best answer;
[6] fix z1, γ1, choose γ2 =+1 or -1 randomly, set z2 to a selection of random values;
[7] vary z2 to minimize F;
[8] then fixing z2 (and z1, γ1) compute the optimal γ2 that maximally further reduces F;
[9] then let {z2, γ2 } vary together, to further reduce F;
[10] repeat steps [6] to [9] T times, keeping the best answer; and
[11] finally, fixing z1, z2, compute the optimal γ1, γ2 (as shown above in equations (21)-(23)) that further reduces F.
This procedure is then iterated with {z3, γ3} and {z4, γ4}, and so on up to {zNz, γNz}.
Numerical instabilities are avoided by preventing γi from approaching zero. The above algorithm ensures this automatically: if the first step, in which zi is varied while γi is kept fixed, results in a decrease in the objective function F, then when γi is subsequently allowed to vary, it cannot pass through zero, because doing so would require an increase in F (since the contribution of {zi, γi } to F would then be zero).
Note that each computation of a given {zi, γi } pair is repeated in phase 1 several (T) times, with different initial values for the Xi. T is determined heuristically from the number M of different minima in F found. For the above-mentioned data sets, M was usually 2 or 3, and T was chosen as T=10.
In phase 2, all vectors Xi found in phase 1 are concatenated into a single vector, and the unconstrained minimization process then applied again, allowing all parameters to vary. It should be noted that phase 2 often results in roughly a factor of two further reduction in the objective function F.
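The two-phase procedure can be outlined in code as follows. This sketch simplifies phase 1 by letting a general-purpose minimizer vary γi and zi together within each restart (rather than reproducing the fix-γ/vary-z sub-steps and the analytic γ updates of steps [1]-[11]); it assumes an objective(pairs) callable that returns F = ρ²/2 for a list of (γ, z) pairs, for instance built from the kernel expansion of ρ² given earlier. All names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def build_reduced_set(objective, n_z, d_l, T=10, rng=np.random.default_rng(0)):
    pairs = []                                    # list of (gamma_i, z_i)
    for _ in range(n_z):                          # phase 1: grow one pair at a time
        best = None
        for _ in range(T):                        # T random restarts per pair
            g0, z0 = rng.choice([-1.0, 1.0]), rng.normal(size=d_l)
            x0 = np.concatenate([[g0], z0])
            res = minimize(lambda x: objective(pairs + [(x[0], x[1:])]), x0)
            if best is None or res.fun < best.fun:
                best = res
        pairs.append((best.x[0], best.x[1:]))
    # phase 2: let every gamma_i and z_i vary together
    x0 = np.concatenate([np.concatenate([[g], z]) for g, z in pairs])
    def unpack(x):
        return [(x[i], x[i + 1:i + 1 + d_l]) for i in range(0, len(x), d_l + 1)]
    res = minimize(lambda x: objective(unpack(x)), x0)
    return unpack(res.x)
```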
In accordance with the principles of the invention, the following first order unconstrained optimization method was used for both phases. The search direction is found using conjugate gradients. Bracketing points x1, x2 and x3 are found along the search direction such that F(x1)>F(x2)<F(x3). The bracket is then balanced (for balancing techniques, see, e.g., W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes in C, Second Edition, Cambridge University Press, 1992). The minimum of the quadratic fit through these three points is then used as the starting point for the next iteration. The conjugate gradient process is restarted after a fixed, chosen number of iterations, and the whole process stops when the rate of decrease of F falls below a threshold. It should be noted that this general approach gave the same results as the analytic approach when applied to the case of the quadratic polynomial kernel, described above.
Experiments
The above approach was applied to the SVM that gave the best performance on the postal set, which was a degree 3 inhomogeneous polynomial machine (for the latter see, e.g., The Nature of Statistical Learning Theory, cited above). The order of approximation, Nz, was chosen to give a factor of ten speed up in test phase for each two-class classifier. The results are given in Table 2 (shown below). The reduced set method achieved the speed up with essentially no loss in accuracy. Using the ten classifiers together as a ten-class classifier (for the latter, see, e.g., The Nature of Statistical Learning Theory, and Support Vector Networks, cited above) gave 4.2% error using the full support set, as opposed to 4.3% using the reduced set. Note that for the combined case, the reduced set gives only a factor of six speed up, since different two class classifiers have some support vectors in common, allowing the possibility of caching. To address the question as to whether these techniques can be scaled up to larger problems, the study was repeated for a two-class classifier separating digit 0 from all other digits for the NIST set (60,000 training, 10,000 test patterns). This classifier was also chosen to be that which gave best accuracy using the full support set: a degree 4 polynomial. The full set of 1,273 support vectors gave 19 test errors, while a reduced set of size 127 gave 20 test errors.
              TABLE 2
______________________________________
           Support Vectors      Reduced Set
Digit      N_S        E_S       N_Z        E_Z
______________________________________
0          272        13        27         13
1          109         9        11         10
2          380        26        38         26
3          418        20        42         20
4          392        34        39         32
5          397        21        40         22
6          257        11        26         11
7          214        14        21         13
8          463        26        46         28
9          387        13        39         13
______________________________________
Totals:    3289       187       329        188
______________________________________
(Note that tests were also done on the full 10-digit NIST set, giving a factor of 50 speedup with 10% loss of accuracy; see C. J. C. Burges, B. Scholkopf, Improving the Accuracy and Speed of Support Vector Machines, in press, NIPS '96.)
Illustrative Embodiment
Turning now to FIG. 3, an illustrative flow chart embodying the principles of the invention is shown for use in a training phase of an SVM. Input training data is applied to an SVM (not shown) in step 100. The SVM is trained on this input data in step 105 and generates a set of support vectors in step 110. A number of reduced set vectors is selected in step 135. In step 115, the unconstrained optimization approach (described above) is used to generate reduced set vectors in step 120. These reduced set vectors are used to test a set of sample data (not shown) in step 125. Results from this test are evaluated in step 130. If the test results are acceptable (e.g., as to speed and accuracy), then the reduced set vectors are available for subsequent use. If the test results are not acceptable, then the process of determining the reduced set vectors is performed again. (In this latter case, it should be noted that the test results (e.g., in terms of speed and/or accuracy) could suggest a further reduction in the number of reduced set vectors.)
Once the reduced set vectors have been determined, they are available for use in a SVM. A method for using these reduced set vectors in a testing phase is shown in FIG. 4. In step 215, input data vectors from a test set are applied to the SVM. In step 220, the SVM transforms the input data vectors of the testing set by mapping them into a multi-dimensional space using reduced set vectors as parameters in the Kernel. In step 225, the SVM generates a classification signal from the decision surface to indicate the membership status of each input data vector.
As noted above, a number, m, of reduced set vectors are in the reduced set. These reduced set vectors are determined in the above-mentioned training phase illustrated in FIG. 3. If the speed and accuracy data suggest that fewer than m reduced set vectors can be used, an alternative approach can be taken that obviates the need to recalculate a new, and smaller, set of reduced set vectors. In particular, a number of reduced set vectors, x, are selected from the set of m reduced set vectors, where x<m. In this case, the number of reduced set vectors, x, to use is determined empirically, using, e.g., the speed and accuracy data generated in the training phase. However, there is no need to recalculate the values of these reduced set vectors.
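The text does not prescribe how the x vectors are selected from the m. One plausible rule, shown here purely as an assumption, is to keep the x reduced set vectors whose weights have the largest magnitude:

    import numpy as np

    def truncate_reduced_set(Z, beta, x):
        # Assumed selection rule: keep the x reduced set vectors with the
        # largest-magnitude weights; no re-optimization is performed.
        keep = np.argsort(-np.abs(beta))[:x]
        return Z[keep], beta[keep]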
An illustrative embodiment of the inventive concept is shown in FIG. 5 in the context of pattern recognition. Pattern recognition system 100 comprises processor 105 and recognizer 110, which further comprises data capture element 115 and SVM 120. Other than the inventive concept, the elements of FIG. 5 are well-known and will not be described in detail. For example, data capture element 115 provides input data for classification to SVM 120. One example of data capture element 115 is a scanner. In this context, the input data are pixel representations of an image (not shown). SVM 120 operates on the input data in accordance with the principles of the invention using reduced set vectors. During operation, or testing, SVM 120 provides a numerical result representing classification of the input data to processor 105 for subsequent processing. Processor 105 is representative of a stored-program-controlled processor such as a microprocessor with associated memory. Processor 105 additionally processes the output signals of recognizer 110, such as, e.g., in an automatic teller machine (ATM).
The system shown in FIG. 5 operates in two modes, a training mode and an operating (or test) mode. An illustration of the training mode is represented by the above-described method shown in FIG. 3. An illustration of the test mode is represented by the above-described method shown in FIG. 4.
The foregoing merely illustrates the principles of the invention and it will thus be appreciated that those skilled in the art will be able to devise numerous alternative arrangements which, although not explicitly described herein, embody the principles of the invention and are within its spirit and scope.
For example, the inventive concept is also applicable to kernel-based methods other than support vector machines, which can also be used for, but are not limited to, regression estimates, density estimation, etc.

Claims (3)

What is claimed is:
1. A method for using a support vector machine, the method comprising the steps of:
receiving input data signals; and
using the support vector machine operable on the input data signals for providing an output signal, wherein the support vector machine utilizes reduced set vectors, wherein the reduced set vectors were a priori determined during a training phase using an unconstrained optimization approach other than an eigenvalue computation used for homogeneous quadratic kernels wherein the training phase further comprises the steps of:
receiving elements of a training set;
generating a set of support vectors, the number of support vectors being NS;
selecting a number m of reduced set vectors, where m≦NS; and
generating the number m of reduced set vectors using the unconstrained optimization approach.
2. The method of claim 1 wherein the input data signals represent different patterns and the output signal represents a classification of the different patterns.
3. A method for using a support vector machine, the method comprising the steps of:
receiving input data signals; and
using the support vector machine operable on the input data signals for providing an output signal, wherein the support vector machine utilizes reduced set vectors, wherein the reduced set vectors were a priori determined during a training phase using an unconstrained optimization approach other than an eigenvalue computation used for homogeneous quadratic kernels wherein the training phase further comprises the steps of:
training the support vector machine for determining a number, NS, of support vectors; and
using the unconstrained optimization technique to determine the reduced set vectors, where a number of reduced set vectors is m, where m≦NS.
US08/883,193 1997-06-26 1997-06-26 Method and apparatus for improving the efficiency of support vector machines Expired - Lifetime US6134344A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US08/883,193 US6134344A (en) 1997-06-26 1997-06-26 Method and apparatus for improving the efficiency of support vector machines
CA002238164A CA2238164A1 (en) 1997-06-26 1998-05-21 Method and apparatus for improving the efficiency of support vector machines
JP10169787A JPH1173406A (en) 1997-06-26 1998-06-17 Method for using support vector machine
EP98304770A EP0887761A3 (en) 1997-06-26 1998-06-17 Method and apparatus for improving the efficiency of support vector machines

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08/883,193 US6134344A (en) 1997-06-26 1997-06-26 Method and apparatus for improving the efficiency of support vector machines

Publications (1)

Publication Number Publication Date
US6134344A true US6134344A (en) 2000-10-17

Family

ID=25382151

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/883,193 Expired - Lifetime US6134344A (en) 1997-06-26 1997-06-26 Method and apparatus for improving the efficiency of support vector machines

Country Status (4)

Country Link
US (1) US6134344A (en)
EP (1) EP0887761A3 (en)
JP (1) JPH1173406A (en)
CA (1) CA2238164A1 (en)

Cited By (72)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6303644B1 (en) * 1997-07-25 2001-10-16 Byk Gulden Lomberg Chemische Fabrik Gmbh Proton pump inhibitor in therapeutic combination with antibacterial substances
WO2001077855A1 (en) * 2000-04-11 2001-10-18 Telstra New Wave Pty Ltd A gradient based training method for a support vector machine
US6337927B1 (en) * 1999-06-04 2002-01-08 Hewlett-Packard Company Approximated invariant method for pattern detection
US6427141B1 (en) * 1998-05-01 2002-07-30 Biowulf Technologies, Llc Enhancing knowledge discovery using multiple support vector machines
US20030046277A1 (en) * 2001-04-04 2003-03-06 Peter Jackson System, method, and software for identifying historically related legal opinions
US20030093393A1 (en) * 2001-06-18 2003-05-15 Mangasarian Olvi L. Lagrangian support vector machine
US20030101161A1 (en) * 2001-11-28 2003-05-29 Bruce Ferguson System and method for historical database training of support vector machines
WO2003060822A1 (en) * 2002-01-08 2003-07-24 Pavilion Technologies, Inc. System and method for historical database training of non-linear models for use in electronic commerce
US20030172043A1 (en) * 1998-05-01 2003-09-11 Isabelle Guyon Methods of identifying patterns in biological systems and uses thereof
US6633829B2 (en) * 2001-07-26 2003-10-14 Northrop Grumman Corporation Mechanical rotation axis measurement system and method
US20030192765A1 (en) * 1998-12-02 2003-10-16 Mars Incorporated, A Delaware Corporation Classification method and apparatus
US6658395B1 (en) * 1998-05-01 2003-12-02 Biowulf Technologies, L.L.C. Enhancing knowledge discovery from multiple data sets using multiple support vector machines
US6671391B1 (en) * 2000-05-26 2003-12-30 Microsoft Corp. Pose-adaptive face detection system and process
US20040015462A1 (en) * 2002-07-19 2004-01-22 Lienhart Rainer W. Fast method for training and evaluating support vector machines with a large set of linear features
US20040013303A1 (en) * 2002-07-19 2004-01-22 Lienhart Rainer W. Facial classification of static images using support vector machines
US20040064201A1 (en) * 2002-09-26 2004-04-01 Aragones James Kenneth Methods and apparatus for reducing hyperplanes in a control space
US20040103105A1 (en) * 2002-06-13 2004-05-27 Cerisent Corporation Subtree-structured XML database
US6789069B1 (en) * 1998-05-01 2004-09-07 Biowulf Technologies Llc Method for enhancing knowledge discovered from biological data using a learning machine
US6804391B1 (en) * 2000-11-22 2004-10-12 Microsoft Corporation Pattern detection methods and systems, and face detection methods and systems
US6803933B1 (en) 2003-06-16 2004-10-12 Hewlett-Packard Development Company, L.P. Systems and methods for dot gain determination and dot gain based printing
US20040259764A1 (en) * 2002-10-22 2004-12-23 Stuart Tugendreich Reticulocyte depletion signatures
US20050049985A1 (en) * 2003-08-28 2005-03-03 Mangasarian Olvi L. Input feature and kernel selection for support vector machine classification
US20050066075A1 (en) * 2001-11-15 2005-03-24 Vojislav Kecman Method, apparatus and software for lossy data compression and function estimation
WO2005043450A1 (en) * 2003-10-31 2005-05-12 The University Of Queensland Improved support vector machine
US20050131847A1 (en) * 1998-05-01 2005-06-16 Jason Weston Pre-processed feature ranking for a support vector machine
US20050196035A1 (en) * 2004-03-03 2005-09-08 Trw Automotive U.S. Llc Method and apparatus for producing classifier training images
US20050197981A1 (en) * 2004-01-20 2005-09-08 Bingham Clifton W. Method for identifying unanticipated changes in multi-dimensional data sets
US20050216426A1 (en) * 2001-05-18 2005-09-29 Weston Jason Aaron E Methods for feature selection in a learning machine
US20060035250A1 (en) * 2004-06-10 2006-02-16 Georges Natsoulis Necessary and sufficient reagent sets for chemogenomic analysis
US7010167B1 (en) 2002-04-30 2006-03-07 The United States Of America As Represented By The National Security Agency Method of geometric linear discriminant analysis pattern recognition
US20060057066A1 (en) * 2004-07-19 2006-03-16 Georges Natsoulis Reagent sets and gene signatures for renal tubule injury
US20060112026A1 (en) * 2004-10-29 2006-05-25 Nec Laboratories America, Inc. Parallel support vector method and apparatus
AU2001248153B2 (en) * 2000-04-11 2006-08-17 Telstra Corporation Limited A gradient based training method for a support vector machine
US20060224539A1 (en) * 1998-05-01 2006-10-05 Hong Zhang Computer-aided image analysis
US20060248440A1 (en) * 1998-07-21 2006-11-02 Forrest Rhoads Systems, methods, and software for presenting legal case histories
US20070021918A1 (en) * 2004-04-26 2007-01-25 Georges Natsoulis Universal gene chip for high throughput chemogenomic analysis
US20070026406A1 (en) * 2003-08-13 2007-02-01 Iconix Pharmaceuticals, Inc. Apparatus and method for classifying multi-dimensional biological data
US20070077987A1 (en) * 2005-05-03 2007-04-05 Tangam Gaming Technology Inc. Gaming object recognition
US20070094170A1 (en) * 2005-09-28 2007-04-26 Nec Laboratories America, Inc. Spread Kernel Support Vector Machine
US20070136250A1 (en) * 2002-06-13 2007-06-14 Mark Logic Corporation XML Database Mixed Structural-Textual Classification System
US20070168327A1 (en) * 2002-06-13 2007-07-19 Mark Logic Corporation Parent-child query indexing for xml databases
US20070198653A1 (en) * 2005-12-30 2007-08-23 Kurt Jarnagin Systems and methods for remote computer-based analysis of user-provided chemogenomic data
US20080033899A1 (en) * 1998-05-01 2008-02-07 Stephen Barnhill Feature selection method using support vector machine classifier
US7396645B1 (en) 2002-12-17 2008-07-08 Entelos, Inc. Cholestasis signature
US20080177680A1 (en) * 2007-01-19 2008-07-24 Microsoft Corporation Resilient classification of data
US20080177684A1 (en) * 2007-01-19 2008-07-24 Microsoft Corporation Combining resilient classifiers
US20080215513A1 (en) * 2000-08-07 2008-09-04 Jason Aaron Edward Weston Methods for feature selection in a learning machine
US7422854B1 (en) 2002-12-20 2008-09-09 Entelos, Inc. Cholesterol reduction signature
US20080233576A1 (en) * 1998-05-01 2008-09-25 Jason Weston Method for feature selection in a support vector machine using feature ranking
US20090063115A1 (en) * 2007-08-31 2009-03-05 Zhao Lu Linear programming support vector regression with wavelet kernel
US20090087023A1 (en) * 2007-09-27 2009-04-02 Fatih M Porikli Method and System for Detecting and Tracking Objects in Images
US7519519B1 (en) 2002-12-20 2009-04-14 Entelos, Inc. Signature projection score
US7529756B1 (en) 1998-07-21 2009-05-05 West Services, Inc. System and method for processing formatted text documents in a database
US20090204555A1 (en) * 2008-02-07 2009-08-13 Nec Laboratories America, Inc. System and method using hidden information
US20100021885A1 (en) * 2006-09-18 2010-01-28 Mark Fielden Reagent sets and gene signatures for non-genotoxic hepatocarcinogenicity
US7778782B1 (en) 2002-12-17 2010-08-17 Entelos, Inc. Peroxisome proliferation activated receptor alpha (PPARα) signatures
US20100246997A1 (en) * 2009-03-30 2010-09-30 Porikli Fatih M Object Tracking With Regressing Particles
US20100272350A1 (en) * 2009-04-27 2010-10-28 Morris Lee Methods and apparatus to perform image classification based on pseudorandom features
US7840060B2 (en) 2006-06-12 2010-11-23 D&S Consultants, Inc. System and method for machine learning using a similarity inverse matrix
US20110078099A1 (en) * 2001-05-18 2011-03-31 Health Discovery Corporation Method for feature selection and for evaluating features identified as significant for classifying data
WO2011148366A1 (en) 2010-05-26 2011-12-01 Ramot At Tel-Aviv University Ltd. Method and system for correcting gaze offset
EP2442258A1 (en) * 2010-10-18 2012-04-18 Harman Becker Automotive Systems GmbH Method of generating a classification data base for classifying a traffic sign and device and method for classifying a traffic sign
US8311341B1 (en) 2006-11-29 2012-11-13 D & S Consultants, Inc. Enhanced method for comparing images using a pictorial edit distance
US20120314940A1 (en) * 2011-06-09 2012-12-13 Electronics And Telecommunications Research Institute Image recognition device and method of recognizing image thereof
US8738271B2 (en) 2011-12-16 2014-05-27 Toyota Motor Engineering & Manufacturing North America, Inc. Asymmetric wavelet kernel in support vector learning
US20140219554A1 (en) * 2013-02-06 2014-08-07 Kabushiki Kaisha Toshiba Pattern recognition apparatus, method thereof, and program product therefor
US9405959B2 (en) 2013-03-11 2016-08-02 The United States Of America, As Represented By The Secretary Of The Navy System and method for classification of objects from 3D reconstruction
US20160358100A1 (en) * 2015-06-05 2016-12-08 Intel Corporation Techniques for improving classification performance in supervised learning
US20160365096A1 (en) * 2014-03-28 2016-12-15 Intel Corporation Training classifiers using selected cohort sample subsets
US9600231B1 (en) * 2015-03-13 2017-03-21 Amazon Technologies, Inc. Model shrinking for embedded keyword spotting
CN111476709A (en) * 2020-04-09 2020-07-31 广州华多网络科技有限公司 Face image processing method and device and electronic equipment
US20210150338A1 (en) * 2019-11-20 2021-05-20 Abbyy Production Llc Identification of fields in documents with neural networks without templates

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU780050B2 (en) * 1999-05-25 2005-02-24 Health Discovery Corporation Enhancing knowledge discovery from multiple data sets using multiple support vector machines
US6633857B1 (en) * 1999-09-04 2003-10-14 Microsoft Corporation Relevance vector machine
IT1320956B1 (en) * 2000-03-24 2003-12-18 Univ Bologna METHOD, AND RELATED EQUIPMENT, FOR THE AUTOMATIC DETECTION OF MICROCALCIFICATIONS IN DIGITAL SIGNALS OF BREAST FABRIC.
JP4827285B2 (en) * 2000-09-04 2011-11-30 東京エレクトロン株式会社 Pattern recognition method, pattern recognition apparatus, and recording medium
DE60033535T2 (en) 2000-12-15 2007-10-25 Mei, Inc. Currency validator
DE60234571D1 (en) * 2001-01-23 2010-01-14 Health Discovery Corp COMPUTER-ASSISTED IMAGE ANALYSIS
US6591598B2 (en) 2001-10-01 2003-07-15 Macdon Industries Ltd. Crop harvesting header with cam controlled movement of the reel fingers
JP4034602B2 (en) * 2002-06-17 2008-01-16 富士通株式会社 Data classification device, active learning method of data classification device, and active learning program
US7648016B2 (en) 2002-06-19 2010-01-19 Mei, Inc. Currency validator
KR100708337B1 (en) * 2003-06-27 2007-04-17 주식회사 케이티 Apparatus and method for automatic video summarization using fuzzy one-class support vector machines
JP4859351B2 (en) * 2004-06-14 2012-01-25 財団法人電力中央研究所 Case database construction method, discrimination device learning method, data discrimination support device, data discrimination support program
JP4796356B2 (en) * 2005-01-13 2011-10-19 学校法人 中央大学 Method, program and apparatus for performing discriminant analysis
US7197487B2 (en) * 2005-03-16 2007-03-27 Lg Chem, Ltd. Apparatus and method for estimating battery state of charge
JP5404062B2 (en) * 2008-10-20 2014-01-29 住友ゴム工業株式会社 Tire pressure drop detection device and method, and tire pressure drop detection program
CN101504781B (en) * 2009-03-10 2011-02-09 广州广电运通金融电子股份有限公司 Valuable document recognition method and apparatus
CN103577690A (en) * 2013-10-29 2014-02-12 西安电子科技大学 Sparse nonparametric body area channel probability representation method

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4259661A (en) * 1978-09-01 1981-03-31 Burroughs Corporation Apparatus and method for recognizing a pattern
US4661913A (en) * 1984-09-11 1987-04-28 Becton, Dickinson And Company Apparatus and method for the detection and classification of articles using flow cytometry techniques
US5355088A (en) * 1991-04-16 1994-10-11 Schlumberger Technology Corporation Method and apparatus for determining parameters of a transition zone of a formation traversed by a wellbore and generating a more accurate output record medium
US5467457A (en) * 1991-05-02 1995-11-14 Mitsubishi Denki Kabushiki Kaisha Read only type semiconductor memory device including address coincidence detecting circuits assigned to specific address regions and method of operating the same
US5664067A (en) * 1992-06-19 1997-09-02 United Parcel Service Of America, Inc. Method and apparatus for training a neural network
US5546472A (en) * 1992-08-07 1996-08-13 Arch Development Corp. Feature guided method and apparatus for obtaining an image of an object
US5647058A (en) * 1993-05-24 1997-07-08 International Business Machines Corporation Method for high-dimensionality indexing in a multi-media database
US5794178A (en) * 1993-09-20 1998-08-11 Hnc Software, Inc. Visualization of information using graphical representations of context vector based relationships and attributes
US5577135A (en) * 1994-03-01 1996-11-19 Apple Computer, Inc. Handwriting signal processing front-end for handwriting recognizers
US5479523A (en) * 1994-03-16 1995-12-26 Eastman Kodak Company Constructing classification weights matrices for pattern recognition systems using reduced element feature subsets
US5872865A (en) * 1995-02-08 1999-02-16 Apple Computer, Inc. Method and system for automatic classification of video images

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
C. J. Burges "Simplified Support Vector Decision Rules" Machine Learning, Proceedings of the Thirteenth International Conference (ICML '96), Proceedings of the Thirteenth International Conference on Machine Learning, Bari, Italy, Jul. 3-6, 1996, pp. 71-77, XP002087853 1996, San Francisco, CA, USA, Morgan Kaufmann Publishers, USA *the whole document* particularly relevant if taken alone.
C. J. Burges Simplified Support Vector Decision Rules Machine Learning, Proceedings of the Thirteenth International Conference (ICML 96), Proceedings of the Thirteenth International Conference on Machine Learning, Bari, Italy, Jul. 3 6, 1996, pp. 71 77, XP002087853 1996, San Francisco, CA, USA, Morgan Kaufmann Publishers, USA *the whole document* particularly relevant if taken alone. *
C. J. C. Burges, "A tutorial on support vector machines for pattern recognition", Data Mining and Knowledge Discovery, 1998, Kluwer Academic Publishers, Netherlands, vol. 2, No. 2, pp. 121-167, XP))2087854, ISSN 1384-5810 *the whole document* theory or principle underlying the invention.
C.J.C. Burges, A tutorial on support vector machines for pattern recognition , Data Mining and Knowledge Discovery, 1998, Kluwer Academic Publishers, Netherlands, vol. 2, No. 2, pp. 121-167, XP002087854, ISSN 1384 5810 *the whole document* theory or principle underlying the invention. *

Cited By (134)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6303644B1 (en) * 1997-07-25 2001-10-16 Byk Gulden Lomberg Chemische Fabrik Gmbh Proton pump inhibitor in therapeutic combination with antibacterial substances
US6658395B1 (en) * 1998-05-01 2003-12-02 Biowulf Technologies, L.L.C. Enhancing knowledge discovery from multiple data sets using multiple support vector machines
US20050131847A1 (en) * 1998-05-01 2005-06-16 Jason Weston Pre-processed feature ranking for a support vector machine
US7542959B2 (en) 1998-05-01 2009-06-02 Health Discovery Corporation Feature selection method using support vector machine classifier
US20080033899A1 (en) * 1998-05-01 2008-02-07 Stephen Barnhill Feature selection method using support vector machine classifier
US7383237B2 (en) 1998-05-01 2008-06-03 Health Discovery Corporation Computer-aided image analysis
US7475048B2 (en) 1998-05-01 2009-01-06 Health Discovery Corporation Pre-processed feature ranking for a support vector machine
US20080233576A1 (en) * 1998-05-01 2008-09-25 Jason Weston Method for feature selection in a support vector machine using feature ranking
US6789069B1 (en) * 1998-05-01 2004-09-07 Biowulf Technologies Llc Method for enhancing knowledge discovered from biological data using a learning machine
US20030172043A1 (en) * 1998-05-01 2003-09-11 Isabelle Guyon Methods of identifying patterns in biological systems and uses thereof
US7117188B2 (en) 1998-05-01 2006-10-03 Health Discovery Corporation Methods of identifying patterns in biological systems and uses thereof
US6427141B1 (en) * 1998-05-01 2002-07-30 Biowulf Technologies, Llc Enhancing knowledge discovery using multiple support vector machines
US20060224539A1 (en) * 1998-05-01 2006-10-05 Hong Zhang Computer-aided image analysis
US7805388B2 (en) 1998-05-01 2010-09-28 Health Discovery Corporation Method for feature selection in a support vector machine using feature ranking
US8661066B2 (en) 1998-07-21 2014-02-25 West Service, Inc. Systems, methods, and software for presenting legal case histories
US8250118B2 (en) 1998-07-21 2012-08-21 West Services, Inc. Systems, methods, and software for presenting legal case histories
US8600974B2 (en) 1998-07-21 2013-12-03 West Services Inc. System and method for processing formatted text documents in a database
US20060248440A1 (en) * 1998-07-21 2006-11-02 Forrest Rhoads Systems, methods, and software for presenting legal case histories
US7778954B2 (en) 1998-07-21 2010-08-17 West Publishing Corporation Systems, methods, and software for presenting legal case histories
US7529756B1 (en) 1998-07-21 2009-05-05 West Services, Inc. System and method for processing formatted text documents in a database
US20100005388A1 (en) * 1998-07-21 2010-01-07 Bob Haschart System and method for processing formatted text documents in a database
US7073652B2 (en) * 1998-12-02 2006-07-11 Mars Incorporated Classification method and apparatus
US20030192765A1 (en) * 1998-12-02 2003-10-16 Mars Incorporated, A Delaware Corporation Classification method and apparatus
US6337927B1 (en) * 1999-06-04 2002-01-08 Hewlett-Packard Company Approximated invariant method for pattern detection
US20110106735A1 (en) * 1999-10-27 2011-05-05 Health Discovery Corporation Recursive feature elimination method using support vector machines
US20110119213A1 (en) * 1999-10-27 2011-05-19 Health Discovery Corporation Support vector machine - recursive feature elimination (svm-rfe)
US8095483B2 (en) 1999-10-27 2012-01-10 Health Discovery Corporation Support vector machine—recursive feature elimination (SVM-RFE)
US10402685B2 (en) 1999-10-27 2019-09-03 Health Discovery Corporation Recursive feature elimination method using support vector machines
US8005293B2 (en) 2000-04-11 2011-08-23 Telstra New Wave Pty Ltd Gradient based training method for a support vector machine
AU2001248153B2 (en) * 2000-04-11 2006-08-17 Telstra Corporation Limited A gradient based training method for a support vector machine
US20030158830A1 (en) * 2000-04-11 2003-08-21 Adam Kowalczyk Gradient based training method for a support vector machine
WO2001077855A1 (en) * 2000-04-11 2001-10-18 Telstra New Wave Pty Ltd A gradient based training method for a support vector machine
US6671391B1 (en) * 2000-05-26 2003-12-30 Microsoft Corp. Pose-adaptive face detection system and process
US20080215513A1 (en) * 2000-08-07 2008-09-04 Jason Aaron Edward Weston Methods for feature selection in a learning machine
US7624074B2 (en) 2000-08-07 2009-11-24 Health Discovery Corporation Methods for feature selection in a learning machine
US20050157933A1 (en) * 2000-11-22 2005-07-21 Microsoft Corporation Pattern detection using reduced set vectors
US20040213439A1 (en) * 2000-11-22 2004-10-28 Microsoft Corporation Pattern detection methods and systems and face detection methods and systems
US7391908B2 (en) * 2000-11-22 2008-06-24 Microsoft Corporation Pattern detection using reduced set vectors
US7236626B2 (en) * 2000-11-22 2007-06-26 Microsoft Corporation Pattern detection
US20050196048A1 (en) * 2000-11-22 2005-09-08 Microsoft Corporation Pattern detection
US7099504B2 (en) * 2000-11-22 2006-08-29 Microsoft Corporation Pattern detection methods and systems and face detection methods and systems
US6804391B1 (en) * 2000-11-22 2004-10-12 Microsoft Corporation Pattern detection methods and systems, and face detection methods and systems
US7593920B2 (en) * 2001-04-04 2009-09-22 West Services, Inc. System, method, and software for identifying historically related legal opinions
US20060206467A1 (en) * 2001-04-04 2006-09-14 Peter Jackson System, method, and software for identifying historically related legal opinions
US20030046277A1 (en) * 2001-04-04 2003-03-06 Peter Jackson System, method, and software for identifying historically related legal opinions
US7620626B2 (en) 2001-04-04 2009-11-17 West Services, Inc. System, method, and software for identifying historically related legal opinions
US20100125601A1 (en) * 2001-04-04 2010-05-20 Peter Jackson System, method, and software for identifying historically related legal cases
US7984053B2 (en) 2001-04-04 2011-07-19 West Services, Inc. System, method, and software for identifying historically related legal cases
US7970718B2 (en) 2001-05-18 2011-06-28 Health Discovery Corporation Method for feature selection and for evaluating features identified as significant for classifying data
US7318051B2 (en) 2001-05-18 2008-01-08 Health Discovery Corporation Methods for feature selection in a learning machine
US20050216426A1 (en) * 2001-05-18 2005-09-29 Weston Jason Aaron E Methods for feature selection in a learning machine
US20110078099A1 (en) * 2001-05-18 2011-03-31 Health Discovery Corporation Method for feature selection and for evaluating features identified as significant for classifying data
US20070005538A1 (en) * 2001-06-18 2007-01-04 Wisconsin Alumni Research Foundation Lagrangian support vector machine
US7395253B2 (en) * 2001-06-18 2008-07-01 Wisconsin Alumni Research Foundation Lagrangian support vector machine
US20030093393A1 (en) * 2001-06-18 2003-05-15 Mangasarian Olvi L. Lagrangian support vector machine
US6633829B2 (en) * 2001-07-26 2003-10-14 Northrop Grumman Corporation Mechanical rotation axis measurement system and method
US7469065B2 (en) 2001-11-15 2008-12-23 Auckland Uniservices Limited Method, apparatus and software for lossy data compression and function estimation
US20050066075A1 (en) * 2001-11-15 2005-03-24 Vojislav Kecman Method, apparatus and software for lossy data compression and function estimation
US6944616B2 (en) * 2001-11-28 2005-09-13 Pavilion Technologies, Inc. System and method for historical database training of support vector machines
US20030101161A1 (en) * 2001-11-28 2003-05-29 Bruce Ferguson System and method for historical database training of support vector machines
WO2003060822A1 (en) * 2002-01-08 2003-07-24 Pavilion Technologies, Inc. System and method for historical database training of non-linear models for use in electronic commerce
US7010167B1 (en) 2002-04-30 2006-03-07 The United States Of America As Represented By The National Security Agency Method of geometric linear discriminant analysis pattern recognition
US20070136250A1 (en) * 2002-06-13 2007-06-14 Mark Logic Corporation XML Database Mixed Structural-Textual Classification System
US20070168327A1 (en) * 2002-06-13 2007-07-19 Mark Logic Corporation Parent-child query indexing for xml databases
US20040103105A1 (en) * 2002-06-13 2004-05-27 Cerisent Corporation Subtree-structured XML database
US7962474B2 (en) 2002-06-13 2011-06-14 Marklogic Corporation Parent-child query indexing for XML databases
US7756858B2 (en) 2002-06-13 2010-07-13 Mark Logic Corporation Parent-child query indexing for xml databases
US20040015462A1 (en) * 2002-07-19 2004-01-22 Lienhart Rainer W. Fast method for training and evaluating support vector machines with a large set of linear features
US7174040B2 (en) * 2002-07-19 2007-02-06 Intel Corporation Fast method for training and evaluating support vector machines with a large set of linear features
US20040013303A1 (en) * 2002-07-19 2004-01-22 Lienhart Rainer W. Facial classification of static images using support vector machines
US7146050B2 (en) * 2002-07-19 2006-12-05 Intel Corporation Facial classification of static images using support vector machines
US6871105B2 (en) * 2002-09-26 2005-03-22 General Electric Company Methods and apparatus for reducing hyperplanes in a control space
US20040064201A1 (en) * 2002-09-26 2004-04-01 Aragones James Kenneth Methods and apparatus for reducing hyperplanes in a control space
US20040259764A1 (en) * 2002-10-22 2004-12-23 Stuart Tugendreich Reticulocyte depletion signatures
US7778782B1 (en) 2002-12-17 2010-08-17 Entelos, Inc. Peroxisome proliferation activated receptor alpha (PPARα) signatures
US7396645B1 (en) 2002-12-17 2008-07-08 Entelos, Inc. Cholestasis signature
US7519519B1 (en) 2002-12-20 2009-04-14 Entelos, Inc. Signature projection score
US7422854B1 (en) 2002-12-20 2008-09-09 Entelos, Inc. Cholesterol reduction signature
US6803933B1 (en) 2003-06-16 2004-10-12 Hewlett-Packard Development Company, L.P. Systems and methods for dot gain determination and dot gain based printing
US20070026406A1 (en) * 2003-08-13 2007-02-01 Iconix Pharmaceuticals, Inc. Apparatus and method for classifying multi-dimensional biological data
US20050049985A1 (en) * 2003-08-28 2005-03-03 Mangasarian Olvi L. Input feature and kernel selection for support vector machine classification
US7421417B2 (en) * 2003-08-28 2008-09-02 Wisconsin Alumni Research Foundation Input feature and kernel selection for support vector machine classification
US7478074B2 (en) 2003-10-31 2009-01-13 The University Of Queensland Support vector machine
WO2005043450A1 (en) * 2003-10-31 2005-05-12 The University Of Queensland Improved support vector machine
US20070203861A1 (en) * 2003-10-31 2007-08-30 Gates Kevin E Support Vector Machine
US20050197981A1 (en) * 2004-01-20 2005-09-08 Bingham Clifton W. Method for identifying unanticipated changes in multi-dimensional data sets
US7609893B2 (en) * 2004-03-03 2009-10-27 Trw Automotive U.S. Llc Method and apparatus for producing classifier training images via construction and manipulation of a three-dimensional image model
US20050196035A1 (en) * 2004-03-03 2005-09-08 Trw Automotive U.S. Llc Method and apparatus for producing classifier training images
US20070021918A1 (en) * 2004-04-26 2007-01-25 Georges Natsoulis Universal gene chip for high throughput chemogenomic analysis
US20060035250A1 (en) * 2004-06-10 2006-02-16 Georges Natsoulis Necessary and sufficient reagent sets for chemogenomic analysis
US20060199205A1 (en) * 2004-07-19 2006-09-07 Georges Natsoulis Reagent sets and gene signatures for renal tubule injury
US20060057066A1 (en) * 2004-07-19 2006-03-16 Georges Natsoulis Reagent sets and gene signatures for renal tubule injury
US7588892B2 (en) 2004-07-19 2009-09-15 Entelos, Inc. Reagent sets and gene signatures for renal tubule injury
US20060112026A1 (en) * 2004-10-29 2006-05-25 Nec Laboratories America, Inc. Parallel support vector method and apparatus
US20070077987A1 (en) * 2005-05-03 2007-04-05 Tangam Gaming Technology Inc. Gaming object recognition
US20070094170A1 (en) * 2005-09-28 2007-04-26 Nec Laboratories America, Inc. Spread Kernel Support Vector Machine
US7406450B2 (en) 2005-09-28 2008-07-29 Nec Laboratories America, Inc. Spread kernel support vector machine
US20070198653A1 (en) * 2005-12-30 2007-08-23 Kurt Jarnagin Systems and methods for remote computer-based analysis of user-provided chemogenomic data
US7840060B2 (en) 2006-06-12 2010-11-23 D&S Consultants, Inc. System and method for machine learning using a similarity inverse matrix
US20100021885A1 (en) * 2006-09-18 2010-01-28 Mark Fielden Reagent sets and gene signatures for non-genotoxic hepatocarcinogenicity
US8311341B1 (en) 2006-11-29 2012-11-13 D & S Consultants, Inc. Enhanced method for comparing images using a pictorial edit distance
US7873583B2 (en) 2007-01-19 2011-01-18 Microsoft Corporation Combining resilient classifiers
US8364617B2 (en) 2007-01-19 2013-01-29 Microsoft Corporation Resilient classification of data
US20080177684A1 (en) * 2007-01-19 2008-07-24 Microsoft Corporation Combining resilient classifiers
US20080177680A1 (en) * 2007-01-19 2008-07-24 Microsoft Corporation Resilient classification of data
US7899652B2 (en) 2007-08-31 2011-03-01 Toyota Motor Engineering & Manufacturing North America, Inc. Linear programming support vector regression with wavelet kernel
US20090063115A1 (en) * 2007-08-31 2009-03-05 Zhao Lu Linear programming support vector regression with wavelet kernel
US7961952B2 (en) * 2007-09-27 2011-06-14 Mitsubishi Electric Research Laboratories, Inc. Method and system for detecting and tracking objects in images
US20090087023A1 (en) * 2007-09-27 2009-04-02 Fatih M Porikli Method and System for Detecting and Tracking Objects in Images
US20090204555A1 (en) * 2008-02-07 2009-08-13 Nec Laboratories America, Inc. System and method using hidden information
US8315956B2 (en) * 2008-02-07 2012-11-20 Nec Laboratories America, Inc. System and method using hidden information
US20100246997A1 (en) * 2009-03-30 2010-09-30 Porikli Fatih M Object Tracking With Regressing Particles
US8401239B2 (en) * 2009-03-30 2013-03-19 Mitsubishi Electric Research Laboratories, Inc. Object tracking with regressing particles
US8351712B2 (en) 2009-04-27 2013-01-08 The Nielsen Company (US), LLC Methods and apparatus to perform image classification based on pseudorandom features
US8818112B2 (en) 2009-04-27 2014-08-26 The Nielsen Company (Us), Llc Methods and apparatus to perform image classification based on pseudorandom features
US20100272350A1 (en) * 2009-04-27 2010-10-28 Morris Lee Methods and apparatus to perform image classification based on pseudorandom features
US9335820B2 (en) 2010-05-26 2016-05-10 Ramot At Tel-Aviv University Ltd. Method and system for correcting gaze offset
WO2011148366A1 (en) 2010-05-26 2011-12-01 Ramot At Tel-Aviv University Ltd. Method and system for correcting gaze offset
US9141875B2 (en) 2010-05-26 2015-09-22 Ramot At Tel-Aviv University Ltd. Method and system for correcting gaze offset
EP2442258A1 (en) * 2010-10-18 2012-04-18 Harman Becker Automotive Systems GmbH Method of generating a classification data base for classifying a traffic sign and device and method for classifying a traffic sign
US8897577B2 (en) * 2011-06-09 2014-11-25 Electronics & Telecommunications Research Institute Image recognition device and method of recognizing image thereof
US20120314940A1 (en) * 2011-06-09 2012-12-13 Electronics And Telecommunications Research Institute Image recognition device and method of recognizing image thereof
US8738271B2 (en) 2011-12-16 2014-05-27 Toyota Motor Engineering & Manufacturing North America, Inc. Asymmetric wavelet kernel in support vector learning
US20140219554A1 (en) * 2013-02-06 2014-08-07 Kabushiki Kaisha Toshiba Pattern recognition apparatus, method thereof, and program product therefor
US9342757B2 (en) * 2013-02-06 2016-05-17 Kabushiki Kaisha Toshiba Pattern recognition apparatus, method thereof, and program product therefor
US9405959B2 (en) 2013-03-11 2016-08-02 The United States Of America, As Represented By The Secretary Of The Navy System and method for classification of objects from 3D reconstruction
US20160365096A1 (en) * 2014-03-28 2016-12-15 Intel Corporation Training classifiers using selected cohort sample subsets
US9600231B1 (en) * 2015-03-13 2017-03-21 Amazon Technologies, Inc. Model shrinking for embedded keyword spotting
US20160358100A1 (en) * 2015-06-05 2016-12-08 Intel Corporation Techniques for improving classification performance in supervised learning
US10685289B2 (en) * 2015-06-05 2020-06-16 Intel Corporation Techniques for improving classification performance in supervised learning
US20210150338A1 (en) * 2019-11-20 2021-05-20 Abbyy Production Llc Identification of fields in documents with neural networks without templates
US11816165B2 (en) * 2019-11-20 2023-11-14 Abbyy Development Inc. Identification of fields in documents with neural networks without templates
CN111476709A (en) * 2020-04-09 2020-07-31 广州华多网络科技有限公司 Face image processing method and device and electronic equipment
CN111476709B (en) * 2020-04-09 2023-04-07 广州方硅信息技术有限公司 Face image processing method and device and electronic equipment

Also Published As

Publication number Publication date
EP0887761A2 (en) 1998-12-30
EP0887761A3 (en) 1999-02-24
CA2238164A1 (en) 1998-12-26
JPH1173406A (en) 1999-03-16

Similar Documents

Publication Publication Date Title
US6134344A (en) Method and apparatus for improving the efficiency of support vector machines
EP1090365B1 (en) Methods and apparatus for classifying text and for building a text classifier
US5649068A (en) Pattern recognition system using support vectors
Crammer et al. On the algorithmic implementation of multiclass kernel-based vector machines
US7076473B2 (en) Classification with boosted dyadic kernel discriminants
Duin Classifiers in almost empty spaces
US5640492A (en) Soft margin classifier
Chapelle et al. Vicinal risk minimization
US6327581B1 (en) Methods and apparatus for building a support vector machine classifier
US8209269B2 (en) Kernels for identifying patterns in datasets containing noise or transformation invariances
US7676442B2 (en) Selection of features predictive of biological conditions using protein mass spectrographic data
US8463718B2 (en) Support vector machine-based method for analysis of spectral data
Kressel et al. Pattern classification techniques based on function approximation
US6701016B1 (en) Method of learning deformation models to facilitate pattern matching
Atiya Estimating the posterior probabilities using the k-nearest neighbor rule
Sebban et al. Stopping criterion for boosting-based data reduction techniques: From binary to multiclass problem.
Keysers et al. Maximum entropy and Gaussian models for image object recognition
Yousefi et al. On the efficiency of stochastic quasi-Newton methods for deep learning
Sun et al. Synthetic aperture radar automatic target recognition using adaptive boosting
Serre et al. Feature selection for face detection
Patsei et al. Multi-class object classification model based on error-correcting output codes
Allinson et al. Estimating relevant input dimensions for self-organizing algorithms
Özöğür-Akyüz et al. Prediction with the SVM using test point margins
Hajnal A Perceptron-based Fine Approximation Technique for Linear Separation
Suttorp et al. Resilient approximation of kernel classifiers

Legal Events

Date Code Title Description
AS Assignment

Owner name: LUCENT TECHNOLOGIES, INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BURGES, CHRISTOPHER JOHN;REEL/FRAME:008659/0873

Effective date: 19970626

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT, TEX

Free format text: CONDITIONAL ASSIGNMENT OF AND SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:LUCENT TECHNOLOGIES INC. (DE CORPORATION);REEL/FRAME:011722/0048

Effective date: 20010222

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: LUCENT TECHNOLOGIES INC., NEW JERSEY

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (FORMERLY KNOWN AS THE CHASE MANHATTAN BANK), AS ADMINISTRATIVE AGENT;REEL/FRAME:018590/0287

Effective date: 20061130

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: ALCATEL-LUCENT USA INC., NEW JERSEY

Free format text: MERGER;ASSIGNOR:LUCENT TECHNOLOGIES INC.;REEL/FRAME:033542/0386

Effective date: 20081101