US20030139828A1 - System and method for pre-processing input data to a support vector machine - Google Patents

System and method for pre-processing input data to a support vector machine

Info

Publication number: US20030139828A1
Authority: US (United States)
Prior art keywords: data, input data, time, inputs, reconciled
Legal status: Granted
Application number: US10/051,574
Other versions: US7020642B2
Inventors: Bruce Ferguson, Eric Hartman
Current Assignee: Rockwell Automation Technologies Inc
Original Assignee: Pavilion Technologies Inc
Application filed by Pavilion Technologies Inc
Priority to US10/051,574 (US7020642B2)
Assigned to Pavilion Technologies, Inc. (assignment of assignors interest; assignors: Bruce Ferguson, Eric Hartman)
Priority to PCT/US2003/001582 (WO2003063016A1)
Publication of US20030139828A1
Assigned to Silicon Valley Bank (security interest; assignor: Pavilion Technologies, Inc.)
Application granted
Publication of US7020642B2
Assigned to Pavilion Technologies, Inc. (release; assignor: Silicon Valley Bank)
Assigned to Rockwell Automation Pavilion, Inc. (change of name; assignor: Pavilion Technologies, Inc.)
Assigned to Rockwell Automation, Inc. (merger; assignor: Rockwell Automation Pavilion, Inc.)
Assigned to Rockwell Automation Technologies, Inc. (assignment of assignors interest; assignor: Rockwell Automation, Inc.)
Adjusted expiration
Legal status: Expired - Lifetime

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/04: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G05B13/048: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators using a predictor
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B17/00: Systems involving the use of models or simulators of said systems
    • G05B17/02: Systems involving the use of models or simulators of said systems electric
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/243: Classification techniques relating to the number of classes
    • G06F18/2433: Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G06N20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]

Definitions

  • the present invention relates generally to the field of predictive system models. More particularly, the present invention relates to preprocessing of input data so as to correct for different time scales, transforms, missing or bad data, and/or time-delays prior to input to a support vector machine for either training of the support vector machine or operation of the support vector machine.
  • Many predictive systems may be characterized by the use of an internal model which represents a process or system for which predictions are made.
  • Predictive model types may be linear, non-linear, stochastic, or analytical, among others.
  • non-linear models may generally be preferred due to their ability to capture non-linear dependencies among various attributes of the phenomena.
  • Examples of non-linear models may include neural networks and support vector machines (SVMs).
  • a model is trained with training data, e.g., historical data, in order to reflect salient attributes and behaviors of the phenomena being modeled.
  • sets of training data may be provided as inputs to the model, and the model output may be compared to corresponding sets of desired outputs.
  • the resulting error is often used to adjust weights or coefficients in the model until the model generates the correct output (within some error margin) for each set of training data.
  • the model is considered to be in “training mode” during this process.
  • the model may receive real-world data as inputs, and provide predictive output information which may be used to control the process or system or make decisions regarding the modeled phenomena. It is desirable to allow for pre-processing of input data of predictive models (e.g., non-linear models, including neural networks and support vector machines), particularly in the field of e-commerce.
  • Predictive models may be used for analysis, control, and decision making in many areas, including electronic commerce (i.e., e-commerce), e-marketplaces, financial (e.g., stocks and/or bonds) markets and systems, data analysis, data mining, process measurement, optimization (e.g., optimized decision making, real-time optimization), quality control, as well as any other field or domain where predictive or classification models may be useful and where the object being modeled may be expressed abstractly.
  • quality control in commerce is increasingly important. The control and reproducibility of quality are the focus of many efforts. For example, in Europe, quality is the focus of the ISO (International Organization for Standardization, Geneva, Switzerland) 9000 standards. These rigorous standards provide for quality assurance in production, installation, final inspection, and testing of processes. They also provide guidelines for quality assurance between a supplier and customer.
  • a common problem that is encountered in training support vector machines for prediction, forecasting, pattern recognition, sensor validation and/or processing problems is that some of the training/testing patterns may be missing, corrupted, and/or incomplete. Prior systems merely discarded data with the result that some areas of the input space may not have been covered during training of the support vector machine. For example, if the support vector machine is utilized to learn the behavior of a chemical plant as a function of the historical sensor and control settings, these sensor readings are typically sampled electronically, entered by hand from gauge readings, and/or entered by hand from laboratory results. It is a common occurrence in real-world problems that some or all of these readings may be missing at a given time. It is also common that the various values may be sampled on different time intervals.
  • any one value may be “bad” in the sense that after the value is entered, it may be determined by some method that a data item was, in fact, incorrect. Hence, if a given set of data has missing values, and that given set of data is plotted in a table, the result may be a partially filled-in table with intermittent missing data or “holes”. These “holes” may correspond to “bad” data or “missing” data.
  • the term “time scale” is meant to refer to any aspect of the time-dependency of data.
  • input data to a support vector machine is generally required to share the same time scale to be useful. This constraint applies to data sets used to train a support vector machine, i.e., input to the SVM in training mode, and to data sets used as input for run-time operation of a support vector machine, e.g., input to the SVM in run-time mode.
  • the time scale of the training data generally must be the same as that of the run-time input data to insure that the SVM behavior in run-time mode corresponds to the trained behavior learned in training mode.
  • one set of data may be taken on an hourly basis and another set of data taken on a quarter hour (i.e., every fifteen minutes) basis. In this case, for three out of every four data records on the quarter hour basis there will be no corresponding data from the hourly set.
  • the two data sets are differently synchronous, i.e., have different time scales.
  • the data sample periods may be non-periodic, producing asynchronous data, while another data set may be periodic or synchronous, e.g., hourly. These two data sets may not be useful together as input to the SVM while their time-dependencies, i.e., their time scales, differ.
  • one data set may have a “hole” in the data, as described above, compared to another set, i.e., some data may be missing on one of the data sets. The presence of the hole may be considered to be an asynchronous or anomalous time interval in the data set, and thus may be considered to have an asynchronous or inhomogeneous time scale.
  • two data sets may have two different respective time scales, e.g., an hourly basis and a 15 minute basis.
  • the desired time scale for input data to the SVM may have a third basis, e.g., daily.
  • data may also be taken on different machines in different locations with different operating systems and quite different data formats. It is essential to be able to read all of these different data formats, keeping track of the data values and the timestamps of the data, and to store both the data values and the timestamps for future use. It is a daunting task to retrieve these data, keeping track of the timestamp information, and to read it into an internal data format (e.g., a spreadsheet) so that the data may be time merged.
  • Inherent delays in a system are another issue which may affect the use of time-dependent data.
  • a flow meter output may provide data at time t₀ at a given value.
  • a given change in flow resulting in a different reading on the flow meter may not affect the output for a predetermined delay τ.
  • this flow meter output must be input to the support vector machine at a delay equal to τ. This must also be accounted for in the training of the support vector machine.
  • the timeline of the data must be reconciled with the timeline of the process. In generating data that account for time delays, it has been postulated that it may be possible to generate a table of data that comprises both original data and delayed data.
  • a system and method are presented for preprocessing input data to a non-linear predictive system model based on a support vector machine.
  • the system model may utilize a support vector machine having a set of parameters associated therewith that define the representation of the system being modeled.
  • the support vector machine may have multiple inputs, each of the inputs associated with a portion of the input data.
  • the support vector machine parameters may be operable to be trained on a set of training data received from stored historical data and/or a run-time system, such that the system model is trained to represent the run-time system.
  • the input data may include a set of target output data representing the output of the system and a set of measured input data representing the system variables.
  • the target data and system variables may be reconciled by the preprocessor and then input to the support vector machine.
  • a training device may be operable to train the support vector machine according to a predetermined training algorithm such that the values of the support vector machine parameters are changed until the support vector machine comprises a stored representation of the run-time system.
  • the term “device” may refer to a software program, a hardware device, and/or a combination of the two.
  • the system may include a data storage device for storing training data from the run-time system.
  • the support vector machine may operate in two modes, a run-time mode and a training mode.
  • run-time data may be received from the run-time system.
  • training mode data may be retrieved from the data storage device, the training data being both training input data and training output data.
  • a data preprocessor may be provided for preprocessing received (i.e., input) data in accordance with predetermined preprocessing parameters to output preprocessed data.
  • the data preprocessor may include an input buffer for receiving and storing the input data. The input data may be on different time scales.
  • a time merge device may be operable to select a predetermined time scale and reconcile the input data so that all of the input data are placed on the same time scale.
  • An output device may output the reconciled data from the time merge device as preprocessed data.
  • the reconciled data may be used as input data to the system model, i.e., the support vector machine.
  • other scales than time scales may be determined for the data, and reconciled as described herein.
  • the support vector machine may have an input for receiving the preprocessed data, and may map it to an output through a stored representation of the run-time system in accordance with associated model parameters.
  • a control device may control the data preprocessor to operate in either training mode or run-time mode.
  • In the training mode, the preprocessor may be operable to process the stored training data and output preprocessed training data.
  • a training device may be operable to train the support vector machine (in the training mode) on the training data in accordance with a predetermined training algorithm to define the model parameters on which the support vector machine operates.
  • In the run-time mode, the preprocessor may be operable to preprocess run-time data received from the run-time system to output preprocessed run-time data.
  • the support vector machine may then operate in the run-time mode, receiving the preprocessed input run-time data and generating a predicted output and/or control parameters for the run-time system.
  • the data preprocessor may further include a pre-time merge processor for applying one or more predetermined algorithms to the received data prior to input to the time merge device.
  • a post-time merge processor (e.g., part of the output device) may be provided for applying one or more predetermined algorithms to the data output by the time merge device prior to output as the processed data.
  • the preprocessed data may then have selective delay applied thereto prior to input to the support vector machine in both the run-time mode and the training mode.
  • the one or more predetermined algorithms may be externally input and stored in a preprocessor memory such that the sequence in which the predetermined algorithms are applied is also stored.
  • the input data associated with at least one of the inputs of the support vector machine may have missing data in an associated time sequence.
  • the time merge device may be operable to reconcile the input data to fill in the missing data.
  • the input data associated with a first one or more of the inputs may have an associated time sequence based on a first time interval, and a second one or more of the inputs may have an associated time sequence based on a second time interval.
  • the time merge device may be operable to reconcile the input data associated with the first one or more of the inputs to the input data associated with the second one or more of the inputs, thereby generating reconciled input data associated with the at least one of the inputs having an associated time sequence based on the second time interval.
  • the input data associated with a first one or more of the inputs may have an associated time sequence based on a first time interval
  • the input data associated with a second one or more of the inputs may have an associated time sequence based on a second time interval.
  • the time merge device may be operable to reconcile the input data associated with the first one or more of the inputs and the input data associated with the second one or more of the inputs to a time scale based on a third time interval, thereby generating reconciled input data associated with the first one or more of the inputs and the second one or more of the inputs having an associated time sequence based on the third time interval.
  • the input data associated with a first one or more of the inputs may be asynchronous, and the input data associated with a second one or more of the inputs may be synchronous with an associated time sequence based on a time interval.
  • the time merge device may be operable to reconcile the asynchronous input data associated with the first one or more of the inputs to the synchronous input data associated with the second one or more of the inputs, thereby generating reconciled input data associated with the first one or more of the inputs, where the reconciled input data comprise synchronous input data having an associated time sequence based on the time interval.
  • the input data may include a plurality of system input variables, each of the system input variables including an associated set of data.
  • a delay device may be provided that may be operable to select one or more input variables after preprocessing by the preprocessor and to introduce a predetermined amount of delay therein to output a delayed input variable, thereby reconciling the delayed variable to the time scale of the data set.
  • This delayed input variable may be input to the system model. Further, this predetermined delay may be determined external to the delay device.
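  • As an illustrative sketch of applying such a delay (the function name, signature, and head-padding policy are assumptions, not the patent's implementation), a fixed delay of a whole number of samples might be applied to a preprocessed input column as follows:

    /* Shift a time-merged input column by `delay` samples so that the
     * value presented to the system model at row k is the value measured
     * at row k - delay. Padding the first rows with the earliest value
     * is an assumed policy. */
    void apply_delay(const double *x, double *x_delayed, int n, int delay)
    {
        for (int k = 0; k < n; k++)
            x_delayed[k] = (k >= delay) ? x[k - delay] : x[0];
    }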
  • the input data may include one or more outlier values which may be disruptive or counter-productive to the training and/or operation of the support vector machine.
  • the received data may be analyzed to determine any outliers in the data set. In other words, the data may be analyzed to determine which, if any, data values fall above or below an acceptable range.
  • the outliers may be removed from the data, thereby generating corrected input data.
  • the removal of outliers may result in a data set with missing data, i.e., with gaps in the data.
  • A graphical user interface (GUI) may be provided by which a user or operator may view the received data set and visually inspect it for bad data points, i.e., outliers.
  • the GUI may further provide various tools for modifying the data, including tools for “cutting” the bad data from the set.
  • the detection and removal of the outliers may be performed by the user via the GUI.
  • the user may use the GUI to specify one or more algorithms which may then be applied to the data programmatically, i.e., automatically.
  • a GUI may be provided which is operable to receive user input specifying one or more data filtering operations to be performed on the input data, where the one or more data filtering operations operate to remove and/or replace the one or more outlier values.
  • the GUI may be further operable to display the input data prior to and after performing the filtering operations on the input data.
  • the GUI may be operable to receive user input specifying a portion of said input data for the data filtering operations.
  • the removed data may optionally be replaced, thereby “filling in” the gaps resulting from the removal of outlying data.
  • Various techniques may be brought to bear to generate the replacement data, including, but not limited to, clipping, interpolation, extrapolation, spline fits, sample/hold of a last prior value, etc., as are well known in the art.
  • the removed outliers may be replaced in a later stage of preprocessing, such as the time merge process described above.
  • the time merge process will detect that data are missing, and operate to fill the gap.
  • the preprocessor may operate as a data filter, analyzing input data, detecting outliers, and removing the outliers from the data set.
  • the filter parameters may simply be a predetermined value limit or range against which a data value may be tested. If the value falls outside the range, the value may be removed, or clipped to the limit value, as desired.
  • the limit(s) or range may be determined dynamically, for example, based on the standard deviation of a moving window of data in the data set, e.g., any value outside a two sigma band for a moving window of 100 data points may be clipped or removed.
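  • A minimal sketch of such a moving-window sigma filter in C (the name, edge handling, and the clip-rather-than-remove policy are assumptions):

    #include <math.h>

    /* Clip any value lying outside `nsig` standard deviations of the mean
     * of the `w` points that precede it; e.g., sigma_clip(x, n, 100, 2.0)
     * gives the two-sigma, 100-point moving window described above. */
    void sigma_clip(double *x, int n, int w, double nsig)
    {
        for (int i = 0; i < n; i++) {
            int lo = (i >= w) ? i - w : 0;
            int cnt = i - lo;
            if (cnt < 2)
                continue;                       /* not enough history yet */
            double sum = 0.0, sum2 = 0.0;
            for (int j = lo; j < i; j++) {
                sum  += x[j];
                sum2 += x[j] * x[j];
            }
            double mean = sum / cnt;
            double var  = sum2 / cnt - mean * mean;
            double sd   = (var > 0.0) ? sqrt(var) : 0.0;
            if (sd > 0.0 && fabs(x[i] - mean) > nsig * sd)
                x[i] = mean + (x[i] > mean ? nsig * sd : -nsig * sd);
        }
    }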
  • the received input data may comprise training data including target input data and target output data
  • the corrected data may comprise corrected training data which includes corrected target input data and corrected target output data
  • the support vector machine may be operable to be trained according to a predetermined training algorithm applied to the corrected target input data and the corrected target output data to develop model parameter values such that the support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
  • the model parameters of the support vector machine may be trained based on the corrected target input data and the corrected target output data, after which the support vector machine may represent the system.
  • the input data may comprise run-time data, such as from the system being modeled, and the corrected data may comprise reconciled run-time data.
  • the support vector machine may be operable to receive the corrected run-time data and generate run-time output data.
  • the run-time output data may comprise control parameters for the system which may be usable to determine control inputs to the system for run-time operation of the system. For example, in an e-commerce system, control inputs may include such parameters as advertisement or product placement on a website, pricing, and credit limits, among others.
  • the run-time output data may comprise predictive output information for the system which may be usable in making decisions about operation of the system.
  • the predictive output information may indicate a recommended shift in investment strategies, for example.
  • the predictive output information may indicate production costs related to increased energy expenses, for example.
  • the preprocessor may be operable to detect and remove and/or replace outlying data in an input data set for the support vector machine.
  • Various embodiments of the systems and methods described above may thus operate to preprocess input data for a support vector machine to reconcile data on different time scales to a common time scale.
  • Various embodiments of the systems and methods may also operate to remove and/or replace bad or missing data in the input data.
  • the resulting preprocessed input data may then be used to train and/or operate a support vector machine.
  • FIG. 1 illustrates an exemplary computer system according to one embodiment of the present invention
  • FIG. 2 is an exemplary block diagram of the computer system illustrated in FIG. 1, according to one embodiment of the present invention.
  • FIGS. 3A and 3B illustrate two embodiments of an overall block diagram of the system for both preprocessing data during the training mode and for preprocessing data during the run mode;
  • FIGS. 4A and 4B are simplified block diagrams of two embodiments of the system of FIGS. 3A and 3B;
  • FIG. 5 is a detailed block diagram of the preprocessor in the training mode according to one embodiment
  • FIG. 6 is a simplified block diagram of the time merging operation, which is part of the preprocessing operation, according to one embodiment
  • FIG. 7A illustrates a data block before the time merging operation, according to one embodiment
  • FIG. 7B illustrates a data block after the time merging operation, according to one embodiment
  • FIGS. 8A-8C illustrate diagrammatic views of the time merging operation, according to various embodiments
  • FIGS. 9A-9C are flowcharts depicting various embodiments of a preprocessing operation
  • FIGS. 10A-10F illustrate the use of graphical tools for preprocessing the “raw” data, according to various embodiments
  • FIG. 11 illustrates the display for the algorithm selection operation, according to one embodiment
  • FIG. 12 presents a series of tables and properties, according to one embodiment
  • FIG. 13 is a block diagram depicting parameters associated with various stages in process flow relative to a plant output, according to one embodiment
  • FIG. 14 illustrates a diagrammatic view of the relationship between the various plant parameters and the plant output, according to one embodiment
  • FIG. 15 illustrates a diagrammatic view of the delay provided for input data patterns, according to one embodiment
  • FIG. 16 illustrates a diagrammatic view of the buffer formation for each of the inputs and the method for generating the delayed input, according to one embodiment
  • FIG. 17 illustrates the display for selection of the delays associated with various inputs and outputs in the support vector machine, according to one embodiment
  • FIG. 18 is a block diagram for a variable delay selection, according to one embodiment
  • FIG. 19 is a block diagram of the adaptive determination of the delay, according to one embodiment.
  • FIG. 20 is a flowchart depicting the time delay operation, according to one embodiment
  • FIG. 21 is a flowchart depicting the run mode operation, according to one embodiment
  • FIG. 22 is a flowchart for setting the value of the variable delay, according to one embodiment.
  • FIG. 23 is a block diagram of the interface of the run-time preprocessor with a distributed control system, according to one embodiment.
  • FIG. 1: Computer System
  • FIG. 1 illustrates a computer system 1 operable to execute a support vector machine for performing modeling and/or control operations.
  • the computer system 1 may be any type of computer system, including a personal computer system, mainframe computer system, workstation, network appliance, Internet appliance, personal digital assistant (PDA), television system or other device.
  • the term “computer system” can be broadly defined to encompass any device having at least one processor that executes instructions from a memory medium.
  • the computer system 1 may include a display device operable to display operations associated with the support vector machine.
  • the display device may also be operable to display a graphical user interface for process or control operations.
  • the graphical user interface may comprise any type of graphical user interface, e.g., depending on the computing platform.
  • the computer system 1 may include a memory medium(s) on which one or more computer programs or software components according to one embodiment of the present invention may be stored.
  • the memory medium may store one or more support vector machine software programs (support vector machines) which are executable to perform the methods described herein.
  • the memory medium may store a programming development environment application used to create, train, and/or execute support vector machine software programs.
  • the memory medium may also store operating system software, as well as other software for operation of the computer system.
  • the term “memory medium” is intended to include an installation medium, e.g., a CD-ROM, floppy disks, or tape device; a computer system memory or random access memory such as DRAM, SRAM, EDO RAM, Rambus RAM, etc.; or a non-volatile memory such as a magnetic media, e.g., a hard drive, or optical storage.
  • the memory medium may comprise other types of memory as well, or combinations thereof.
  • the memory medium may be located in a first computer in which the programs are executed, or may be located in a second different computer which connects to the first computer over a network, such as the Internet. In the latter instance, the second computer may provide program instructions to the first computer for execution.
  • the term “support vector machine” refers to at least one software program, or other executable implementation (e.g., an FPGA), that implements a support vector machine as described herein.
  • the support vector machine software program may be executed by a processor, such as in a computer system.
  • the various support vector machine embodiments described below are preferably implemented as a software program executing on a computer system.
  • FIG. 2: Computer System Block Diagram
  • FIG. 2 is an exemplary block diagram of the computer system illustrated in FIG. 1, according to one embodiment. It is noted that any type of computer system configuration or architecture may be used in conjunction with the system and method described herein, as desired, and FIG. 2 illustrates a representative PC embodiment. It is also noted that the computer system may be a general purpose computer system such as illustrated in FIG. 1, or other types of embodiments. The elements of a computer not necessary to understand the present invention have been omitted for simplicity.
  • the computer system 1 may include at least one central processing unit or CPU 2 which is coupled to a processor or host bus 5 .
  • the CPU 2 may be any of various types, including an x86 processor, e.g., a Pentium class, a PowerPC processor, a CPU from the SPARC family of RISC processors, as well as others.
  • Main memory 3 is coupled to the host bus 5 by means of memory controller 4 .
  • the main memory 3 may store one or more computer programs or libraries according to the present invention.
  • the main memory 3 also stores operating system software as well as the software for operation of the computer system, as well known to those skilled in the art.
  • the host bus 5 is coupled to an expansion or input/output bus 7 by means of a bus controller 6 or bus bridge logic.
  • the expansion bus 7 is preferably the PCI (Peripheral Component Interconnect) expansion bus, although other bus types may be used.
  • the expansion bus 7 may include slots for various devices such as a video display subsystem 8 and hard drive 9 coupled to the expansion bus 7 , among others (not shown).
  • Classifiers generally refer to systems which process a data set and categorize the data set based upon prior examples of similar data sets, i.e., training data.
  • the classifier system may be trained on a number of training data sets with known categorizations, then used to categorize new data sets.
  • classifiers have been determined by choosing a structure, and then selecting a parameter estimation algorithm used to optimize some cost function. The structure chosen may fix the best achievable generalization error, while the parameter estimation algorithm may optimize the cost function with respect to the empirical risk.
  • the support vector method is a recently developed technique which is designed for efficient multidimensional function approximation.
  • the basic idea of support vector machines (SVMs) is to determine a classifier or regression machine which minimizes the empirical risk (i.e., the training set error) and the confidence interval (which corresponds to the generalization or test set error), that is, to fix the empirical risk associated with an architecture and then to use a method to minimize the generalization error.
  • An advantage of SVMs as adaptive models for binary classification and regression is that they provide a classifier with minimal VC (Vapnik-Chervonenkis) dimension, which implies a low expected probability of generalization errors.
  • SVMs may be used to classify linearly separable data and nonlinearly separable data.
  • SVMs may also be used as nonlinear classifiers and regression machines by mapping the input space to a high dimensional feature space. In this high dimensional feature space, linear classification may be performed.
  • Support vector machines have been shown to have a relationship with other recent nonlinear classification and modeling techniques such as: radial basis function networks, sparse approximation, PCA (principal component analysis), and regularization. Support vector machines have also been used to choose radial basis function centers.
  • a canonical hyperplane is a hyperplane (in this case we consider the optimal hyperplane) in which the parameters are normalized in a particular manner.
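  • In the standard formulation (a well-known form stated here for concreteness, not quoted from the patent), a hyperplane \vec{w}^{T}\vec{x} + b = 0 is canonical with respect to the training set when

    \min_{i} \left| \vec{w}^{T}\vec{x}_{i} + b \right| = 1 ,

so the training vectors nearest the hyperplane satisfy |\vec{w}^{T}\vec{x}_{i} + b| = 1.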
  • The training vectors for which this is the case are called support vectors.
  • a support vector machine which performs the task of classifying linearly separable data may be defined through a decision function of the standard form f(x) = sgn(wᵀx + b).
  • x_i+ and x_i− are any input training vector examples from the positive and negative classes, respectively.
  • For some problems, improved classification results may be obtained using a nonlinear classifier, rather than a classifier of the form of Equation (20), which is linear.
  • a nonlinear classifier may be obtained using support vector machines as follows.
  • the classifier is obtained from the inner products x_iᵀx, where i ∈ S, the set of support vectors. However, it is not necessary to use the explicit input data to form the classifier. Instead, all that is needed is to use the inner products between the support vectors and the vectors of the feature space.
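  • In standard SVM notation (a well-known form, not quoted from the patent), the classifier therefore depends on the data only through inner products, which a kernel function K generalizes:

    f(\vec{x}) = \operatorname{sgn}\Big( \sum_{i \in S} \alpha_{i}\, y_{i}\, K(\vec{x}_{i}, \vec{x}) + b \Big) ,

where S is the set of support vectors, \alpha_{i} are the trained multipliers, and y_{i} \in \{-1, +1\} are the class labels.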
  • a kernel function may operate as a basis function for the support vector machine.
  • the kernel function may be used to define a space within which the desired classification or prediction may be greatly simplified.
  • a number of kernel functions may be used, including polynomial, radial basis function (RBF), and multilayer-network (sigmoid) kernels.
  • a multilayer network may be employed as a kernel function as follows: K(x_i, x_j) = σ(θ x_iᵀx_j + φ), where σ is a sigmoid function.
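  • As illustration, such kernels might be evaluated in C as follows (function and parameter names are assumptions; tanh stands in for a generic sigmoid):

    #include <math.h>

    /* Polynomial kernel: (x . y + c)^d */
    double k_poly(const double *x, const double *y, int n, double c, int d)
    {
        double dot = 0.0;
        for (int i = 0; i < n; i++)
            dot += x[i] * y[i];
        return pow(dot + c, (double)d);
    }

    /* Radial basis function kernel: exp(-||x - y||^2 / (2 sigma^2)) */
    double k_rbf(const double *x, const double *y, int n, double sigma)
    {
        double d2 = 0.0;
        for (int i = 0; i < n; i++) {
            double d = x[i] - y[i];
            d2 += d * d;
        }
        return exp(-d2 / (2.0 * sigma * sigma));
    }

    /* Multilayer-network (sigmoid) kernel: sigma(theta * (x . y) + phi) */
    double k_sigmoid(const double *x, const double *y, int n,
                     double theta, double phi)
    {
        double dot = 0.0;
        for (int i = 0; i < n; i++)
            dot += x[i] * y[i];
        return tanh(theta * dot + phi);
    }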
  • a high-dimensional “tube” with a radius of acceptable error is constructed which minimizes the error of the data set while also maximizing the flatness of the associated curve or function.
  • the tube is an envelope around the fit curve, defined by a collection of data points nearest the curve or surface, i.e., the support vectors.
  • support vector machines offer an extremely powerful method of obtaining models for classification and regression. They provide a mechanism for choosing the model structure in a natural manner which gives low generalization error and empirical risk.
  • a support vector machine may be built by specifying a kernel function, a number of inputs, and a number of outputs.
  • some type of training process may be used to capture the behaviors and/or attributes of the system or process to be modeled.
  • the modular aspect of one embodiment of the present invention may take advantage of this way of simplifying the specification of a support vector machine. Note that more complex support vector machines may require more configuration information, and therefore more storage.
  • Various embodiments of the present invention contemplate other types of support vector machine configurations.
  • all that is required for the support vector machine is that the support vector machine be able to be trained and retrained so as to provide needed predicted values.
  • the coefficients used in a support vector machine may be adjustable constants which determine the values of the predicted output data for given input data for any given support vector machine configuration.
  • Support vector machines may be superior to conventional statistical models because support vector machines may adjust these coefficients automatically.
  • support vector machines may be capable of building the structure of the relationship (or model) between the input data and the output data by adjusting the coefficients. While a conventional statistical model typically requires the developer to define the equation(s) in which adjustable constant(s) are used, the support vector machine may build the equivalent of the equation(s) automatically.
  • the support vector machine may be trained by presenting it with one or more training set(s).
  • the one or more training set(s) are the actual history of known input data values and the associated correct output data values.
  • the newly configured support vector machine is usually initialized by assigning random values to all of its coefficients.
  • the support vector machine may use its input data to produce predicted output data.
  • These predicted output data values may be compared with the corresponding training (target) output data values to produce error data. These error data values may then be used to adjust the coefficients of the support vector machine.
  • the error between the predicted output data and the training (target) output data may be used to adjust the coefficients so that the error is reduced.
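  • As a generic sketch of this error-driven adjustment (a plain gradient step on a linear model, shown only to illustrate the idea; actual SVM training solves a constrained optimization):

    /* One error-driven update for a linear model yhat = w . x + b:
     * compute the prediction error and move each coefficient against
     * the error gradient. Illustrative only. */
    void adjust_coefficients(double *w, double *b, const double *x,
                             double target, int n, double rate)
    {
        double yhat = *b;
        for (int i = 0; i < n; i++)
            yhat += w[i] * x[i];
        double err = yhat - target;            /* predicted minus desired */
        for (int i = 0; i < n; i++)
            w[i] -= rate * err * x[i];
        *b -= rate * err;
    }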
  • Support vector machines may be superior to computer statistical models because support vector machines do not require the developer of the support vector machine model to create the equations which relate the known input data and training values to the desired predicted values (i.e., output data). In other words, a support vector machine may learn relationships automatically during training.
  • the support vector machine may require the collection of training input data with its associated training output data, together called a training set.
  • the training set may need to be collected and properly formatted.
  • the conventional approach for doing this is to create a file on a computer on which the support vector machine is executed.
  • creation of the training set may be done automatically, using historical data. This automatic step may eliminate errors and may save time, as compared to the conventional approach. Another benefit may be significant improvement in the effectiveness of the training function, since automatic creation of the training set(s) may be performed much more frequently.
  • the time-dependence, i.e., the time resolution and/or synchronization, of training and/or real-time data may not be consistent, due to missing data, variable measurement chronologies or timelines, etc.
  • the data may be preprocessed to homogenize the timing aspects of the data, as described below. It is noted that in other embodiments, the data may be dependent on a different independent variable than time. It is contemplated that the techniques described herein regarding homogenization of time scales are applicable to other scales (i.e., other independent variables), as well.
  • FIG. 3A is an overall block diagram of the data preprocessing operation in both the training mode and the run-time mode, according to one embodiment.
  • FIG. 3B is a diagram of the data preprocessing operation of FIG. 3A, but with an optional delay process included for reconciling time-delayed values in a data set.
  • the training data may be arranged in “sets”, e.g., corresponding to different variables, and the variables may be sampled at different time intervals. These data may be referred to as “raw” data.
  • each set of data is in the form that it was originally received.
  • the operator may first format the data files so that all of the data files may be merged into a data-table or spreadsheet, keeping track of the original “raw” time information. This may be done in such a manner as to keep track of the timestamp for each variable.
  • the “raw” data may be organized as time-value pairs of columns; that is, for each variable x_i, there is an associated time of sample t_i.
  • the data may then be grouped into sets {x_i, t_i}.
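  • A minimal C representation of one such {x_i, t_i} group might be (names are assumptions):

    /* One raw variable: parallel arrays of sample timestamps and values. */
    typedef struct {
        double *t;   /* timestamps, one per sample          */
        double *x;   /* sampled values, same length as t    */
        int     n;   /* number of samples in this variable  */
    } timeseries;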
  • If any of the time-vectors happen to be identical, it may be convenient to arrange the data such that the data will be grouped in common time scale groups; data that is on, for example, a fifteen minute sample time scale may be grouped together, and data sampled on a one hour sample time scale may be grouped together.
  • any type of format that provides viewing of multiple sets of data is acceptable.
  • the one or more data files 10 may be input to a preprocessor 12 that may function to perform various preprocessing functions, such as determining bad or missing data, reconciling data to replace bad data or fill in missing data, and performing various algorithmic or logic functions on the data, among others. Additionally, the preprocessor 12 may be operable to perform a time merging operation, as described below. During operation, the preprocessor 12 may be operable to store various preprocessing algorithms in a given sequence in a storage area 14 (noted as preprocess algorithm sequence 14 in FIG. 3). As described below, the sequence may define the way in which the data are manipulated in order to provide the overall preprocessing operation.
  • the preprocessed data may be input into a training model 20 , as FIG. 3A shows.
  • the training model 20 may be a non-linear model (e.g., a support vector machine) that receives input data and compares it with target output data. Any of various training algorithms may be used to train the support vector machine to generate a model for predicting the target output data from the input data.
  • the training model may utilize a support vector machine that is trained on one or more of multiple training methods.
  • Various weights within the support vector machine may be set during the training operation, and these may be stored as model parameters in a storage area 22 .
  • the training operation and the support vector machine may be conventional systems.
  • the training model 20 and the runtime system model 26 may be the same system model operated in training mode and runtime mode, respectively.
  • the model when the support vector machine is being trained, i.e., is in training mode, the model may be considered to be a training model, and when the support vector machine is in runtime mode, the model may be considered to be a runtime system model.
  • the runtime system model 26 may be distinct from the training model 20 . For example, after the training model 20 (the SVM in training mode) has been trained, the resulting parameters which define the state of the SVM may be used to configure the runtime system model 26 , which may be substantially a copy of the training model.
  • one copy of the system model (the training model 20 ) may be trained while another copy of the system model (the runtime system model 26 ) is engaged with the real-time system or process being controlled.
  • the model parameter values in storage area 22 resulting from the training model may be used to periodically or continuously update the runtime system model 26 , as shown.
  • a Distributed Control System (DCS) 24 may be provided that may be operable to generate various system measurements and control settings representing system variables (e.g., temperature, flow rates, etc.), that comprise the input data to the system model.
  • the system model may either generate control inputs for control of the DCS 24 or it may provide a predicted output, these being conventional operations which are well known in the art.
  • the control inputs may be provided by the run-time system model 26 , which has an output 28 and an input 30 , as shown.
  • the input 30 may include the preprocessed and, in the embodiment of FIG. 3B, delayed, data and the output may either be a predictive output, or a control input to the DCS 24 .
  • where the system model generates control inputs, this is illustrated as control inputs 28 to the DCS 24.
  • the run-time system model 26 is shown as utilizing the model parameters stored in the storage area 22 . It is noted that the run-time system model 26 may include a representation learned during the training operation, which representation was learned on the preprocessed data, i.e., the trained SVM. Therefore, data generated by the DCS 24 may be preprocessed in order to correlate with the representation stored in the run-time system model 26 .
  • the output data of the DCS 24 may be input to a run-time process block 34 , which may be operable to process the data in accordance with the sequence of preprocessing algorithms stored in the storage area 14 , which are generated during the training operation.
  • the output of the run-time processor 34 may be input to a run-time delay process 36 to set delays on the data in accordance with the delay settings stored in the storage area 18 . This may provide the overall preprocessed data output on the line 30 input to the run-time system model 26 .
  • the preprocessed data may optionally be input to a delay block 16 , as shown in FIG. 3B.
  • a flow meter output may provide data at time t₀ at a given value.
  • a given change in flow resulting in a different reading on the flow meter may not affect the output for a predetermined delay τ.
  • this flow meter output must be input to the support vector machine at a delay equal to τ. This may be accounted for in the training of the support vector machine through the use of the delay block 16 .
  • the time scale of the data may be reconciled with the time scale of the system or process as follows.
  • the delay block 16 may be operable to set the various delays for different sets of data. This operation may be performed on both the target output data and the input training data.
  • the delay settings may be stored in a storage area 18 (noted as delay settings 18 in FIG. 3).
  • the output of the delay block 16 may be input to the training model 20 . Note that if the delay process is not used, then the blocks ‘set delay’ 16 , ‘delay settings’ 18 , and ‘runtime delay’ 36 may be omitted, and therefore, the outputs from the preprocessor 12 and the runtime process 34 may be fed into the training model 20 and the runtime system model 26 , respectively, as shown in FIG. 3A.
  • the delay process as implemented by the blocks ‘set delay’ 16 , ‘delay settings’ 18 , and ‘runtime delay’ 36 may be considered as part of the data preprocessor 12 .
  • the introduction of delays into portions of the data may be considered to be reconciling the input data to the time scale of the system or process being modeled, operated, or controlled.
  • FIG. 4A is a simplified block diagram of the system of FIG. 3A, wherein a single preprocessor 34 ′ is utilized, according to one embodiment.
  • FIG. 4B is a simplified block diagram of the system of FIG. 3B, wherein the delay process, i.e., a single delay 36 ′, is also included, according to one embodiment.
  • the output of the preprocessor 34 ′ may be input to a single system model 26 ′.
  • the preprocessor 34 ′ and the system model 26 ′ may operate in both a training mode and a run-time mode.
  • a multiplexer 35 may be provided that receives the output from the data file(s) 10 and the output of the DCS 24 , and generates an output including operational variables, e.g., plant or process variables, of the DCS 24 .
  • the output of the multiplexer may then be input to the preprocessor 34 ′.
  • a control device 37 may be provided to control the multiplexer 35 to select either a training mode or a run-time mode.
  • the data file(s) 10 may have the output thereof selected by the multiplexer 35 and the preprocessor 34 ′ may be operable to preprocess the data in accordance with a training mode, i.e., the preprocessor 34 ′ may be utilized to determine the preprocessed algorithm sequence stored in the storage area 14 .
  • An input/output (I/O) device 41 may be provided for allowing an operator to interface with the control device 37 .
  • the system model 26 ′ may be operated in a training mode such that the target data and the input data to the system model 26 ′ are generated, the training controlled by training block 39 .
  • the training block 39 may be operable to select one of multiple training algorithms for training the system model 26 ′.
  • the model parameters may be stored in the storage area 22 .
  • the term “device” may refer to a software program, a hardware device, and/or a combination of the two.
  • control device 37 may place the system in a run-time mode such that the preprocessor 34 ′ is operable to apply the algorithm sequence in the storage area 14 to the data selected by the multiplexer 35 from the DCS 24 .
  • the data may be output to the system model 26 ′ which may then operate in a predictive mode to either predict an output or to predict/determine control inputs for the DCS 24 .
  • the optional delay process 36 ′ and settings 18 ′ may be included, i.e., the data may be delayed, as shown in FIG. 4B.
  • the data may be output to the delay block 36 ′, which may introduce the various delays in the storage area 18 , and then these may be input to the system model 26 ′ which may then operate in a predictive mode to either predict an output or to predict/determine control inputs for the DCS 24 .
  • the output of the delay 36 ′ may be input to the single system model 26 ′.
  • the delay 36 ′ may be controlled by the control device 37 to determine the delay settings for storage in the storage area 18 , as shown.
  • FIG. 5 is a more detailed block diagram of the preprocessor 12 utilized during the training mode, according to one embodiment.
  • the central operation may be a time merge operation (or a merge operation based on some other independent variable), represented by block 40 .
  • a pre-time merge process may be performed, as indicated by block 42 .
  • the data may be subjected to a post-time merge process, as indicated by block 44 .
  • the output of the post-time merge process block 44 may provide the preprocessed data for input to the delay block 16 , shown in FIGS. 3B and 4B, and described above.
  • a controller 46 may be included for controlling the process operation of the blocks 40 - 44 , the outputs of which may be input to the controller 46 on lines 48 .
  • the controller 46 may be interfaced with a functional algorithm storage area 50 through a bus 52 and a time merge algorithm 54 through a bus 56 .
  • the functional algorithm storage area 50 may be operable to store various functional algorithms that may be mathematical, logical, etc., as described below.
  • the time merge algorithm storage area 54 may be operable to contain various time merge formats that may be utilized, such as extrapolation, interpolation or a boxcar method, among others.
  • a process sequence storage area 58 may be included that may be operable to store the sequence of the various processes that are determined during the training mode. As shown, an interface to these stored sequences may be provided by a bi-directional bus 60 .
  • the controller 46 may determine which of the functional algorithms are to be applied to the data and which of the time merge algorithms are to be applied to the data in accordance with instructions received from an operator input through an input/output device 62 .
  • the process sequence in the storage area 58 may be utilized to apply the various functional algorithms and time merge algorithms to input data, for use in operation or control of the real-time system or process.
  • FIG. 6 is a simplified block diagram of a time merge operation, according to one embodiment. All of the input data x(t) may be input to the time merge block 40 to provide time-merged data x_D(t) on the output thereof. Although not shown, the output target data y(t) may also be processed through the time merge block 40 to generate time merged output data y′(t). Thus, in one embodiment, input data x(t) and/or target data y(t) may be processed through the time merge block 40 to homogenize the time-dependence of the data.
  • input data x(v) and/or target data y(v) may be processed through the merge block 40 to homogenize the dependence of the data with respect to some other independent variable v (i.e., instead of time t).
  • dependence of the data on time t is assumed, however, the techniques are similarly applicable to data which depend on other variables.
  • the time-merge operation may comprise a transform that takes one or more columns of data, x_i(t_i), such as that shown in FIG. 7A, with n_i time samples at times t_i′. That is, the time-merge operation may comprise a function, Ω, that produces a new set of data {x′} on a new time scale t′ from the given set of data x(t) sampled at t.
  • This function may be performed via any of a variety of conventional extrapolation, interpolation, or box-car algorithms (among others).
  • An example representation as a C-language callable function is shown below:
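  • A minimal single-column sketch of such a callable (the name, signature, and linear-interpolation core are assumptions rather than the patent's actual listing):

    /* Sketch of a time-merge callable for a single column: resamples the
     * n_old samples (t, x) of one variable onto the new time-scale vector
     * t_new by linear interpolation, holding the end values beyond the
     * range of the raw data. Timestamps are assumed increasing. */
    void time_merge(const double *t, const double *x, int n_old,
                    const double *t_new, double *x_new, int n_new)
    {
        int j = 0;
        for (int i = 0; i < n_new; i++) {
            /* advance to the raw interval [t[j], t[j+1]] holding t_new[i] */
            while (j + 1 < n_old && t[j + 1] <= t_new[i])
                j++;
            if (j + 1 >= n_old)
                x_new[i] = x[n_old - 1];     /* past the end: hold last value */
            else if (t_new[i] <= t[0])
                x_new[i] = x[0];             /* before the start: hold first */
            else {
                double a = (t_new[i] - t[j]) / (t[j + 1] - t[j]);
                x_new[i] = x[j] + a * (x[j + 1] - x[j]);    /* interpolate */
            }
        }
    }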
  • x_i, t_i are vectors of the old values and old times; x_1′ . . . x_k′ are vectors of the new values; and t′ is the new time-scale vector.
  • FIG. 8A shows a data table with bad, missing, or incomplete data.
  • the data table may consist of data with time disposed along a vertical scale and the samples disposed along a horizontal scale. Each sample may include many different pieces of data, with two data intervals illustrated. It is noted that when the data are examined for both the data sampled at time interval “1” and the data sampled at time interval “2”, some portions of the data result in incomplete patterns. This is illustrated by a dotted line 63 , where it may be seen that some data are missing in the data sampled at time interval “1” and some data are missing in time interval “2”. A complete support vector machine pattern is illustrated in box 64 , where all the data are complete.
  • There is also a time difference between the data sampled at time interval “1” and the data sampled at time interval “2”.
  • At time interval “1”, the data are essentially present for all steps in time, whereas data sampled at time interval “2” are only sampled periodically relative to data sampled at time interval “1”.
  • a data reconciliation procedure may be implemented that may fill in the missing data, for example, by interpolation, and may also reconcile between the time samples in time interval “2” such that the data are complete for all time samples for both time interval “1” and time interval “2”.
  • the support vector machine based models that are utilized for time-series prediction and control may require that the time-interval between successive training patterns be constant. Since the data generated from real-world systems may not always be on the same time scale, it may be desirable to time-merge the data before it is used for training or running the support vector machine based model. To achieve this time-merge operation, it may be necessary to extrapolate, interpolate, average, or compress the data in each column over each time-region so as to give input values x′(t) that are on the appropriate time-scale. All of these operations are referred to herein as “data reconciliation”.
  • the reconciliation algorithm utilized may include linear estimates, spline-fit, boxcar algorithms, etc. If the data are sampled too frequently in the time-interval, it may be necessary to smooth or average the data to generate samples on the desired time scale. This may be done by window averaging techniques, sparse-sample techniques or spline techniques, among others.
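As a concrete illustration of the boxcar (window-averaging) style of reconciliation named above, the following sketch averages all raw samples falling in each output bin; the names and the binning convention are assumptions, and empty bins are marked missing for a later fill step (e.g., interpolation).

```c
#include <math.h>
#include <stddef.h>

/* Boxcar-average sketch: compress finely sampled data onto a coarser time
 * scale by averaging every raw sample in the bin [tp[k], tp[k] + dt).
 * Bins containing no raw samples are marked NAN (missing). */
void boxcar_merge(const double *x, const double *t, size_t n,
                  double *xp, const double *tp, size_t np, double dt)
{
    for (size_t k = 0; k < np; k++) {
        double sum = 0.0;
        size_t count = 0;
        for (size_t i = 0; i < n; i++) {
            if (t[i] >= tp[k] && t[i] < tp[k] + dt) {
                sum += x[i];
                count++;
            }
        }
        xp[k] = count ? sum / count : NAN;
    }
}
```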
  • x′(t) is a function of all or a portion of the raw values x(t) given at present and past times:
  • x′(t) = f(x_1(t_N), x_2(t_N), …, x_n(t_N); x_1(t_{N−1}), x_2(t_{N−1}), …, x_n(t_{N−1}); …; x_1(t_1), x_2(t_1), …, x_n(t_1))   (30)
  • Polynomial, spline-fit, or support vector machine extrapolation techniques may use Equation 30, according to one embodiment.
  • training of the support vector machine may actually use interpolated values, i.e., Equation 31, wherein, in the case of interpolation, t_N > t.
  • FIG. 8B illustrates one embodiment of an input data pattern and target output data pattern illustrating the preprocess operation for both preprocessing input data to provide time merged output data and also preprocessing the target output data to provide preprocessed target output data for training purposes.
  • the data input x(t) may include a vector with many inputs, x_1(t), x_2(t), …, x_n(t), each of which may be on a different time scale. It is desirable that the output x′(t) be extrapolated or interpolated to insure that all data are present on a single time scale.
  • If the data at x_1(t) were on a time scale of one sample every second, represented by the time t_k, and the output time scale were desired to be the same, this would require time merging the rest of the data to that time scale.
  • the data x_2(t) occurs approximately once every three seconds, it also being noted that this may be asynchronous data, although it is illustrated as being synchronized. In other words, in some embodiments, the time intervals between data samples may not be constant.
  • the data buffer in FIG. 8B is illustrated in actual time. The reconciliation may be as simple as holding the last value of the input x_2(t) until a new value is input thereto, and then discarding the old value.
  • This technique may also be used in the case of missing data.
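A minimal sketch of this hold-last-value (sample-and-hold) reconciliation follows, assuming both time vectors are ascending and NAN marks missing raw samples; the names are illustrative only.

```c
#include <math.h>
#include <stddef.h>

/* Sample-and-hold sketch: for each slice tp[k] of the output time scale,
 * emit the most recent raw value of x2.  Missing samples (NAN) are
 * skipped, so the last good value persists across gaps. */
void hold_last_value(const double *x2, const double *t2, size_t n2,
                     double *xp, const double *tp, size_t np)
{
    double last = NAN;
    size_t j = 0;
    for (size_t k = 0; k < np; k++) {
        while (j < n2 && t2[j] <= tp[k]) {
            if (!isnan(x2[j]))
                last = x2[j];   /* a new value arrives: discard the old */
            j++;
        }
        xp[k] = last;           /* hold until the next new value */
    }
}
```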
  • a reconciliation routine as described above may also be utilized to insure that data are always on the output for each time slice of the vector x′(t).
  • This technique may also be used with respect to the target output which is preprocessed to provide the preprocessed target output y′(t).
  • one set of data may be taken on an hourly basis and another set of data taken on a quarter hour (i.e., every fifteen minutes) basis, thus, for three out of every four data records on the quarter hour basis there will be no corresponding data from the hourly set.
  • These areas of missing data must be filled in to assure that all data are presented at commonly synchronized times to the support vector machine.
  • the time scales of the two data sets must be the same, and so must be reconciled.
  • the data sample periods may be non-periodic, producing asynchronous data, while another data set may be periodic or synchronous, e.g., hourly, thus, their time scales differ.
  • the asynchronous data may be reconciled to the synchronous data.
  • one data set may have a “hole” in the data, as described above, compared to another set, i.e., some data may be missing in one of the data sets.
  • the presence of the hole may be considered to be an asynchronous or anomalous time interval in the data set, which may then require reconciliation with a second data set to be useful with the second set.
  • two data sets may have two different respective time scales, e.g., an hourly basis and a 15 minute basis.
  • the desired time scale for input data to the SVM may have a third basis, e.g., daily.
  • the two data sets may need to be reconciled with the third timeline prior to being used as input to the SVM.
  • FIG. 8C illustrates one embodiment of the time merge operation. Illustrated are two formatted tables, one for the set of data x_1(t) and x_2(t), the second for the set of data x′_1(t) and x′_2(t).
  • the data set for x_1(t) is illustrated as being on one time scale and the data set for x_2(t) is on a second, different time scale.
  • one value of the data set x_1(t) is illustrated as being bad, and is therefore “cut” from the data set, as described below.
  • the preprocessing operation fills in, i.e., replaces, this bad data and then time merges the data, as shown.
  • the time scale for x_1(t) may be utilized as the time scale for the time-merged data, such that the time-merged data x′_1(t) is on the same time scale, with the “cut” value filled in as a result of the preprocessing operation, and the data set x_2(t) is processed in accordance with one of the time merge algorithms to provide data for x′_2(t) on the same time scale as the data x′_1(t).
  • FIG. 9A is a high level flowchart depicting one embodiment of a preprocessing operation for preprocessing input data to a support vector machine. It should be noted that in other embodiments, various of the steps may be performed in a different order than shown, or may be omitted. Additional steps may also be performed.
  • the preprocess may be initiated at a start block 902 .
  • input data for the support vector machine may be received, such as from a run-time system, or data storage.
  • the received data may be stored in an input buffer.
  • the support vector machine may comprise a non-linear model having a set of model parameters defining a representation of a system.
  • the model parameters may be capable of being trained, i.e., the SVM may be trained via the model parameters or coefficients.
  • the input data may be associated with at least two inputs of a support vector machine, and may be on different time scales relative to each other. In the case of missing data associated with a single input, the data may be considered to be on different timescales relative to itself, in that the data gap caused by the missing data may be considered an asynchronous portion of the data.
  • the scales of the input data may be based on a different independent variable than time.
  • one time scale may be asynchronous, and a second time scale may be synchronous with an associated time sequence based on a time interval.
  • both time scales may be asynchronous.
  • both time scales may be synchronous, but based on different time intervals.
  • this un-preprocessed input data may be considered “raw” input data.
  • a desired time scale (or other scale, depending on the independent variable) may be determined. For example, a synchronous time scale represented in the data (if one exists) may be selected as the desired time scale. In another embodiment, a predetermined time scale may be selected.
  • the input data may be reconciled to the desired time scale.
  • the input data stored in the input buffer of 904 may be reconciled by a time merge device, such as a software program, thereby generating reconciled data.
  • all of the input data for all of the inputs may be on the same time scale.
  • the merge device may reconcile the input data such that all of the input data are on the same independent variable scale.
  • the time merge device may be operable to reconcile the input data to fill in the missing data, thereby reconciling the gap in the data to the time scale of the data set.
  • the input data associated with a first one or more of the inputs may have an associated time sequence based on a first time interval, and a second one or more of the inputs may have an associated time sequence based on a second time interval.
  • the time merge device may be operable to reconcile the input data associated with the first one or more of the inputs to the input data associated with the second one or more of the inputs, thereby generating reconciled input data associated with the first one or more of the inputs having an associated time sequence based on the second time interval.
  • the input data associated with a first one or more of the inputs may have an associated time sequence based on a first time interval
  • the input data associated with a second different one or more of the inputs may have an associated time sequence based on a second time interval
  • the time merge device may be operable to reconcile the input data associated with the first one or more of the inputs and the input data associated with the second one or more of the inputs to a time scale based on a third time interval, thereby generating reconciled input data associated with the first one or more of the inputs and the second one or more of the inputs having an associated time sequence based on the third time interval.
  • the input data associated with a first one or more of the inputs may be asynchronous, and wherein the input data associated with a second one or more of the inputs may be synchronous with an associated time sequence based on a time interval.
  • the time merge device may be operable to reconcile the asynchronous input data to the synchronous input data, thereby generating reconciled input data associated with the first one or more of the inputs, wherein the reconciled input data comprise synchronous input data having an associated time sequence based on the time interval.
  • the reconciled input data may be output.
  • an output device may output the data reconciled by the time merge device as reconciled data, where the reconciled data comprise the input data to the support vector machine.
  • the received input data of 904 may comprise training data which includes target input data and target output data.
  • the reconciled data may comprise reconciled training data which includes reconciled target input data and reconciled target output data which are both based on a common time scale (or other common scale).
  • the support vector machine may be operable to be trained according to a predetermined training algorithm applied to the reconciled target input data and the reconciled target output data to develop model parameter values such that the support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
  • the model parameters of the support vector machine may be trained based on the reconciled target input data and the reconciled target output data, after which the support vector machine may represent the system.
  • the input data of 904 may comprise run-time data, such as from the system being modeled, and the reconciled data of 908 may comprise reconciled run-time data.
  • the support vector machine may be operable to receive the run-time data and generate run-time output data.
  • the run-time output data may comprise control parameters for the system.
  • the control parameters may be usable to determine control inputs to the system for run-time operation of the system. For example, in an e-commerce system, control inputs may include such parameters as advertisement or product placement on a website, pricing, and credit limits, among others.
  • the run-time output data may comprise predictive output information for the system.
  • the predictive output information may be usable in making decisions about operation of the system.
  • the predictive output information may indicate a recommended shift in investment strategies, for example.
  • the predictive output information may indicate production costs related to increased energy expenses, for example.
  • FIG. 9B is a high level flowchart depicting another embodiment of a preprocessing operation for preprocessing input data to a support vector machine.
  • various of the steps may be performed in a different order than shown, or may be omitted. Additional steps may also be performed.
  • the input data may include one or more outlier values which may be disruptive or counter-productive to the training and/or operation of the support vector machine.
  • the preprocess may be initiated at a start block 902 . Then, in 904 , input data for the support vector machine may be received, as described above with reference to FIG. 9A, and may be stored in an input buffer.
  • the received data may be analyzed to determine any outliers in the data set.
  • the data may be analyzed to determine which, if any, data values fall above or below an acceptable range.
  • the outliers may be removed from the data, thereby generating corrected input data.
  • the removal of outliers may result in a data set with missing data, i.e., with gaps in the data.
  • In one embodiment, a graphical user interface (GUI) may be provided for displaying the data to an operator.
  • the GUI may thus provide a means for the operator to visually inspect the data for bad data points, i.e., outliers.
  • the GUI may further provide various tools for modifying the data, including tools for “cutting” the bad data from the set.
  • the detection and removal of the outliers may be performed by the user via the GUI.
  • the user may use the GUI to specify one or more algorithms which may then be applied to the data programmatically, i.e., automatically.
  • a GUI may be provided which is operable to receive user input specifying one or more data filtering operations to be performed on the input data, where the one or more data filtering operations operate to remove and/or replace the one or more outlier values.
  • the GUI may be further operable to display the input data prior to and after performing the filtering operations on the input data.
  • the GUI may be operable to receive user input specifying a portion of said input data for the data filtering operations. Further details of the GUI are provided below with reference to FIGS. 10A-10F.
  • the removed data may optionally be replaced, as indicated in 911 .
  • the preprocessing operation may “fill in” the gap resulting from the removal of outlying data.
  • Various techniques may be brought to bear to generate the replacement data, including, but not limited to, clipping, interpolation, extrapolation, spline fits, sample/hold of a last prior value, etc., as are well known in the art.
  • the removed outliers may be replaced in a later stage of preprocessing, such as the time merge process described above.
  • the time merge process will detect that data are missing, and operate to fill the gap.
  • the preprocess may operate as a data filter, analyzing input data, detecting outliers, and removing the outliers from the data set.
  • the filter parameters may simply be a predetermined value limit or range against which a data value may be tested. If the value falls outside the range, the value may be removed, or clipped to the limit value, as desired.
  • the limit(s) or range may be determined dynamically. For example, in one embodiment, the range may be determined based on the standard deviation of a moving window of data in the data set, e.g., any value outside a two sigma band for a moving window of 100 data points may be clipped or removed.
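A sketch of the moving-window variant just described (a two-sigma band over a window of 100 points) is given below; the names are assumptions, and leaving the first window of samples unfiltered is a simplification.

```c
#include <math.h>
#include <stddef.h>

#define WINDOW 100   /* moving-window length, per the example above */
#define NSIGMA 2.0   /* half-width of the acceptance band, in sigmas */

/* Dynamic outlier filter sketch: clip each value lying outside a
 * two-sigma band computed over the trailing window of data.  To remove
 * rather than clip, set the offending value to NAN instead. */
void clip_outliers(double *x, size_t n)
{
    for (size_t i = WINDOW; i < n; i++) {
        double mean = 0.0, var = 0.0;
        for (size_t j = i - WINDOW; j < i; j++)
            mean += x[j];
        mean /= WINDOW;
        for (size_t j = i - WINDOW; j < i; j++)
            var += (x[j] - mean) * (x[j] - mean);
        double sigma = sqrt(var / WINDOW);
        if (x[i] > mean + NSIGMA * sigma)
            x[i] = mean + NSIGMA * sigma;
        else if (x[i] < mean - NSIGMA * sigma)
            x[i] = mean - NSIGMA * sigma;
    }
}
```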
  • the data filter may also operate to replace the outlier values with more appropriate replacement values.
  • the received input data of 904 may comprise training data including target input data and target output data
  • the corrected data may comprise corrected training data which includes corrected target input data and corrected target output data
  • the support vector machine may be operable to be trained according to a predetermined training algorithm applied to the corrected target input data and the corrected target output data to develop model parameter values such that the support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
  • the model parameters of the support vector machine may be trained based on the corrected target input data and the corrected target output data, after which the support vector machine may represent the system.
  • the input data of 904 may comprise run-time data, such as from the system being modeled, and the corrected data of 908 may comprise corrected run-time data.
  • the support vector machine may be operable to receive the corrected run-time data and generate run-time output data.
  • the run-time output data may comprise control parameters for the system.
  • the control parameters may be usable to determine control inputs to the system for run-time operation of the system. For example, in an e-commerce system, control inputs may include such parameters as advertisement or product placement on a website, pricing, and credit limits, among others.
  • the run-time output data may comprise predictive output information for the system.
  • the predictive output information may be usable in making decisions about operation of the system.
  • the predictive output information may indicate a recommended shift in investment strategies, for example.
  • the predictive output information may indicate production costs related to increased energy expenses, for example.
  • the preprocessor may be operable to detect and remove and/or replace outlying data in an input data set for the support vector machine.
  • FIG. 9C is a detailed flowchart depicting one embodiment of the preprocessing operation.
  • the preprocessing operations described above with reference to FIGS. 9A and 9B are both included. It should be noted that in other embodiments, various of the steps may be performed in a different order than shown, or may be omitted. Additional steps may also be performed.
  • the flow chart may be initiated at start block 902 and then may proceed to a decision block 903 to determine if there are any pre-time merge process operations to be performed. If so, the program may proceed to a decision block 905 to determine whether there are any manual preprocess operations to be performed. If so, the program may continue along the “Yes” path to a function block 912 to manually preprocess the data. In the manual preprocessing of data 912 , the data may be viewed in a desired format by the operator and the operator may look at the data and eliminate, “cut”, or otherwise modify obviously bad data values.
  • this data value may be “cut” such that it is no longer present in the data set and thereafter appears as missing data.
  • This manual operation is in contrast to an automatic operation where all values may be subjected to a predetermined algorithm to process the data.
  • an algorithm may be generated or selected that either cuts out all data above/below a certain value or clips the values to a predetermined maximum/minimum.
  • the algorithm may constrain values to a predetermined range, either removing the offending data altogether, or replacing the values, using the various techniques described above, including clipping, interpolation, extrapolation, splines, etc.
  • the clipping to a predetermined maximum/minimum is an algorithmic operation that is described below.
  • the program may proceed to a decision block 914 . It is noted that if the manual preprocess operation is not utilized, the program may continue from the decision block 905 along the “No” path to the input of decision block 914 .
  • the decision block 914 may be operable to determine whether an algorithmic process is to be applied to the data. If so, the program may continue along a “Yes” path to a function block 916 to select a particular algorithmic process for a given set of data. After selecting the algorithmic process, the program may proceed to a function block 918 to apply the algorithmic process to the data and then to a decision block 920 to determine if more data are to be processed with the algorithmic process.
  • the program may flow back around to the input of the function block 916 along a “Yes” path, as shown.
  • the program may flow along a “No” path from decision block 920 to a function block 922 to store the sequence of algorithmic processes such that each data set has the desired algorithmic processes applied thereto in the sequence. Additionally, if the algorithmic process is not selected by the decision block 914 , the program may flow along a “No” path to the input of the function block 922 .
  • the program may flow to a decision block 924 to determine if a time merge operation is to be performed.
  • the program also may proceed along a “No” path from the decision block 903 to the input of decision block 924 if the pre-time-merge process is not required.
  • the program may continue from the decision block 924 along the “Yes” path to a function block 926 if the time merge process has been selected, and then the time merge operation may be performed.
  • the time merge process may then be stored with the sequence as part thereof in block 928 .
  • the program then may proceed to a decision block 930 to determine whether the post time merge process is to be performed. If the time merge process is not performed, as determined by the decision block 924 , the program may flow along the “No” path therefrom to the decision block 930 .
  • the program may continue along the “Yes” path from the decision block 930 to a function block 932 to select the algorithmic process and then to a function block 934 to apply the algorithmic process to the desired set of data and then to a decision block 936 to determine whether additional sets of data are to be processed in accordance with the algorithmic process. If so, the program may flow along the “Yes” path back to the input of function block 932 , and if not, the program may flow along the “No” path to a function block 938 to store the new sequence of algorithmic processes with the sequence and then the program may proceed to a DONE block 1000 . If the post time merge process is not to be performed, the program may flow from the decision block 930 along the “No” path to the DONE block 1000 .
  • In FIGS. 10A-10E there are illustrated embodiments of three plots of data.
  • FIGS. 10A-10E also illustrate one embodiment of a graphical user interface (GUI) for various data manipulation/reconciliation operations which may be included in one embodiment of the present invention.
  • each figure includes one plot for an input “temp1”, one plot for an input “press2” and one plot for an output “ppm”, as may relate to a chemical plant.
  • the first input may relate to a temperature measurement
  • the second input may relate to a pressure measurement
  • the output data may correspond to a parts per million variation.
  • In the temp1 data there are two points of data, 108 and 110, which need to be “cut” from the data, as they are obviously bad data points. Such data points that lie outside the acceptable range of a data set are generally referred to as “outliers”. These two data points appear as cut data in the data set, as shown in FIG. 10C, which then may be filled in or replaced by the appropriate time merge operation utilizing extrapolation, interpolation, or other techniques, as desired.
  • the data preprocessor may include a data filter which may be operable to analyze input data, detect outliers, and remove the outliers from the data set.
  • the applied filter may simply be a predetermined value limit or range against which a data value may be tested. If the value falls outside the range, the value may be removed, or clipped to the limit value, as desired.
  • the limit(s) or range may be determined dynamically. For example, in one embodiment, the range may be determined based on the standard deviation of a moving window of data in the data set, e.g., any value outside a two sigma band for a moving window of 100 data points may be clipped or removed.
  • the filter may replace any removed outliers using any of such techniques as extrapolation and interpolation, among others.
  • the removed outliers may be replaced in a later stage of processing, such as the time merge process described herein. In this embodiment, the time merge process will detect that data are missing, and operate to fill the gaps.
  • FIG. 10A shows the raw data.
  • FIG. 10B shows the use of a cut data region tool 115 .
  • FIG. 10B shows the points 108 and 110 highlighted by dots showing them as cut data points. In one embodiment of the GUI presented on a color screen, these dots may appear in red.
  • FIG. 10D shows a vertical cut of the data, cutting across several variables simultaneously. Applying this cut may cause all of the data points to be marked as cut, as shown in FIG. 10E.
  • FIG. 10F flowcharts one embodiment of the steps involved in cutting or otherwise modifying the data.
  • a region of data may be selected by a set of boundaries 112 (in FIG. 10D), the results of which may be utilized to block out data. For example, if it were determined that data during a certain time period were invalid due to various reasons, these data may be removed from the data sets, with the subsequent preprocessing operable to fill in the “blocked” or “cut” data.
  • the data may be displayed as illustrated in FIGS. 10A-10E, and the operator may select various processing techniques to manipulate the data via various tools, such as cutting, clipping and viewing tools 107, 111, 113, that may allow the user to select data items to cut, clip, transform or otherwise modify.
  • In the mode for removing data, this may be referred to as a manual manipulation of the data.
  • algorithms may be applied to the data to change the value of that data.
  • the data may be rearranged in the spreadsheet format of the data.
  • the operator may view the new data as the operation is being performed.
  • the user may be provided the ability to utilize a graphic image of data in a database, manipulate the data on a display in accordance with the selection of the various cutting tools, and modify the stored data in accordance with these manipulations.
  • a tool may be utilized to manipulate multiple variables over a given time range to delete all of that data from the input database and reflect it as “cut” data.
  • the data set may then be considered to have missing data, which may require a data reconciliation scheme in order to replace this data in the input data stream.
  • the data may be “clipped”; that is, a graphical tool may be utilized to determine the level at which all data above (or below) that level is modified. All data in the data set, even data not displayed, may be modified to this level. This in effect may constitute applying an algorithm to that data set.
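A minimal sketch of this clipping operation (names assumed, not from the patent) applies the chosen level to an entire data column:

```c
#include <stddef.h>

/* Clip every value in the column to the level chosen with the graphical
 * tool; values above the level are modified, all others pass through.
 * A floor level would mirror this logic with "<". */
void clip_to_level(double *col, size_t n, double level)
{
    for (size_t i = 0; i < n; i++)
        if (col[i] > level)
            col[i] = level;
}
```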
  • In FIG. 10F, the flowchart depicts one embodiment of an operation of utilizing the graphical tools for cutting data.
  • An initiation block, data set 117 may indicate the acquisition of the data set.
  • the program then may proceed to a decision block 119 to determine if the variables have been selected and manipulated for display. If not, the program may proceed along a “No” path to a function block 121 to select the display type and then to a function block 123 to display the data in the desired format.
  • the program then may continue to a decision block 125 wherein tools for modifying the data are selected. When this is done, the program may continue along a “DONE” line back to decision block 119 to determine if all of the variables have been selected.
  • the program may proceed to a decision block 127 to determine if an operation is cancelled and, if so, may proceed back around to the decision block 125 . If the operation is not cancelled, the program may continue along a “No” path to function block 129 to apply the algorithmic transformation to the data and then to function block 131 to store the transform as part of a sequence. The program then may continue back to function block 123 . This may continue until the program continues along the “DONE” path from decision block 125 back to decision block 119 .
  • the program may proceed from decision block 119 along a “Yes” path to decision block 133 to determine if the transformed data are to be saved. If not, the program may proceed along a “No” path to “DONE” block 135. If the transformed data are to be saved, the program may continue from the decision block 133 along the “Yes” path to a function block 137 to transform the data set and then to the “DONE” block 135.
  • FIG. 11 is a diagrammatic view of a display (i.e., a GUI) for performing algorithmic functions on the data, according to one embodiment.
  • the display may include a first numerical template 114 which may provide a numerical keypad function.
  • a window 116 may be provided that may display the variable(s) that is/are being operated on.
  • the variables that are available for manipulation may be displayed in a window 118 .
  • the various variables are arranged in groups, one group associated with a first date and time, e.g., variables temp1 and press1, and a second group associated with a second date and time, e.g., variables temp2 and press2, for example, prior to time merging.
  • a mathematical operator window 120 may be included that may provide various mathematical operators (e.g., “+”, “−”, etc.) which may be applied to the variables.
  • Various logical operators may also be available in the window 120 (e.g., “AND”, “OR”, etc.).
  • a functions window 122 may be included that may allow selection of various mathematical functions, logical functions, etc. (e.g., exp, frequency, ln, log, max, etc.) for application to any of the variables, as desired.
  • variable temp1 may be selected to be processed and the logarithmic function selected for application thereto.
  • the variable temp1 may first be selected from window 118 and then the logarithmic function “log” selected from the window 122 .
  • the left parenthesis may then be selected from window 120 , followed by the selection of the variable temp1 from window 118 , then followed by the selection of the right parenthesis from window 120 . This may result in the selection of an algorithmic process which includes a logarithm of the variable temp1.
  • This may then be stored as a sequence, such that upon running the data through the run-time sequence, data associated with the variable temp1 has the logarithmic function applied thereto prior to inputting to the run-time system model 26 . This process may be continued or repeated for each desired operation.
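Purely as an assumed illustration (the patent does not specify a storage format), such a stored sequence might be represented as a list of (variable, transform) pairs applied to each data column before it reaches the run-time system model:

```c
#include <math.h>
#include <stddef.h>
#include <string.h>

typedef double (*transform_fn)(double);

/* One stored preprocessing step: apply a transform to a named variable. */
struct step { const char *var; transform_fn fn; };

static double log_xform(double v) { return log(v); }

/* The stored sequence built via the GUI above: log(temp1). */
static const struct step sequence[] = { { "temp1", log_xform } };

/* Apply every stored step matching this variable's column. */
void apply_sequence(const char *var, double *col, size_t n)
{
    for (size_t s = 0; s < sizeof sequence / sizeof sequence[0]; s++) {
        if (strcmp(sequence[s].var, var) != 0)
            continue;
        for (size_t i = 0; i < n; i++)
            col[i] = sequence[s].fn(col[i]);
    }
}
```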
  • the resultant data may be as depicted in Table 1, as shown in FIG. 12. It may be seen in Table 1 that there is a time scale difference, one group associated with the time TIME_1 and one group associated with the time TIME_2. It may be seen that the first time scale is based on an hourly interval and that the second time scale is based on a two-hour interval. Any “cut” data (not shown) would appear as missing data.
  • variable temp1 is processed by taking a logarithm thereof. This may result in a variation of the set of data associated with the variable temp1. This is illustrated in Table 2, as shown in FIG. 12.
  • the sequence of operations associated therewith may determine the data that were cut out of the original data set for data temp1 and also the algorithmic processes associated therewith, these being in a sequence which is stored in the sequence block 14 and which may be examined via a data-column properties module 113, shown in FIGS. 10A-10E, as illustrated in Properties 2 of FIG. 12.
  • the operator may select the time merge function 115 , illustrated in FIG. 10B, and may specify the time scale and type of time merge algorithm. For example, in FIG. 10B, a one-hour time-scale is selected and the box-car algorithm of merging is used.
  • the time scale may be disposed on an hourly interval with the time merge process. This is illustrated in Table 3 of FIG. 12, wherein all of the data are on a common time scale and the cut data has been extrapolated to insert new data.
  • the sequence after time merge may include the data that are cut from the original data sets, the algorithmic processes utilized during the pre-time merge processing, and the time merge data. This is illustrated in Properties 3, as shown in FIG. 12.
  • the display of FIG. 11 may again be pulled up, and another algorithmic process selected.
  • One example may be to take the variable temp1 after time merge and add a value of 5000 to this variable. This may result in each value in the column associated with the variable temp1 being increased by that value, as illustrated by the data in Table 4 of FIG. 12.
  • the sequence may then be updated using the sequence presented in Properties 4, as shown in FIG. 12.
  • FIG. 13 is a block diagram of one embodiment of a process flow, such as, for example, a process flow through a plant.
  • operation and control of a plant is an exemplary application of one embodiment of the present invention, any other process may also be suitable for application of the systems and methods described herein, including scientific, medical, financial, stock and/or bond management, and manufacturing, among others.
  • the flow meter 130 may provide a variable output flow1.
  • the flow may continue to a process block 132 , wherein various plant processes may be carried out.
  • Various plant inputs may be provided to this process block 132 .
  • the flow may then continue to a temperature gauge 134 , which may output a variable temp1.
  • the flow may proceed to a process block 136 to perform other plant processes, these also receiving plant inputs.
  • the flow may then continue to a pressure gauge 138 , which may output a variable press1.
  • the flow may continue through various other process blocks 139 and other parameter measurement blocks 140 , resulting in an overall plant output 142 which may be the desired plant output.
  • FIG. 14 is a timing diagram illustrating the various effects of the output variables from the plant and the plant output, according to one embodiment.
  • the output variable flow1 may experience a change at a point 144 .
  • the output variable temp1 may experience a change at a point 146
  • the variable press1 may experience a change at a point 148 .
  • the corresponding change in the output may not be time synchronous with the changes in the variables.
  • changes in the plant output may occur at points 150, 152 and 154, corresponding to the changes in the variables at points 144-148, respectively.
  • the change in the output at point 150 associated with the change in the variable flow1 at point 144 may occur after a delay D2.
  • the change in the output at point 152 associated with the change in the variable temp1 may occur after a delay D3.
  • the change in the output at point 154 associated with the change in the variable press1 may occur after a delay D1.
  • these delays may be accounted for during training, and/or during the run-time operation.
  • FIG. 16 is a diagrammatic view of the method for implementing the delay, according to one embodiment.
  • variable length buffers may be provided in each data set after preprocessing, the length of which may correspond to the longest delay. Multiple taps may be provided in each of the buffers to allow various delays to be selected.
  • In FIG. 16 there are illustrated four buffers 156, 158, 160 and 162, associated with the preprocessed inputs x′_1(t), x′_2(t), x′_3(t), and x′_4(t).
  • Each of the buffers has a length of N, such that the first buffer outputs the delay input x_1D(t), the second buffer 158 outputs the delay input x_2D(t), and the third buffer 160 outputs the delay input x_3D(t).
  • the buffer 162 has a delay tap that may provide for a delay of “n−1” to provide an output x_4D(t).
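A minimal sketch of such a tapped delay buffer follows; the structure and names are assumptions, and the depth N is illustrative (in practice it equals the longest delay required). Because any number of taps can be read from the same buffer after each shift, the delayed versions of a variable need not be replicated as separate data columns.

```c
#define N 8   /* buffer depth = longest delay; illustrative value */

/* Tapped delay buffer: each new preprocessed sample x'(t) shifts in at
 * slot 0, and a delayed input x_D(t) is read from any tap 0..N-1
 * (tap N-1 corresponds to the "n-1" delay of buffer 162). */
struct delay_buffer {
    double slots[N];
};

/* Shift in one sample and return the value at the selected tap. */
double delay_step(struct delay_buffer *b, double x_new, int tap)
{
    for (int i = N - 1; i > 0; i--)    /* age every stored sample by one */
        b->slots[i] = b->slots[i - 1];
    b->slots[0] = x_new;
    return b->slots[tap];              /* tap assumed in 0..N-1 */
}
```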
  • FIG. 17 illustrates one embodiment of a display that may be provided to the operator for selecting the various delays to be applied to the input variables and the output variables utilized in training.
  • With delays for the variable temp1 of −4.0, −3.5, and −3.0, three separate input variables have been selected for input to the training model 20.
  • three separate outputs are shown as selected, one for a delay of 0.0, one for a delay of 0.5, and one for a delay of 1.0, to predict present and future values of the variable.
  • Each of these may be processed to vary the absolute value of the delays associated with the input variables. It may therefore be seen that a maximum buffer of −4.0 for an output of 0.0 may be needed in order to provide for the multiple taps. Further, it may be seen that it is not necessary to completely replicate the data in any of the delayed variable columns as a separate column, which would otherwise increase the amount of memory utilized.
  • FIG. 18 is a block diagram of one embodiment of a system for generating process dependent delays.
  • a buffer 170 is illustrated having a length of N, which may receive an input variable x′_n(t) from the preprocessor 12 to provide on the output thereof an output x_nD(t) as a delayed input to the training model 20.
  • a multiplexer 172 may be provided which has multiple inputs, one from each of the n buffer registers, with a τ-select circuit 174 provided for selecting which of the taps to output.
  • the value of τ may be a function of other variables or parameters such as temperature, pressure, flow rates, etc. For example, it may be noted empirically that the delays are a function of temperature.
  • the temperature relationship may be placed in the block 174, and then the external parameters input and the value of τ utilized to select the various taps input to the multiplexer 172 for output therefrom as a delayed input.
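To illustrate the τ-select function of block 174, a hypothetical tap-selection routine is sketched below; the linear temperature-to-τ relationship and its constants are placeholders standing in for whatever empirical dependence is stored in the block.

```c
/* Hypothetical tau-select for the multiplexer 172: compute the delay tap
 * from an external parameter (here temperature), clamped to the buffer. */
int select_tap(double temperature, int n_taps)
{
    double tau = 0.5 * temperature - 10.0;  /* assumed empirical fit */
    int tap = (int)(tau + 0.5);             /* round to the nearest tap */
    if (tap < 0) tap = 0;
    if (tap >= n_taps) tap = n_taps - 1;
    return tap;
}
```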
  • the system of FIG. 18 may also be utilized in the run-time operation wherein the various delay settings and functional relationships of the delay with respect to the external parameters are stored in the storage area 18 .
  • the external parameters may then be measured and the value of τ selected as a function of this temperature and the functional relationship provided by the information stored in the storage area 18. This is to be compared with the training operation, wherein this information is externally input to the system.
  • FIG. 19 is a block diagram of one embodiment of a preprocessing system for setting delay parameters, where the delay parameters may be learned.
  • the preprocessing system is not illustrated; rather, a table 176 of the preprocess data is shown.
  • the methods for achieving the delay may differ somewhat, as described below.
  • the delay may be achieved by a time delay adjustor 178 , which may utilize the stored parameters in a delayed parameter block 18 ′.
  • the delay parameter block 18 ′ is similar to the delay setting block 18 , with the exception that absolute delays are not contained therein. Rather, information relating to a window of data may be stored in the delay parameter block 18 ′.
  • the time delay adjustor 178 may be operable to select a window of data within each set of data in the table 176, the data labeled x′_1 through x′_n.
  • the time delay adjustor 178 may be operable to receive data within a defined window associated with each of the sets of data x′_1-x′_n and convert this information into a single value for output therefrom as an input value IN_1-IN_n.
  • These may be directly input to a system model 26 ′, which system model 26 ′ is similar to the run-time system model 26 and the training model 20 in that it is realized with a non-linear model (e.g., a support vector machine).
  • the non-linear model is illustrated as having an input layer 179 , a middle layer 180 and an output layer 182 .
  • the middle layer 180 may be operable to map the input layer 179 to the output layer 182 , as described below.
  • this is a non-linear mapping function.
  • the time delay adjustor 178 may be operable to linearly map each of the sets of data x′_1-x′_n in the table 176 to the input layer 179.
  • This mapping function may be dependent upon the delay parameters in the delay parameter block 18 ′. As described below, these parameters may be learned under the control of a learning module 183 , which learning module 183 may be controlled during the support vector machine training in the training mode. It is similar to that described above with respect to FIG. 4.
  • the learning module 183 may be operable to control both the time delay adjustor block 178 and the delay parameter block 18 ′ to change the values thereof in training of the system model 26 ′.
  • target outputs may be input to the output layer 182 and a set of training data input thereto in the form of the chart 176 , it being noted that this is already preprocessed in accordance with the operation as described above.
  • the model parameters of the system model 26 ′ stored in the storage area 22 may then be adjusted in accordance with a predetermined training algorithm to minimize the error.
  • the error may only be minimized to a certain extent for a given set of delays. Only by setting the delays to their optimum values may the error be minimized to the maximum extent. Therefore, the learning module 183 may be operable to vary the parameters in the delay parameter block 18 ′ that are associated with the timing delay adjustor 178 in order to further minimize the error.
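One simple way such a search over delay parameters could proceed is an outer loop that evaluates the trained model at each candidate delay and keeps the setting giving the smallest error. This is only an assumed illustration (the patent does not specify the learning module's search scheme); train_and_score() is a hypothetical helper that trains the system model 26′ at a given delay and returns its error.

```c
/* Hypothetical outer loop for learning a delay parameter. */
extern double train_and_score(int delay);  /* assumed helper */

int learn_delay(int max_delay)
{
    int best = 0;
    double best_err = train_and_score(0);
    for (int d = 1; d <= max_delay; d++) {
        double err = train_and_score(d);
        if (err < best_err) {          /* keep the error-minimizing delay */
            best_err = err;
            best = d;
        }
    }
    return best;
}
```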
  • FIG. 20 is a flowchart illustrating the determination of time delays for the training operation, according to one embodiment.
  • This flowchart may be initiated at a time delay block 198 and may then continue to a function block 200 to select the delays. In one embodiment, this may be performed by the operator as described above with respect to FIG. 17.
  • the program may then continue to a decision block 202 to determine whether a variable τ is selected.
  • If so, the program may continue along a “Yes” path to a function block 204 to receive an external input and vary the value of τ in accordance with the relationship selected by the operator, this being a manual operation in the training mode.
  • the program may then continue to a decision block 206 to determine whether the value of τ is to be learned by an adaptive algorithm. If a variable τ is not selected at the decision block 202, the program may continue around the function block 204 along the “No” path.
  • the program may continue from the decision block 206 to a function block 208 to learn the value of τ adaptively. The program may then proceed to a function block 210 to save the value of τ. If no adaptive learning is required, the program may continue from the decision block 206 along the “No” path to function block 210.
  • the model 20 may be trained, as indicated by a function block 212 and then the parameters may be stored, as indicated by a function block 214 . Following storage of the parameters, the program may flow to a DONE block 216 .
  • FIG. 21 is a flowchart depicting operation of the system in run-time mode, according to one embodiment.
  • the operation may be initiated at a run block 220 and may then proceed to a function block 222 to receive the data and then to a decision block 224 to determine whether the pre-time merge process is to be entered. If so, the program may proceed along a “Yes” path to a function block 226 to preprocess the data with the stored sequence and then to a decision block 228 . If not, the program may continue along the “No” path to the input of decision block 228 . Decision block 228 may determine whether the time merge operation is to be performed.
  • the program may proceed along the “Yes” path to function block 230 to time merge with the stored method and then to the input of a decision block 232 and, if not, the program may continue along the “No” path to the decision block 232 .
  • the decision block 232 may determine whether the post-time merge process is to be performed. If so, the program may proceed along the “Yes” path to a function block 234 to process the data with the stored sequence and then to a function block 236 to set the buffer equal to the maximum τ for the delay. If not (i.e., if the post-time merge process is not selected), the program may proceed from the decision block 232 along the “No” path to the input of function block 236.
  • the program may continue to a decision block 238 to determine whether the value of τ is to be varied. If so, the program may proceed to a function block 240 to set the value of τ variably, then to the input of a function block 242; if not, the program may continue along the “No” path to function block 242.
  • Function block 242 may be operable to buffer data and generate run-time inputs. The program may then continue to a function block 244 to load the model parameters. The program may then proceed to a function block 246 to process the generated inputs through the model and then to a decision block 248 to determine whether all of the data has been processed. If all of the data has not been processed, the program may continue along the “No” path back to the input of function block 246 until all data are processed and then along the “Yes” path to return block 250 .
  • FIG. 22 is a flowchart for the operation of setting the value of τ variably (i.e., expansion of the function block 240, as illustrated in FIG. 21), according to one embodiment.
  • the operation may be initiated at a block 240, “set τ variably”, and then may proceed to a function block 254 to receive the external control input.
  • the value of τ may be varied in accordance with the relationship stored in the storage area 14, as indicated by a function block 256. Finally, the operation may proceed to a return function block 258.
  • FIG. 23 is a simplified block diagram for the overall run-time operation, according to one embodiment.
  • Data may be initially output by the DCS 24 during run-time.
  • the data may then be preprocessed in the preprocess block 34 in accordance with the preprocess parameters stored in the storage area 14 .
  • the data may then be delayed in the delay block 36 in accordance with the delay settings set in the delay block 18; this delay block 18 may also receive the external control input, which may include parameters on which the value of τ depends, to provide the variable setting operation that was utilized during the training mode.
  • the output of the delay block 36 may then be input to a selection block 260 , which may receive a control input. This selection block 260 may select either a control support vector machine or a prediction support vector machine.
  • a predictive system model 262 may be provided and a control model 264 may be provided, as shown. Both models 262 and 264 may be identical to the training model 20 and may utilize the same parameters; that is, models 262 and 264 may have stored therein a representation of the system that was trained in the training model 20 .
  • the predictive system model 262 may provide on the output thereof predictive outputs, and the control model 264 may provide on the output thereof predicted system inputs for the DCS 24 . These predicted system inputs may be stored in a block 266 and then may be translated to control inputs to the DCS 24 .
  • a predictive support vector machine may operate in a run-time mode or in a training mode with a data preprocessor for preprocessing the data prior to input to a system model.
  • the predictive support vector machine may include an input layer, an output layer and a middle layer for mapping the input layer to the output layer through a representation of a run-time system.
  • Training data derived from the training system may be stored in a data file, which training data may be preprocessed by a data preprocessor to generate preprocessed training data, which may then be input to the support vector machine and trained in accordance with a predetermined training algorithm.
  • the model parameters of the support vector machine may then be stored in a storage device for use by the data preprocessor in the run-time mode.
  • run-time data may be preprocessed by the data preprocessor in accordance with the stored data preprocessing parameters input during the training mode and then this preprocessed data may be input to the support vector machine, which support vector machine may operate in a prediction mode.
  • In the prediction mode, the support vector machine may output a prediction value.
  • a system for preprocessing data prior to training the model is presented.
  • the preprocessing operation may be operable to provide a time merging of the data such that each set of input data is input to a training system model on a uniform time base.
  • the preprocessing operation may be operable to fill in missing or bad data.
  • predetermined delays may be associated with each of the variables to generate delayed inputs. These delayed inputs may then be input to a training model and the training model may be trained in accordance with a predetermined training algorithm to provide a representation of the system. This representation may be stored as model parameters.
  • the preprocessing steps utilized to preprocess the data may be stored as a sequence of preprocessing algorithms and the delay values that may be determined during training may also be stored.
  • a distributed control system may be controlled to process the output parameters therefrom in accordance with the process algorithms and set delays in accordance with the predetermined delay settings.
  • a predictive system model, or a control model may then be built on the stored model parameters and the delayed inputs input thereto to provide a predicted output. This predicted output may provide for either a predicted output or a predicted control input for the run-time system. It is noted that this technique may be applied to any of a variety of application domains, and is not limited to plant operations and control. It is further noted that the delay described above may be associated with other variables than time. In other words, the delay may refer to offsets in the ordered correlation between process variables according to an independent variable other than time t.
  • various embodiments of the systems and methods described above may perform preprocessing of input data for training and/or operation of a support vector machine.

Abstract

A system and method for preprocessing input data to a support vector machine (SVM). The SVM is a system model having parameters that define the representation of the system being modeled, and operates in two modes: run-time and training. A data preprocessor preprocesses received data in accordance with predetermined preprocessing parameters, and outputs preprocessed data. The data preprocessor includes an input buffer for receiving and storing the input data. The input data may be on different time scales. A time merge device determines a desired time scale and reconciles the input data so that all of the input data are placed on the desired time scale. An output device outputs the reconciled data from the time merge device as preprocessed data. The reconciled data may be input to the SVM in training mode to train the SVM, and/or in run-time mode to generate control parameters and/or predictive output information.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates generally to the field of predictive system models. More particularly, the present invention relates to preprocessing of input data so as to correct for different time scales, transforms, missing or bad data, and/or time-delays prior to input to a support vector machine for either training of the support vector machine or operation of the support vector machine. [0002]
  • 2. Description of the Related Art [0003]
  • Many predictive systems may be characterized by the use of an internal model which represents a process or system for which predictions are made. Predictive model types may be linear, non-linear, stochastic, or analytical, among others. However, for complex phenomena non-linear models may generally be preferred due to their ability to capture non-linear dependencies among various attributes of the phenomena. Examples of non-linear models may include neural networks and support vector machines (SVMs). [0004]
  • Generally, a model is trained with training data, e.g., historical data, in order to reflect salient attributes and behaviors of the phenomena being modeled. In the training process, sets of training data may be provided as inputs to the model, and the model output may be compared to corresponding sets of desired outputs. The resulting error is often used to adjust weights or coefficients in the model until the model generates the correct output (within some error margin) for each set of training data. The model is considered to be in “training mode” during this process. After training, the model may receive real-world data as inputs, and provide predictive output information which may be used to control the process or system or make decisions regarding the modeled phenomena. It is desirable to allow for pre-processing of input data of predictive models (e.g., non-linear models, including neural networks and support vector machines), particularly in the field of e-commerce. [0005]
  • Predictive models may be used for analysis, control, and decision making in many areas, including electronic commerce (i.e., e-commerce), e-marketplaces, financial (e.g., stocks and/or bonds) markets and systems, data analysis, data mining, process measurement, optimization (e.g., optimized decision making, real-time optimization), quality control, as well as any other field or domain where predictive or classification models may be useful and where the object being modeled may be expressed abstractly. For example, quality control in commerce is increasingly important. The control and reproducibility of quality is the focus of many efforts. For example, in Europe, quality is the focus of the ISO (International Standards Organization, Geneva, Switzerland) 9000 standards. These rigorous standards provide for quality assurance in production, installation, final inspection, and testing of processes. They also provide guidelines for quality assurance between a supplier and customer. [0006]
  • A common problem that is encountered in training support vector machines for prediction, forecasting, pattern recognition, sensor validation and/or processing problems is that some of the training/testing patterns may be missing, corrupted, and/or incomplete. Prior systems merely discarded data with the result that some areas of the input space may not have been covered during training of the support vector machine. For example, if the support vector machine is utilized to learn the behavior of a chemical plant as a function of the historical sensor and control settings, these sensor readings are typically sampled electronically, entered by hand from gauge readings, and/or entered by hand from laboratory results. It is a common occurrence in real-world problems that some or all of these readings may be missing at a given time. It is also common that the various values may be sampled on different time intervals. Additionally, any one value may be “bad” in the sense that after the value is entered, it may be determined by some method that a data item was, in fact, incorrect. Hence, if a given set of data has missing values, and that given set of data is plotted in a table, the result may be a partially filled-in table with intermittent missing data or “holes”. These “holes” may correspond to “bad” data or “missing” data. [0007]
  • Conventional support vector machine training and testing methods require complete patterns such that they are required to discard patterns with missing or bad data. The deletion of the bad data in this manner is an inefficient method for training a support vector machine. For example, suppose that a support vector machine has ten inputs and ten outputs, and also suppose that one of the inputs or outputs happens to be missing at the desired time for fifty percent or more of the training patterns. Conventional methods would discard these patterns, leading to no training for those patterns during the training mode and no reliable predicted output during the run mode. The predicted output corresponding to those certain areas may be somewhat ambiguous and/or erroneous. In some situations, there may be as much as a 50% reduction in the overall data after screening bad or missing data. Additionally, experimental results have shown that support vector machine testing performance generally increases with more training data; therefore, throwing away bad or incomplete data may decrease the overall performance of the support vector machine. [0008]
• Another common issue concerning input data for support vector machines relates to situations when the data are retrieved on different time scales. As used herein, the term “time scale” is meant to refer to any aspect of the time-dependency of data. As is well known in the art, input data to a support vector machine are generally required to share the same time scale to be useful. This constraint applies to data sets used to train a support vector machine, i.e., input to the SVM in training mode, and to data sets used as input for run-time operation of a support vector machine, i.e., input to the SVM in run-time mode. Additionally, the time scale of the training data generally must be the same as that of the run-time input data to ensure that the SVM behavior in run-time mode corresponds to the trained behavior learned in training mode. [0009]
• In one example of input data (for training and/or operation) with differing time scales, one set of data may be taken on an hourly basis and another set of data taken on a quarter hour (i.e., every fifteen minutes) basis. In this case, for three out of every four data records on the quarter hour basis there will be no corresponding data from the hourly set. Thus, the two data sets are mutually asynchronous, i.e., have different time scales. [0010]
  • As another example of different time scales for input data sets, in one data set the data sample periods may be non-periodic, producing asynchronous data, while another data set may be periodic or synchronous, e.g., hourly. These two data sets may not be useful together as input to the SVM while their time-dependencies, i.e., their time scales, differ. In another example of data sets with differing time scales, one data set may have a “hole” in the data, as described above, compared to another set, i.e., some data may be missing on one of the data sets. The presence of the hole may be considered to be an asynchronous or anomalous time interval in the data set, and thus may be considered to have an asynchronous or inhomogeneous time scale. [0011]
  • In yet another example of different time scales for input data sets, two data sets may have two different respective time scales, e.g., an hourly basis and a 15 minute basis. The desired time scale for input data to the SVM may have a third basis, e.g., daily. [0012]
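• As an illustration of the kind of reconciliation contemplated above, the following is a minimal sketch (assuming the pandas library; the variable names and values are invented) that places an hourly series and a quarter-hour series on a common third time scale, here daily. Averaging within each day is just one possible merge format; others are discussed later in this disclosure.

```python
# Illustrative sketch only: reconcile an hourly series and a 15-minute series
# to a common daily time scale by averaging the samples within each day.
import pandas as pd

hourly = pd.Series(
    range(48),
    index=pd.date_range("2002-01-01", periods=48, freq="H"),
    name="temperature",
)
quarter_hour = pd.Series(
    range(192),
    index=pd.date_range("2002-01-01", periods=192, freq="15min"),
    name="flow_rate",
)

# Resample each series onto the selected (daily) time scale, then merge the
# reconciled series into a single table with one row per day.
merged = pd.concat(
    [hourly.resample("D").mean(), quarter_hour.resample("D").mean()],
    axis=1,
)
print(merged)
```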
  • While the issues above have been described with respect to time-dependent data, i.e., where the independent variable of the data is time, t, these same issues may arise with different independent variables. In other words, instead of data being dependent upon time, e.g., D(t), the data may be dependent upon some other variable, e.g., D(x). [0013]
  • In addition to data retrieved over different time periods, data may also be taken on different machines in different locations with different operating systems and quite different data formats. It is essential to be able to read all of these different data formats, keeping track of the data values and the timestamps of the data, and to store both the data values and the timestamps for future use. It is a formidable task to retrieve these data, keeping track of the timestamp information, and to read it into an internal data format (e.g., a spreadsheet) so that the data may be time merged. [0014]
• Inherent delays in a system are another issue which may affect the use of time-dependent data. For example, in a chemical processing system, a flow meter output may provide data at time t0 at a given value. However, a given change in flow resulting in a different reading on the flow meter may not affect the output for a predetermined delay τ. In order to predict the output, this flow meter output must be input to the support vector machine at a delay equal to τ. This must also be accounted for in the training of the support vector machine. Thus, the timeline of the data must be reconciled with the timeline of the process. In generating data that account for time delays, it has been postulated that it may be possible to generate a table of data that comprises both original data and delayed data. This may necessitate a significant amount of storage in order to store all of the delayed data and all of the original data, wherein only the delayed data are utilized. Further, in order to change the value of the delay, an entirely new set of input data must be generated from the original set. [0015]
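• The following is a minimal sketch, not the patented implementation, of introducing such a delay: a single input variable is shifted by τ samples so that its timeline lines up with the process output, without materializing a separate table of original plus delayed data. The function name and sample values are hypothetical.

```python
# Illustrative sketch: shift a flow-meter series forward by tau samples so its
# effect lines up with the process output it influences.
import numpy as np

def delay_input(values: np.ndarray, tau: int) -> np.ndarray:
    """Shift `values` forward by `tau` samples; the first `tau` entries,
    for which no delayed reading exists yet, are marked missing (NaN)."""
    if tau == 0:
        return values.astype(float)
    delayed = np.full(values.shape, np.nan)
    delayed[tau:] = values[:-tau]
    return delayed

flow = np.array([1.0, 1.2, 1.1, 1.4, 1.3, 1.5])
print(delay_input(flow, tau=2))  # -> [nan nan 1.0 1.2 1.1 1.4]
```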
  • Thus, improved systems and methods for preprocessing data for training and/or operating a support vector machine are desired. [0016]
  • SUMMARY OF THE INVENTION
• A system and method are presented for preprocessing input data to a non-linear predictive system model based on a support vector machine. The system model may utilize a support vector machine having a set of parameters associated therewith that define the representation of the system being modeled. The support vector machine may have multiple inputs, each of the inputs associated with a portion of the input data. The support vector machine parameters may be operable to be trained on a set of training data that is received from stored historical data and/or a run-time system, such that the system model is trained to represent the run-time system. The input data may include a set of target output data representing the output of the system and a set of measured input data representing the system variables. The target data and system variables may be reconciled by the preprocessor and then input to the support vector machine. A training device may be operable to train the support vector machine according to a predetermined training algorithm such that the values of the support vector machine parameters are changed until the support vector machine comprises a stored representation of the run-time system. Note that as used herein, the term “device” may refer to a software program, a hardware device, and/or a combination of the two. [0017]
  • In one embodiment of the present invention, the system may include a data storage device for storing training data from the run-time system. The support vector machine may operate in two modes, a run-time mode and a training mode. In the run-time mode, run-time data may be received from the run-time system. Similarly, in the training mode, data may be retrieved from the data storage device, the training data being both training input data and training output data. A data preprocessor may be provided for preprocessing received (i.e., input) data in accordance with predetermined preprocessing parameters to output preprocessed data. The data preprocessor may include an input buffer for receiving and storing the input data. The input data may be on different time scales. A time merge device may be operable to select a predetermined time scale and reconcile the input data so that all of the input data are placed on the same time scale. An output device may output the reconciled data from the time merge device as preprocessed data. The reconciled data may be used as input data to the system model, i.e., the support vector machine. In other embodiments, other scales than time scales may be determined for the data, and reconciled as described herein. [0018]
  • The support vector machine may have an input for receiving the preprocessed data, and may map it to an output through a stored representation of the run-time system in accordance with associated model parameters. A control device may control the data preprocessor to operate in either training mode or run-time mode. In the training mode, the preprocessor may be operable to process the stored training data and output preprocessed training data. A training device may be operable to train the support vector machine (in the training mode) on the training data in accordance with a predetermined training algorithm to define the model parameters on which the support vector machine operates. In the run-time mode, the preprocessor may be operable to preprocess run-time data received from the run-time system to output preprocessed run-time data. The support vector machine may then operate in the run-time mode, receiving the preprocessed input run-time data and generating a predicted output and/or control parameters for the run-time system. [0019]
  • The data preprocessor may further include a pre-time merge processor for applying one or more predetermined algorithms to the received data prior to input to the time merge device. A post-time merge processor (e.g., part of the output device) may be provided for applying one or more predetermined algorithms to the data output by the time merge device prior to output as the processed data. The preprocessed data may then have selective delay applied thereto prior to input to the support vector machine in both the run-time mode and the training mode. The one or more predetermined algorithms may be externally input and stored in a preprocessor memory such that the sequence in which the predetermined algorithms are applied is also stored. [0020]
  • In one embodiment, the input data associated with at least one of the inputs of the support vector machine may have missing data in an associated time sequence. The time merge device may be operable to reconcile the input data to fill in the missing data. [0021]
• In one embodiment, the input data associated with a first one or more of the inputs may have an associated time sequence based on a first time interval, and the input data associated with a second one or more of the inputs may have an associated time sequence based on a second time interval. The time merge device may be operable to reconcile the input data associated with the first one or more of the inputs to the input data associated with the second one or more of the inputs, thereby generating reconciled input data associated with the first one or more of the inputs having an associated time sequence based on the second time interval. [0022]
  • In one embodiment, the input data associated with a first one or more of the inputs may have an associated time sequence based on a first time interval, and the input data associated with a second one or more of the inputs may have an associated time sequence based on a second time interval. The time merge device may be operable to reconcile the input data associated with the first one or more of the inputs and the input data associated with the second one or more of the inputs to a time scale based on a third time interval, thereby generating reconciled input data associated with the first one or more of the inputs and the second one or more of the inputs having an associated time sequence based on the third time interval. [0023]
  • In one embodiment, the input data associated with a first one or more of the inputs may be asynchronous, and the input data associated with a second one or more of the inputs may be synchronous with an associated time sequence based on a time interval. The time merge device may be operable to reconcile the asynchronous input data associated with the first one or more of the inputs to the synchronous input data associated with the second one or more of the inputs, thereby generating reconciled input data associated with the first one or more of the inputs, where the reconciled input data comprise synchronous input data having an associated time sequence based on the time interval. [0024]
  • In one embodiment, the input data may include a plurality of system input variables, each of the system input variables including an associated set of data. A delay device may be provided that may be operable to select one or more input variables after preprocessing by the preprocessor and to introduce a predetermined amount of delay therein to output a delayed input variable, thereby reconciling the delayed variable to the time scale of the data set. This delayed input variable may be input to the system model. Further, this predetermined delay may be determined external to the delay device. [0025]
  • In one embodiment, the input data may include one or more outlier values which may be disruptive or counter-productive to the training and/or operation of the support vector machine. The received data may be analyzed to determine any outliers in the data set. In other words, the data may be analyzed to determine which, if any, data values fall above or below an acceptable range. [0026]
  • After the determination of any outliers in the data, the outliers, if any, may be removed from the data, thereby generating corrected input data. The removal of outliers may result in a data set with missing data, i.e., with gaps in the data. [0027]
  • In one embodiment, a graphical user interface (GUI) may be included whereby a user or operator may view the received data set, i.e., to visually inspect the data for bad data points, i.e., outliers. The GUI may further provide various tools for modifying the data, including tools for “cutting” the bad data from the set. [0028]
  • In one embodiment, the detection and removal of the outliers may be performed by the user via the GUI. In another embodiment, the user may use the GUI to specify one or more algorithms which may then be applied to the data programmatically, i.e., automatically. In other words, a GUI may be provided which is operable to receive user input specifying one or more data filtering operations to be performed on the input data, where the one or more data filtering operations operate to remove and/or replace the one or more outlier values. Additionally, the GUI may be further operable to display the input data prior to and after performing the filtering operations on the input data. Finally, the GUI may be operable to receive user input specifying a portion of said input data for the data filtering operations. [0029]
  • After the outliers have been removed from the data, the removed data may optionally be replaced, thereby “filling in” the gaps resulting from the removal of outlying data. Various techniques may be brought to bear to generate the replacement data, including, but not limited to, clipping, interpolation, extrapolation, spline fits, sample/hold of a last prior value, etc., as are well known in the art. [0030]
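• By way of illustration, the following sketch (assuming the pandas library; the data values are invented) applies two of the techniques named above, linear interpolation and sample/hold of the last prior value, to a series with gaps.

```python
# Illustrative sketch of two gap-filling techniques: linear interpolation and
# sample/hold (repeat the last prior value). NaN entries represent the "holes"
# left after removing bad or outlying data.
import numpy as np
import pandas as pd

series = pd.Series([10.0, np.nan, np.nan, 16.0, np.nan, 20.0])

print(series.interpolate())   # linear interpolation across the gaps
print(series.ffill())         # sample/hold of the last prior value
```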
  • In another embodiment, the removed outliers may be replaced in a later stage of preprocessing, such as the time merge process described above. In this embodiment, the time merge process will detect that data are missing, and operate to fill the gap. [0031]
• Thus, in one embodiment, the preprocessor may operate as a data filter, analyzing input data, detecting outliers, and removing the outliers from the data set. The filter parameters may simply be a predetermined value limit or range against which a data value may be tested. If the value falls outside the range, the value may be removed, or clipped to the limit value, as desired. In one embodiment, the limit(s) or range may be determined dynamically, for example, based on the standard deviation of a moving window of data in the data set, e.g., any value outside a two sigma band for a moving window of 100 data points may be clipped or removed. [0032]
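• A minimal sketch of such a dynamic filter follows, assuming a two-sigma band over a moving window of 100 points as in the example above; the function name and data are hypothetical, and removed values are left as gaps (NaN) for a later fill stage.

```python
# Illustrative sketch: remove any value falling outside a two-sigma band
# computed over a moving window of 100 points.
import numpy as np
import pandas as pd

def sigma_clip(series: pd.Series, window: int = 100, n_sigma: float = 2.0) -> pd.Series:
    rolling = series.rolling(window, min_periods=window)
    center = rolling.mean()
    band = n_sigma * rolling.std()
    outlier = (series - center).abs() > band
    return series.mask(outlier)  # outliers become NaN ("holes") for later repair

rng = np.random.default_rng(0)
data = pd.Series(rng.normal(50.0, 1.0, 500))
data.iloc[250] = 95.0                    # inject a bad reading
print(sigma_clip(data).isna().sum())     # the injected outlier is removed
```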
  • In one embodiment, the received input data may comprise training data including target input data and target output data, and the corrected data may comprise corrected training data which includes corrected target input data and corrected target output data. [0033]
  • In one embodiment, the support vector machine may be operable to be trained according to a predetermined training algorithm applied to the corrected target input data and the corrected target output data to develop model parameter values such that the support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data. In other words, the model parameters of the support vector machine may be trained based on the corrected target input data and the corrected target output data, after which the support vector machine may represent the system. [0034]
  • In one embodiment, the input data may comprise run-time data, such as from the system being modeled, and the corrected data may comprise reconciled run-time data. In this embodiment, the support vector machine may be operable to receive the corrected run-time data and generate run-time output data. In one embodiment, the run-time output data may comprise control parameters for the system which may be usable to determine control inputs to the system for run-time operation of the system. For example, in an e-commerce system, control inputs may include such parameters as advertisement or product placement on a website, pricing, and credit limits, among others. [0035]
  • In another embodiment, the run-time output data may comprise predictive output information for the system which may be usable in making decisions about operation of the system. In an embodiment where the system may be a financial system, the predictive output information may indicate a recommended shift in investment strategies, for example. In an embodiment where the system may be a manufacturing plant, the predictive output information may indicate production costs related to increased energy expenses, for example. Thus, in one embodiment, the preprocessor may be operable to detect and remove and/or replace outlying data in an input data set for the support vector machine. [0036]
  • Various embodiments of the systems and methods described above may thus operate to preprocess input data for a support vector machine to reconcile data on different time scales to a common time scale. Various embodiments of the systems and methods may also operate to remove and/or replace bad or missing data in the input data. The resulting preprocessed input data may then be used to train and/or operate a support vector machine. [0037]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A better understanding of the present invention may be obtained when the following detailed description of various embodiments is considered in conjunction with the following drawings, in which: [0038]
  • FIG. 1 illustrates an exemplary computer system according to one embodiment of the present invention; [0039]
  • FIG. 2 is an exemplary block diagram of the computer system illustrated in FIG. 1, according to one embodiment of the present invention; [0040]
  • FIGS. 3A and 3B illustrate two embodiments of an overall block diagram of the system for both preprocessing data during the training mode and for preprocessing data during the run mode; [0041]
  • FIGS. 4A and 4B are simplified block diagrams of two embodiments of the system of FIGS. 3A and 3B; [0042]
  • FIG. 5 is a detailed block diagram of the preprocessor in the training mode according to one embodiment; [0043]
  • FIG. 6 is a simplified block diagram of the time merging operation, which is part of the preprocessing operation, according to one embodiment; [0044]
  • FIG. 7A illustrates a data block before the time merging operation, according to one embodiment; [0045]
  • FIG. 7B illustrates a data block after the time merging operation, according to one embodiment; [0046]
• FIGS. 8A-8C illustrate diagrammatic views of the time merging operation, according to various embodiments; [0047]
• FIGS. 9A-9C are flowcharts depicting various embodiments of a preprocessing operation; [0048]
• FIGS. 10A-10F illustrate the use of graphical tools for preprocessing the “raw” data, according to various embodiments; [0049]
  • FIG. 11 illustrates the display for the algorithm selection operation, according to one embodiment; [0050]
  • FIG. 12 presents a series of tables and properties, according to one embodiment; [0051]
  • FIG. 13 is a block diagram depicting parameters associated with various stages in process flow relative to a plant output, according to one embodiment; [0052]
  • FIG. 14 illustrates a diagrammatic view of the relationship between the various plant parameters and the plant output, according to one embodiment; [0053]
  • FIG. 15 illustrates a diagrammatic view of the delay provided for input data patterns, according to one embodiment; [0054]
  • FIG. 16 illustrates a diagrammatic view of the buffer formation for each of the inputs and the method for generating the delayed input, according to one embodiment; [0055]
  • FIG. 17 illustrates the display for selection of the delays associated with various inputs and outputs in the support vector machine, according to one embodiment; [0056]
  • FIG. 18 is a block diagram for a variable delay selection, according to one embodiment; [0057]
  • FIG. 19 is a block diagram of the adaptive determination of the delay, according to one embodiment; [0058]
  • FIG. 20 is a flowchart depicting the time delay operation, according to one embodiment; [0059]
  • FIG. 21 is a flowchart depicting the run mode operation, according to one embodiment; [0060]
  • FIG. 22 is a flowchart for setting the value of the variable delay, according to one embodiment; and [0061]
  • FIG. 23 is a block diagram of the interface of the run-time preprocessor with a distributed control system, according to one embodiment.[0062]
  • While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. [0063]
  • DETAILED DESCRIPTION OF SEVERAL EMBODIMENTS
  • Incorporation by Reference [0064]
  • U.S. Pat. No. 5,842,189, titled “Method for Operating a Neural Network With Missing and/or Incomplete Data”, whose inventors are James D. Keeler, Eric J. Hartman, and Ralph Bruce Ferguson, and which issued on Nov. 24, 1998, is hereby incorporated by reference in its entirety as though fully and completely set forth herein. [0065]
  • U.S. Pat. No. 5,729,661, titled “Method and Apparatus for Preprocessing Input Data to a Neural Network”, whose inventors are James D. Keeler, Eric J. Hartman, Steven A. O'Hara, Jill L. Kempf, and Devandra B. Godbole, and which issued on Mar. 17, 1998, is hereby incorporated by reference in its entirety as though fully and completely set forth herein. [0066]
  • FIG. 1—Computer System [0067]
• FIG. 1 illustrates a computer system 1 operable to execute a support vector machine for performing modeling and/or control operations. One embodiment of a method for training and/or using a support vector machine is described below. The computer system 1 may be any type of computer system, including a personal computer system, mainframe computer system, workstation, network appliance, Internet appliance, personal digital assistant (PDA), television system or other device. In general, the term “computer system” can be broadly defined to encompass any device having at least one processor that executes instructions from a memory medium. [0068]
• As shown in FIG. 1, the computer system 1 may include a display device operable to display operations associated with the support vector machine. The display device may also be operable to display a graphical user interface for process or control operations. The graphical user interface may comprise any type of graphical user interface, e.g., depending on the computing platform. [0069]
• The computer system 1 may include one or more memory media on which one or more computer programs or software components according to one embodiment of the present invention may be stored. For example, the memory medium may store one or more support vector machine software programs (support vector machines) which are executable to perform the methods described herein. Also, the memory medium may store a programming development environment application used to create, train, and/or execute support vector machine software programs. The memory medium may also store operating system software, as well as other software for operation of the computer system. [0070]
  • The term “memory medium” is intended to include an installation medium, e.g., a CD-ROM, floppy disks, or tape device; a computer system memory or random access memory such as DRAM, SRAM, EDO RAM, Rambus RAM, etc.; or a non-volatile memory such as a magnetic media, e.g., a hard drive, or optical storage. The memory medium may comprise other types of memory as well, or combinations thereof. In addition, the memory medium may be located in a first computer in which the programs are executed, or may be located in a second different computer which connects to the first computer over a network, such as the Internet. In the latter instance, the second computer may provide program instructions to the first computer for execution. [0071]
  • As used herein, the term “support vector machine” refers to at least one software program, or other executable implementation (e.g., an FPGA), that implements a support vector machine as described herein. The support vector machine software program may be executed by a processor, such as in a computer system. Thus, the various support vector machine embodiments described below are preferably implemented as a software program executing on a computer system. [0072]
  • FIG. 2—Computer System Block Diagram [0073]
  • FIG. 2 is an exemplary block diagram of the computer system illustrated in FIG. 1, according to one embodiment. It is noted that any type of computer system configuration or architecture may be used in conjunction with the system and method described herein, as desired, and FIG. 2 illustrates a representative PC embodiment. It is also noted that the computer system may be a general purpose computer system such as illustrated in FIG. 1, or other types of embodiments. The elements of a computer not necessary to understand the present invention have been omitted for simplicity. [0074]
• The computer system 1 may include at least one central processing unit or CPU 2 which is coupled to a processor or host bus 5. The CPU 2 may be any of various types, including an x86 processor, e.g., a Pentium class, a PowerPC processor, a CPU from the SPARC family of RISC processors, as well as others. Main memory 3 is coupled to the host bus 5 by means of memory controller 4. The main memory 3 may store one or more computer programs or libraries according to the present invention. The main memory 3 also stores operating system software as well as the software for operation of the computer system, as is well known to those skilled in the art. [0075]
• The host bus 5 is coupled to an expansion or input/output bus 7 by means of a bus controller 6 or bus bridge logic. The expansion bus 7 is preferably the PCI (Peripheral Component Interconnect) expansion bus, although other bus types may be used. The expansion bus 7 may include slots for various devices such as a video display subsystem 8 and hard drive 9 coupled to the expansion bus 7, among others (not shown). [0076]
  • Overview of Support Vector Machines [0077]
  • In order to fully appreciate the various aspects and benefits produced by the various embodiments of the present invention, an understanding of support vector machine technology is useful. For this reason, the following section discusses support vector machine technology as applicable to the support vector machine of various embodiments of the system and method of the present invention. [0078]
  • A. Introduction [0079]
  • Classifiers generally refer to systems which process a data set and categorize the data set based upon prior examples of similar data sets, i.e., training data. In other words, the classifier system may be trained on a number of training data sets with known categorizations, then used to categorize new data sets. Historically, classifiers have been determined by choosing a structure, and then selecting a parameter estimation algorithm used to optimize some cost function. The structure chosen may fix the best achievable generalization error, while the parameter estimation algorithm may optimize the cost function with respect to the empirical risk. [0080]
  • There are a number of problems with this approach, however. These problems may include: [0081]
  • 1. The model structure needs to be selected in some manner. If this is not done correctly, then even with zero empirical risk, it is still possible to have a large generalization error. [0082]
  • 2. If it is desired to avoid the problem of over-fitting, as indicated by the above problem, by choosing a smaller model size or order, then it may be difficult to fit the training data (and hence minimize the empirical risk). [0083]
  • 3. Determining a suitable learning algorithm for minimizing the empirical risk may still be quite difficult. It may be very hard or impossible to guarantee that the correct set of parameters is chosen. [0084]
  • The support vector method is a recently developed technique which is designed for efficient multidimensional function approximation. The basic idea of support vector machines (SVMs) is to determine a classifier or regression machine which minimizes the empirical risk (i.e., the training set error) and the confidence interval (which corresponds to the generalization or test set error), that is, to fix the empirical risk associated with an architecture and then to use a method to minimize the generalization error. One advantage of SVMs as adaptive models for binary classification and regression is that they provide a classifier with minimal VC (Vapnik-Chervonenkis) dimension which implies low expected probability of generalization errors. SVMs may be used to classify linearly separable data and nonlinearly separable data. SVMs may also be used as nonlinear classifiers and regression machines by mapping the input space to a high dimensional feature space. In this high dimensional feature space, linear classification may be performed. [0085]
  • In the last few years, a significant amount of research has been performed in SVMs, including the areas of learning algorithms and training methods, methods for determining the data to use in support vector methods, and decision rules, as well as applications of support vector machines to speaker identification, and time series prediction applications of support vector machines. [0086]
• Support vector machines have been shown to have a relationship with other recent nonlinear classification and modeling techniques such as: radial basis function networks, sparse approximation, PCA (principal components analysis), and regularization. Support vector machines have also been used to choose radial basis function centers. [0087]
  • A key to understanding SVMs is to see how they introduce optimal hyperplanes to separate classes of data in the classifiers. The main concepts of SVMs are reviewed in the next section. [0088]
  • B. How Support Vector Machines Work [0089]
  • The following describes support vector machines in the context of classification, but the general ideas presented may also apply to regression, or curve and surface fitting. [0090]
  • 1. Optimal Hyperplanes [0091]
• Consider an m-dimensional input vector x = [x_1, . . . , x_m]^T ∈ X ⊂ R^m and a one-dimensional output y ∈ {−1, 1}. Let there exist n training vectors (x_i, y_i), i = 1, . . . , n. Hence we may write X = [x_1 x_2 . . . x_n], or [0092]

$$X = \begin{bmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & & \vdots \\ x_{m1} & \cdots & x_{mn} \end{bmatrix} \tag{1}$$

• A hyperplane capable of performing a linear separation of the training data is described by [0093]

$$w^T x + b = 0 \tag{2}$$

where w = [w_1 w_2 . . . w_m]^T, w ∈ W ⊂ R^m. [0094]
  • The concept of an optimal hyperplane was proposed by Vladimir Vapnik. For the case where the training data are linearly separable, an optimal hyperplane separates the data without error and the distance between the hyperplane and the closest training points is maximal. [0095]
  • 2. Canonical Hyperplanes [0096]
  • A canonical hyperplane is a hyperplane (in this case we consider the optimal hyperplane) in which the parameters are normalized in a particular manner. [0097]
• Consider (2), which defines the general hyperplane. It is evident that there is some redundancy in this equation as far as separating sets of points. Suppose we have the following classes [0098]

$$y_i \left[ w^T x_i + b \right] \ge 1, \quad i = 1, \ldots, n \tag{3}$$

where y ∈ {−1, 1}. [0099]
• One way in which we may constrain the hyperplane is to observe that on either side of the hyperplane, we may have w^T x + b > 0 or w^T x + b < 0. Thus, if we place the hyperplane midway between the two closest points to the hyperplane, then we may scale w, b such that [0100]

$$\min_{i = 1, \ldots, n} \left| w^T x_i + b \right| = 1 \tag{4}$$
• Now, the distance d from a point x_i to the hyperplane denoted by (w, b) is given by [0101]

$$d(w, b; x_i) = \frac{\left| w^T x_i + b \right|}{\|w\|} \tag{5}$$

where ‖w‖ = (w^T w)^{1/2}. By considering two points on opposite sides of the hyperplane, the canonical hyperplane is found by maximizing the margin [0102]

$$\rho(w, b) = \min_{i:\, y_i = 1} d(w, b; x_i) + \min_{j:\, y_j = -1} d(w, b; x_j) = \frac{2}{\|w\|} \tag{6}$$

• This implies that the minimum distance between the two classes is at least 2/‖w‖. [0103]
• Hence an optimization function which we seek to minimize to obtain canonical hyperplanes is [0104]

$$J(w) = \frac{1}{2} \|w\|^2 \tag{7}$$

• Normally, to find the parameters we would minimize the training error, with no constraints on w, b. However, in this case we seek to satisfy the inequality in (3). Thus, we need to solve the constrained optimization problem in which we seek a set of weights that separates the classes in the usually desired manner while also minimizing J(w), so that the margin between the classes is maximized. Thus, we obtain a classifier with optimally separating hyperplanes. [0105]
  • C. An SVM Learning Rule [0106]
• For any given data set, one possible method to determine w_0, b_0 such that (7) is minimized would be to use a constrained form of gradient descent. In this case, a gradient descent algorithm would be used to minimize the cost function J(w) while constraining the changes in the parameters according to (3). A better approach to this problem, however, is to use Lagrange multipliers, which are well suited to the nonlinear constraints of (3). Thus, we introduce the Lagrangian: [0107]

$$L(w, b, \alpha) = \frac{1}{2} \|w\|^2 - \sum_{i=1}^{n} \alpha_i \left( y_i \left[ w^T x_i + b \right] - 1 \right) \tag{8}$$

• where α_i are the Lagrange multipliers and α_i ≥ 0. [0108]
• The solution is found by maximizing L with respect to α_i and minimizing it with respect to the primal variables w and b. This problem may be transformed from the primal case into its dual, and hence we need to solve [0109]

$$\max_{\alpha} \; \min_{w, b} \; L(w, b, \alpha) \tag{9}$$

• At the solution point, we have the following conditions [0110]

$$\frac{\partial L(w_0, b_0, \alpha_0)}{\partial w} = 0, \qquad \frac{\partial L(w_0, b_0, \alpha_0)}{\partial b} = 0 \tag{10}$$

where the solution variables w_0, b_0, α_0 are found. Performing the differentiations, we obtain respectively [0111]

$$\sum_{i=1}^{n} \alpha_{0i} y_i = 0, \qquad w_0 = \sum_{i=1}^{n} \alpha_{0i} x_i y_i \tag{11}$$

and in each case α_{0i} ≥ 0, i = 1, . . . , n. [0112]
• These are properties of the optimal hyperplane specified by (w_0, b_0). From (11) we note that, given the Lagrange multipliers, the desired weight vector solution may be found directly in terms of the training vectors. [0113]
• To determine the specific coefficients of the optimal hyperplane specified by (w_0, b_0), we proceed as follows. Substitute the conditions of (11) into (8) to obtain [0114]

$$L_D(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j \left( x_i^T x_j \right) \tag{12}$$
• It is necessary to maximize the dual form of the Lagrangian in (12) to obtain the required Lagrange multipliers. Before doing so, however, consider (3) once again. We observe that for this inequality, there will be only some training vectors for which the equality holds true. That is, only for some (x_i, y_i) will the following equation hold: [0115]

$$y_i \left[ w^T x_i + b \right] = 1, \quad i = 1, \ldots, n \tag{13}$$

• The training vectors for which this is the case are called support vectors. [0116]
• Since we have the Karush-Kuhn-Tucker (KKT) conditions that α_{0i} ≥ 0, i = 1, . . . , n, together with the constraint given by (3), from the resulting Lagrangian in (8) we may write a further KKT condition [0117]

$$\alpha_{0i} \left( y_i \left[ w_0^T x_i + b_0 \right] - 1 \right) = 0, \quad i = 1, \ldots, n \tag{14}$$

• This means that, since the Lagrange multipliers α_{0i} are nonzero only for the support vectors as defined in (13), the expansion of w_0 in (11) is with regard to the support vectors only. [0118]
• Hence we have [0119]

$$w_0 = \sum_{i \in S} \alpha_{0i} x_i y_i \tag{15}$$

where S is the set of all support vectors in the training set. To obtain the Lagrange multipliers α_{0i}, we need to maximize (12) only over the support vectors, subject to the constraints α_{0i} ≥ 0, i = 1, . . . , n, and the equality constraint in (11). This is a quadratic programming problem and may be readily solved. Having obtained the Lagrange multipliers, the weights w_0 may be found from (15). [0120]
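• For illustration, the following sketch uses the off-the-shelf scikit-learn library (not part of this disclosure) to solve the quadratic programming problem for a small, invented, linearly separable data set and to read back the support vectors, the products α_{0i}y_i, and the weights w_0 and bias b_0 corresponding to (15). A large C is used to approximate the hard-margin case.

```python
# Illustrative sketch: solve the SVM quadratic program for a toy data set and
# recover the support vectors and the weight vector w0 of equation (15).
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 2.5], [1.5, 2.0],
              [4.0, 4.5], [5.0, 5.0], [4.5, 4.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C ~ hard margin

print(clf.support_vectors_)                   # the set S of support vectors
print(clf.dual_coef_)                         # alpha_0i * y_i for i in S
w0 = clf.dual_coef_ @ clf.support_vectors_    # w0 = sum over S of alpha_0i * y_i * x_i
b0 = clf.intercept_
print(w0, b0)
```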
  • D. Classification of Linearly Separable Data [0121]
• A support vector machine which performs the task of classifying linearly separable data is defined as [0122]

$$f(x) = \operatorname{sgn}\left\{ w^T x + b \right\} \tag{16}$$

where w, b are found from the training set. Hence (16) may be written as [0123]

$$f(x) = \operatorname{sgn}\left\{ \sum_{i \in S} \alpha_{0i} y_i \left( x_i^T x \right) + b_0 \right\} \tag{17}$$

where α_{0i} are determined from the solution of the quadratic programming problem in (12) and b_0 is found as [0124]

$$b_0 = -\frac{1}{2} \left( w_0^T x_i^{+} + w_0^T x_i^{-} \right) \tag{18}$$

where x_i^+ and x_i^− are any input training vector examples from the positive and negative classes, respectively. For greater numerical accuracy, we may also use [0125]

$$b_0 = -\frac{1}{2n} \sum_{i=1}^{n} \left( w_0^T x_i^{+} + w_0^T x_i^{-} \right) \tag{19}$$
  • E. Classification of Nonlinearly Separable Data [0126]
• For the case where the data are nonlinearly separable, the above approach can be extended to find a hyperplane which minimizes the number of errors on the training set. This approach is also referred to as a soft margin hyperplane. In this case, the aim is to satisfy [0127]

$$y_i \left[ w^T x_i + b \right] \ge 1 - \xi_i, \quad i = 1, \ldots, n \tag{20}$$

where ξ_i ≥ 0, i = 1, . . . , n. In this case, we seek to minimize [0128]

$$J(w, \xi) = \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} \xi_i \tag{21}$$
  • F. Nonlinear Support Vector Machines [0129]
• For some problems, improved classification results may be obtained using a nonlinear classifier. Consider (17), which is a linear classifier. A nonlinear classifier may be obtained using support vector machines as follows. [0130]

• The classifier is obtained by the inner product x_i^T x, where i ∈ S, the set of support vectors. However, it is not necessary to use the explicit input data to form the classifier. Instead, all that is needed is the inner products between the support vectors and the vectors of the feature space. [0131]

• That is, by defining a kernel [0132]

$$K(x_i, x) = x_i^T x \tag{22}$$

a nonlinear classifier can be obtained as [0133]

$$f(x) = \operatorname{sgn}\left\{ \sum_{i \in S} \alpha_{0i} y_i K(x_i, x) + b_0 \right\} \tag{23}$$
  • G. Kernel Functions [0134]
  • A kernel function may operate as a basis function for the support vector machine. In other words, the kernel function may be used to define a space within which the desired classification or prediction may be greatly simplified. Based on Mercer's theorem, as is well known in the art, it is possible to introduce a variety of kernel functions, including: [0135]
• 1. Polynomial [0136]

• The p-th order polynomial kernel function is given by [0137]

$$K(x_i, x) = \left( x_i^T x + 1 \right)^p \tag{24}$$

• 2. Radial Basis Function [0138]

$$K(x_i, x) = e^{-\gamma \| x - x_i \|^2} \tag{25}$$

where γ > 0. [0139]

• 3. Multilayer Networks [0140]

• A multilayer network may be employed as a kernel function as follows. We have [0141]

$$K(x_i, x) = \sigma\left( \theta \left( x_i^T x \right) + \phi \right) \tag{26}$$

where σ is a sigmoid function. [0142]
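• The three kernels (24)-(26) may be written out directly, as in the following sketch; the parameter values and the choice of tanh for the sigmoid σ are illustrative assumptions, not requirements of the method.

```python
# Illustrative sketch: the kernels of equations (24)-(26) as plain functions.
import numpy as np

def polynomial_kernel(xi, x, p=3):
    return (xi @ x + 1.0) ** p                      # equation (24)

def rbf_kernel(xi, x, gamma=0.5):
    return np.exp(-gamma * np.sum((x - xi) ** 2))   # equation (25), gamma > 0

def sigmoid_kernel(xi, x, theta=1.0, phi=0.0):
    return np.tanh(theta * (xi @ x) + phi)          # equation (26), sigma = tanh

a, b = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(polynomial_kernel(a, b), rbf_kernel(a, b), sigmoid_kernel(a, b))
```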
• Note that the use of a nonlinear kernel permits a linear decision function to be used in a high-dimensional feature space. We find the parameters following the same procedure as before. The Lagrange multipliers may be found by maximizing the functional [0143]

$$L_D(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \tag{27}$$
  • When support vector methods are applied to regression or curve-fitting, a high-dimensional “tube” with a radius of acceptable error is constructed which minimizes the error of the data set while also maximizing the flatness of the associated curve or function. In other words, the tube is an envelope around the fit curve, defined by a collection of data points nearest the curve or surface, i.e., the support vectors. [0144]
  • Thus, support vector machines offer an extremely powerful method of obtaining models for classification and regression. They provide a mechanism for choosing the model structure in a natural manner which gives low generalization error and empirical risk. [0145]
  • H. Construction of Support Vector Machines [0146]
  • A support vector machine may be built by specifying a kernel function, a number of inputs, and a number of outputs. Of course, as is well known in the art, regardless of the particular configuration of the support vector machine, some type of training process may be used to capture the behaviors and/or attributes of the system or process to be modeled. [0147]
  • The modular aspect of one embodiment of the present invention may take advantage of this way of simplifying the specification of a support vector machine. Note that more complex support vector machines may require more configuration information, and therefore more storage. [0148]
  • Various embodiments of the present invention contemplate other types of support vector machine configurations. In one embodiment, all that is required for the support vector machine is that the support vector machine be able to be trained and retrained so as to provide needed predicted values. [0149]
  • I. Support Vector Machine Training [0150]
  • The coefficients used in a support vector machine may be adjustable constants which determine the values of the predicted output data for given input data for any given support vector machine configuration. Support vector machines may be superior to conventional statistical models because support vector machines may adjust these coefficients automatically. Thus, support vector machines may be capable of building the structure of the relationship (or model) between the input data and the output data by adjusting the coefficients. While a conventional statistical model typically requires the developer to define the equation(s) in which adjustable constant(s) are used, the support vector machine may build the equivalent of the equation(s) automatically. [0151]
  • The support vector machine may be trained by presenting it with one or more training set(s). The one or more training set(s) are the actual history of known input data values and the associated correct output data values. [0152]
  • To train the support vector machine, the newly configured support vector machine is usually initialized by assigning random values to all of its coefficients. During training, the support vector machine may use its input data to produce predicted output data. [0153]
• These predicted output data values may be compared with the corresponding training output data values to produce error data. These error data values may then be used to adjust the coefficients of the support vector machine. [0154]
• It may thus be seen that the error between the predicted output data and the training output data may be used to adjust the coefficients so that the error is reduced. [0155]
  • J. Advantages of Support Vector Machines [0156]
  • Support vector machines may be superior to computer statistical models because support vector machines do not require the developer of the support vector machine model to create the equations which relate the known input data and training values to the desired predicted values (i.e., output data). In other words, a support vector machine may learn relationships automatically during training. [0157]
• However, it is noted that the support vector machine may require the collection of training input data with its associated output data, together called a training set. The training set may need to be collected and properly formatted. The conventional approach for doing this is to create a file on a computer on which the support vector machine is executed. [0158]
  • In one embodiment of the present invention, in contrast, creation of the training set may be done automatically, using historical data. This automatic step may eliminate errors and may save time, as compared to the conventional approach. Another benefit may be significant improvement in the effectiveness of the training function, since automatic creation of the training set(s) may be performed much more frequently. [0159]
  • Preprocessing Data for the Support Vector Machine [0160]
  • As mentioned above, in many applications, the time-dependence, i.e., the time resolution and/or synchronization, of training and/or real-time data may not be consistent, due to missing data, variable measurement chronologies or timelines, etc. In one embodiment of the invention, the data may be preprocessed to homogenize the timing aspects of the data, as described below. It is noted that in other embodiments, the data may be dependent on a different independent variable than time. It is contemplated that the techniques described herein regarding homogenization of time scales are applicable to other scales (i.e., other independent variables), as well. [0161]
• FIG. 3A is an overall block diagram of the data preprocessing operation in both the training mode and the run-time mode, according to one embodiment. FIG. 3B is a diagram of the data preprocessing operation of FIG. 3A, but with an optional delay process included for reconciling time-delayed values in a data set. As FIG. 3A shows, in the training mode, one or more data files 10 may be provided (however, only one data file 10 is shown). The one or more data files 10 may include both input training data and output training data. The training data may be arranged in “sets”, e.g., corresponding to different variables, and the variables may be sampled at different time intervals. These data may be referred to as “raw” data. When the data are initially presented to an operator, the data are typically unformatted, i.e., each set of data is in the form that it was originally received. Although not shown, the operator may first format the data files so that all of the data files may be merged into a data-table or spreadsheet, keeping track of the original “raw” time information. This may be done in such a manner as to keep track of the timestamp for each variable. Thus, the “raw” data may be organized as time-value pairs of columns; that is, for each variable x_i, there is an associated time of sample t_i. The data may then be grouped into sets {x_i, t_i}. [0162]
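• A minimal sketch of this time-value-pair organization follows, assuming the pandas library; the variable names, timestamps, and values are invented for illustration.

```python
# Illustrative sketch: keep each variable x_i as time-value pairs {x_i, t_i},
# with its own (possibly irregular) timestamps preserved for later merging.
import pandas as pd

raw = {
    "temperature": pd.Series(
        [70.1, 70.4, 70.2],
        index=pd.to_datetime(["2002-01-01 00:00", "2002-01-01 01:00",
                              "2002-01-01 02:00"]),
    ),
    "flow_rate": pd.Series(
        [3.2, 3.3],
        index=pd.to_datetime(["2002-01-01 00:15", "2002-01-01 00:30"]),
    ),
}
for name, series in raw.items():
    print(name, list(zip(series.index, series.values)))
```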
  • If any of the time-vectors happen to be identical, it may be convenient to arrange the data such that the data will be grouped in common time scale groups, and data that is on, for example, a fifteen minute sample time scale may be grouped together and data sampled on a one hour sample time scale may be grouped together. However, any type of format that provides viewing of multiple sets of data is acceptable. [0163]
• The one or more data files 10 may be input to a preprocessor 12 that may function to perform various preprocessing functions, such as determining bad or missing data, reconciling data to replace bad data or fill in missing data, and performing various algorithmic or logic functions on the data, among others. Additionally, the preprocessor 12 may be operable to perform a time merging operation, as described below. During operation, the preprocessor 12 may be operable to store various preprocessing algorithms in a given sequence in a storage area 14 (noted as preprocess algorithm sequence 14 in FIG. 3). As described below, the sequence may define the way in which the data are manipulated in order to provide the overall preprocessing operation. [0164]
• After preprocessing by the preprocessor 12, the preprocessed data may be input into a training model 20, as FIG. 3A shows. The training model 20 may be a non-linear model (e.g., a support vector machine) that receives input data and compares it with target output data. Any of various training algorithms may be used to train the support vector machine to generate a model for predicting the target output data from the input data. Thus, in one embodiment, the training model may utilize a support vector machine that is trained using one or more of multiple training methods. Various weights within the support vector machine may be set during the training operation, and these may be stored as model parameters in a storage area 22. The training operation and the support vector machine may be conventional systems. It is noted that in one embodiment, the training model 20 and the runtime system model 26 may be the same system model operated in training mode and runtime mode, respectively. In other words, when the support vector machine is being trained, i.e., is in training mode, the model may be considered to be a training model, and when the support vector machine is in runtime mode, the model may be considered to be a runtime system model. In another embodiment, the runtime system model 26 may be distinct from the training model 20. For example, after the training model 20 (the SVM in training mode) has been trained, the resulting parameters which define the state of the SVM may be used to configure the runtime system model 26, which may be substantially a copy of the training model. Thus, one copy of the system model (the training model 20) may be trained while another copy of the system model (the runtime system model 26) is engaged with the real-time system or process being controlled. In one embodiment, the model parameter values in storage area 22 resulting from the training model may be used to periodically or continuously update the runtime system model 26, as shown. [0165]
• A Distributed Control System (DCS) 24 may be provided that may be operable to generate various system measurements and control settings representing system variables (e.g., temperature, flow rates, etc.) that comprise the input data to the system model. The system model may either generate control inputs for control of the DCS 24 or it may provide a predicted output, these being conventional operations which are well known in the art. In one embodiment, the control inputs may be provided by the run-time system model 26, which has an output 28 and an input 30, as shown. The input 30 may include the preprocessed and, in the embodiment of FIG. 3B, delayed, data and the output may either be a predictive output, or a control input to the DCS 24. In the embodiments of FIGS. 3A and 3B, this is illustrated as control inputs 28 to the DCS 24. The run-time system model 26 is shown as utilizing the model parameters stored in the storage area 22. It is noted that the run-time system model 26 may include a representation learned during the training operation, which representation was learned on the preprocessed data, i.e., the trained SVM. Therefore, data generated by the DCS 24 may be preprocessed in order to correlate with the representation stored in the run-time system model 26. [0166]
• The output data of the DCS 24 may be input to a run-time process block 34, which may be operable to process the data in accordance with the sequence of preprocessing algorithms stored in the storage area 14, which is generated during the training operation. In one embodiment, the output of the run-time processor 34 may be input to a run-time delay process 36 to set delays on the data in accordance with the delay settings stored in the storage area 18. This may provide the overall preprocessed data output on the line 30 input to the run-time system model 26. [0167]
• In one embodiment, after preprocessing by the preprocessor 12, the preprocessed data may optionally be input to a delay block 16, as shown in FIG. 3B. As mentioned above, inherent delays in a system may affect the use of time-dependent data. For example, in a chemical processing system, a flow meter output may provide data at time t0 at a given value. However, a given change in flow resulting in a different reading on the flow meter may not affect the output for a predetermined delay τ. In order to predict the output, this flow meter output must be input to the support vector machine at a delay equal to τ. This may be accounted for in the training of the support vector machine through the use of the delay block 16. Thus, the time scale of the data may be reconciled with the time scale of the system or process as follows. [0168]
• The delay block 16 may be operable to set the various delays for different sets of data. This operation may be performed on both the target output data and the input training data. The delay settings may be stored in a storage area 18 (noted as delay settings 18 in FIG. 3). In this embodiment, the output of the delay block 16 may be input to the training model 20. Note that if the delay process is not used, then the blocks ‘set delay’ 16, ‘delay settings’ 18, and ‘runtime delay’ 36 may be omitted, and therefore, the outputs from the preprocessor 12 and the runtime process 34 may be fed into the training model 20 and the runtime system model 26, respectively, as shown in FIG. 3A. In one embodiment, the delay process, as implemented by the blocks ‘set delay’ 16, ‘delay settings’ 18, and ‘runtime delay’ 36, may be considered as part of the data preprocessor 12. Similarly, the introduction of delays into portions of the data may be considered to be reconciling the input data to the time scale of the system or process being modeled, operated, or controlled. [0169]
• FIG. 4A is a simplified block diagram of the system of FIG. 3A, wherein a single preprocessor 34′ is utilized, according to one embodiment. FIG. 4B is a simplified block diagram of the system of FIG. 3B, wherein the delay process, i.e., a single delay 36′, is also included, according to one embodiment. [0170]
• As FIG. 4A shows, the output of the preprocessor 34′ may be input to a single system model 26′. In operation, the preprocessor 34′ and the system model 26′ may operate in both a training mode and a run-time mode. A multiplexer 35 may be provided that receives the output from the data file(s) 10 and the output of the DCS 24, and generates an output including operational variables, e.g., plant or process variables, of the DCS 24. The output of the multiplexer may then be input to the preprocessor 34′. In one embodiment, a control device 37 may be provided to control the multiplexer 35 to select either a training mode or a run-time mode. In the training mode, the data file(s) 10 may have the output thereof selected by the multiplexer 35 and the preprocessor 34′ may be operable to preprocess the data in accordance with a training mode, i.e., the preprocessor 34′ may be utilized to determine the preprocessed algorithm sequence stored in the storage area 14. An input/output (I/O) device 41 may be provided for allowing an operator to interface with the control device 37. The system model 26′ may be operated in a training mode such that the target data and the input data to the system model 26′ are generated, the training controlled by training block 39. The training block 39 may be operable to select one of multiple training algorithms for training the system model 26′. The model parameters may be stored in the storage area 22. Note that as used herein, the term “device” may refer to a software program, a hardware device, and/or a combination of the two. [0171]
• In one embodiment, after training, the control device 37 may place the system in a run-time mode such that the preprocessor 34′ is operable to apply the algorithm sequence in the storage area 14 to the data selected by the multiplexer 35 from the DCS 24. After the algorithm sequence is applied, the data may be output to the system model 26′ which may then operate in a predictive mode to either predict an output or to predict/determine control inputs for the DCS 24. [0172]
  • It is noted that in one embodiment, the [0173] optional delay process 36′ and settings 18′ may be included, i.e., the data may be delayed, as shown in FIG. 4B. In this embodiment, after the algorithm sequence is applied, the data may be output to the delay block 36′, which may introduce the various delays in the storage area 18, and then these may be input to the system model 26′ which may then operate in a predictive mode to either predict an output or to predict/determine control inputs for the DCS 24. As FIG. 4B shows, the output of the delay 36′ may be input to the single system model 26′. In one embodiment, the delay 36′ may be controlled by the control device 37 to determine the delay settings for storage in the storage area 18, as shown.
  • FIG. 5 is a more detailed block diagram of the [0174] preprocessor 12 utilized during the training mode, according to one embodiment. In one embodiment, there may be three stages to the preprocessing operation. The central operation may be a time merge operation (or a merge operation based on some other independent variable), represented by block 40. However, in one embodiment, prior to performing a time merge operation on the data, a pre-time merge process may be performed, as indicated by block 42. In one embodiment, after the time merge operation, the data may be subjected to a post-time merge process, as indicated by block 44.
  • In an embodiment in which the delay process is included, the output of the post-time [0175] merge process block 44 may provide the preprocessed data for input to the delay block 16, shown in FIGS. 3B and 4B, and described above.
  • In one embodiment, a [0176] controller 46 may be included for controlling the process operation of the blocks 40-44, the outputs of which may be input to the controller 46 on lines 48. The controller 46 may be interfaced with a functional algorithm storage area 50 through a bus 52 and a time merge algorithm 54 through a bus 56. The functional algorithm storage area 50 may be operable to store various functional algorithms that may be mathematical, logical, etc., as described below. The time merge algorithm storage area 54 may be operable to contain various time merge formats that may be utilized, such as extrapolation, interpolation or a boxcar method, among others.
  • In one embodiment, a process [0177] sequence storage area 58 may be included that may be operable to store the sequence of the various processes that are determined during the training mode. As shown, an interface to these stored sequences may be provided by a bi-directional bus 60. During the training mode, the controller 46 may determine which of the functional algorithms are to be applied to the data and which of the time merge algorithms are to be applied to the data in accordance with instructions received from an operator input through an input/output device 62. During the run-time mode, the process sequence in the storage area 58 may be utilized to apply the various functional algorithms and time merge algorithms to input data, for use in operation or control of the real-time system or process.
• [0178] FIG. 6 is a simplified block diagram of a time merge operation, according to one embodiment. All of the input data x(t) may be input to the time merge block 40 to provide time merge data xD(t) on the output thereof. Although not shown, the output target data y(t) may also be processed through the time merge block 40 to generate time merged output data y′(t). Thus, in one embodiment, input data x(t) and/or target data y(t) may be processed through the time merge block 40 to homogenize the time-dependence of the data. As mentioned above, in other embodiments, input data x(v) and/or target data y(v) may be processed through the merge block 40 to homogenize the dependence of the data with respect to some other independent variable v (i.e., instead of time t). In the descriptions that follow, dependence of the data on time t is assumed; however, the techniques are similarly applicable to data that depend on other variables.
• [0179] Referring now to FIGS. 7A and 7B, there are illustrated embodiments of data blocks of one input data set x1(t), shown in FIG. 7A, and the resulting time merged output x′1D(t), shown in FIG. 7B. It may be seen that the waveform associated with x1(t) has only a certain number, n, of sample points associated therewith. In one embodiment, the time-merge operation may comprise a transform that takes one or more columns of data, xi(ti), such as that shown in FIG. 7A, with ni time samples at times ti. That is, the time-merge operation may comprise a function, Ω, that produces a new set of data {x′} on a new time scale t′ from the given set of data x(t) sampled at t.
• $\{\vec{x}\,', \vec{t}\,'\} = \Omega\{\vec{x}, \vec{t}\}$  (28)
• [0180] This function may be performed via any of a variety of conventional extrapolation, interpolation, or boxcar algorithms (among others). An example representation as a C-language callable function is shown below:
• $\mathrm{return} = \mathrm{time\_merge}\left(\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_k,\ \vec{t}_1, \ldots, \vec{t}_k,\ \vec{x}_1', \ldots, \vec{x}_k',\ \vec{t}\,'\right)$  (29)
• [0181] where $\vec{x}_i$, $\vec{t}_i$ are vectors of the old values and old times; $\vec{x}_1', \ldots, \vec{x}_k'$ are vectors of the new values; and $\vec{t}\,'$ is the new time-scale vector.
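• The patent gives only the signature of the time_merge routine, not its body, so the following C sketch shows one plausible single-column realization using linear interpolation, one of the conventional algorithms named above. The function name time_merge_linear, the end-point hold behavior, and the requirement that both time vectors be ascending are assumptions for illustration; a boxcar or extrapolation variant would have the same overall shape.

    #include <stddef.h>

    /* Map one column of samples x taken at (strictly ascending) times t onto
       a new ascending time vector t_new by linear interpolation; query points
       outside [t[0], t[n-1]] are held at the nearest end value. */
    static void time_merge_linear(const double *t, const double *x, size_t n,
                                  const double *t_new, double *x_new, size_t m)
    {
        size_t j = 0;                        /* index of left bracketing sample */
        for (size_t i = 0; i < m; i++) {
            if (t_new[i] <= t[0])     { x_new[i] = x[0];     continue; }
            if (t_new[i] >= t[n - 1]) { x_new[i] = x[n - 1]; continue; }
            while (t[j + 1] < t_new[i])
                j++;                         /* slide to the bracketing pair */
            double w = (t_new[i] - t[j]) / (t[j + 1] - t[j]);
            x_new[i] = x[j] + w * (x[j + 1] - x[j]);
        }
    }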
• [0182] FIG. 8A shows a data table with bad, missing, or incomplete data. The data table may consist of data with time disposed along a vertical scale and the samples disposed along a horizontal scale. Each sample may include many different pieces of data, with two data intervals illustrated. It is noted that when the data are examined for both the data sampled at time interval “1” and the data sampled at time interval “2”, some portions of the data result in incomplete patterns. This is illustrated by a dotted line 63, where it may be seen that some data are missing in the data sampled at time interval “1” and some data are missing in time interval “2”. A complete support vector machine pattern is illustrated in box 64, where all the data are complete. Of interest is the time difference between the data sampled at time interval “1” and the data sampled at time interval “2”. In time interval “1”, the data are essentially present for all steps in time, whereas data sampled at time interval “2” are only sampled periodically relative to data sampled at time interval “1”. As such, a data reconciliation procedure may be implemented that may fill in the missing data, for example, by interpolation, and may also reconcile between the time samples in time interval “2” such that the data are complete for all time samples for both time interval “1” and time interval “2”.
  • The support vector machine based models that are utilized for time-series prediction and control may require that the time-interval between successive training patterns be constant. Since the data generated from real-world systems may not always be on the same time scale, it may be desirable to time-merge the data before it is used for training or running the support vector machine based model. To achieve this time-merge operation, it may be necessary to extrapolate, interpolate, average, or compress the data in each column over each time-region so as to give input values x′(t) that are on the appropriate time-scale. All of these operations are referred to herein as “data reconciliation”. The reconciliation algorithm utilized may include linear estimates, spline-fit, boxcar algorithms, etc. If the data are sampled too frequently in the time-interval, it may be necessary to smooth or average the data to generate samples on the desired time scale. This may be done by window averaging techniques, sparse-sample techniques or spline techniques, among others. [0183]
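• As one example of the smoothing case mentioned in the preceding paragraph, a window (boxcar) average can compress data sampled too frequently down to the desired time scale. The sketch below is a simplified illustration, assuming the fast data are uniformly sampled and the column length is a multiple of the window; neither assumption comes from the patent, and the function name is hypothetical.

    #include <stddef.h>

    /* Boxcar ("window average") compression: average each group of "win"
       consecutive fast samples down to one sample on the slower time scale.
       For simplicity, n is assumed to be a multiple of win. */
    static void boxcar_compress(const double *x, size_t n, size_t win, double *out)
    {
        for (size_t i = 0; i + win <= n; i += win) {
            double sum = 0.0;
            for (size_t k = 0; k < win; k++)
                sum += x[i + k];
            out[i / win] = sum / (double)win;
        }
    }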
• [0184] In general, x′(t) is a function of all or a portion of the raw values x(t) given at present and past times up to some maximum past time, $t_{\max}$. That is,
• $\vec{x}\,'(t) = f\big(x_1(t_N), x_2(t_N), \ldots, x_n(t_N);\ x_1(t_{N-1}), x_2(t_{N-1}), \ldots, x_n(t_{N-1});\ \ldots;\ x_1(t_1), x_2(t_1), \ldots, x_n(t_1)\big)$  (30)
• [0186] where some of the values of $x_i(t_j)$ may be missing or bad.
• [0187] In one embodiment, this method of finding x′(t) using past values may be based strictly on extrapolation. Since the system typically only has past values available during run-time mode, these past values may preferably be reconciled. A simple method of reconciling is to take the next extrapolated value $x_i'(t) = x_i(t_N)$; that is, take the last value that was reported. More elaborate extrapolation algorithms may use past values $x_i(t - \tau_{ij})$, $j \in \{0, \ldots, i_{\max}\}$. For example, linear extrapolation may use:
• $x_i'(t) = x_i(t_{N-1}) + \left[\dfrac{x_i(t_N) - x_i(t_{N-1})}{t_N - t_{N-1}}\right] t, \quad t > t_N$  (31)
• [0188] Polynomial, spline-fit, or support vector machine extrapolation techniques may use Equation 30, according to one embodiment. In one embodiment, training of the support vector machine may actually use interpolated values, i.e., Equation 31, where, in the case of interpolation, tN > t.
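• The two extrapolation choices just described, holding the last reported value and extending the slope of the last two samples, might be coded as follows. The point-slope form used in extrap_linear is the conventional rendering of the idea behind Equation 31, and the function names are illustrative; at least two past samples are assumed to be available.

    #include <stddef.h>

    /* (a) Hold the last reported value: x'(t) = x(t_N). */
    static double extrap_hold(const double *x, size_t n)
    {
        return x[n - 1];
    }

    /* (b) Linear extrapolation from the last two samples (n >= 2),
       evaluated at a query time t_query > t[n-1]. */
    static double extrap_linear(const double *t, const double *x, size_t n,
                                double t_query)
    {
        double slope = (x[n - 1] - x[n - 2]) / (t[n - 1] - t[n - 2]);
        return x[n - 1] + slope * (t_query - t[n - 1]);
    }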
• [0189] FIG. 8B illustrates one embodiment of an input data pattern and target output data pattern illustrating the preprocess operation for both preprocessing input data to provide time merged output data and also preprocessing the target output data to provide preprocessed target output data for training purposes. The data input x(t) may include a vector with many inputs, x1(t), x2(t), . . . xn(t), each of which may be on a different time scale. It is desirable that the output x′(t) be extrapolated or interpolated to ensure that all data are present on a single time scale. For example, if the data at x1(t) were on a time scale of one sample every second, represented by the time tk, and the output time scale were desired to be the same, this would require time merging the rest of the data to that time scale. It may be seen that in this example, the data x2(t) occurs approximately once every three seconds, it also being noted that this may be asynchronous data, although it is illustrated as being synchronized. In other words, in some embodiments, the time intervals between data samples may not be constant. The data buffer in FIG. 8B is illustrated in actual time. The reconciliation may be as simple as holding the last value of the input x2(t) until a new value is input thereto, and then discarding the old value. In this manner, an output may always exist. This technique may also be used in the case of missing data. However, a reconciliation routine as described above may also be utilized to ensure that data are always on the output for each time slice of the vector x′(t). This technique may also be used with respect to the target output which is preprocessed to provide the preprocessed target output y′(t).
  • In the example of input data (for training and/or operation) with differing time scales, one set of data may be taken on an hourly basis and another set of data taken on a quarter hour (i.e., every fifteen minutes) basis, thus, for three out of every four data records on the quarter hour basis there will be no corresponding data from the hourly set. These areas of missing data must be filled in to assure that all data are presented at commonly synchronized times to the support vector machine. In other words, the time scales of the two data sets must be the same, and so must be reconciled. [0190]
  • As another example of reconciling different time scales for input data sets, in one data set the data sample periods may be non-periodic, producing asynchronous data, while another data set may be periodic or synchronous, e.g., hourly, thus, their time scales differ. In this case, the asynchronous data may be reconciled to the synchronous data. [0191]
  • In another example of data sets with differing time scales, one data set may have a “hole” in the data, as described above, compared to another set, i.e., some data may be missing in one of the data sets. The presence of the hole may be considered to be an asynchronous or anomalous time interval in the data set, which may then require reconciliation with a second data set to be useful with the second set. [0192]
  • In yet another example of different time scales for input data sets, two data sets may have two different respective time scales, e.g., an hourly basis and a 15 minute basis. The desired time scale for input data to the SVM may have a third basis, e.g., daily. Thus, the two data sets may need to be reconciled with the third timeline prior to being used as input to the SVM. [0193]
  • FIG. 8C illustrates one embodiment of the time merge operation. Illustrated are two formatted tables, one for the set of data x[0194] 1(t) and x2(t), the second for the set of data x′1(t) and x′2(t). The data set for x1(t) is illustrated as being on one time scale and the data set for x2(t) is on a second, different time scale. Additionally, one value of the data set x1(t) is illustrated as being bad, and is therefore “cut” from the data set, as described below. In this example, the preprocessing operation fills in, i.e., replaces, this bad data and then time merges the data, as shown. In this example, the time scale for x1(t) is utilized as a time scale for the time merge data such that the time merge data x′1(t) is on the same time scale with the “cut” value filled in as a result of the preprocessing operation and the data set x2(t) is processed in accordance with one of the time merged algorithms to provide data for x′2(t) and on the same time scale as the data x′1(t). These algorithms will be described in more detail below.
  • FIG. 9A is a high level flowchart depicting one embodiment of a preprocessing operation for preprocessing input data to a support vector machine. It should be noted that in other embodiments, various of the steps may be performed in a different order than shown, or may be omitted. Additional steps may also be performed. [0195]
  • The preprocess may be initiated at a [0196] start block 902. Then, in 904, input data for the support vector machine may be received, such as from a run-time system, or data storage. The received data may be stored in an input buffer.
  • As mentioned above, the support vector machine may comprise a non-linear model having a set of model parameters defining a representation of a system. The model parameters may be capable of being trained, i.e., the SVM may be trained via the model parameters or coefficients. The input data may be associated with at least two inputs of a support vector machine, and may be on different time scales relative to each other. In the case of missing data associated with a single input, the data may be considered to be on different timescales relative to itself, in that the data gap caused by the missing data may be considered an asynchronous portion of the data. [0197]
  • It should be noted that in other embodiments, the scales of the input data may be based on a different independent variable than time. In one embodiment, one time scale may be asynchronous, and a second time scale may be synchronous with an associated time sequence based on a time interval. In one embodiment, both time scales may be asynchronous. In yet another embodiment, both time scales may be synchronous, but based on different time intervals. As also mentioned above, this un-preprocessed input data may be considered “raw” input data. [0198]
• [0199] In 906, a desired time scale (or other scale, depending on the independent variable) may be determined. For example, a synchronous time scale represented in the data (if one exists) may be selected as the desired time scale. In another embodiment, a predetermined time scale may be selected.
• [0200] In 908, the input data may be reconciled to the desired time scale. In one embodiment, the input data stored in the input buffer of 904 may be reconciled by a time merge device, such as a software program, thereby generating reconciled data. Thus, after being reconciled by a time merge process, all of the input data for all of the inputs may be on the same time scale. In embodiments where the independent variable of the data is not time, the merge device may reconcile the input data such that all of the input data are on the same independent variable scale.
  • In one embodiment, where the input data associated with at least one of the inputs has missing data in an associated time sequence, the time merge device may be operable to reconcile the input data to fill in the missing data, thereby reconciling the gap in the data to the time scale of the data set. [0201]
• [0202] In one embodiment, the input data associated with a first one or more of the inputs may have an associated time sequence based on a first time interval, and a second one or more of the inputs may have an associated time sequence based on a second time interval. In this case, the time merge device may be operable to reconcile the input data associated with the first one or more of the inputs to the input data associated with the second one or more of the inputs, thereby generating reconciled input data associated with the first one or more of the inputs having an associated time sequence based on the second time interval.
  • In another embodiment, the input data associated with a first one or more of the inputs may have an associated time sequence based on a first time interval, and the input data associated with a second different one or more of the inputs may have an associated time sequence based on a second time interval. The time merge device may be operable to reconcile the input data associated with the first one or more of the inputs and the input data associated with the second one or more of the inputs to a time scale based on a third time interval, thereby generating reconciled input data associated with the first one or more of the inputs and the second one or more of the inputs having an associated time sequence based on the third time interval. [0203]
• [0204] In one embodiment, the input data associated with a first one or more of the inputs may be asynchronous, while the input data associated with a second one or more of the inputs may be synchronous with an associated time sequence based on a time interval. The time merge device may be operable to reconcile the asynchronous input data to the synchronous input data, thereby generating reconciled input data associated with the first one or more of the inputs, wherein the reconciled input data comprise synchronous input data having an associated time sequence based on the time interval.
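• A minimal sketch of this asynchronous-to-synchronous reconciliation in C: each point of a fixed-interval grid takes the most recent asynchronous sample at or before it (sample-and-hold, consistent with the hold-last-value technique described for FIG. 8B). The function name and the choice to reuse the first sample for grid points that precede all data are assumptions made here.

    #include <stddef.h>

    /* Reconcile asynchronous samples (t_async, x_async), in ascending time
       order, onto a synchronous grid starting at t0 with fixed spacing dt:
       each of the m grid points holds the latest asynchronous value at or
       before it. Grid points before the first sample reuse that sample. */
    static void hold_onto_grid(const double *t_async, const double *x_async,
                               size_t n, double t0, double dt,
                               double *x_sync, size_t m)
    {
        size_t j = 0;
        for (size_t i = 0; i < m; i++) {
            double t = t0 + dt * (double)i;
            while (j + 1 < n && t_async[j + 1] <= t)
                j++;                         /* advance to latest sample <= t */
            x_sync[i] = x_async[j];
        }
    }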
• [0205] In 910, in response to the reconciliation of 908, the reconciled input data may be output. In one embodiment, an output device may output the data reconciled by the time merge device as reconciled data, where the reconciled data comprise the input data to the support vector machine.
  • In one embodiment, the received input data of [0206] 904 may comprise training data which includes target input data and target output data. The reconciled data may comprise reconciled training data which includes reconciled target input data and reconciled target output data which are both based on a common time scale (or other common scale).
  • In one embodiment, the support vector machine may be operable to be trained according to a predetermined training algorithm applied to the reconciled target input data and the reconciled target output data to develop model parameter values such that the support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data. In other words, the model parameters of the support vector machine may be trained based on the reconciled target input data and the reconciled target output data, after which the support vector machine may represent the system. [0207]
  • In one embodiment, the input data of [0208] 904 may comprise run-time data, such as from the system being modeled, and the reconciled data of 908 may comprise reconciled run-time data. In this embodiment, the support vector machine may be operable to receive the run-time data and generate run-time output data. In one embodiment, the run-time output data may comprise control parameters for the system. The control parameters may be usable to determine control inputs to the system for run-time operation of the system. For example, in an e-commerce system, control inputs may include such parameters as advertisement or product placement on a website, pricing, and credit limits, among others.
  • In another embodiment, the run-time output data may comprise predictive output information for the system. For example, the predictive output information may be usable in making decisions about operation of the system. In an embodiment where the system may be a financial system, the predictive output information may indicate a recommended shift in investment strategies, for example. In an embodiment where the system may be a manufacturing plant, the predictive output information may indicate production costs related to increased energy expenses, for example. [0209]
  • FIG. 9B is a high level flowchart depicting another embodiment of a preprocessing operation for preprocessing input data to a support vector machine. As noted above, in other embodiments, various of the steps may be performed in a different order than shown, or may be omitted. Additional steps may also be performed. In this embodiment, the input data may include one or more outlier values which may be disruptive or counter-productive to the training and/or operation of the support vector machine. [0210]
  • The preprocess may be initiated at a [0211] start block 902. Then, in 904, input data for the support vector machine may be received, as described above with reference to FIG. 9A, and may be stored in an input buffer.
• [0212] In 907, the received data may be analyzed to determine any outliers in the data set. In other words, the data may be analyzed to determine which, if any, data values fall above or below an acceptable range.
• [0213] After the determination of any outliers in the data, in 909, the outliers, if any, may be removed from the data, thereby generating corrected input data. The removal of outliers may result in a data set with missing data, i.e., with gaps in the data.
  • In one embodiment, a graphical user interface (GUI) may be included whereby a user or operator may view the received data set. The GUI may thus provide a means for the operator to visually inspect the data for bad data points, i.e., outliers. The GUI may further provide various tools for modifying the data, including tools for “cutting” the bad data from the set. [0214]
• [0215] In one embodiment, the detection and removal of the outliers may be performed by the user via the GUI. In another embodiment, the user may use the GUI to specify one or more algorithms which may then be applied to the data programmatically, i.e., automatically. In other words, a GUI may be provided which is operable to receive user input specifying one or more data filtering operations to be performed on the input data, where the one or more data filtering operations operate to remove and/or replace the one or more outlier values. Additionally, the GUI may be further operable to display the input data prior to and after performing the filtering operations on the input data. Finally, the GUI may be operable to receive user input specifying a portion of said input data for the data filtering operations. Further details of the GUI are provided below with reference to FIGS. 10A-10F.
  • After the outliers have been removed from the data in [0216] 909, the removed data may optionally be replaced, as indicated in 911. In other words, the preprocessing operation may “fill in” the gap resulting from the removal of outlying data. Various techniques may be brought to bear to generate the replacement data, including, but not limited to, clipping, interpolation, extrapolation, spline fits, sample/hold of a last prior value, etc., as are well known in the art.
  • In another embodiment, the removed outliers may be replaced in a later stage of preprocessing, such as the time merge process described above. In this embodiment, the time merge process will detect that data are missing, and operate to fill the gap. [0217]
• [0218] Thus, in one embodiment, the preprocess may operate as a data filter, analyzing input data, detecting outliers, and removing the outliers from the data set. The filter parameters may simply be a predetermined value limit or range against which a data value may be tested. If the value falls outside the range, the value may be removed, or clipped to the limit value, as desired. In one embodiment, the limit(s) or range may be determined dynamically. For example, in one embodiment, the range may be determined based on the standard deviation of a moving window of data in the data set, e.g., any value outside a two sigma band for a moving window of 100 data points may be clipped or removed. As mentioned above, the data filter may also operate to replace the outlier values with more appropriate replacement values.
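• The moving-window two-sigma filter described above might look like the following sketch, which only flags the outliers; cutting, clipping, or replacing them would be a separate pass, as the text describes. The window length parameter corresponds to the 100-point window in the example, while the trailing-window formulation, the function name, and the flag array are illustrative choices.

    #include <math.h>
    #include <stddef.h>

    /* Flag outliers with a moving window: a point is marked bad when it falls
       outside mean +/- 2*sigma of the "win" points that precede it. bad[i] is
       set to 1 for outliers, 0 otherwise; points without enough history pass. */
    static void flag_outliers(const double *x, size_t n, size_t win, int *bad)
    {
        for (size_t i = 0; i < n; i++) {
            bad[i] = 0;
            if (i < win)
                continue;                     /* not enough history yet */
            double sum = 0.0, sq = 0.0;
            for (size_t k = i - win; k < i; k++) {
                sum += x[k];
                sq  += x[k] * x[k];
            }
            double mean  = sum / (double)win;
            double var   = sq / (double)win - mean * mean;
            double sigma = sqrt(var > 0.0 ? var : 0.0);  /* guard fp round-off */
            if (fabs(x[i] - mean) > 2.0 * sigma)
                bad[i] = 1;
        }
    }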
  • In one embodiment, the received input data of [0219] 904 may comprise training data including target input data and target output data, and the corrected data may comprise corrected training data which includes corrected target input data and corrected target output data.
  • In one embodiment, the support vector machine may be operable to be trained according to a predetermined training algorithm applied to the corrected target input data and the corrected target output data to develop model parameter values such that the support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data. In other words, the model parameters of the support vector machine may be trained based on the corrected target input data and the corrected target output data, after which the support vector machine may represent the system. [0220]
• [0221] In one embodiment, the input data of 904 may comprise run-time data, such as from the system being modeled, and the corrected data of 909 may comprise corrected run-time data. In this embodiment, the support vector machine may be operable to receive the corrected run-time data and generate run-time output data. In one embodiment, the run-time output data may comprise control parameters for the system. The control parameters may be usable to determine control inputs to the system for run-time operation of the system. For example, in an e-commerce system, control inputs may include such parameters as advertisement or product placement on a website, pricing, and credit limits, among others.
  • In another embodiment, the run-time output data may comprise predictive output information for the system. For example, the predictive output information may be usable in making decisions about operation of the system. In an embodiment where the system may be a financial system, the predictive output information may indicate a recommended shift in investment strategies, for example. In an embodiment where the system may be a manufacturing plant, the predictive output information may indicate production costs related to increased energy expenses, for example. [0222]
  • Thus, in one embodiment, the preprocessor may be operable to detect and remove and/or replace outlying data in an input data set for the support vector machine. [0223]
  • FIG. 9C is a detailed flowchart depicting one embodiment of the preprocessing operation. In this embodiment, the preprocessing operations described above with reference to FIGS. 9A and 9B are both included. It should be noted that in other embodiments, various of the steps may be performed in a different order than shown, or may be omitted. Additional steps may also be performed. [0224]
  • The flow chart may be initiated at [0225] start block 902 and then may proceed to a decision block 903 to determine if there are any pre-time merge process operations to be performed. If so, the program may proceed to a decision block 905 to determine whether there are any manual preprocess operations to be performed. If so, the program may continue along the “Yes” path to a function block 912 to manually preprocess the data. In the manual preprocessing of data 912, the data may be viewed in a desired format by the operator and the operator may look at the data and eliminate, “cut”, or otherwise modify obviously bad data values.
  • For example, if the operator notices that one data value is significantly out of range with the normal behavior of the remaining data, this data value may be “cut” such that it is no longer present in the data set and thereafter appears as missing data. This manual operation is in contrast to an automatic operation where all values may be subjected to a predetermined algorithm to process the data. [0226]
  • In one embodiment, an algorithm may be generated or selected that either cuts out all data above/below a certain value or clips the values to a predetermined maximum/minimum. In other words, the algorithm may constrain values to a predetermined range, either removing the offending data altogether, or replacing the values, using the various techniques described above, including clipping, interpolation, extrapolation, splines, etc. The clipping to a predetermined maximum/minimum is an algorithmic operation that is described below. [0227]
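• The clipping operation referred to here reduces, in code, to constraining each value to a predetermined range; the cut variant would instead mark the value as missing. A one-line hedged sketch (the function name is illustrative):

    /* Clip a value into a predetermined [lo, hi] range; out-of-range values
       are replaced by the nearest limit rather than being removed. */
    static double clip(double v, double lo, double hi)
    {
        return (v < lo) ? lo : (v > hi) ? hi : v;
    }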
  • After displaying and processing the data manually, the program may proceed to a decision block [0228] 914. It is noted that if the manual preprocess operation is not utilized, the program may continue from the decision block 905 along the “No” path to the input of decision block 914. The decision block 914 may be operable to determine whether an algorithmic process is to be applied to the data. If so, the program may continue along a “Yes” path to a function block 916 to select a particular algorithmic process for a given set of data. After selecting the algorithmic process, the program may proceed to a function block 918 to apply the algorithmic process to the data and then to a decision block 920 to determine if more data are to be processed with the algorithmic process. If so, the program may flow back around to the input of the function block 916 along a “Yes” path, as shown. Once all data have been subjected to the desired algorithmic processes, the program may flow along a “No” path from decision block 920 to a function block 922 to store the sequence of algorithmic processes such that each data set has the desired algorithmic processes applied thereto in the sequence. Additionally, if the algorithmic process is not selected by the decision block 914, the program may flow along a “No” path to the input of the function block 922.
  • After the sequence is stored in the [0229] function block 922, the program may flow to a decision block 924 to determine if a time merge operation is to be performed. The program also may proceed along a “No” path from the decision block 903 to the input of decision block 924 if the pre-time-merge process is not required. The program may continue from the decision block 924 along the “Yes” path to a function block 926 if the time merge process has been selected, and then the time merge operation may be performed. The time merge process may then be stored with the sequence as part thereof in block 928. The program then may proceed to a decision block 930 to determine whether the post time merge process is to be performed. If the time merge process is not performed, as determined by the decision block 924, the program may flow along the “No” path therefrom to the decision block 930.
  • If the post time merge process is to be performed, the program may continue along the “Yes” path from the [0230] decision block 930 to a function block 932 to select the algorithmic process and then to a function block 934 to apply the algorithmic process to the desired set of data and then to a decision block 936 to determine whether additional sets of data are to be processed in accordance with the algorithmic process. If so, the program may flow along the “Yes” path back to the input of function block 932, and if not, the program may flow along the “No” path to a function block 938 to store the new sequence of algorithmic processes with the sequence and then the program may proceed to a DONE block 1000. If the post time merge process is not to be performed, the program may flow from the decision block 930 along the “No” path to the DONE block 1000.
• [0231] Referring now to FIGS. 10A-10E, there are illustrated embodiments of three plots of data. FIGS. 10A-10E also illustrate one embodiment of a graphical user interface (GUI) for various data manipulation/reconciliation operations which may be included in one embodiment of the present invention. It is noted that these embodiments are meant to be exemplary illustrations only, and are not meant to limit the application of the invention to any particular application domain or operation. In this example, each figure includes one plot for an input “temp1”, one plot for an input “press2”, and one plot for an output “ppm”, as may relate to a chemical plant. In this example, the first input may relate to a temperature measurement, the second input may relate to a pressure measurement, and the output data may correspond to a parts per million variation.
• [0232] As shown in FIGS. 10A-10C, in the first data set, the temp1 data, there are two points of data 108 and 110 which need to be “cut” from the data, as they are obviously bad data points. Such data points that lie outside the acceptable range of a data set are generally referred to as “outliers”. These two data points appear as cut data in the data set, as shown in FIG. 10C, which then may be filled in or replaced by the appropriate time merge operation utilizing extrapolation, interpolation, or other techniques, as desired.
  • Thus, in one embodiment, the data preprocessor may include a data filter which may be operable to analyze input data, detect outliers, and remove the outliers from the data set. As mentioned above, in one embodiment, the applied filter may simply be a predetermined value limit or range against which a data value may be tested. If the value falls outside the range, the value may be removed, or clipped to the limit value, as desired. In one embodiment, the limit(s) or range may be determined dynamically. For example, in one embodiment, the range may be determined based on the standard deviation of a moving window of data in the data set, e.g., any value outside a two sigma band for a moving window of 100 data points may be clipped or removed. In one embodiment, the filter may replace any removed outliers using any of such techniques as extrapolation and interpolation, among others. In another embodiment, as mentioned above, the removed outliers may be replaced in a later stage of processing, such as the time merge process described herein. In this embodiment, the time merge process will detect that data are missing, and operate to fill the gaps. [0233]
• [0234] FIG. 10A shows the raw data. FIG. 10B shows the use of a cut data region tool 115, with the points 108 and 110 highlighted by dots showing them as cut data points. In one embodiment of the GUI presented on a color screen, these dots may appear in red. FIG. 10D shows a vertical cut of the data, cutting across several variables simultaneously. Applying this cut may cause all of the data points to be marked as cut, as shown in FIG. 10E. FIG. 10F flowcharts one embodiment of the steps involved in cutting or otherwise modifying the data. In one embodiment, a region of data may be selected by a set of boundaries 112 (in FIG. 10D), the results of which may be utilized to block out data. For example, if it were determined that data during a certain time period were invalid due to various reasons, these data may be removed from the data sets, with the subsequent preprocessing operable to fill in the “blocked” or “cut” data.
• [0235] In one embodiment, the data may be displayed as illustrated in FIGS. 10A-10E, and the operator may select various processing techniques to manipulate the data via various tools, such as cutting, clipping, and viewing tools 107, 111, 113, that may allow the user to select data items to cut, clip, transform, or otherwise modify. In one mode, the mode for removing data, this may be referred to as a manual manipulation of the data. However, algorithms may be applied to the data to change the value of that data. Each time the data are changed, the data may be rearranged in the spreadsheet format of the data. In one embodiment, the operator may view the new data as the operation is being performed.
  • With the provisions of the various clipping and [0236] viewing tools 107, 111, and 113, the user may be provided the ability to utilize a graphic image of data in a database, manipulate the data on a display in accordance with the selection of the various cutting tools, and modify the stored data in accordance with these manipulations. For example, a tool may be utilized to manipulate multiple variables over a given time range to delete all of that data from the input database and reflect it as “cut” data. The data set may then be considered to have missing data, which may require a data reconciliation scheme in order to replace this data in the input data stream. Additionally, the data may be “clipped”; that is, a graphical tool may be utilized to determine the level at which all data above (or below) that level is modified. All data in the data set, even data not displayed, may be modified to this level. This in effect may constitute applying an algorithm to that data set.
  • In FIG. 10F, the flowchart depicts one embodiment of an operation of utilizing the graphical tools for cutting data. An initiation block, [0237] data set 117, may indicate the acquisition of the data set. The program then may proceed to a decision block 119 to determine if the variables have been selected and manipulated for display. If not, the program may proceed along a “No” path to a function block 121 to select the display type and then to a function block 123 to display the data in the desired format. The program then may continue to a decision block 125 wherein tools for modifying the data are selected. When this is done, the program may continue along a “DONE” line back to decision block 119 to determine if all of the variables have been selected. However, if the data are still in the modification stage, the program may proceed to a decision block 127 to determine if an operation is cancelled and, if so, may proceed back around to the decision block 125. If the operation is not cancelled, the program may continue along a “No” path to function block 129 to apply the algorithmic transformation to the data and then to function block 131 to store the transform as part of a sequence. The program then may continue back to function block 123. This may continue until the program continues along the “DONE” path from decision block 125 back to decision block 119.
• [0238] Once all the variables have been selected and displayed, the program may proceed from decision block 119 along a “Yes” path to decision block 133 to determine if the transformed data are to be saved. If not, the program may proceed along a “No” path to “DONE” block 135. If the transformed data are to be saved, the program may continue from the decision block 133 along the “Yes” path to a function block 137 to transform the data set and then to the “DONE” block 135.
• [0239] FIG. 11 is a diagrammatic view of a display (i.e., a GUI) for performing algorithmic functions on the data, according to one embodiment. In one embodiment, the display may include a first numerical template 114 which may provide a numerical keypad function. A window 116 may be provided that may display the variable(s) that is/are being operated on. The variables that are available for manipulation may be displayed in a window 118. In this embodiment, the various variables are arranged in groups, one group associated with a first date and time, e.g., variables temp1 and press1, and a second group associated with a second date and time, e.g., variables temp2 and press2, for example, prior to time merging. A mathematical operator window 120 may be included that may provide various mathematical operators (e.g., “+”, “−”, etc.) which may be applied to the variables. Various logical operators may also be available in the window 120 (e.g., “AND”, “OR”, etc.). Additionally, in one embodiment, a functions window 122 may be included that may allow selection of various mathematical functions, logical functions, etc. (e.g., exp, frequency, ln, log, max, etc.) for application to any of the variables, as desired.
  • In the example illustrated in FIG. 11, the variable temp1 may be selected to be processed and the logarithmic function selected for application thereto. For example, the variable temp1 may first be selected from [0240] window 118 and then the logarithmic function “log” selected from the window 122. In one embodiment, the left parenthesis may then be selected from window 120, followed by the selection of the variable temp1 from window 118, then followed by the selection of the right parenthesis from window 120. This may result in the selection of an algorithmic process which includes a logarithm of the variable temp1. This may then be stored as a sequence, such that upon running the data through the run-time sequence, data associated with the variable temp1 has the logarithmic function applied thereto prior to inputting to the run-time system model 26. This process may be continued or repeated for each desired operation.
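• One plausible way to represent such a stored algorithmic sequence in C is as an ordered list of per-value transform functions that can be replayed over run-time data, as the text describes for log(temp1). The typedef, the function names, and the composition scheme below are illustrative assumptions, not the patent's data structures; the add-5000 step anticipates an example given later for Table 4.

    #include <math.h>
    #include <stddef.h>

    /* A stored preprocess sequence as an ordered list of per-value transforms,
       replayed in order over a data column at run time. */
    typedef double (*transform_fn)(double);

    static double xform_log(double v)       { return log(v); }  /* assumes v > 0 */
    static double xform_plus_5000(double v) { return v + 5000.0; }

    static void apply_sequence(double *col, size_t n,
                               const transform_fn *seq, size_t n_ops)
    {
        for (size_t op = 0; op < n_ops; op++)
            for (size_t i = 0; i < n; i++)
                col[i] = seq[op](col[i]);
    }

    /* Example: const transform_fn temp1_seq[] = { xform_log };
       apply_sequence(temp1_column, n_rows, temp1_seq, 1); */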
• [0241] After the data have been manually preprocessed as described above with reference to FIGS. 10A-10F, the resultant data may be as depicted in Table 1, as shown in FIG. 12. It may be seen in Table 1 that there is a time scale difference, one group associated with the time TIME 1 and one group associated with the time TIME 2. It may be seen that the first time scale is based on an hourly interval and that the second time scale is based on a two hour interval. Any “cut” data (not shown) would appear as missing data.
  • After the data have been manually preprocessed, the algorithmic processes may be applied thereto. In the example described above with reference to FIG. 11, the variable temp1 is processed by taking a logarithm thereof. This may result in a variation of the set of data associated with the variable temp1. This is illustrated in Table 2, as shown in FIG. 12. [0242]
  • The sequence of operations associated therewith may determine the data that were cut out of the original data set for data temp1 and also the algorithmic processes associated therewith, these being in a sequence which is stored in the [0243] sequence block 14 and which may be examined via a data-column properties module 113, shown in FIGS. 10A-10E, as illustrated in Properties 2, of FIG. 12.
  • To perform the time merge, the operator may select the [0244] time merge function 115, illustrated in FIG. 10B, and may specify the time scale and type of time merge algorithm. For example, in FIG. 10B, a one-hour time-scale is selected and the box-car algorithm of merging is used.
  • After the time merge, the time scale may be disposed on an hourly interval with the time merge process. This is illustrated in Table 3 of FIG. 12, wherein all of the data are on a common time scale and the cut data has been extrapolated to insert new data. [0245]
  • The sequence after time merge may include the data that are cut from the original data sets, the algorithmic processes utilized during the pre-time merge processing, and the time merge data. This is illustrated in [0246] Properties 3, as shown in FIG. 12.
  • After the time merge operation, additional processing may be utilized. For example, the display of FIG. 11 may again be pulled up, and another algorithmic process selected. One example may be to take the variable temp1 after time merge and add a value of 5000 to this variable. This may result in each value in the column associated with the variable temp1 being increased by that value, as illustrated by the data in Table 4 of FIG. 12. The sequence may then be updated using the sequence presented in [0247] Properties 4, as shown in FIG. 12.
  • FIG. 13 is a block diagram of one embodiment of a process flow, such as, for example, a process flow through a plant. Again, it is noted that although operation and control of a plant is an exemplary application of one embodiment of the present invention, any other process may also be suitable for application of the systems and methods described herein, including scientific, medical, financial, stock and/or bond management, and manufacturing, among others. [0248]
  • There is a general flow input to the plant which may be monitored at some point by [0249] flow meter 130. The flow meter 130 may provide a variable output flow1. The flow may continue to a process block 132, wherein various plant processes may be carried out. Various plant inputs may be provided to this process block 132. The flow may then continue to a temperature gauge 134, which may output a variable temp1. The flow may proceed to a process block 136 to perform other plant processes, these also receiving plant inputs. The flow may then continue to a pressure gauge 138, which may output a variable press1. The flow may continue through various other process blocks 139 and other parameter measurement blocks 140, resulting in an overall plant output 142 which may be the desired plant output. It may be seen that numerous processes may occur between the output of parameter flow1 and the plant output 142. Additionally, other plant outputs such as press1 and temp1 may occur at different stages in the process. This may result in delays between a measured parameter and an effect on the plant output. The delays associated with one or more parameters in a data set may be considered a variance in the time scale for the data set. In one embodiment, adjustments for these delays may be made by reconciling the data to homogenize the time scale of the data set, as described below.
  • FIG. 14 is a timing diagram illustrating the various effects of the output variables from the plant and the plant output, according to one embodiment. The output variable flow1 may experience a change at a [0250] point 144. Similarly, the output variable temp1 may experience a change at a point 146, and the variable press1 may experience a change at a point 148. However, the corresponding change in the output may not be time synchronous with the changes in the variables. Referring to the line labeled OUTPUT, changes in the plant output may occur at points 150, 152 and 154, for the respective changes in the variables at points 144-148, respectively. The change between points 144 and 150 and the variable flow1 and the output, respectively, may experience a delay D2. The change in the output of point 152 associated with the change in the variable temp1 may occur after delay D3. Similarly, the change in the output of point 154 associated with the change in the variable press1 may occur after a delay of D1. In accordance with one embodiment of the present invention, these delays may be accounted for during training, and/or during the run-time operation.
  • FIG. 15 is a diagrammatic view of the delay for a given input variable x[0251] 1(t), according to one embodiment. It may be seen that a delay D is introduced to the system to provide an output x1D(t) such that x1D(t)=x1(t−D), this output may then be input to the support vector machine. As such, the measured plant variables may now coincide in time with the actual effect that is realized in the measured output such that, during training, a system model may be trained with a more accurate representation of the system.
• [0252] FIG. 16 is a diagrammatic view of the method for implementing the delay, according to one embodiment. Rather than providing an additional set of data for each delay that is desired, x(t+τ), variable length buffers may be provided in each data set after preprocessing, the length of which may correspond to the longest delay. Multiple taps may be provided in each of the buffers to allow various delays to be selected. In FIG. 16, there are illustrated four buffers 156, 158, 160, and 162, associated with the preprocessed inputs x′1(t), x′2(t), x′3(t), and x′4(t). Each of the buffers has a length of N, such that the first buffer 156 outputs the delay input x1D(t), the second buffer 158 outputs the delay input x2D(t), and the third buffer 160 outputs the delay input x3D(t). The buffer 162, on the other hand, has a delay tap that may provide for a delay of “n−1” to provide an output x4D(t). An output x5D(t) may be provided by selecting the first tap in the buffer 156 such that x5D(t)=x′1(t+1). Additionally, the delayed input x6D(t) may be selected as a tap output of the buffer 160 with a value of τ=2. This may result in the overall delay inputs to the training model 20. Additionally, these delays may be stored as delay settings for use during run-time.
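• The tapped variable-length buffer of FIG. 16 might be sketched in C as a small ring buffer with a tap selector, so that several delayed copies of one preprocessed input can be read without duplicating the column. The fixed capacity of 16, the struct layout, and the function names are illustrative assumptions; a real buffer would be sized to the longest delay required.

    #include <stddef.h>
    #include <string.h>

    #define DB_CAP 16   /* N: sized to the longest delay needed */

    /* One delay buffer per preprocessed input; initialize to zero with
       memset(&buf, 0, sizeof buf) before pushing samples. */
    typedef struct {
        double data[DB_CAP];
        size_t head;            /* index of the most recent sample */
    } delay_buffer;

    static void db_push(delay_buffer *b, double v)
    {
        b->head = (b->head + 1) % DB_CAP;
        b->data[b->head] = v;
    }

    /* Read the tap "tau" steps in the past; tau = 0 is the newest sample. */
    static double db_tap(const delay_buffer *b, size_t tau)
    {
        return b->data[(b->head + DB_CAP - tau % DB_CAP) % DB_CAP];
    }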
  • FIG. 17 illustrates one embodiment of a display that may be provided to the operator for selecting the various delays to be applied to the input variables and the output variables utilized in training. In this example, it may be seen that by selecting a delay for the variable temp1 of −4.0, −3.5, and −3.0, three separate input variables have been selected for input to the [0253] training model 20. Additionally, three separate outputs are shown as selected, one for delay 0.0, one for a delay 0.5, and one for a delay of 1.0 to predict present and future values of the variable. Each of these may be processed to vary the absolute value of the delays associated with the input variables. It may therefore be seen that a maximum buffer of −4.0 for an output of 0.0 may be needed in order to provide for the multiple taps. Further, it may be seen that it is not necessary to completely replicate the data in any of the delayed variable columns as a separate column, thus increasing the amount of memory utilized.
• [0254] FIG. 18 is a block diagram of one embodiment of a system for generating process dependent delays. A buffer 170 is illustrated having a length of N, which may receive an input variable x′n(t) from the preprocessor 12 to provide on the output thereof an output xnD(t) as a delayed input to the training model 20. A multiplexer 172 may be provided which has multiple inputs, one from each of the n buffer registers, with a τ-select circuit 174 provided for selecting which of the taps to output. The value of τ may be a function of other variables or parameters such as temperature, pressure, flow rates, etc. For example, it may be noted empirically that the delays are a function of temperature. As such, the temperature relationship may be placed in the block 174, and then the external parameters input and the value of τ utilized to select the various taps input to the multiplexer 172 for output therefrom as a delay input. The system of FIG. 18 may also be utilized in the run-time operation wherein the various delay settings and functional relationships of the delay with respect to the external parameters are stored in the storage area 18. The external parameters may then be measured and the value of τ selected as a function of these parameters and the functional relationship provided by the information stored in the storage area 18. This is to be compared with the training operation wherein this information is externally input to the system. For example, with reference to FIG. 17, it may be noticed that all of the delays for the variable temp1 may be shifted up by a value of 0.5 when the temperature reaches a certain point. With the use of the multiple taps, as described with respect to FIGS. 16 and 18, it may only be necessary to vary the value of the control input to the multiplexers 172 associated with each of the variables, it being understood that in the example of FIG. 17, three multiplexers 172 would be required for the variable temp1, since there are three separate input variables.
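• To make the τ-select idea concrete, the sketch below selects a delay tap as a function of an external parameter, mirroring the empirical temperature relationship described above (delays shifting when temperature crosses a point). The threshold value and the one-tap shift are hypothetical numbers for illustration only; the selected tap would then index a ring buffer such as the one in the preceding sketch.

    /* Choose the delay tap as a function of an external parameter, in the
       spirit of the tau-select circuit 174 steering multiplexer 172. */
    static unsigned select_tau(double temperature, unsigned base_tau)
    {
        const double threshold = 350.0;   /* hypothetical switch point */
        return (temperature > threshold) ? base_tau + 1u : base_tau;
    }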
• [0255] FIG. 19 is a block diagram of one embodiment of a preprocessing system for setting delay parameters, where the delay parameters may be learned. For simplicity, the preprocessing system is not illustrated; rather, a table 176 of the preprocessed data is shown. Further, the methods for achieving the delay may differ somewhat, as described below. The delay may be achieved by a time delay adjustor 178, which may utilize the stored parameters in a delayed parameter block 18′. The delay parameter block 18′ is similar to the delay setting block 18, with the exception that absolute delays are not contained therein. Rather, information relating to a window of data may be stored in the delay parameter block 18′. The time delay adjustor 178 may be operable to select a window of data within each set of data in the table 176, the data labeled x′1 through x′n. The time delay adjustor 178 may be operable to receive data within a defined window associated with each of the sets of data x′1-x′n and convert this information into a single value for output therefrom as an input value IN1-INn. These may be directly input to a system model 26′, which system model 26′ is similar to the run-time system model 26 and the training model 20 in that it is realized with a non-linear model (e.g., a support vector machine). The non-linear model is illustrated as having an input layer 179, a middle layer 180, and an output layer 182. The middle layer 180 may be operable to map the input layer 179 to the output layer 182, as described below; note that this is a non-linear mapping function. By comparison, the time delay adjustor 178 may be operable to linearly map each of the sets of data x′1-x′n in the table 176 to the input layer 179. This mapping function may be dependent upon the delay parameters in the delay parameter block 18′. As described below, these parameters may be learned under the control of a learning module 183, which learning module 183 may be controlled during the support vector machine training in the training mode. This arrangement is similar to that described above with respect to FIG. 4.
  • During learning, the [0256] learning module 183 may be operable to control both the time delay adjustor block 178 and the delay parameter block 18′ to change the values thereof in training of the system model 26′. During training, target outputs may be input to the output layer 182 and a set of training data input thereto in the form of the chart 176, it being noted that this is already preprocessed in accordance with the operation as described above. The model parameters of the system model 26′ stored in the storage area 22 may then be adjusted in accordance with a predetermined training algorithm to minimize the error. However, the error may only be minimized to a certain extent for a given set of delays. Only by setting the delays to their optimum values may the error be minimized to the maximum extent. Therefore, the learning module 183 may be operable to vary the parameters in the delay parameter block 18′ that are associated with the timing delay adjustor 178 in order to further minimize the error.
  • FIG. 20 is a flowchart illustrating the determination of time delays for the training operation, according to one embodiment. This flowchart may be initiated at a [0257] time delay block 198 and may then continue to a function block 200 to select the delays. In one embodiment, this may be performed by the operator as described above with respect to FIG. 17. The program may then continue to a decision block 202 to determine whether variable τ are selected. The program may continue along a “Yes” path to a function block 204 to receive an external input and vary the value of τ in accordance with the relationship selected by the operator, this being a manual operation in the training mode. The program may then continue to a decision block 206 to determine whether the value of τ is to be learned by an adaptive algorithm. If variable τ are not selected in the decision block 202, the program may then continue around the function block 204 along the “No” path.
  • If the value of τ is to be learned adaptively, the program may continue from the [0258] decision block 206 to a function block 208 to learn the value of τ adaptively. The program may then proceed to a function block 210 to save the value of τ. If no adaptive learning is required, the program may continue from the decision block 206 along the “No” path to function block 210. After the τ parameters have been determined, the model 20 may be trained, as indicated by a function block 212 and then the parameters may be stored, as indicated by a function block 214. Following storage of the parameters, the program may flow to a DONE block 216.
  • FIG. 21 is a flowchart depicting operation of the system in run-time mode, according to one embodiment. The operation may be initiated at a run block 220 and may then proceed to a function block 222 to receive the data and then to a decision block 224 to determine whether the pre-time merge process is to be entered. If so, the program may proceed along a “Yes” path to a function block 226 to preprocess the data with the stored sequence and then to a decision block 228. If not, the program may continue along the “No” path to the input of the decision block 228. The decision block 228 may determine whether the time merge operation is to be performed. If so, the program may proceed along the “Yes” path to a function block 230 to time merge with the stored method and then to the input of a decision block 232; if not, the program may continue along the “No” path to the decision block 232. The decision block 232 may determine whether the post-time merge process is to be performed. If so, the program may proceed along the “Yes” path to a function block 234 to process the data with the stored sequence and then to a function block 236 to set the buffer equal to the maximum τ for the delay. If not (i.e., if the post-time merge process is not selected), the program may proceed from the decision block 232 along the “No” path to the input of the function block 236.
  • After completion of the function block 236, the program may continue to a decision block 238 to determine whether the value of τ is to be varied. If so, the program may proceed to a function block 240 to set the value of τ variably and then to the input of a function block 242; if not, the program may continue along the “No” path directly to the function block 242. The function block 242 may be operable to buffer data and generate run-time inputs. The program may then continue to a function block 244 to load the model parameters and then to a function block 246 to process the generated inputs through the model, followed by a decision block 248 to determine whether all of the data has been processed. If not, the program may continue along the “No” path back to the input of the function block 246 until all data are processed, and then along the “Yes” path to a return block 250.
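As a hedged sketch of the FIG. 21 run-time path (the stage callables, the row-based buffering, and the +1 sizing that keeps the current row are all assumptions of the sketch):

    def run_time(rows, pre_seq=None, merge=None, post_seq=None,
                 taus=(0,), vary_tau=None, model=None):
        # Blocks 222-248 of FIG. 21; each optional stage mirrors one of
        # the stored preprocessing steps.
        if pre_seq is not None:
            rows = pre_seq(rows)                 # pre-time-merge process (226)
        if merge is not None:
            rows = merge(rows)                   # time merge (230)
        if post_seq is not None:
            rows = post_seq(rows)                # post-time-merge process (234)
        buf_len = max(taus) + 1                  # buffer sized by max tau (236)
        if vary_tau is not None:
            taus = vary_tau(taus)                # set tau variably (240)
        buffer, outputs = [], []
        for row in rows:                         # buffer data, generate inputs (242)
            buffer.append(row)
            if len(buffer) >= buf_len and model is not None:
                outputs.append(model(buffer[-buf_len:]))   # process inputs (246)
        return outputs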
  • FIG. 22 is a flowchart for the operation of setting the value of τ variably (i.e., an expansion of the function block 240 illustrated in FIG. 21), according to one embodiment. The operation may be initiated at the block 240 (set τ variably) and may then proceed to a function block 254 to receive the external control input. The value of τ may be varied in accordance with the relationship stored in the storage area 14, as indicated by a function block 256. Finally, the operation may proceed to a return block 258.
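Reduced to code, this step is a one-liner; the example relationship below (a delay that grows with some external measurement) is invented purely to show the shape of the call:

    def set_tau_variably(taus, external, relationship):
        # FIG. 22: apply the relationship stored in storage area 14 to
        # the external control input to obtain the current delay values.
        return [relationship(t, external) for t in taus]

    # e.g., with an assumed linear relationship:
    new_taus = set_tau_variably([2, 5], 1.5,
                                lambda t, x: t + round(0.5 * x))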
  • FIG. 23 is a simplified block diagram of the overall run-time operation, according to one embodiment. Data may be initially output by the DCS 24 during run-time. The data may then be preprocessed in the preprocess block 34 in accordance with the preprocess parameters stored in the storage area 14, and then delayed in the delay block 36 in accordance with the delay settings set in the delay block 18. The delay block 18 may also receive the external control input, which may include the parameters on which the value of τ depends, to provide the variable setting operation that was utilized during the training mode. The output of the delay block 36 may then be input to a selection block 260, which may receive a control input. This selection block 260 may select either a control support vector machine or a prediction support vector machine: a predictive system model 262 may be provided and a control model 264 may be provided, as shown. Both models 262 and 264 may be identical to the training model 20 and may utilize the same parameters; that is, models 262 and 264 may have stored therein a representation of the system that was trained in the training model 20. The predictive system model 262 may provide predictive outputs on its output, and the control model 264 may provide predicted system inputs for the DCS 24. These predicted system inputs may be stored in a block 266 and then translated into control inputs to the DCS 24.
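The routing performed by the selection block 260 might be rendered as follows; this is an assumed single-step view in which `preprocess`, `delay`, and the two models are stand-in callables:

    def run_time_step(raw, preprocess, delay, predictive_model,
                      control_model, use_control):
        # FIG. 23: preprocess (block 34), delay (block 36), then the
        # selection block 260 routes to one of the two identical models.
        x = delay(preprocess(raw))
        if use_control:
            return control_model(x)    # predicted system inputs for the DCS
        return predictive_model(x)     # predictive outputs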
  • In one embodiment of the present invention, a predictive support vector machine may operate in a run-time mode or in a training mode with a data preprocessor for preprocessing the data prior to input to a system model. The predictive support vector machine may include an input layer, an output layer, and a middle layer for mapping the input layer to the output layer through a representation of a run-time system. Training data derived from the training system may be stored in a data file and preprocessed by a data preprocessor to generate preprocessed training data, which may then be input to the support vector machine, and the support vector machine may be trained in accordance with a predetermined training algorithm. The model parameters of the support vector machine may then be stored in a storage device for use in the run-time mode. In the run-time mode, run-time data may be preprocessed by the data preprocessor in accordance with the stored preprocessing parameters determined during the training mode, and this preprocessed data may then be input to the support vector machine, which may operate in a prediction mode. In the prediction mode, the support vector machine may output a prediction value.
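A compressed illustration of this train-then-predict cycle, using scikit-learn's SVR regressor as a stand-in for the support vector machine and a synthetic data set; none of the specific choices here (library, kernel, data) are mandated by the specification:

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X_train = rng.uniform(size=(200, 3))      # preprocessed training inputs
    y_train = X_train @ np.array([0.5, -1.2, 2.0]) + 0.05 * rng.normal(size=200)

    svm = SVR(kernel="rbf", C=10.0)           # non-linear middle-layer mapping
    svm.fit(X_train, y_train)                 # training mode

    X_run = rng.uniform(size=(5, 3))          # preprocessed run-time data
    print(svm.predict(X_run))                 # prediction-mode output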
  • In another embodiment of the present invention, a system for preprocessing data prior to training the model is presented. The preprocessing operation may be operable to provide a time merging of the data such that each set of input data is input to a training system model on a uniform time base. Furthermore, the preprocessing operation may be operable to fill in missing or bad data. Additionally, after preprocessing, predetermined delays may be associated with each of the variables to generate delayed inputs. These delayed inputs may then be input to a training model, and the training model may be trained in accordance with a predetermined training algorithm to provide a representation of the system, which representation may be stored as model parameters. Additionally, the preprocessing steps utilized to preprocess the data may be stored as a sequence of preprocessing algorithms, and the delay values determined during training may also be stored. A distributed control system may be controlled to process the output parameters therefrom in accordance with the stored processing algorithms and to set delays in accordance with the predetermined delay settings. A predictive system model, or a control model, may then be built on the stored model parameters, with the delayed inputs input thereto to provide a predicted output. This predicted output may constitute either a predicted system output or a predicted control input for the run-time system. It is noted that this technique may be applied to any of a variety of application domains, and is not limited to plant operations and control. It is further noted that the delay described above may be associated with variables other than time; in other words, the delay may refer to offsets in the ordered correlation between process variables according to an independent variable other than time t.
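As a minimal sketch of the time-merge-and-fill idea, assuming linear interpolation as the reconciliation method (the specification leaves the method open, so interpolation is only one possible choice):

    import numpy as np

    def time_merge(t_uniform, series):
        # Reconcile irregularly sampled variables onto one uniform time
        # base, interpolating across gaps so bad/missing samples (NaN)
        # are filled in.  series: name -> (timestamps, values).
        merged = {}
        for name, (t, v) in series.items():
            good = ~np.isnan(v)
            merged[name] = np.interp(t_uniform, t[good], v[good])
        return merged

    t_grid = np.arange(0.0, 10.0, 1.0)        # the uniform time base
    merged = time_merge(t_grid, {
        "temp": (np.array([0.0, 4.0, 9.0]),
                 np.array([20.0, np.nan, 26.0])),
        "flow": (np.array([0.0, 2.5, 5.0, 7.5]),
                 np.array([1.0, 1.2, 1.1, 1.3])),
    })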
  • Thus, various embodiments of the systems and methods described above may perform preprocessing of input data for training and/or operation of a support vector machine.
  • Although the system and method of the present invention have been described in connection with several embodiments, the invention is not intended to be limited to the specific forms set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents as may be reasonably included within the spirit and scope of the invention as defined by the appended claims.

Claims (90)

What is claimed is:
1. A data preprocessor for preprocessing input data for a support vector machine having multiple inputs, each of the inputs associated with a portion of the input data, comprising:
an input buffer for receiving and storing the input data, the input data associated with at least two of the inputs being on different time scales relative to each other;
a time merge device for selecting a predetermined time scale and reconciling the input data stored in the input buffer such that all of the input data for all of the inputs are on the same time scale; and
an output device for outputting the data reconciled by the time merge device as reconciled data, said reconciled data comprising the input data to the support vector machine.
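For orientation only, the three claimed elements (input buffer, time merge device, output device) might be realized along the following assumed lines; the class layout and the interpolation-based reconciliation are illustrative choices, not the claimed structure itself:

    from collections import defaultdict
    import numpy as np

    class DataPreprocessor:
        # Input buffer, time merge device, and output device of claim 1.
        def __init__(self, time_scale):
            self.time_scale = np.asarray(time_scale)  # predetermined scale
            self.buffer = defaultdict(list)           # input buffer

        def receive(self, name, timestamp, value):
            self.buffer[name].append((timestamp, value))

        def output(self):
            # Time merge + output: reconcile every input onto the selected
            # time scale and emit the reconciled data for the SVM.
            reconciled = {}
            for name, pairs in self.buffer.items():
                t, v = (np.asarray(a, dtype=float)
                        for a in zip(*sorted(pairs)))
                reconciled[name] = np.interp(self.time_scale, t, v)
            return reconciled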
2. The data preprocessor of claim 1, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, said model parameters capable of being trained;
wherein the input data comprise training data including target input data and target output data, wherein said reconciled data comprise reconciled training data including reconciled target input data and reconciled target output data, and wherein said reconciled target input data and reconciled target output data are both based on a common time scale; and
wherein the support vector machine is operable to be trained according to a predetermined training algorithm applied to said reconciled target input data and said reconciled target output data to develop model parameter values such that said support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
3. The data preprocessor of claim 1, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, wherein said model parameters of said support vector machine have been trained to represent said system;
wherein the input data comprise run-time data, and wherein said reconciled data comprise reconciled run-time data; and
wherein the support vector machine is operable to receive said reconciled run-time data and generate run-time output data, wherein said run-time output data comprise one or both of control parameters for said system and predictive output information for said system.
4. The data preprocessor of claim 3, wherein said control parameters are usable to determine control inputs to said system for run-time operation of said system.
5. The data preprocessor of claim 1, wherein the input data associated with at least one of the inputs has missing data in an associated time sequence and said time merge device is operable to reconcile said input data to fill in said missing data.
6. The data preprocessor of claim 1, wherein the input data associated with a first one or more of the inputs has an associated time sequence based on a first time interval, and a second one or more of the inputs has an associated time sequence based on a second time interval; and
wherein said time merge device is operable to reconcile said input data associated with said first one or more of the inputs to said input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs having an associated time sequence based on said second time interval.
7. The data preprocessor of claim 1, wherein the input data associated with a first one or more of the inputs has an associated time sequence based on a first time interval, and wherein the input data associated with a second one or more of the inputs has an associated time sequence based on a second time interval; and
wherein said time merge device is operable to reconcile said input data associated with said first one or more of the inputs and said input data associated with said second one or more of the inputs to a time scale based on a third time interval, thereby generating reconciled input data associated with said first one or more of the inputs and said second one or more of the inputs having an associated time sequence based on said third time interval.
8. The data preprocessor of claim 1, wherein the input data associated with a first one or more of the inputs is asynchronous, and wherein the input data associated with a second one or more of the inputs is synchronous with an associated time sequence based on a time interval; and
wherein said time merge device is operable to reconcile said asynchronous input data associated with said first one or more of the inputs to said synchronous input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs, wherein said reconciled input data comprise synchronous input data having an associated time sequence based on said time interval.
9. The data preprocessor of claim 1, wherein said input buffer is controllable to arrange the input data in a predetermined format.
10. The data preprocessor of claim 9, wherein the input data, prior to being arranged in said predetermined format, has a predetermined time reference for all data, such that each piece of input data has associated therewith a time value relative to said predetermined time reference.
11. The data preprocessor of claim 1, wherein each piece of data has associated therewith a time value corresponding to the time the input data was generated.
12. The data preprocessor of claim 1, further comprising:
a pre-time merge processor for applying a predetermined algorithm to the input data received by said input buffer prior to input to said time merge device.
13. The data preprocessor of claim 12, wherein each piece of data has associated therewith a time value corresponding to the time the input data was generated.
14. The data preprocessor of claim 12, further comprising:
an input device for selecting said predetermined algorithm from a group of available algorithms.
15. The data preprocessor of claim 1, wherein said output device further comprises a post-time merge processor for applying a predetermined algorithm to the data reconciled by said time merge device prior to output as said reconciled data.
16. The data preprocessor of claim 15, further comprising:
an input device for selecting said predetermined algorithm from a group of available algorithms.
17. The data preprocessor of claim 1, wherein the input data comprise a plurality of variables, each of the variables comprising an input variable with an associated set of data, wherein each of said variables comprises an input to said input buffer; and
wherein each of at least a subset of said variables comprises a corresponding one of the inputs to the support vector machine.
18. The data preprocessor of claim 17, further comprising:
a delay device for receiving reconciled data associated with a select one of said input variables and introducing a predetermined amount of delay to said reconciled data to output a delayed input variable and associated set of delayed input reconciled data.
19. The data preprocessor of claim 18, wherein said predetermined amount of delay is a function of an external variable, the data preprocessor further comprising:
means for varying said predetermined amount of delay as a function of said external variable.
20. The data preprocessor of claim 18, further comprising:
means for learning said predetermined delay as a function of training parameters generated by a system modeled by the support vector machine.
21. The data preprocessor of claim 1, further comprising:
a graphical user interface (GUI) which is operable to receive user input specifying one or more data manipulation and/or reconciliation operations to be performed on said input data.
22. The data preprocessor of claim 21, wherein said GUI is further operable to display said input data prior to and after performing said manipulation and/or reconciliation operations on said input data.
23. The data preprocessor of claim 21, wherein said GUI is further operable to receive user input specifying a portion of said input data for said data manipulation and/or reconciliation operations.
24. A data preprocessor for preprocessing input data for a support vector machine having multiple inputs, each of the inputs associated with a portion of the input data, comprising:
an input buffer for receiving and storing the input data, the input data associated with at least two of the inputs being on different independent variable scales relative to each other;
a merge device for selecting a predetermined independent variable scale and reconciling the input data stored in the input buffer such that all of the input data for all of the inputs are on the same independent variable scale; and
an output device for outputting the data reconciled by the merge device as reconciled data, said reconciled data comprising the input data to the support vector machine.
25. The data preprocessor of claim 24, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, said model parameters capable of being trained;
wherein the input data comprise training data including target input data and target output data, wherein said reconciled data comprise reconciled training data including reconciled target input data and reconciled target output data, and wherein said reconciled target input data and reconciled target output data are both based on a common independent variable scale; and
wherein the support vector machine is operable to be trained according to a predetermined training algorithm applied to said reconciled target input data and said reconciled target output data to develop model parameter values such that said support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
26. The data preprocessor of claim 24, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, wherein said model parameters of said support vector machine have been trained to represent said system;
wherein the input data comprise run-time data, and wherein said reconciled data comprise reconciled run-time data; and
wherein the support vector machine is operable to receive said reconciled run-time data and generate run-time output data, wherein said run-time output data comprise one or both of control parameters for said system and predictive output information for said system.
27. The data preprocessor of claim 26, wherein the input data associated with at least one of the inputs has missing data in an associated independent variable sequence; and
wherein said merge device is operable to reconcile said input data to fill in said missing data.
28. The data preprocessor of claim 24, wherein the input data associated with a first one or more of the inputs has an associated independent variable sequence based on a first interval, and a second one or more of the inputs has an associated independent variable sequence based on a second interval; and
wherein said merge device is operable to reconcile said input data associated with said first one or more of the inputs to said input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs having an associated independent variable sequence based on said second interval.
29. The data preprocessor of claim 24, wherein a first one or more of the inputs has an associated independent variable sequence based on a first interval, and wherein the input data associated with a second one or more of the inputs has an associated independent variable sequence based on a second interval; and
wherein said merge device is operable to reconcile said input data associated with said first one or more of the inputs and said input data associated with said second one or more of the inputs to an independent variable scale based on a third interval, thereby generating reconciled input data associated with said first one or more of the inputs and said second one or more of the inputs having an associated independent variable sequence based on said third interval.
30. The data preprocessor of claim 24, wherein the input data associated with a first one or more of the inputs is asynchronous with respect to an independent variable, and wherein the input data associated with a second one or more of the inputs is synchronous with an associated independent variable sequence based on an interval; and
wherein said merge device is operable to reconcile said asynchronous input data associated with said first one or more of the inputs to said synchronous input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs, and wherein said reconciled input data comprise synchronous input data having an associated independent variable sequence based on said interval.
31. A method for preprocessing input data prior to input to a support vector machine having multiple inputs, each of the inputs associated with a portion of the input data, the method comprising:
receiving and storing the input data, the input data associated with at least two of the inputs being on different time scales relative to each other;
time merging the input data for the inputs such that all of the input data are reconciled to the same time scale; and
outputting the reconciled time merged data as reconciled data, the reconciled data comprising the input data to the support vector machine.
32. The method of claim 31, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, said model parameters capable of being trained; and
wherein the input data comprise training data including target input data and target output data, wherein said reconciled data comprise reconciled training data including reconciled target input data and reconciled target output data, and wherein said reconciled target input data and reconciled target output data are both based on a common time scale;
the method further comprising:
training the support vector machine according to a predetermined training algorithm applied to said reconciled target input data and said reconciled target output data to develop model parameter values such that said support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
33. The method of claim 31, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, wherein said model parameters of said support vector machine have been trained to represent said system; and
wherein the input data comprise run-time data, and wherein said reconciled data comprise reconciled run-time data;
the method further comprising:
inputting said reconciled run-time data into the support vector machine to generate run-time output data, wherein said run-time output data comprise one or both of control parameters for said system and predictive output information for said system.
34. The method of claim 33, wherein said control parameters are usable to determine control inputs to said system for run-time operation of said system.
35. The method of claim 31, wherein the input data associated with at least one of the inputs has missing data in an associated time sequence; and
wherein said time merging comprises:
reconciling said input data to fill in said missing data.
36. The method of claim 31, wherein the input data associated with a first one or more of the inputs has an associated time sequence based on a first time interval, and a second one or more of the inputs has an associated time sequence based on a second time interval; and
wherein said time merging comprises:
reconciling said input data associated with said first one or more of the inputs to said input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs having an associated time sequence based on said second time interval.
37. The method of claim 31, wherein the input data associated with a first one or more of the inputs has an associated time sequence based on a first time interval, and wherein the input data associated with a second one or more of the inputs has an associated time sequence based on a second time interval; and
wherein said time merging comprises:
reconciling said input data associated with said first one or more of the inputs and said input data associated with said second one or more of the inputs to a time scale based on a third time interval, thereby generating reconciled input data associated with said first one or more of the inputs and said second one or more of the inputs having an associated time sequence based on said third time interval.
38. The method of claim 31, wherein the input data associated with a first one or more of the inputs is asynchronous, and wherein the input data associated with a second one or more of the inputs is synchronous with an associated time sequence based on a time interval; and
wherein said time merging comprises:
reconciling said asynchronous input data associated with said first one or more of the inputs to said synchronous input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs, wherein said reconciled input data comprise synchronous input data having an associated time sequence based on said time interval.
39. The method of claim 31, wherein said receiving and storing the input data comprise:
arranging the input data in a predetermined format.
40. The method of claim 39, wherein, prior to said arranging in said predetermined format, the input data has a predetermined time reference for all data, such that each piece of input data has associated therewith a time value relative to said predetermined time reference.
41. The method of claim 31, wherein each piece of data has associated therewith a time value corresponding to the time the input data was generated.
42. The method of claim 31, further comprising:
applying a predetermined algorithm to the input data received by said input buffer prior to said time merging.
43. The method of claim 42, wherein each piece of data has associated therewith a time value corresponding to the time the input data was generated.
44. The method of claim 42, further comprising:
selecting said predetermined algorithm from a group of available algorithms.
45. The method of claim 31, further comprising:
applying a predetermined algorithm to the reconciled time merged data prior to outputting said reconciled time merged data.
46. The method of claim 45, further comprising:
selecting said predetermined algorithm from a group of available algorithms.
47. The method of claim 31, wherein the input data comprise a plurality of variables, each of the variables comprising an input variable with an associated set of data, wherein each of said variables comprises an input to said input buffer; and
wherein each of at least a subset of said variables comprises a corresponding one of the inputs to the support vector machine.
48. The method of claim 47, further comprising:
receiving reconciled data associated with a select one of said input variables; and
introducing a predetermined amount of delay to said reconciled data to output a delayed input variable and associated set of delayed reconciled input data.
49. The method of claim 48, wherein said predetermined amount of delay is a function of an external variable, the method further comprising:
varying said predetermined amount of delay as a function of said external variable.
50. The method of claim 48, further comprising:
learning said predetermined delay as a function of training parameters generated by a system modeled by the support vector machine.
51. The method of claim 31, further comprising:
a graphical user interface (GUI) receiving user input specifying one or more data manipulation and/or reconciliation operations to be performed on said input data.
52. The method of claim 51, further comprising:
the GUI displaying said input data prior to and after performing said manipulation and/or reconciliation operations on said input data.
53. The method of claim 51, further comprising:
the GUI receiving user input specifying a portion of said input data for said data manipulation and/or reconciliation operations.
54. A method for preprocessing input data for a support vector machine having multiple inputs, each of the inputs associated with a portion of the input data, comprising:
receiving and storing the input data, the input data associated with at least two of the inputs being on different independent variable scales relative to each other;
reconciling the input data stored in the input buffer such that all of the input data for all of the inputs are on the same independent variable scale to generate reconciled data; and
outputting reconciled data, said reconciled data comprising the input data to the support vector machine.
55. The method of claim 54, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, said model parameters capable of being trained; and
wherein the input data comprise training data including target input data and target output data, wherein said reconciled data comprise reconciled training data including reconciled target input data and reconciled target output data, and wherein said reconciled target input data and reconciled target output data are both based on a common independent variable scale;
the method further comprising:
training the support vector machine according to a predetermined training algorithm applied to said reconciled target input data and said reconciled target output data to develop model parameter values such that said support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
56. The method of claim 54, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, wherein said model parameters of said support vector machine have been trained to represent said system; and
wherein the input data comprise run-time data, and wherein said reconciled data comprise reconciled run-time data;
the method further comprising:
inputting said reconciled run-time data into the support vector machine to generate run-time output data, wherein said run-time output data comprise one or both of control parameters for said system and predictive output information for said system.
57. The method of claim 56, wherein the input data associated with at least one of the inputs has missing data in an associated independent variable sequence; and
wherein said merging comprises:
reconciling said input data to fill in said missing data.
58. The method of claim 54, wherein the input data associated with a first one or more of the inputs has an associated independent variable sequence based on a first interval, and a second one or more of the inputs has an associated independent variable sequence based on a second interval; and
wherein said merging comprises:
reconciling said input data associated with said first one or more of the inputs to said input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs having an associated independent variable sequence based on said second interval.
59. The method of claim 54, wherein a first one or more of the inputs has an associated independent variable sequence based on a first interval, and wherein the input data associated with a second one or more of the inputs has an associated independent variable sequence based on a second interval; and
wherein said merging comprises:
reconciling said input data associated with said first one or more of the inputs and said input data associated with said second one or more of the inputs to an independent variable scale based on a third interval, thereby generating reconciled input data associated with said first one or more of the inputs and said second one or more of the inputs having an associated independent variable sequence based on said third interval.
60. The method of claim 54, wherein the input data associated with a first one or more of the inputs is asynchronous with respect to an independent variable, and wherein the input data associated with a second one or more of the inputs is synchronous with an associated independent variable sequence based on an interval; and
wherein said merging comprises:
reconciling said asynchronous input data associated with said first one or more of the inputs to said synchronous input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs, and wherein said reconciled input data comprise synchronous input data having an associated independent variable sequence based on said interval.
61. A system for preprocessing input data for a support vector machine having multiple inputs, each of the inputs associated with a portion of the input data, comprising:
means for receiving and storing the input data, the input data associated with at least two of the inputs being on different independent variable scales relative to each other;
means for reconciling the input data stored in the input buffer such that all of the input data for all of the inputs are on the same independent variable scale to generate reconciled data; and
means for outputting reconciled data, said reconciled data comprising the input data to the support vector machine.
62. The system of claim 61, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, said model parameters capable of being trained; and
wherein the input data comprise training data including target input data and target output data, wherein said reconciled data comprise reconciled training data including reconciled target input data and reconciled target output data, and wherein said reconciled target input data and reconciled target output data are both based on a common independent variable scale;
the system further comprising:
means for training the support vector machine according to a predetermined training algorithm applied to said reconciled target input data and said reconciled target output data to develop model parameter values such that said support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
63. The system of claim 61, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, wherein said model parameters of said support vector machine have been trained to represent said system; and
wherein the input data comprise run-time data, and wherein said reconciled data comprise reconciled run-time data;
the system further comprising:
means for inputting said reconciled run-time data into the support vector machine to generate run-time output data, wherein said run-time output data comprise one or both of control parameters for said system and predictive output information for said system.
64. The system of claim 63, wherein the input data associated with at least one of the inputs has missing data in an associated independent variable sequence; and
wherein said means for merging comprises:
means for reconciling said input data to fill in said missing data.
65. The system of claim 61, wherein the input data associated with a first one or more of the inputs has an associated independent variable sequence based on a first interval, and a second one or more of the inputs has an associated independent variable sequence based on a second interval; and
wherein said means for merging comprises:
means for reconciling said input data associated with said first one or more of the inputs to said input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs having an associated independent variable sequence based on said second interval.
66. The system of claim 61, wherein a first one or more of the inputs has an associated independent variable sequence based on a first interval, and wherein the input data associated with a second one or more of the inputs has an associated independent variable sequence based on a second interval; and
wherein said means for merging comprises:
means for reconciling said input data associated with said first one or more of the inputs and said input data associated with said second one or more of the inputs to an independent variable scale based on a third interval, thereby generating reconciled input data associated with said first one or more of the inputs and said second one or more of the inputs having an associated independent variable sequence based on said third interval.
67. The system of claim 61, wherein the input data associated with a first one or more of the inputs is asynchronous with respect to an independent variable, and wherein the input data associated with a second one or more of the inputs is synchronous with an associated independent variable sequence based on an interval; and
wherein said means for merging comprises:
means for reconciling said asynchronous input data associated with said first one or more of the inputs to said synchronous input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs, and wherein said reconciled input data comprise synchronous input data having an associated independent variable sequence based on said interval.
68. A carrier medium which stores program instructions for preprocessing input data prior to input to a support vector machine having multiple inputs, each of the inputs associated with a portion of the input data, wherein said program instructions are executable to:
receive and store the input data, wherein the input data associated with at least two of the inputs are on different time scales relative to each other;
time merge the input data for the inputs such that all of the input data are reconciled to the same time scale; and
output the reconciled time merged data as reconciled data, the reconciled data comprising the input data to the support vector machine.
69. The carrier medium of claim 68, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, said model parameters capable of being trained; and
wherein the input data comprise training data including target input data and target output data, wherein said reconciled data comprise reconciled training data including reconciled target input data and reconciled target output data, and wherein said reconciled target input data and reconciled target output data are both based on a common time scale;
wherein said program instructions are further executable to:
train the support vector machine according to a predetermined training algorithm applied to said reconciled target input data and said reconciled target output data to develop model parameter values such that said support vector machine has stored therein a representation of the system that generated the target output data in response to the target input data.
70. The carrier medium of claim 68, wherein the support vector machine comprises a non-linear model having a set of model parameters defining a representation of a system, wherein said model parameters of said support vector machine have been trained to represent said system; and
wherein the input data comprise run-time data, and wherein said reconciled data comprise reconciled run-time data;
wherein said program instructions are further executable to:
input said reconciled run-time data into the support vector machine to generate run-time output data, wherein said run-time output data comprise one or both of control parameters for said system and predictive output information for said system.
71. The carrier medium of claim 70, wherein said control parameters are usable to determine control inputs to said system for run-time operation of said system.
72. The carrier medium of claim 68, wherein the input data associated with at least one of the inputs has missing data in an associated time sequence; and
wherein in performing said time merging said program instructions are further executable to:
reconcile said input data to fill in said missing data.
73. The carrier medium of claim 68, wherein the input data associated with a first one or more of the inputs has an associated time sequence based on a first time interval, and a second one or more of the inputs has an associated time sequence based on a second time interval; and
wherein in performing said time merging said program instructions are further executable to:
reconcile said input data associated with said first one or more of the inputs to said input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs having an associated time sequence based on said second time interval.
74. The carrier medium of claim 68, wherein the input data associated with a first one or more of the inputs has an associated time sequence based on a first time interval, and wherein the input data associated with a second one or more of the inputs has an associated time sequence based on a second time interval; and
wherein in performing said time merging said program instructions are further executable to:
reconcile said input data associated with said first one or more of the inputs and said input data associated with said second one or more of the inputs to a time scale based on a third time interval, thereby generating reconciled input data associated with said first one or more of the inputs and said second one or more of the inputs having an associated time sequence based on said third time interval.
75. The carrier medium of claim 68, wherein the input data associated with a first one or more of the inputs is asynchronous, and wherein the input data associated with a second one or more of the inputs is synchronous with an associated time sequence based on a time interval; and
wherein in performing said time merging said program instructions are further executable to:
reconcile said asynchronous input data associated with said first one or more of the inputs to said synchronous input data associated with said second one or more of the inputs, thereby generating reconciled input data associated with said first one or more of the inputs, wherein said reconciled input data comprise synchronous input data having an associated time sequence based on said time interval.
76. The carrier medium of claim 68, wherein in performing said receiving and storing said program instructions are further executable to:
arrange the input data in a predetermined format.
77. The carrier medium of claim 76, wherein, prior to said arranging in said predetermined format, the input data has a predetermined time reference for all data, such that each piece of input data has associated therewith a time value relative to said predetermined time reference.
78. The carrier medium of claim 68, wherein each piece of data has associated therewith a time value corresponding to the time the input data was generated.
79. The carrier medium of claim 68, wherein said program instructions are further executable to:
apply a predetermined algorithm to the input data prior to said performing said time merging.
80. The carrier medium of claim 79, wherein each piece of data has associated therewith a time value corresponding to the time the input data was generated.
81. The carrier medium of claim 79, wherein said program instructions are further executable to:
select said predetermined algorithm from a group of available algorithms.
82. The carrier medium of claim 68, wherein said program instructions are further executable to:
apply a predetermined algorithm to the reconciled time merged data prior to outputting said reconciled time merged data.
83. The carrier medium of claim 82, wherein said program instructions are further executable to:
select said predetermined algorithm from a group of available algorithms.
84. The carrier medium of claim 68, wherein the input data comprise a plurality of variables, each of the variables comprising an input variable with an associated set of data, wherein each of said variables comprises an input to said input buffer; and
wherein each of at least a subset of said variables comprises a corresponding one of the inputs to the support vector machine.
85. The carrier medium of claim 84, wherein said program instructions are further executable to:
receive reconciled data associated with a select one of said input variables; and
introduce a predetermined amount of delay to said reconciled data and output a delayed input variable and associated set of delayed reconciled input data.
86. The carrier medium of claim 85, wherein said predetermined amount of delay is a function of an external variable, wherein said program instructions are further executable to:
vary said predetermined amount of delay as a function of said external variable.
87. The carrier medium of claim 85, wherein said program instructions are further executable to:
learn said predetermined delay as a function of training parameters generated by a system modeled by the support vector machine.
88. The carrier medium of claim 68, wherein said program instructions are further executable to present a graphical user interface (GUI), wherein said GUI is operable to receive user input specifying one or more data manipulation and/or reconciliation operations to be performed on said input data.
89. The carrier medium of claim 88, wherein said GUI is further operable to display said input data prior to and after performing said manipulation and/or reconciliation operations on said input data.
90. The carrier medium of claim 88, wherein said GUI is further operable to receive user input specifying a portion of said input data for said data manipulation and/or reconciliation operations.
US11281867B2 (en) * 2019-02-03 2022-03-22 International Business Machines Corporation Performing multi-objective tasks via primal networks trained with dual networks
CN113326801A (en) * 2021-06-22 2021-08-31 哈尔滨工程大学 Human body moving direction identification method based on channel state information
CN113343391A (en) * 2021-07-02 2021-09-03 华电电力科学研究院有限公司 Control method, device and equipment for scraper plate material taking system

Also Published As

Publication number Publication date
US7020642B2 (en) 2006-03-28
WO2003063016A1 (en) 2003-07-31

Similar Documents

Publication Publication Date Title
US7020642B2 (en) System and method for pre-processing input data to a support vector machine
US6941301B2 (en) Pre-processing input data with outlier values for a support vector machine
US7599897B2 (en) Training a support vector machine with process constraints
US6243696B1 (en) Automated method for building a model
US6944616B2 (en) System and method for historical database training of support vector machines
US7054847B2 (en) System and method for on-line training of a support vector machine
EP0680637B1 (en) Method and apparatus for preprocessing input data to a neural network
US6879971B1 (en) Automated method for building a model
US20030149603A1 (en) System and method for operating a non-linear model with missing data for use in electronic commerce
US20030140023A1 (en) System and method for pre-processing input data to a non-linear model for use in electronic commerce
WO1994017489A1 (en) A predictive network with learned preprocessing parameters
WO2005043331B1 (en) Method and apparatus for creating and evaluating strategies
Nair et al. Covariate shift: A review and analysis on classifiers
US20210142122A1 (en) Collaborative Learning Model for Semiconductor Applications
WO2023159115A9 (en) System and method for aggregating and enriching data
CN114503124A (en) Multi-level prediction for processing time series data
WO2021076609A1 (en) Collaborative learning model for semiconductor applications
US20060074830A1 (en) System, method for deploying computing infrastructure, and method for constructing linearized classifiers with partially observable hidden states
Gutta Stock Prediction Using Machine Learning
Taha et al. Machine Learning Techniques for Predicting Heart Diseases
US11790036B2 (en) Bias mitigating machine learning training system
US20220222167A1 (en) Automated feature monitoring for data streams
US11922311B2 (en) Bias mitigating machine learning training system with multi-class target
US20220222670A1 (en) Generation of divergence distributions for automated data analysis
Xiong Machine Learning in Financial Market Risk: VaR Exception Classification Model

Legal Events

Date Code Title Description
AS Assignment

Owner name: PAVILION TECHNOLOGIES, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FERGUSON, BRUCE;HARTMAN, ERIC;REEL/FRAME:012555/0360

Effective date: 20020114

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:PAVILION TECHNOLOGIES, INC.;REEL/FRAME:017240/0396

Effective date: 20051102

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction

AS Assignment

Owner name: PAVILION TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:020609/0702

Effective date: 20080220

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: ROCKWELL AUTOMATION PAVILION, INC., TEXAS

Free format text: CHANGE OF NAME;ASSIGNOR:PAVILION TECHNOLOGIES, INC.;REEL/FRAME:024741/0984

Effective date: 20071109

AS Assignment

Owner name: ROCKWELL AUTOMATION, INC., WISCONSIN

Free format text: MERGER;ASSIGNOR:ROCKWELL AUTOMATION PAVILION, INC.;REEL/FRAME:024755/0492

Effective date: 20080124

AS Assignment

Owner name: ROCKWELL AUTOMATION TECHNOLOGIES, INC., OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROCKWELL AUTOMATION, INC.;REEL/FRAME:024767/0350

Effective date: 20100730

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553)

Year of fee payment: 12