CA2149913A1 - Method and apparatus for operating a neural network with missing and/or incomplete data - Google Patents

Method and apparatus for operating a neural network with missing and/or incomplete data

Info

Publication number
CA2149913A1
Authority
CA
Canada
Prior art keywords
output
input
data
vector
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002149913A
Other languages
French (fr)
Inventor
James David Keeler
Eric Jon Hartman
Ralph Bruce Ferguson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rockwell Automation Pavilion Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=25527747&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=CA2149913(A1). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Individual
Publication of CA2149913A1
Legal status: Abandoned


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/17Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

A neural network system is provided that models a system in a system model (12), the output thereof providing a predicted output. This predicted output is modified or controlled by an output control (14). Input data is processed in a data preprocess step (10) to reconcile the data for input to the system model (12).
Additionally, the error resulting from the reconciliation is input to an uncertainty model (18) to predict the uncertainty in the predicted output. This is input to a decision processor (20), which is utilized to control the output control (14). The output control (14) is controlled to either vary the predicted output or to inhibit the predicted output whenever the output of the uncertainty model (18) exceeds a predetermined decision threshold, input by a decision threshold block (22). Additionally, a validity model (16) is also provided which represents the reliability or validity of the output as a function of the number of data points in a given data region during training of the system model (12). This predicts the confidence in the predicted output, which is also input to the decision processor (20). The decision processor (20) therefore bases its decision on the predicted confidence and the predicted uncertainty. Additionally, the uncertainty output by the data preprocess block (10) can be utilized to train the system model (12).

Description

METHOD AND APPARATUS FOR OPERATING A NEURAL NETWORK WITH MISSING AND/OR INCOMPLETE DATA

TECHNICAL FIELD OF THE INVENTION

The present invention pertains in general to neural networks, and more particularly, to methods for estimating the accuracy of a trained neural network model, for determining the validity of the neural network's prediction, and for training neural networks having missing data in the input pattern and generating information as to the uncertainty in the data, this uncertainty utilized to control the output of the neural network.

BACKGROUND OF THE INVENTION

A common problem that is encountered in training neural networks for prediction, forecasting, pattern recognition, sensor validation and/or processing problems is that some of the training/testing patterns might be missing, corrupted, and/or incomplete. Prior systems merely discarded data, with the result that some areas of the input space may not have been covered during training of the neural network. For example, if the network is utilized to learn the behavior of a chemical plant as a function of the historical sensor and control settings, these sensor readings are typically sampled electronically, entered by hand from gauge readings and/or entered by hand from laboratory results. It is a common occurrence that some or all of these readings may be missing at a given time. It is also common that the various values may be sampled on different time intervals. Additionally, any one value may be "bad" in the sense that after the value is entered, it may be determined by some method that a data item was, in fact, incorrect. Hence, if the data were plotted in a table, the result would be a partially filled-in table with intermittent missing data or "holes", these being reminiscent of the holes in Swiss cheese. These "holes" correspond to "bad" or "missing" data. The "Swiss-cheese" data table described above occurs quite often in real-world problems.

Conventional neural network training and testing methods require complete patterns, such that they are required to discard patterns with missing or bad data. The deletion of the bad data in this manner is an inefficient method for training a neural network. For example, suppose that a neural network has ten inputs and ten outputs, and also suppose that one of the inputs or outputs happens to be missing at the desired time for fifty percent or more of the training patterns. Conventional methods would discard these patterns, leading to no training for those patterns during the training mode and no reliable predicted output during the run mode. This is inefficient, considering that for this case more than ninety percent of the information is still there for the patterns that conventional methods would discard. The predicted output corresponding to those certain areas will be somewhat ambiguous and erroneous. In some situations, there may be as much as a 50% reduction in the overall data after screening bad or missing data. Additionally, experimental results have shown that neural network testing performance generally increases with more training data, such that throwing away bad or incomplete data decreases the overall performance of the neural network.

If a neural network is trained on a smaller amount of data, this decreases the overall confidence that one has in the predicted output. To date, no technique exists for predicting the integrity of the training operation of the network "on the fly" during the run mode. For each input data pattern in the input space, the neural network has a training integrity. If, for example, a large number of good data points existed during the training, a high confidence level would exist when the input data occurred in that region. However, if there were a region of the input space that was sparsely populated with good data, e.g., a large amount of bad data had been thrown out from there, the confidence level in the predicted output of the network would be very low. Although some prior techniques may exist for actually checking the actual training of the network, these techniques do not operate in a real-time run mode.


SUMMARY OF THE INVENTION

The present invention disclosed and claimed herein comprises a network for estimating the error in the prediction output space of a predictive system model for a prediction input space. The network includes an input for receiving an input vector comprising a plurality of input values that occupy the prediction input space. An output is operable to output an output prediction error vector that occupies an output space corresponding to the prediction output space of the system model. A processing layer maps the input space to the output space through a representation of the prediction error in the system model to provide said output prediction error vector.

In another aspect of the present invention, a data preprocessor is provided. The data preprocessor is operable to receive an unprocessed data input vector that is associated with substantially the same input space as the input vector. The unprocessed data input vector has associated therewith errors in certain portions of the input space. The preprocessor is operable to process the unprocessed data input vector to minimize the errors therein to provide the input vector on an output. The unprocessed data input in one embodiment is comprised of data having portions thereof that are unusable. The data preprocessor is operable to reconcile the unprocessed data to replace the unusable portions with reconciled data. Additionally, the data preprocessor is operable to output an uncertainty value for each value of the reconciled data that is output as the input vector.
In a further aspect of the present invention, the system model is comprised of a non-linear model having an input for receiving the input vector within the input space and an output for outputting a predicted output vector. A mapping function is provided that maps the input layer to the output layer for a non-linear model of a system. A control circuit is provided for controlling the prediction output vector such that a change can be effected therein in accordance with predetermined criteria. A plurality of decision thresholds are provided that define predetermined threshold values for the prediction error output. A decision processor is operable to compare the output prediction error vector with the decision thresholds and operate the output control to effect the predetermined changes whenever a predetermined relationship exists between the decision thresholds and the output prediction error vector.

In an even further aspect of the present invention, the non-linear representation of the system model is a trained representation that is trained on a finite set of input data within the input space. A validity model is provided that yields a representation of the validity of the predicted output of a system model for a given value in the input space. The validity model includes an input for receiving the input vector within an input space and an output for outputting a validity output vector corresponding to the output space. A processor is operable to generate the validity output vector in response to input of a predetermined value of the input vector and the location of the input vector within the input space. The value of the validity output vector corresponds to the relative amount of training data on which the system model was trained in the region of the input space about the value of the input vector.

In a yet further aspect of the present invention, the system model is trained by a predetermined training algorithm that utilizes a target output and a set of training data. During training, an uncertainty value is also received, representing the uncertainty of the input data. The training algorithm is modified during training as a function of the uncertainty value.

~o ~4/12948 2 1 4 ~ ~ 1 3 Pcrruss3lll~l BRIEF DESClRIPr~ON OF THE DI~AWINGS
For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying Drawings, in which:

FIGURE 1 illustrates an overall block diagram of the system model illustrating both a validity model and a prediction error model to process reconciled data and control the output with the use of the validity model and prediction error model;
FIGUREs 2a and 2c illustrate overall block diagrams of a method for training the system model utilizing the uncertainty generated during data reconciliation;
FIGURE 2b illustrates an example of reconciliation and the associated uncertainty;
FIGUREs 3a-3c illustrate data patterns representing the data distribution, the prediction error and the validity level;
FIGURE 4a illustrates a diagrammatic view of a data pattern sampled at two intervals illustrating a complete neural network pattern;
FIGURE 4b illustrates a diagrammatic view of a data pattern illustrating time merging of data;
FIGURE 5 illustrates an auto-encoding network for reconciling the input data to fill in bad or missing data;
FIGURE 6 illustrates a block diagram of the training operation for training the prediction error model;
FIGURE 7 illustrates an overall block diagram for training the validity model;
FIGUREs 8a and 8b illustrate examples of localized functions of the data for use in training the validity model;
FIGURE 9 illustrates a diagrammatic view of radial basis function centers in a two-dimensional space;
FIGURE 10 illustrates a diagrammatic view of the validity function;
FIGURE 11 illustrates the distribution of training data and two test patterns for xa and xb; and
FIGURE 12 illustrates an overall block diagram for generating the validity targets that are utilized during the training of the validity model.


DETAILED DESCRIPTION OF THE INVENTION
In FIGURE 1, there is illustrated an overall block diagram of the system of the present invention. A data input vector x(t) is provided that represents the input data occupying an input space. This data can have missing or bad data which must be replaced. This data replacement occurs in a data preprocess section 10, which is operable to reconcile the data patterns to fill in the bad or missing data and provide an output x'(t) vector. Additionally, the error or uncertainty vector µx'(t) is output. This represents the distribution of the data about the average reconciled data vector x'(t), and this is typically what is discarded in prior systems. The reconciled data x'(t) is input to a system model 12, which is realized with a neural network. The neural network is a conventional neural network that is comprised of an input layer for receiving the input vector and an output layer for providing a predicted output vector. The input layer is mapped to the output layer through a non-linear mapping function that is embodied in one or more hidden layers. This is a conventional type of architecture. As will be described hereinbelow, this network is trained through any one of a number of training algorithms and architectures such as Radial Basis Functions, Gaussian Bars, or conventional Backpropagation techniques. The Backpropagation learning technique is generally described in D.E. Rumelhart, G.E. Hinton & R.J. Williams, Learning Internal Representations by Error Propagation (in D.E. Rumelhart & J.L. McClelland, Parallel Distributed Processing, Chapter 8, Vol. 1, 1986), which document is incorporated herein by reference. However, Backpropagation techniques for training conventional neural networks are well known. The output of the system model 12 is a predicted output y(t). This is input to an output control circuit 14, which provides as an output a modified output vector y'(t). In general, whenever data is input to the system model 12, a predicted output results, the integrity thereof being a function of how well the network is trained.

In addition to the system model, a validity model 16 and a prediction-error model 18 are provided. The validity model 16 provides a model of the "validity" of the predicted output as a function of the "distribution" of data in the input space during the training operation. Any system model has given prediction errors associated therewith, which prediction errors are inherent in the architecture utilized.
This assumes that the system model was trained with an adequate training data set. If not, then an additional source of error exists that is due to an inadequate distribution of training data at the location in the input space proximate to the input data. The validity model 16 provides a measure of this additional source of error. The prediction-error model 18 provides a model of the expected error of the predicted output.

A given system model has an associated prediction error which is a function of the architecture, which prediction error is premised upon an adequate set of training data over the entire input space. However, if there is an error or uncertainty associated with the set of training data, this error or uncertainty is additive to the inherent prediction error of the system model. The overall prediction error is distinguished from the validity in that validity is a function of the distribution of the training data over the input space and the prediction error is a function of the architecture of the system model and the associated error or uncertainty of the set of training data.

The output of the validity model 16 provides a validity output vector v(t), and the output of the prediction error model 18 provides an estimated prediction error vector e(t). These two output vectors are input to a decision processor 20, whose output is used to generate a control signal for input to the output control 14. The decision processor 20 is operable to compare the output vectors v(t) and e(t) with the various decision thresholds which are input thereto from a decision threshold generator 22. Examples of the type of control that are provided are: if the accuracy is less than a control change recommendation, then no change is made; otherwise, the controls are changed to the recommended value. Similarly, if the validity value is greater than the validity threshold, then the control recommendation is accepted; otherwise, the control recommendation is not accepted. The output control 14 could also modify the predicted outputs. For example, in a control situation, an output control change value could be modified to result in only 50% of the change value for a given threshold, 25% of the change value for a second threshold and 0% of the change value for a third threshold.
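The threshold logic just described lends itself to a compact illustration. The following Python sketch is not from the patent; the function name and the threshold values are assumptions chosen only to mirror the 50%/25%/0% example above.

```python
# Illustrative decision processor: compare the validity value v(t) and the
# predicted-error value e(t) against thresholds, then scale or inhibit the
# recommended control change. All numeric thresholds here are assumptions.

def decision_processor(v, e, change,
                       validity_threshold=0.5,
                       error_thresholds=(0.1, 0.2, 0.3)):
    """Return the portion of a recommended control change to apply."""
    if v < validity_threshold:
        return 0.0  # reject recommendation: model not valid in this region
    t1, t2, t3 = error_thresholds
    if e < t1:
        return change          # full change
    elif e < t2:
        return 0.5 * change    # 50% of the change value
    elif e < t3:
        return 0.25 * change   # 25% of the change value
    return 0.0                 # inhibit the output entirely

print(decision_processor(v=0.9, e=0.15, change=2.0))  # -> 1.0
```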

Referring now to FIGURE 2a, there is illustrated one embodiment of a method for training the system model 12 utilizing the uncertainty µx'(t) of the input training data. In general, learning of the system model 12 is achieved through any of a variety of neural network architectures, and algorithms such as Backpropagation, Radial Basis Functions or Gaussian Bars. The learning operation is adjusted such that a pattern with less data in the input space is trained with less importance. In the backpropagation technique, one method is to change the learning rate based on the uncertainty of a given pattern. The input uncertainty vector µx'(t) is input to an uncertainty training modifier 24, which provides control signals to the system model 12 during training.

The data pre-processor 10 calculates the data value x'(t) at the desired time "t" from other data values using a reconciliation technique such as a linear estimate, spline-fit, box-car reconciliation or more elaborate techniques such as an auto-encoding neural network, described hereinbelow. All of these techniques are referred to as data reconciliation, with the input data x(t) reconciled with the output reconciled data x'(t). In general, x'(t) is a function of all of the raw values x(t) given at present and past times up to some maximum past time. That is,

$$x'(t) = f\big(x_1(t_N), x_2(t_N), \ldots, x_n(t_N);\ x_1(t_{N-1}), x_2(t_{N-1}), \ldots, x_n(t_{N-1});\ \ldots;\ x_1(t_1), x_2(t_1), \ldots, x_n(t_1)\big) \qquad (1)$$

where some of the values of x_i(t_j) may be missing or bad.

This method of finding x'(t) using past values is strictly extrapolation. Since the system only has past values available during runtime mode, the values must be reconciled. The simplest method of doing this is to take the last extrapolated value x'_j(t) = x_j(t_N); that is, take the last value that was reported. More elaborate extrapolation algorithms may use past values x_j(t - τ_i), i = 1, ..., i_max. For example, linear extrapolation would use:

$$x'_j(t) = x_j(t_N) + \left[\frac{x_j(t_N) - x_j(t_{N-1})}{t_N - t_{N-1}}\right](t - t_N) \qquad (2)$$

Polynomial, spline-fit or neural-network extrapolation techniques use Equation 1. (See, e.g., W.H. Press, "Numerical Recipes," Cambridge University Press (1986), pp. 77-101.) Training of the neural net would actually use interpolated values, i.e., Equation 2, wherein the case of interpolation t_N > t.
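As an illustration of the two simplest reconciliation techniques named above, the following Python sketch implements the box-car (last-value) hold and the linear extrapolation of Equation 2. The sample data and function names are assumptions for illustration, not part of the patent.

```python
# Two simple reconciliation techniques: box-car hold and Equation 2.
# `samples` is a list of (time, value) pairs, with value None where the
# reading is bad or missing.

def boxcar(samples, t):
    """Hold the last reported good value: x'_j(t) = x_j(t_N)."""
    good = [(ti, vi) for ti, vi in samples if vi is not None and ti <= t]
    return good[-1][1] if good else None

def linear_extrapolate(samples, t):
    """Equation 2: extend the line through the last two good values."""
    good = [(ti, vi) for ti, vi in samples if vi is not None and ti <= t]
    if len(good) < 2:
        return boxcar(samples, t)
    (t1, x1), (t2, x2) = good[-2], good[-1]   # t2 = t_N, t1 = t_{N-1}
    return x2 + (x2 - x1) / (t2 - t1) * (t - t2)

readings = [(0.0, 1.0), (1.0, 2.0), (2.0, None), (3.0, 4.0)]
print(boxcar(readings, 2.5))              # -> 2.0 (holds last good value)
print(linear_extrapolate(readings, 3.5))  # -> 4.5
```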

Any time values are extrapolated or interpolated, these values have some inherent uncertainty, µx'(t). The uncertainty may be given by a priori measurement or information and/or by the reconciliation technique. An estimate of the uncertainty µx'(t) in a reconciled value x'(t) would be:

$$\mu_{x'}(t) = \min\left\{\mu_{max},\ \mu^0_{x'} + \mu^1_{x'}\,|t - t_N| + \tfrac{1}{2}\,\mu^2_{x'}\,(t - t_N)^2\right\} \qquad (3)$$

where µ_max is the maximum uncertainty set as a parameter (such as the maximum range of the data), µ0x' is the a priori uncertainty, µ1x' is the local velocity average magnitude, and µ2x' is ½ the local acceleration average magnitude. A plot of this is illustrated in FIGURE 2b.
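A minimal sketch of the reconstructed Equation 3 follows; the patent supplies no numeric values, so the parameter values below are assumptions. The uncertainty starts at the a priori level µ0 at the sample time and grows with elapsed time until capped at µ_max.

```python
def uncertainty(t, t_last, mu0=0.05, mu1=0.2, mu2=0.1, mu_max=1.0):
    """Uncertainty of a reconciled value per the reconstructed Equation 3:
    grows with time since the last good sample, capped at mu_max."""
    dt = abs(t - t_last)
    return min(mu_max, mu0 + mu1 * dt + 0.5 * mu2 * dt ** 2)

for dt in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(dt, uncertainty(dt, 0.0))
# -> 0.05, 0.1625, 0.3, 0.65, 1.0 (the last value saturates at mu_max)
```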
Once the input uncertainty vector µx'(t) is determined, the missing or uncertain input values have to be treated differently than missing or uncertain output values. In this case, the error term backpropagated to each uncertain input is modified based on the input's uncertainty, whereas an error in the output affects the learning of all neuronal connections below that output. Since the uncertainty in the input is always reflected by a corresponding uncertainty in the output, this uncertainty in the output needs to be accounted for in the training of the system model 12, the overall uncertainty of the system, and the validity of the system's output.

The target output y(t) has the uncertainty thereof determined by a target preprocess block 26, which is substantially similar to the data preprocess block 10 in that it fills in bad or missing data. This generates a target input for input to a block 28, which comprises a layer that is linearly mapped to the output layer of the neural network in the system model 12. This provides the reconciled target y'(t).

Referring now to FIGURE 2c, there is illustrated an alternate specific embodiment wherein a system model 12 is trained on both the reconciled data x'(t) and the uncertainty µx'(t) in the reconciled data x'(t). This data is output from the data preprocess block 10 to a summation block 30 that is controlled on various passes through the model to either process the reconciled data x'(t) itself or to process the summation of the reconciled data x'(t) and the uncertainty µx'(t). Two outputs result, a predicted output p(t) and an uncertainty predicted output µp(t). These are input to a target error processor block 34, which also receives as inputs the reconciled target output y'(t) and the uncertainty in the reconciled target output µy'(t). This generates a value ∆y_total. This value is utilized to calculate the modified Total Sum Squared (TSS) error function that is used for training the system model with either a Backpropagation, Radial Basis Function or Gaussian Bar neural network.

In operation, a first forward pass is performed by controlling the summation block 30 to process only the reconciled data x'(t) to output the predicted output p(t). In a second pass, the sum of the reconciled data input x'(t) and the uncertainty input µx'(t) is provided as follows:

$$x'(t) + \mu_{x'}(t) = (x'_1 + \mu_{x'_1},\ x'_2 + \mu_{x'_2},\ \ldots,\ x'_n + \mu_{x'_n}) \qquad (6)$$

This results in the predicted output p'(t). The predicted uncertainty µp(t) is then calculated as follows:

$$\mu_p(t) = p'(t) - p(t) = (p'_1 - p_1,\ p'_2 - p_2,\ \ldots,\ p'_m - p_m) \qquad (7)$$

The total target error ∆y_total is then set equal to the sum of the absolute values of µp(t) and µy'(t) as follows:

$$\Delta y_{total} = \big(|\mu_{p_1}| + |\mu_{y'_1}|,\ |\mu_{p_2}| + |\mu_{y'_2}|,\ \ldots\big) \qquad (8)$$

The output error function, the TSS error function, is then calculated with the modified uncertainty as follows:

$$E = \sum_{pat=1}^{NPATS} \big\|\Delta y_{total}(t_{pat})\big\|^2 \qquad (9)$$

where NPATS is the number of training patterns.
For Backpropagation training, the weights W_ij are updated as follows:

$$\Delta W_{ij} = -\eta\,\frac{\partial E}{\partial W_{ij}} \qquad (10)$$

where η is the learning rate. As such, the network can now have the weights thereof modified by an error function that accounts for uncertainty.
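The two-pass computation of Equations 6-9 can be sketched directly. In the fragment below, a tiny linear map stands in for the trained system model; every number and name is illustrative, not from the patent.

```python
# Two-pass uncertainty propagation per FIGURE 2c (Equations 6-9).
import numpy as np

W = np.array([[0.5, -0.2], [0.1, 0.9]])

def f(x):                          # stand-in for the trained system model
    return W @ x

x_rec = np.array([1.0, 2.0])       # reconciled input x'(t)
mu_x = np.array([0.10, 0.05])      # input uncertainty mu_x'(t)
mu_y = np.array([0.02, 0.02])      # target uncertainty mu_y'(t)

p = f(x_rec)                       # first pass: p(t)
p_prime = f(x_rec + mu_x)          # second pass input: Equation 6
mu_p = p_prime - p                 # Equation 7: predicted uncertainty
dy_total = np.abs(mu_p) + np.abs(mu_y)   # Equation 8
E_pat = np.sum(dy_total ** 2)      # this pattern's term in Equation 9
print(mu_p, dy_total, E_pat)
```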

For neural networks that do not utilize Backpropagation, similar behavior can be achieved by training the system model through multiple passes through the same data set where random noise is added to the input patterns to simulate the effects of uncertainty in these patterns. In this training method, for each x'(t) and associated µx'(t), a random vector can be chosen by choosing each x''_i as x''_i = x'_i + n_i, where n_i is a noise term chosen from the distribution:

$$f(n_i) = \frac{1}{\sqrt{2\pi}\,\mu_{x'_i}}\ e^{-n_i^2 / (2\mu_{x'_i}^2)} \qquad (11)$$

In this case:

$$\mu_p(t) \cong f\big(x'(t) + \mu_{x'}(t)\big) - f\big(x'(t)\big) \qquad (12)$$

where f(x(t)) is the system model producing the system predicted output p(t).
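A short sketch of this noise-injection alternative follows: each pass through the data draws x''_i = x'_i + n_i with n_i drawn from the Gaussian of Equation 11. The name `update_model` is a placeholder for whatever training step is in use.

```python
# Noise-injection training passes per Equation 11 (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def noisy_passes(x_rec, mu_x, n_passes=5):
    """Yield one noisy copy of the reconciled pattern per training pass."""
    for _ in range(n_passes):
        yield x_rec + rng.normal(0.0, mu_x)    # per-component noise n_i

x_rec = np.array([1.0, 2.0])
mu_x = np.array([0.1, 0.05])
for x_noisy in noisy_passes(x_rec, mu_x):
    print(x_noisy)                 # update_model(x_noisy, target) goes here
```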

Referring now to FIGUREs 3a-3c, there are illustrated plots of the original training data, the system-model prediction and the prediction error, and the validity, respectively. In FIGURE 3a, the actual data input-target patterns are illustrated. It can be seen that the data varies in density and variance across the x-axis. Once the system model is trained, it yields a prediction y(x), line 42. The system model has an inherent prediction error (due to inaccuracies in the training data). These prediction errors are illustrated by two dotted lines 44 and 46 that bound either side of the predicted value on line 42. This represents basically the standard deviation of the data about line 42. The validity is then determined, which is illustrated in FIGURE 3c. The validity is essentially a measure of the amount of training data at any point. It can be seen that the initial portion of the curve has a high validity value, illustrated by reference numeral 48, and the latter part of the curve, where data was missing, has a low level, as represented by reference numeral 50. Therefore, when one examines a neural network trained by the data in FIGURE 3a, one would expect the reliability or integrity of the neural network to be high as a function of the training data input thereto whenever a large amount of training data was present.

Referring now to FIGURE 4a, there is illustrated a data table with bad, missing, or incomplete data. The data table consists of data with time disposed along a vertical scale and the samples disposed along a horizontal scale. Each sample comprises many different pieces of data, with two data intervals illustrated. It can be seen that when the data is examined for both the data sampled at the time interval 1 and the data sampled at the time interval 2, some portions of the data result in incomplete patterns. This is illustrated by a dotted line 52, where it can be seen that some data is missing in the data sampled at time interval 1 and some is missing in time interval 2. A complete neural network pattern is illustrated by box 54, where all the data is complete. Of interest is the time difference between the data sampled at time interval 1 and the data sampled at time interval 2. In time interval 1, the data is essentially present for all steps in time, whereas data sampled at time interval 2 is only sampled periodically relative to data sampled at time interval 1. As such, the reconciliation procedure fills in the missing data and also reconciles between the time samples in time interval 2 such that the data is complete for all time samples for both time interval 1 and time interval 2.

The neural network models that are utilized for time-series prediction and control require that the time interval between successive training patterns be constant. Since the data that comes in from real-world systems is not always on the same time scale, it is desirable to time-merge the data before it can be used for training or running the neural network model. To achieve this time-merge operation, it may be necessary to extrapolate, interpolate, average or compress the data in each column over each time region so as to give an input value x(t) that is on the appropriate time scale. The reconciliation algorithm utilized may include linear estimates, spline-fits, boxcar algorithms, etc., or more elaborate techniques such as the auto-encoding network described hereinbelow. If the data is sampled too frequently in the time interval, it will be necessary to smooth or average the data to get a sample on the desired time scale. This can be done by window averaging techniques, sparse-sample techniques or spline techniques.
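The window-averaging variant of this time-merge is simple enough to sketch. The following Python fragment is an assumption-laden illustration, not the patent's algorithm: asynchronous readings are averaged into bins centered on a uniform target grid, with a box-car hold where a window is empty.

```python
# Time-merging asynchronous samples onto a uniform grid by window averaging.
import numpy as np

def time_merge(times, values, grid, width):
    """Average all samples within +/- width/2 of each grid point; hold the
    previous merged value where a window is empty."""
    times, values = np.asarray(times), np.asarray(values)
    merged, last = [], np.nan
    for t in grid:
        mask = np.abs(times - t) <= width / 2
        if mask.any():
            last = values[mask].mean()
        merged.append(last)          # empty window: box-car hold
    return np.array(merged)

t = [0.0, 0.2, 0.3, 1.1, 2.9]
x = [1.0, 1.2, 1.4, 2.0, 3.0]
print(time_merge(t, x, grid=[0.0, 1.0, 2.0, 3.0], width=1.0))
# -> [1.2 2.  2.  3. ]
```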

Referring now to FIGURE 4b, there is illustrated an input data pattern and target output data pattern illustrating the pre-process operation for both preprocessing input data to provide time-merged output data and also pre-processing the target output data to provide pre-processed target output data for training purposes. The data input x(t) is comprised of a vector with many inputs, x1(t), x2(t), ..., xn(t), each of which can be on a different time scale. It is desirable that the output x'(t) be extrapolated or interpolated to insure that all data is present on a single time scale. For example, if the data at x1(t) were on a time scale of one sample every second, a sample represented by the time tk, and the output time scale were desired to be the same, this would require time merging the rest of the data to that time scale. It can be seen that the data x2(t) occurs approximately once every three seconds, it also being noted that this may be asynchronous data, although it is illustrated as being synchronized. The data buffer in FIGURE 4b is illustrated in actual time. However, the data output as x1'(t) is reconciled with an uncertainty µx'1(t). Since the input time scale and the output time scale are the same, there will be no uncertainty. However, for the output x2'(t), the output will need to be reconciled and an uncertainty µx'2(t) will exist. The reconciliation could be as simple as holding the last value of the input x2(t) until a new value is input thereto, and then discarding the old value. In this manner, an output will always exist. This would also be the case for missing data. However, a reconciliation routine as described above could also be utilized to insure that data is always on the output for each time slice of the vector x'(t). This also is the case with respect to the target output, which is preprocessed to provide the preprocessed target output y'(t).
Referring now to FIGURE 5, there is illustrated a diagrammatic view of an auto-encoding network utilized for the reconciliation operation. The network is comprised of an input layer of input nodes 60 and an output layer of output nodes 62. Three hidden layers 64, 66 and 68 are provided for mapping the layer 60 to the output layer 62 through a non-linear mapping algorithm. The input data patterns x1(t), x2(t), ..., xn(t) are input thereto, reconciled and reproduced over regions of missing data to provide the output data pattern x1'(t), x2'(t), x3'(t), ..., xn'(t). This network can be trained via the backpropagation technique. Note that this system will reconcile the data over a given time base even if the data were not originally sampled over that time base, such that data at two different sampling intervals can be synchronized in time.
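A minimal sketch in the spirit of FIGURE 5 follows: a small numpy MLP trained to reproduce complete patterns from inputs with a randomly blanked component, so that at run time a missing value can be read off the reconstruction. One hidden layer is used for brevity (the figure shows three), and all sizes, rates and the toy data are assumptions.

```python
# Auto-encoding reconciliation sketch: reconstruct patterns with a blanked
# input so missing values can be filled in from correlated components.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W1 = rng.normal(0, 0.5, (n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_in, n_hid)); b2 = np.zeros(n_in)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def train_step(x_in, target, lr=0.05):
    """One backpropagation step on the reconstruction error."""
    global W1, b1, W2, b2
    y, h = forward(x_in)
    err = y - target
    dh = (W2.T @ err) * (1 - h ** 2)           # tanh derivative
    W2 -= lr * np.outer(err, h); b2 -= lr * err
    W1 -= lr * np.outer(dh, x_in); b1 -= lr * dh

for _ in range(5000):                           # correlated toy process
    z = rng.normal()
    x = np.array([z, 2 * z, -z, 0.5 * z]) + rng.normal(0, 0.05, n_in)
    x_in = x.copy()
    x_in[rng.integers(n_in)] = 0.0              # blank one input at random
    train_step(x_in, target=x)

x_obs = np.array([1.0, 2.0, 0.0, 0.5])          # third component missing
print(forward(x_obs)[0])    # third entry should land near -1 after training
```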

The techniques described above involve primarily building, training and running a system model on data that may have missing parts, be on the wrong time-scale increment and/or possess bad data points. The primary technique involves reconciliation over the bad or missing data and/or time-merging the data. However, once a model is built and trained, there are two other factors that should be taken into account before the model can be used to its full extent to solve a real-world problem. These two factors are the prediction accuracy of the model and the model validity. The model typically does not provide an accurate representation of the dynamics of the process that is modeled. Hence, the prediction output by the model will have some prediction error e(t) associated with each input pattern x(t), where:

$$e(t) = y(t) - p(t) \qquad (13)$$

This provides a difference between the actual output at time "t" and the predicted output at "t". The prediction error e(t) can be used to train a system that estimates the system-model accuracy. That is, a structure can be trained with an internal representation of the model prediction error e(t). For most applications, predicting the magnitude ||e(t)|| of the error (rather than the direction) is sufficient. This prediction-error model is represented hereinbelow.

Referring now to FIGURE 6, there is illustrated a block diagram of the system for training the prediction-error model 18. The system of FIGURE 2c is utilized by first passing the reconciled input data x'(t) and the uncertainty µx'(t) through the trained system model 12, this training achieved in the process described with respect to FIGURE 2c. The target error ∆y_total is calculated using the target error processor in accordance with the same process illustrated with respect to Equation 8. This is then input as a target to the prediction error model 18, with the inputs being the reconciled input data x'(t) and the uncertainty µx'(t). The prediction-error model can be instantiated in many ways, such as with a lookup table, or with a neural network. If instantiated as a neural network, it may be trained via conventional Backpropagation, Radial Basis Functions, Gaussian Bars, or any other neural network training algorithm.
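The idea of fitting a model to the error magnitude can be illustrated compactly. The sketch below uses a nearest-neighbor regressor as an assumed stand-in for the lookup table or neural network mentioned above; the toy data and names are not from the patent.

```python
# Prediction-error model sketch: fit a regressor to |e(t)| = |y(t) - p(t)|
# so an expected error can be predicted from the input alone at run time.
import numpy as np

rng = np.random.default_rng(1)

# toy system: true y, and an imperfect system-model prediction p whose
# noise is larger in one region of the input space
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])
p = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1 + 0.3 * (X[:, 0] > 0.5), 200)

err_mag = np.abs(y - p)            # training targets for the error model

def predict_error(x_new, k=10):
    """k-nearest-neighbor estimate of the expected |e| near x_new."""
    d = np.abs(X[:, 0] - x_new)
    return err_mag[np.argsort(d)[:k]].mean()

print(predict_error(-0.5))   # quiet region -> small expected error
print(predict_error(0.8))    # noisy region -> larger expected error
```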

The measurement of the validity of a model is based primarily on the historical training data distribution. In general, neural networks are mathematical models that learn behavior from data. As such, they are only valid in the regions of data for which they were trained. Once they are trained and run in a feedforward or test mode, there is no way (in a standard neural network) to distinguish, using the current state of the model alone, between a valid data point (a point in a region where the neural network was trained) versus an invalid data point (a point in a region where there was no data). To validate the integrity of the model prediction, a mechanism must be provided for keeping track of the model's valid regions.
Referring now to FIGURE 7, there is illustrated an overall block diagram of the processor for training the validity model 16. The data preprocess block 10 is utilized to provide the reconciled input data x'(t) to the input of the validity model 16. The input data x(t) and the reconciled input data x'(t) are input to a validity target generator 70 to generate the validity parameters for input to a layer 72.

A validity measure v(x) is defined as:

$$v(x) = S\left(\sum_{i=1}^{N_{pats}} a_i\, h_i(x, x_i) - b_i\right) \qquad (14)$$

where: v(x) is the validity of the point x; S is a saturating, monotonically increasing function such as a sigmoid:

$$S(z) = \frac{1}{1 + e^{-z}} \qquad (15)$$

a_i is a coefficient of importance, a free parameter; h_i is a localized function of the data x(t) and the training data point x_i(t); N_pats is the total number of training patterns; and b_i is a bias parameter.
The parameter h; is chosen to be a localized function of ~he data tbat is basi~ly a filnçtion of the number of points iD a local proximity to the poislt x(e~. As a specifir 20 embodimentt the following rela~ionship for h; is chosen:

~ 94/~94~ PCTn~Ss3/112sl h~ (x~ e ~ ' ~ - xll < a~
o ~ a~

The resultant function is illustrated in FIGURE 8a, with the function cut off at ασ so that far-away points do not contribute. Other functions, such as the one illustrated in FIGURE 8b, could also be used.
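Equations 14-16 translate directly into a few lines of code. The sketch below uses the specific parameter values the text gives later (a_i = 1, b_i = 2, σ = 0.1, α = 3); the random training set is an illustration.

```python
# Validity measure per Equations 14-16.
import numpy as np

def h(x, xi, sigma=0.1, alpha=3.0):
    """Localized kernel of Equation 16, cut off at alpha*sigma."""
    d = np.linalg.norm(x - xi)
    return np.exp(-(d / sigma) ** 2) if d <= alpha * sigma else 0.0

def validity(x, training_points, a=1.0, b=2.0):
    """Equation 14: sigmoid-squashed sum of kernel contributions."""
    s = sum(a * h(x, xi) for xi in training_points) - b
    return 1.0 / (1.0 + np.exp(-s))           # Equation 15

train = np.random.default_rng(2).uniform(0, 1, (100, 2))
print(validity(np.array([0.5, 0.5]), train))  # dense region -> near 1
print(validity(np.array([5.0, 5.0]), train))  # far away -> sigmoid(-2) ~ 0.12
```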

Referring now to FIGURE 9, there is illustrated an input space represented by inputs x1 and x2. It can be seen that there are three regions, each having centers x1, x2 and x3, each having a given number of points n1, n2 and n3, respectively, and a radius r1, r2 and r3. The centers of the regions are defined by the clustering algorithms, with the number of points determined therein.
Referring now to FIGURE 10, there is illustrated a representation of the validity function wherein the validity model 16 is illustrated as having the new data x(t) input thereto and the output v(x(t)) output therefrom. A dotted line is provided to the right of the validity model 16 illustrating the training mode, wherein the inputs in the training mode are the historical data patterns x1(t), x2(t), ..., xNpats(t), and a_i, b_i. In a specific embodiment, the values in the above are chosen such that a_i = 1, b_i = 2, for all i, σ_i = 0.1, and α = 3, for all i.

Equation 14 can be difficult to compute, so it is more efficient to break the sum up into regions, which are defined as follows:

$$v(x) = S\left(\sum_{cells}\ \sum_{i \in cell} a_i\, h_i(x, x_i) - b_i\right) \qquad (17)$$

where the cells are simple geometric divisions of the space, as illustrated in FIGURE 11, which depicts a test pattern.

In FIGURE 11, the test pattern xa(t) has a validity that is determined by cells C15, C16, C12 and C11, as long as the cell size is greater than or equal to the cutoff ασ, whereas the data point xb(t) is only influenced by cells C15 and C14. Hence, the algorithm for finding the validity is straightforward:

1) Train the system model on the training patterns (x1, x2, x3, ..., xNpats).
2) Train the validity model by keeping track of x1 ... xNpats, e.g., via a binary tree or k-d tree.
3) Partition the data space into cells C1, C2, ..., CNcells (e.g., k-d tree).
4) Determine which cell the new data point falls into, e.g., cell-index(x) = (kx1)(kx2) ... (kxn), if the cells are equally divided into k partitions/dimension and x_i ∈ (0,1).
5) Compute the sum in the cell.
6) Compute the sum in the n-neighbors (steps 4-6 are sketched in the code following this list).
7) The validity function will then be defined as:

$$v(x) = S\left(\sum_{i \in cell} a_i\, h_i(x, x_i)\ +\ \sum_{i \in neighbors} f(d_i)\, a_i\, h_i(x, x_i) - b\right) \qquad (18)$$

where d_i is the distance from x' to neighbor i, and f(d_i) is a decreasing function of d_i.
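The cell bookkeeping of steps 3-6 is sketched below under stated assumptions: the decreasing function f(d) = 1/(1 + d) is an assumed choice, and the kernel parameters reuse the specific embodiment given above (σ = 0.1, α = 3, b = 2).

```python
# Cell-indexed validity per steps 3-6 and Equation 18 (illustrative sketch).
import itertools
import numpy as np

k, dim = 10, 2                                    # k partitions per dimension
rng = np.random.default_rng(3)
train = rng.uniform(0, 1, (200, dim))

def cell_index(x):
    """Step 4: cell-index(x) = (floor(k*x1), floor(k*x2), ...)."""
    return tuple(min(int(k * xi), k - 1) for xi in x)

cells = {}                                        # bucket points once
for xi in train:
    cells.setdefault(cell_index(xi), []).append(xi)

def h(x, xi, sigma=0.1, alpha=3.0):
    d = np.linalg.norm(x - xi)
    return np.exp(-(d / sigma) ** 2) if d <= alpha * sigma else 0.0

def validity(x, b=2.0):
    idx = cell_index(x)
    total = sum(h(x, xi) for xi in cells.get(idx, []))        # step 5
    for off in itertools.product((-1, 0, 1), repeat=dim):     # step 6
        if any(off):
            nbr = tuple(i + o for i, o in zip(idx, off))
            d = np.linalg.norm(np.array(off)) / k             # cell distance
            total += sum(h(x, xi) / (1 + d) for xi in cells.get(nbr, []))
    return 1.0 / (1.0 + np.exp(-(total - b)))

print(validity(np.array([0.5, 0.5])))   # inside the data -> high validity
print(validity(np.array([0.99, 0.01]))) # sparse corner -> lower validity
```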

Again, Equation 18 can be difficult to calculate. Furthermore, it may be the case that few data points fall into the individual cells. A useful approximation of the full sum may be made by including only those neighbors with large f(d_i). A second, simpler, and faster way of computing the sums in Equation 18 is to approximate the sums by averaging all points in a region as follows:

$$v(x') \cong S\big(N_1 a_1 h_1(x', x_1) + N_2 a_2 h_2(x', x_2) + \ldots - b\big) \qquad (19)$$

$$v(x') \cong S\left(\sum_j N_j\, a_j\, h_j(x', x_j) - b\right) \qquad (20)$$

The region centers x_j can be selected as the centers of the cells, or as the centers of k-d tree cells, or as the centers of Radial Basis Functions that are selected via a k-means clustering algorithm.
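The region-averaged form of Equation 20 can be sketched as follows: cluster the training data with k-means, keep only each region's center x_j and count N_j, and evaluate the validity from those summaries. The tiny hand-rolled k-means and all parameter values (including a σ widened to suit the toy data) are assumptions for illustration.

```python
# Region-averaged validity per Equation 20 with k-means region centers.
import numpy as np

rng = np.random.default_rng(4)
train = rng.normal(0, 1, (300, 2))

def kmeans(X, k=5, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, np.bincount(labels, minlength=k)

centers, counts = kmeans(train)          # region centers x_j and counts N_j

def validity(x, a=1.0, b=2.0, sigma=0.5, alpha=3.0):
    """Equation 20: v(x) = S(sum_j N_j a_j h_j(x, x_j) - b)."""
    s = -b
    for xj, Nj in zip(centers, counts):
        d = np.linalg.norm(x - xj)
        if d <= alpha * sigma:
            s += Nj * a * np.exp(-(d / sigma) ** 2)
    return 1.0 / (1.0 + np.exp(-s))

print(validity(np.array([0.0, 0.0])))    # near the data -> validity ~ 1
print(validity(np.array([6.0, 6.0])))    # far away -> sigmoid(-2) ~ 0.12
```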

Referring now to FIGURE 12, there is illustrated a block diagram of the validity model 16 for receiving the output of the pre-processor 10 and generating the validity value v(x'(t)). As described above, the output of the preprocessor 10 comprises both the reconciled data x'(t) and the uncertainty µx'(t). This is input to a region selector 76 which is operable to determine which region of the test pattern the reconciled data resides in. During training, a counter 78 is incremented to determine the number of points in the region over which the system model 12 was trained. This is stored on a region-by-region basis, and during a run mode, the incrementing operation that is controlled by a line 77 is disabled and only a region line 79 is activated to point to the region determined by the region selector 76. The output of the counter comprises the number of points in the region N_i, which is then input to a region activation block 80. The block 80 provides the function h(x'(t), x_i(t)), which, as described above, is the localized function of the data x'(t) and the training data points x_i(t). The output of the region activation block 80 is input to a difference circuit 81 to subtract therefrom a validity bias value "b". This is essentially an offset correction which is an arbitrary number determined by the operator. The output of the difference circuit 81 is input to a sigmoidal function generator that provides the output v(x'(t)). The sigmoidal function provides a sigmoidal activation value for each output of the vector v(x'(t)).

In operation, the validity model 16 of FIGURE 12 allows for an on-the-fly calculation of the validity estimation. This requires for the calculation the knowledge of the number of points in each region and knowledge of the region in which the input pattern resides. With this information, the estimation of the validity value can be determined. During the training mode, the increment line 77 is enabled such that the number of points in each region can be determined and stored in the counter 78. As described above, the run mode only requires output of the value N_i.

In the embodiment of FIGURE 7, the validity target generator 70 could utilize the structure of FIGURE 12 to calculate a target output for each value of x(t) input to the preprocessor 10. This would allow the validity model 16 to be realized with a neural network, which is then trained on the validity targets and the input data in accordance with a training algorithm such as backpropagation.

In summary, there has been provided a method for accounting for bad or missing data in an input data sequence utilized during the run mode of a neural network and in the training mode thereof. The bad or missing data is reconciled to provide a reconciled input data time series for input to the neural network that models the system. Additionally, the error that represents uncertainty of the predicted output as a function of the uncertainty of the data, or the manner in which the data behaves about a particular data point or region in the input space, is utilized to control the predicted system output. The uncertainty is modelled during the training phase in a neural network, and this network is utilized to provide a prediction of the uncertainty of the output. This can be utilized to control the output or modify the predicted system output value of the system model. Additionally, the relative amount of data that was present during training of the system is also utilized to provide a confidence value for the output. This validity model is operable to receive the reconciled data and the uncertainty to predict a validity value for the output of the system model. This is also used to control the output. Additionally, the uncertainty can be utilized to train the system model, such that in regions of high data uncertainty, a modification can be made to the network to modify the learning rate as a function of the desired output error during training. This output error is a function of the uncertainty of the predicted output.

Although the preferred embodiment has been described in detail, it should be understood that various changes, substitutions and alterations can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.


Claims (31)

WHAT IS CLAIMED IS:
1. A network for estimating the error in the prediction output space of a predictive system model operating over a prediction input space, comprising:
an input for receiving an input vector comprising a plurality of input values that occupy the prediction input space;
an output for outputting an output prediction error vector that occupies an output space corresponding to the prediction output space of the predictive system model; and a processing layer for mapping the prediction input space to the prediction output space through a representation of the prediction error in the predictive system model to provide said output prediction error vector.
2. The network of Claim 1, and further comprising:
a preprocess input for receiving an unprocessed data input vector having associated therewith unprocessed data associated with substantially the same input space as said input vector, said unprocessed data input vector having errors associated with the associated unprocessed data in select portions of the prediction input space; and a data preprocessor for processing the unprocessed data in the unprocessed data input vector to minimize the errors therein to provide said input vector on an output.
3. The network of Claim 2, wherein said unprocessed data input vector is comprised of data having portions thereof that are unusable and, said data preprocessor comprises a reconciliation device for reconciling the unprocessed data to replace the unusable portions with reconciled data.
4. The network of Claim 2, wherein said data preprocessor is operable to calculate and output the uncertainty for each value output by said data preprocessor.
5. The network of Claim 1, wherein the predictive system model comprises a non-linear model having an input for receiving the input vector that is within the prediction input space and an output for outputting a predicted output vector within the prediction output space, said non-linear model mapping the prediction input space to the prediction output space through a non-linear representation of a system.
6. The network of Claim 5, wherein the predictive system model is trained on a set of training data having uncertainties associated therewith and wherein said processing layer is operable to map the prediction input space to the prediction output space through a representation of the combined prediction error in the predictive system model and the prediction error in the set of training data due to the uncertainty in the set of training data.
7. The network of Claim 5 and further comprising:
a plurality of decision thresholds for defining predetermined threshold values for said output prediction error vector;
an output control for effecting a change in the value of said predicted output vector from the predictive system model; and a decision processor for receiving said output prediction error vector and comparing it to said decision thresholds and operating said output control to effect said change on the value of said predicted output vector when the value of said output prediction error vector meets a predetermined relationship with respect to said decision thresholds.
8. The network of Claim 6, wherein said non-linear representation is a trained representation that is trained on a finite set of input data within the input space in accordance with a predetermined training algorithm and further comprising a validity model for providing a representation of the validity of the predicted output vector of the system model for a given value of the input vector within the input space, said validity model having:
an input for receiving the input vector within the input space;
an output for outputting a validity output vector corresponding to the output space;
a validity processor for generating said validity output vector in response to input of said input vector and the location of said input vector in the input space, the value of said validity output vector corresponding to the amount of training data on which the system model was trained in the region of the input space about the value of the input vector.
9. The network of Claim 8, and further comprising:
a plurality of decision thresholds for defining predetermined threshold values for the validity output vector;
an output control for effecting a change in the value of said predicted output vector from the predictive system model; and a decision processor for receiving said validity output vector and comparing said validity output vector to said decision thresholds, and operating said output control to effect said change in the value of said predicted output vector when the value of said validity output vector meets a predetermined relationship with respect to said decision thresholds.
10. A network for providing a measure of the validity in the prediction output space of a predictive system model that provides a prediction output and operates over a prediction input space, comprising:
an input for receiving an input vector comprising a plurality of input values that occupy the prediction input space;
an output for outputting a validity measure output vector that occupies an output space corresponding to the prediction output space of the predictive system model; and a processing layer for mapping the prediction input space to the prediction output space through a representation of the validity of the system model that was learned on a set of training data, the representation of the validity of the system model being a function of the distribution of the training data in the prediction input space that was input thereto during training to provide a measure of the validity of the system model prediction output.
11. The network of Claim 10, and further comprising:
a preprocess input for receiving an unprocessed data input vector having associated therewith unprocessed data associated with substantially the same input space as said input vector, said unprocessed data input vector having errors associated with the associated unprocessed data in select portions of the prediction input space; and a data preprocessor for processing the unprocessed data in the unprocessed data input vector to minimize the errors therein to provide said input vector on an output.
12. The network of Claim 11, wherein said unprocessed data input vector is comprised of data having portions thereof that are unusable and said data preprocessor comprises a reconciliation device for reconciling data to replace the unusable portions with reconciled data.
13. The network of Claim 12, wherein said data preprocessor is operable to calculate and output the uncertainty for each value of reconciled data output by said data preprocessor.
14. The network of Claim 10, wherein the predictive system model comprises a non-linear model having an input for receiving the input vector that is within the prediction input space and an output for outputting a predicted output vector within the prediction output space, said non-linear model mapping the prediction input space to the prediction output space through a non-linear representation of a system.
15. The network of Claim 14, and further comprising:
a plurality of decision thresholds for defining predetermined threshold values for said validity measure output vector;
an output control for effecting a change in the value of said predicted output vector from the predictive system model; and a decision processor for receiving said validity measure output vector and comparing it to said decision threshold and operating said output control to effect said change on the value of said predicted output vector when the value of said validity measure output vector meets a predetermined relationship with respect to said decision threshold.
16. The network of Claim 10, wherein said processing layer comprises:
a memory for storing a profile of the training data density over the input space; and a processor for processing the location of the input data in the input space and the density of the training data at said location as defined by said stored profile to generate said validity measure output vector as a function of the distribution of said training data proximate to the location in the input space of the input data.
17. A method for estimating the error in the prediction output space of a predictive system model over a prediction input space, comprising the steps of:
receiving an input vector comprising a plurality of input values that occupy the prediction input space;
outputting an output prediction error vector that occupies an output space corresponding to the prediction output space of the predictive system model;
and mapping the prediction input space to the prediction output space through a representation of the prediction error in the predictive system model to provide the output prediction error vector in the step of outputting.
18. The method of Claim 17, and further comprising the steps of:
receiving an unprocessed data input vector having associated therewith unprocessed data associated with substantially the same input space as the input vector, the unprocessed data input vector having errors associated with the associated unprocessed data in select portions of the prediction input space; and processing the unprocessed data in the unprocessed data vector to minimize the errors therein to provide the input vector on an output.
19. The method of Claim 18, wherein the step of receiving an unprocessed data input vector comprises receiving an unprocessed data input vector that is comprised of data having portions thereof that are unusable and the step of processing the unprocessed data comprises reconciling the unprocessed data to replace the unusable portions with reconciled data.
20. The method of Claim 19, wherein the step of processing the data is further operable to calculate and output the uncertainty for each value of the reconciled data output by the step of processing.
21. The method of Claim 17, wherein the predictive system model comprises a non-linear model having an input for receiving the input vector that is within the prediction input space and an output for outputting a predicted output vector within the prediction output space, the non-linear model mapping the prediction input space to the prediction output space to provide a non-linear representation of a system, and further comprising:
storing a plurality of decision thresholds for defining predetermined threshold values for the output prediction error vector;
comparing the output prediction error vector to the stored decision thresholds; and changing the value of the predicted output vector from the predictive system model when the value of the output prediction error vector meets a predetermined relationship with respect to the stored decision thresholds.
22. A method for providing a measure of the validity in the prediction output space of a predictive system model that provides a prediction output and operates over a prediction input space, comprising the steps of:
receiving an input vector comprising a plurality of input values that occupy the prediction input space;
outputting a validity measure output vector that occupies an output space corresponding to the prediction output space of the predictive system model;
mapping the prediction input space to the prediction output space through a representation of the validity of the system model that was learned on a set of training data, the representation of the validity of this system model being a function of the distribution of the training data on the prediction input space that was input thereto during training to provide a measure of the validity of the system model prediction output.
23. The method of Claim 22, and further comprising the steps of:
receiving an unprocessed data input vector having associated therewith unprocessed data associated with substantially the same input space as the input vector, the unprocessed data input vector having errors associated with the associated unprocessed data in select portions of the prediction input space; and processing the unprocessed data in the unprocessed data vector to minimize the errors therein to provide the input vector on an output.
24. The method of Claim 23, wherein the step of receiving an unprocessed data input vector comprises receiving an unprocessed data input vector that is comprised of data having portions thereof that are unusable and the step of processing the unprocessed data comprises reconciling the unprocessed data to replace the unusable portions with reconciled data.
25. The method of Claim 23, wherein the step of processing the unprocessed data is further operable to calculate and output the uncertainty for each value of the reconciled data output by the step of processing.
26. The method of Claim 22, wherein the predictive system model comprises a non-linear model having an input for receiving the input vector that is within the prediction input space and an output for outputting a predicted output vector within the prediction output space, the non-linear model mapping the prediction input space to the prediction output space to provide a non-linear representation of a system, and further comprising:
storing a plurality of decision thresholds for defining predetermined threshold values for the validity measure output vector;
comparing the validity measure output vector to the stored decision thresholds; and changing the value of the predicted output vector from the predictive system model when the value of the validity measure output vector meets a predetermined relationship with respect to the stored decision thresholds.
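Claims 22 through 26 parallel Claims 17 through 21 with the validity measure in place of the error estimate, so the two earlier sketches compose directly; note the comparison flips, since high validity (unlike high error) marks a trustworthy prediction. The threshold value and fallback below are assumptions for illustration.

VALIDITY_THRESHOLD = 0.05   # assumed stored decision threshold

def gated_prediction(model, validity, x, fallback):
    # Accept the model's output only where the training data was dense
    # enough; otherwise change the output to the supplied fallback.
    return model(x) if validity(x) >= VALIDITY_THRESHOLD else fallback

if __name__ == "__main__":
    model = lambda x: 2.0 * x
    print(gated_prediction(model, lambda x: 0.80, 1.0, 0.0))  # dense  -> 2.0
    print(gated_prediction(model, lambda x: 0.01, 1.0, 0.0))  # sparse -> 0.0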
27. A method for training a predictive network, comprising:
providing a predictive model having adjustable parameters that map an input space to an output space to provide a representation of a system;
providing target output data within the output data space that corresponds to input training data within the input space;
inputting the input training data to the input space of a system model during training of the model, with the system model providing a predicted output in the output space;
comparing the predicted output with the target output to generate an error;
adjusting the parameters of the predictive model to minimize the error in accordance with a predetermined training algorithm;
receiving an uncertainty value corresponding to the input training data; and
modifying the training algorithm as a function of the uncertainty value of the received training data on the input to compensate for the uncertainty value in accordance with a predetermined modification scheme.
28. The method of Claim 27, wherein the predictive model is a non-linear model.
29. The method of Claim 27, wherein the predetermined training algorithm has a rate associated therewith and the predetermined modification scheme comprises changing the rate at which the training algorithm operates.
30. The method of Claim 27, wherein the predictive model is a neural network and the predetermined training algorithm is a backpropagation error algorithm and the step of modifying the training algorithm comprises changing the rate of backpropagation as a function of the uncertainty value of the received training data on the input to provide compensation of the stored representation of the system as a function of the uncertainty values.
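Claim 30's rate change can be sketched on a single linear neuron trained by stochastic gradient descent, which is backpropagation reduced to its simplest case. The scaling rule 1 / (1 + uncertainty) is an assumed illustration; the claim requires only that the rate vary with the uncertainty of the training input.

import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, 200)
y = 3.0 * X + 1.0 + rng.normal(0.0, 0.05, 200)
u = rng.uniform(0.0, 2.0, 200)   # uncertainty attached to each input

w, b, BASE_RATE = 0.0, 0.0, 0.05
for epoch in range(50):
    for xi, yi, ui in zip(X, y, u):
        rate = BASE_RATE / (1.0 + ui)  # high uncertainty -> smaller step
        err = (w * xi + b) - yi        # forward pass and error
        w -= rate * err * xi           # gradient step, scaled by the rate
        b -= rate * err

print(f"learned w={w:.2f}, b={b:.2f} (targets 3.00 and 1.00)")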
31. The method of Claim 27, and further comprising:
receiving an unprocessed data input vector having associated therewith unprocessed data associated with substantially the same input space as the input vector, the unprocessed data input vector having errors associated with the associated unprocessed data in select portions of the prediction input space; and processing the unprocessed data in the unprocessed data vector to minimize the errors therein to provide the input vector on an output.
CA002149913A 1992-11-24 1993-11-19 Method and apparatus for operating a neural network with missing and/or incomplete data Abandoned CA2149913A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US98066492A 1992-11-24 1992-11-24
US07/980,664 1992-11-24

Publications (1)

Publication Number Publication Date
CA2149913A1 true CA2149913A1 (en) 1994-06-09

Family

ID=25527747

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002149913A Abandoned CA2149913A1 (en) 1992-11-24 1993-11-19 Method and apparatus for operating a neural network with missing and/or incomplete data

Country Status (8)

Country Link
US (4) US5613041A (en)
EP (1) EP0671038B1 (en)
JP (1) JPH08505967A (en)
AT (1) ATE240557T1 (en)
AU (1) AU674227B2 (en)
CA (1) CA2149913A1 (en)
DE (1) DE69332980T2 (en)
WO (1) WO1994012948A1 (en)

Families Citing this family (161)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5852817A (en) * 1991-08-14 1998-12-22 Kabushiki Kaisha Toshiba Intelligent control apparatus
US6243696B1 (en) * 1992-11-24 2001-06-05 Pavilion Technologies, Inc. Automated method for building a model
EP0777881B1 (en) * 1994-08-24 1998-05-27 Siemens Aktiengesellschaft Method of determining the validity range for an artificial neural network
DE19530049B4 (en) * 1995-08-16 2004-12-23 Thomas Froese Method for recognizing incorrect predictions in a neuromodel-based or neuronal control
DE19530646C1 (en) * 1995-08-21 1996-10-17 Siemens Ag Learning method for recurrent neural network
US6314414B1 (en) 1998-10-06 2001-11-06 Pavilion Technologies, Inc. Method for training and/or testing a neural network with missing and/or incomplete data
US5970426A (en) * 1995-09-22 1999-10-19 Rosemount Analytical Inc. Emission monitoring system
US6879971B1 (en) * 1995-12-22 2005-04-12 Pavilion Technologies, Inc. Automated method for building a model
US5845271A (en) * 1996-01-26 1998-12-01 Thaler; Stephen L. Non-algorithmically implemented artificial neural networks and components thereof
EP0892956A4 (en) * 1996-02-09 2002-07-24 Sarnoff Corp Method and apparatus for training a neural network to detect and classify objects with uncertain training data
US5781454A (en) * 1996-03-25 1998-07-14 Raytheon Company Process modeling technique
US6249252B1 (en) 1996-09-09 2001-06-19 Tracbeam Llc Wireless location using multiple location estimators
CA2265875C (en) * 1996-09-09 2007-01-16 Dennis Jay Dupray Location of a mobile station
US7714778B2 (en) * 1997-08-20 2010-05-11 Tracbeam Llc Wireless location gateway and applications therefor
US7903029B2 (en) 1996-09-09 2011-03-08 Tracbeam Llc Wireless location routing applications and architecture therefor
US6236365B1 (en) * 1996-09-09 2001-05-22 Tracbeam, Llc Location of a mobile station using a plurality of commercial wireless infrastructures
US9134398B2 (en) 1996-09-09 2015-09-15 Tracbeam Llc Wireless location using network centric location estimators
JPH1097514A (en) * 1996-09-24 1998-04-14 Masahiko Shizawa Polyvalent mapping learning method
JP2981193B2 (en) * 1997-09-02 1999-11-22 エヌケイエス株式会社 Method for predicting time-series continuous data and recording medium
US6298328B1 (en) * 1998-03-26 2001-10-02 Telecompetition, Inc. Apparatus, method, and system for sizing markets
US6810368B1 (en) * 1998-06-29 2004-10-26 International Business Machines Corporation Mechanism for constructing predictive models that allow inputs to have missing values
US20030146871A1 (en) * 1998-11-24 2003-08-07 Tracbeam Llc Wireless location using signal direction and time difference of arrival
US8135413B2 (en) 1998-11-24 2012-03-13 Tracbeam Llc Platform and applications for wireless location and other complex services
US7562135B2 (en) * 2000-05-23 2009-07-14 Fisher-Rosemount Systems, Inc. Enhanced fieldbus device alerts in a process control system
US8044793B2 (en) 2001-03-01 2011-10-25 Fisher-Rosemount Systems, Inc. Integrated device alerts in a process control system
US7206646B2 (en) * 1999-02-22 2007-04-17 Fisher-Rosemount Systems, Inc. Method and apparatus for performing a function in a plant using process performance monitoring with process equipment monitoring and control
US6975219B2 (en) * 2001-03-01 2005-12-13 Fisher-Rosemount Systems, Inc. Enhanced hart device alerts in a process control system
US7346404B2 (en) * 2001-03-01 2008-03-18 Fisher-Rosemount Systems, Inc. Data sharing in a process plant
US7062441B1 (en) * 1999-05-13 2006-06-13 Ordinate Corporation Automated language assessment using speech recognition modeling
US6564195B1 (en) 1999-07-22 2003-05-13 Cerebrus Solutions Limited Data classifier output interpretation
US7424439B1 (en) * 1999-09-22 2008-09-09 Microsoft Corporation Data mining for managing marketing resources
EP1286735A1 (en) 1999-09-24 2003-03-05 Dennis Jay Dupray Geographically constrained network services
US6678585B1 (en) * 1999-09-28 2004-01-13 Pavilion Technologies, Inc. Method and apparatus for maximizing power usage in a power plant
US6611735B1 (en) 1999-11-17 2003-08-26 Ethyl Corporation Method of predicting and optimizing production
US10641861B2 (en) 2000-06-02 2020-05-05 Dennis J. Dupray Services and applications for a communications network
US10684350B2 (en) 2000-06-02 2020-06-16 Tracbeam Llc Services and applications for a communications network
US9875492B2 (en) 2001-05-22 2018-01-23 Dennis J. Dupray Real estate transaction system
US6760716B1 (en) * 2000-06-08 2004-07-06 Fisher-Rosemount Systems, Inc. Adaptive predictive model in a process control system
WO2002031606A1 (en) * 2000-10-10 2002-04-18 Core A/S Expectation sampling and perception control
US6795798B2 (en) 2001-03-01 2004-09-21 Fisher-Rosemount Systems, Inc. Remote analysis of process control plant data
US7720727B2 (en) 2001-03-01 2010-05-18 Fisher-Rosemount Systems, Inc. Economic calculations in process control system
CN1310106C (en) * 2001-03-01 2007-04-11 费舍-柔斯芒特系统股份有限公司 Remote analysis of process control plant data
US6954713B2 (en) * 2001-03-01 2005-10-11 Fisher-Rosemount Systems, Inc. Cavitation detection in a process plant
US7389204B2 (en) 2001-03-01 2008-06-17 Fisher-Rosemount Systems, Inc. Data presentation system for abnormal situation prevention in a process plant
US8073967B2 (en) 2002-04-15 2011-12-06 Fisher-Rosemount Systems, Inc. Web services-based communications for use with process control systems
US20020194148A1 (en) * 2001-04-30 2002-12-19 Billet Bradford E. Predictive method
US20020174081A1 (en) * 2001-05-01 2002-11-21 Louis Charbonneau System and method for valuation of companies
US20020174088A1 (en) * 2001-05-07 2002-11-21 Tongwei Liu Segmenting information records with missing values using multiple partition trees
US8082096B2 (en) 2001-05-22 2011-12-20 Tracbeam Llc Wireless location routing applications and architecture therefor
US7293002B2 (en) * 2001-06-19 2007-11-06 Ohio University Self-organizing data driven learning hardware with local interconnections
US7162534B2 (en) 2001-07-10 2007-01-09 Fisher-Rosemount Systems, Inc. Transactional data communications for process control systems
DE10135586B4 (en) 2001-07-20 2007-02-08 Eads Deutschland Gmbh Reconfiguration method for a sensor system with two observers and sensor system for carrying out the method
US20030140023A1 (en) * 2002-01-18 2003-07-24 Bruce Ferguson System and method for pre-processing input data to a non-linear model for use in electronic commerce
US20030149603A1 (en) * 2002-01-18 2003-08-07 Bruce Ferguson System and method for operating a non-linear model with missing data for use in electronic commerce
US6941301B2 (en) * 2002-01-18 2005-09-06 Pavilion Technologies, Inc. Pre-processing input data with outlier values for a support vector machine
US7020642B2 (en) * 2002-01-18 2006-03-28 Pavilion Technologies, Inc. System and method for pre-processing input data to a support vector machine
GB0209780D0 (en) * 2002-04-29 2002-06-05 Neural Technologies Ltd Method of encoding data for decoding data from and constraining a neural network
US7600234B2 (en) 2002-12-10 2009-10-06 Fisher-Rosemount Systems, Inc. Method for launching applications
US8017411B2 (en) * 2002-12-18 2011-09-13 GlobalFoundries, Inc. Dynamic adaptive sampling rate for model prediction
US7493310B2 (en) 2002-12-30 2009-02-17 Fisher-Rosemount Systems, Inc. Data visualization within an integrated asset data system for a process plant
US8935298B2 (en) 2002-12-30 2015-01-13 Fisher-Rosemount Systems, Inc. Integrated navigational tree importation and generation in a process plant
US7152072B2 (en) 2003-01-08 2006-12-19 Fisher-Rosemount Systems Inc. Methods and apparatus for importing device data into a database system used in a process plant
US7953842B2 (en) 2003-02-19 2011-05-31 Fisher-Rosemount Systems, Inc. Open network-based data acquisition, aggregation and optimization for use with process control systems
US7103427B2 (en) 2003-02-28 2006-09-05 Fisher-Rosemont Systems, Inc. Delivery of process plant notifications
US6915235B2 (en) 2003-03-13 2005-07-05 Csi Technology, Inc. Generation of data indicative of machine operational condition
US7634384B2 (en) 2003-03-18 2009-12-15 Fisher-Rosemount Systems, Inc. Asset optimization reporting in a process plant
US7242989B2 (en) * 2003-05-30 2007-07-10 Fisher-Rosemount Systems, Inc. Apparatus and method for batch property estimation
US7299415B2 (en) 2003-06-16 2007-11-20 Fisher-Rosemount Systems, Inc. Method and apparatus for providing help information in multiple formats
US7030747B2 (en) 2004-02-26 2006-04-18 Fisher-Rosemount Systems, Inc. Method and system for integrated alarms in a process control system
US7079984B2 (en) 2004-03-03 2006-07-18 Fisher-Rosemount Systems, Inc. Abnormal situation prevention in a process plant
US7676287B2 (en) 2004-03-03 2010-03-09 Fisher-Rosemount Systems, Inc. Configuration system and method for abnormal situation prevention in a process plant
US7515977B2 (en) 2004-03-30 2009-04-07 Fisher-Rosemount Systems, Inc. Integrated configuration system for use in a process plant
US20050267709A1 (en) * 2004-05-28 2005-12-01 Fisher-Rosemount Systems, Inc. System and method for detecting an abnormal situation associated with a heater
US7536274B2 (en) 2004-05-28 2009-05-19 Fisher-Rosemount Systems, Inc. System and method for detecting an abnormal situation associated with a heater
WO2005124491A1 (en) 2004-06-12 2005-12-29 Fisher-Rosemount Systems, Inc. System and method for detecting an abnormal situation associated with a process gain of a control loop
US20060008781A1 (en) * 2004-07-06 2006-01-12 Ordinate Corporation System and method for measuring reading skills
US7181654B2 (en) 2004-09-17 2007-02-20 Fisher-Rosemount Systems, Inc. System and method for detecting an abnormal situation associated with a reactor
US9636450B2 (en) 2007-02-19 2017-05-02 Udo Hoss Pump system modular components for delivering medication and analyte sensing at seperate insertion sites
US7356371B2 (en) * 2005-02-11 2008-04-08 Alstom Technology Ltd Adaptive sensor model
US8768664B2 (en) * 2005-03-18 2014-07-01 CMC Solutions, LLC. Predictive emissions monitoring using a statistical hybrid model
US7421348B2 (en) * 2005-03-18 2008-09-02 Swanson Brian G Predictive emissions monitoring method
US8005647B2 (en) 2005-04-08 2011-08-23 Rosemount, Inc. Method and apparatus for monitoring and performing corrective measures in a process plant using monitoring data with corrective measures data
US9201420B2 (en) 2005-04-08 2015-12-01 Rosemount, Inc. Method and apparatus for performing a function in a process plant using monitoring data with criticality evaluation data
US7536364B2 (en) * 2005-04-28 2009-05-19 General Electric Company Method and system for performing model-based multi-objective asset optimization and decision-making
US20060247798A1 (en) * 2005-04-28 2006-11-02 Subbu Rajesh V Method and system for performing multi-objective predictive modeling, monitoring, and update for an asset
US7272531B2 (en) * 2005-09-20 2007-09-18 Fisher-Rosemount Systems, Inc. Aggregation of asset use indices within a process plant
US8880138B2 (en) 2005-09-30 2014-11-04 Abbott Diabetes Care Inc. Device for channeling fluid and methods of use
US7451004B2 (en) 2005-09-30 2008-11-11 Fisher-Rosemount Systems, Inc. On-line adaptive model predictive control in a process control system
WO2007047510A2 (en) * 2005-10-14 2007-04-26 Aethon, Inc. Robotic inventory management
US7826879B2 (en) 2006-02-28 2010-11-02 Abbott Diabetes Care Inc. Analyte sensors and methods of use
US8473022B2 (en) 2008-01-31 2013-06-25 Abbott Diabetes Care Inc. Analyte sensor with time lag compensation
US7653425B2 (en) 2006-08-09 2010-01-26 Abbott Diabetes Care Inc. Method and system for providing calibration of an analyte sensor in an analyte monitoring system
US7618369B2 (en) 2006-10-02 2009-11-17 Abbott Diabetes Care Inc. Method and system for dynamically updating calibration parameters for an analyte sensor
US8140312B2 (en) 2007-05-14 2012-03-20 Abbott Diabetes Care Inc. Method and system for determining analyte levels
US9392969B2 (en) 2008-08-31 2016-07-19 Abbott Diabetes Care Inc. Closed loop control and signal attenuation detection
US8374668B1 (en) 2007-10-23 2013-02-12 Abbott Diabetes Care Inc. Analyte sensor with lag compensation
US7801582B2 (en) 2006-03-31 2010-09-21 Abbott Diabetes Care Inc. Analyte monitoring and management system and methods therefor
US8606544B2 (en) 2006-07-25 2013-12-10 Fisher-Rosemount Systems, Inc. Methods and systems for detecting deviation of a process variable from expected values
US8145358B2 (en) 2006-07-25 2012-03-27 Fisher-Rosemount Systems, Inc. Method and system for detecting abnormal operation of a level regulatory control loop
US7657399B2 (en) 2006-07-25 2010-02-02 Fisher-Rosemount Systems, Inc. Methods and systems for detecting deviation of a process variable from expected values
US7912676B2 (en) 2006-07-25 2011-03-22 Fisher-Rosemount Systems, Inc. Method and system for detecting abnormal operation in a process plant
CN102789226B (en) 2006-09-28 2015-07-01 费舍-柔斯芒特系统股份有限公司 Abnormal situation prevention in a heat exchanger
US8014880B2 (en) 2006-09-29 2011-09-06 Fisher-Rosemount Systems, Inc. On-line multivariate analysis in a distributed process control system
US20080188972A1 (en) * 2006-10-11 2008-08-07 Fisher-Rosemount Systems, Inc. Method and System for Detecting Faults in a Process Plant
US20080133275A1 (en) * 2006-11-28 2008-06-05 Ihc Intellectual Asset Management, Llc Systems and methods for exploiting missing clinical data
US8032340B2 (en) 2007-01-04 2011-10-04 Fisher-Rosemount Systems, Inc. Method and system for modeling a process variable in a process plant
US8032341B2 (en) 2007-01-04 2011-10-04 Fisher-Rosemount Systems, Inc. Modeling a process using a composite model comprising a plurality of regression models
US7827006B2 (en) 2007-01-31 2010-11-02 Fisher-Rosemount Systems, Inc. Heat exchanger fouling detection
CA2683930A1 (en) 2007-04-14 2008-10-23 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in medical communication system
US9615780B2 (en) 2007-04-14 2017-04-11 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in medical communication system
ES2817503T3 (en) 2007-04-14 2021-04-07 Abbott Diabetes Care Inc Procedure and apparatus for providing data processing and control in a medical communication system
WO2008130898A1 (en) 2007-04-14 2008-10-30 Abbott Diabetes Care, Inc. Method and apparatus for providing data processing and control in medical communication system
CA2683953C (en) 2007-04-14 2016-08-02 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in medical communication system
US10002233B2 (en) 2007-05-14 2018-06-19 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in a medical communication system
US8239166B2 (en) 2007-05-14 2012-08-07 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in a medical communication system
US8600681B2 (en) 2007-05-14 2013-12-03 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in a medical communication system
US8560038B2 (en) 2007-05-14 2013-10-15 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in a medical communication system
US8260558B2 (en) 2007-05-14 2012-09-04 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in a medical communication system
US8103471B2 (en) 2007-05-14 2012-01-24 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in a medical communication system
US9125548B2 (en) 2007-05-14 2015-09-08 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in a medical communication system
US8444560B2 (en) 2007-05-14 2013-05-21 Abbott Diabetes Care Inc. Method and apparatus for providing data processing and control in a medical communication system
US10410145B2 (en) 2007-05-15 2019-09-10 Fisher-Rosemount Systems, Inc. Automatic maintenance estimation in a plant environment
US8834366B2 (en) 2007-07-31 2014-09-16 Abbott Diabetes Care Inc. Method and apparatus for providing analyte sensor calibration
US8301676B2 (en) 2007-08-23 2012-10-30 Fisher-Rosemount Systems, Inc. Field device with capability of calculating digital filter coefficients
US20090143725A1 (en) * 2007-08-31 2009-06-04 Abbott Diabetes Care, Inc. Method of Optimizing Efficacy of Therapeutic Agent
US7702401B2 (en) 2007-09-05 2010-04-20 Fisher-Rosemount Systems, Inc. System for preserving and displaying process control data associated with an abnormal situation
US9323247B2 (en) 2007-09-14 2016-04-26 Fisher-Rosemount Systems, Inc. Personalized plant asset data representation and search system
US8055479B2 (en) 2007-10-10 2011-11-08 Fisher-Rosemount Systems, Inc. Simplified algorithm for abnormal situation prevention in load following applications including plugged line diagnostics in a dynamic process
US8409093B2 (en) 2007-10-23 2013-04-02 Abbott Diabetes Care Inc. Assessing measures of glycemic variability
US8377031B2 (en) 2007-10-23 2013-02-19 Abbott Diabetes Care Inc. Closed loop control system with safety parameters and methods
US20090164239A1 (en) 2007-12-19 2009-06-25 Abbott Diabetes Care, Inc. Dynamic Display Of Glucose Information
US8876755B2 (en) 2008-07-14 2014-11-04 Abbott Diabetes Care Inc. Closed loop control system interface and methods
US8734422B2 (en) 2008-08-31 2014-05-27 Abbott Diabetes Care Inc. Closed loop control with improved alarm functions
US20100057040A1 (en) 2008-08-31 2010-03-04 Abbott Diabetes Care, Inc. Robust Closed Loop Control And Methods
US9943644B2 (en) 2008-08-31 2018-04-17 Abbott Diabetes Care Inc. Closed loop control with reference measurement and methods thereof
US8622988B2 (en) 2008-08-31 2014-01-07 Abbott Diabetes Care Inc. Variable rate closed loop control and methods
US20100063829A1 (en) * 2008-09-08 2010-03-11 Dupray Dennis J Real estate transaction system
US8986208B2 (en) 2008-09-30 2015-03-24 Abbott Diabetes Care Inc. Analyte sensor sensitivity attenuation mitigation
DK3173014T3 (en) 2009-07-23 2021-09-13 Abbott Diabetes Care Inc Real-time control of data on physiological control of glucose levels
EP4289355A3 (en) 2009-07-23 2024-02-28 Abbott Diabetes Care Inc. Continuous analyte measurement system
WO2011014851A1 (en) 2009-07-31 2011-02-03 Abbott Diabetes Care Inc. Method and apparatus for providing analyte monitoring system calibration accuracy
US9538493B2 (en) 2010-08-23 2017-01-03 Finetrak, Llc Locating a mobile station and applications therefor
EP2624745A4 (en) 2010-10-07 2018-05-23 Abbott Diabetes Care, Inc. Analyte monitoring devices and methods
US9927788B2 (en) 2011-05-19 2018-03-27 Fisher-Rosemount Systems, Inc. Software lockout coordination between a process control system and an asset management system
US8626791B1 (en) * 2011-06-14 2014-01-07 Google Inc. Predictive model caching
US9317656B2 (en) 2011-11-23 2016-04-19 Abbott Diabetes Care Inc. Compatibility mechanisms for devices in a continuous analyte monitoring system and methods thereof
US8710993B2 (en) 2011-11-23 2014-04-29 Abbott Diabetes Care Inc. Mitigating single point failure of devices in an analyte monitoring system and methods thereof
US9529348B2 (en) 2012-01-24 2016-12-27 Emerson Process Management Power & Water Solutions, Inc. Method and apparatus for deploying industrial plant simulators using cloud computing technologies
EP3395252A1 (en) 2012-08-30 2018-10-31 Abbott Diabetes Care, Inc. Dropout detection in continuous analyte monitoring data during data excursions
US9825659B2 (en) 2014-06-03 2017-11-21 Massachusetts Institute Of Technology Digital matching of a radio frequency antenna
US10410118B2 (en) 2015-03-13 2019-09-10 Deep Genomics Incorporated System and method for training neural networks
US10185803B2 (en) 2015-06-15 2019-01-22 Deep Genomics Incorporated Systems and methods for classifying, prioritizing and interpreting genetic variants and therapies using a deep neural network
AU2016291569B2 (en) 2015-07-10 2021-07-08 Abbott Diabetes Care Inc. System, device and method of dynamic glucose profile response to physiological parameters
WO2018175489A1 (en) 2017-03-21 2018-09-27 Abbott Diabetes Care Inc. Methods, devices and system for providing diabetic condition diagnosis and therapy
JP7091820B2 (en) * 2018-05-14 2022-06-28 オムロン株式会社 Control system, learning data creation device, learning device and judgment device
US20190378619A1 (en) * 2018-05-30 2019-12-12 Alexander Meyer Using machine learning to predict health conditions
EP3973461A4 (en) 2019-05-23 2023-01-25 Cognizant Technology Solutions U.S. Corporation Quantifying the predictive uncertainty of neural networks via residual estimation with i/o kernel
KR102652117B1 (en) 2019-07-10 2024-03-27 삼성전자주식회사 Image processing method and image processing system
JP7222344B2 (en) * 2019-12-06 2023-02-15 横河電機株式会社 Determination device, determination method, determination program, learning device, learning method, and learning program
EP4211399A1 (en) * 2020-09-10 2023-07-19 OnPoint Technologies, LLC Systems and methods for analyzing combustion system operation
EP4238010A1 (en) * 2020-10-29 2023-09-06 Services Pétroliers Schlumberger Cost function engineering for estimating uncertainty correlated with prediction errors

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4773024A (en) * 1986-06-03 1988-09-20 Synaptics, Inc. Brain emulation circuit with reduced confusion
US4872122A (en) * 1987-06-19 1989-10-03 University Of Pennsylvania Interactive statistical system and method for predicting expert decisions
US5067095A (en) * 1990-01-09 1991-11-19 Motorola Inc. Spann: sequence processing artificial neural network
US5052043A (en) * 1990-05-07 1991-09-24 Eastman Kodak Company Neural network with back propagation controlled through an output confidence measure
US5113483A (en) * 1990-06-15 1992-05-12 Microelectronics And Computer Technology Corporation Neural network with semi-localized non-linear mapping of the input space
US5402519A (en) * 1990-11-26 1995-03-28 Hitachi, Ltd. Neural network system adapted for non-linear processing
US5467428A (en) * 1991-06-06 1995-11-14 Ulug; Mehmet E. Artificial neural network method and architecture adaptive signal filtering
US5335291A (en) * 1991-09-20 1994-08-02 Massachusetts Institute Of Technology Method and apparatus for pattern mapping system with self-reliability check
US5276771A (en) * 1991-12-27 1994-01-04 R & D Associates Rapidly converging projective neural network
US5353207A (en) * 1992-06-10 1994-10-04 Pavilion Technologies, Inc. Residual activation neural network
US5659667A (en) * 1995-01-17 1997-08-19 The Regents Of The University Of California Office Of Technology Transfer Adaptive model predictive process control using neural networks

Also Published As

Publication number Publication date
EP0671038B1 (en) 2003-05-14
US5819006A (en) 1998-10-06
US5842189A (en) 1998-11-24
US5613041A (en) 1997-03-18
JPH08505967A (en) 1996-06-25
DE69332980T2 (en) 2004-03-04
ATE240557T1 (en) 2003-05-15
US6169980B1 (en) 2001-01-02
DE69332980D1 (en) 2003-06-18
EP0671038A1 (en) 1995-09-13
AU674227B2 (en) 1996-12-12
WO1994012948A1 (en) 1994-06-09
AU6515594A (en) 1994-06-22

Similar Documents

Publication Publication Date Title
CA2149913A1 (en) Method and apparatus for operating a neural network with missing and/or incomplete data
US6314414B1 (en) Method for training and/or testing a neural network with missing and/or incomplete data
US6775619B2 (en) Neural net prediction of seismic streamer shape
Bishop Regularization and complexity control in feed-forward networks
Fritzke Fast learning with incremental RBF networks
CN108647583A (en) A kind of face recognition algorithms training method based on multiple target study
US10943352B2 (en) Object shape regression using wasserstein distance
Murray et al. The neural network classification of false killer whale (Pseudorca crassidens) vocalizations
BRPI0608711A2 (en) methods and systems for performing face recognition, for training a reference face model, for calculating a similarity threshold value for a reference face model, and for optimizing an image for use in face recognition
AU2001288849A1 (en) Neural net prediction of seismic streamer shape
CN111695290B (en) Short-term runoff intelligent forecasting mixed model method suitable for changing environment
Shah-Hosseini et al. Automatic multilevel thresholding for image segmentation by the growing time adaptive self-organizing map
US7489313B2 (en) Method of segmenting a three-dimensional data set allowing user corrections
Whittaker et al. Ultrasonic signal classification for beef quality grading through neural networks
CN111275059A (en) Image processing method and device and computer readable storage medium
Buján et al. Optimization of topological active nets with differential evolution
KR20220104368A (en) Transfer learning method for time series data classification based on similarity
US20040064425A1 (en) Physics based neural network
Reynolds et al. Spoken letter recognition with neural networks
Eppler et al. Optimization of piecewise linear networks (PLN) by pruning
CN115579124A (en) Degradable drug stent delivery system
Acır et al. An application of support vector machine in bioinformatics: automated recognition of epileptiform patterns in EEG using SVM classifier designed by a perturbation method
Ahmed et al. Automated detection of grayscale bar and distance scale in ultrasound images
Geng et al. An algorithm for case generation from a database
CN113326975A (en) Ultrahigh prediction method for track irregularity based on random oscillation sequence gray model

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued