CA2066458C - Computer neural network process measurement and control system and method - Google Patents

Computer neural network process measurement and control system and method

Info

Publication number
CA2066458C
Authority
CA
Canada
Prior art keywords
neural network
module
input data
data
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA002066458A
Other languages
French (fr)
Other versions
CA2066458A1 (en)
Inventor
Richard D. Skeirik
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rockwell Automation Pavilion Inc
Original Assignee
Pavilion Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (https://patents.darts-ip.com/?family=24249095&patent=CA2066458(C)). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Pavilion Technologies Inc
Publication of CA2066458A1
Application granted
Publication of CA2066458C
Anticipated expiration
Current legal status: Expired - Lifetime


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B 13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B 13/0265 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric, the criterion being a learning criterion
    • G05B 13/027 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric, the criterion being a learning criterion using neural networks only
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 706/00 Data processing: artificial intelligence
    • Y10S 706/902 Application using ai with detail of the ai system
    • Y10S 706/903 Control
    • Y10S 706/906 Process plant

Abstract

A computer neural network process control system and method provides predicted values to a controller used to control a process for producing a product. The predicted values are produced by a neural network supplied with stored input data from a historical database. The neural network is configured by a developer using a neural network configuration step and module. The neural network is trained using training data stored in the historical database, which is compared with the predicted value produced by the neural network using input data obtained at a time associated with the time of the training data. An error signal is produced by the comparison of the predicted value with the training data. The error signal is used to train and retrain the neural network, and is also used to determine whether the predicted values of the neural network should continue to be used by the controller. Predicted values are thus produced by the neural network which have a close correlation to the training data.

Description

COMPUTER NEURAL NETWORK PROCESS MEASUREMENT AND
CONTROL SYSTEM AND METHOD
BACKGROUND OF THE INVENTION
I. Field of the Invention
The present invention relates generally to monitoring and control of manufacturing processes, particularly chemical processes, and more specifically, to neural networks used in process control of such processes.
II. Related Art
Quality of products is increasingly important. The control of quality and the reproducibility of quality are the focus of many efforts. For example, in Europe, quality is the focus of the ISO (International Standards Organization, Geneva, Switzerland) 9000 standards. These rigorous standards provide for quality assurance in production, installation, final inspection, and testing. They also provide guidelines for quality assurance between a supplier and customer. These standards are expected to become an effective requirement for participation in the EC (European Community) after the removal of trade barriers in 1992.
The quality of a manufactured product is a combination of all of the properties of the product which affect its usefulness to its user. Process control is the collection of methods used to produce the best possible product properties in a manufacturing process.
Process control is very important in the manufacture of products. Improper process control can result in a product which is totally useless to the user, or in a product which has a lower value to the user. When either of these situations occurs, the manufacturer suffers (1) by paying the cost of manufacturing useless products, (2) by losing the opportunity to profitably make a product during that time, and (3) by lost revenue from the reduced selling price of poor products. In the final analysis, the effectiveness of the process control used by a manufacturer can determine whether the manufacturer's business survives or fails.
A. Quality and Process Conditions
Figure 19 shows, in block diagram form, key concepts concerning products made in a manufacturing process. Referring now to Figure 19, raw materials 1222 are processed under (controlled) process conditions 1906 in a process 1212 to produce a product 1216 having product properties 1904.
Examples of raw materials 1222, process conditions 1906, and product properties 1904 are shown in Figure 19. It should be understood that these are merely examples for purposes of illustration.

Figure 20 shows a more detailed block diagram of the various aspects of the manufacturing of products 1216 using process 1212. Referring now to Figures 19 and 20, product 1216 is defined by one or more product property aim value(s) 2006 of its product properties 1904. The product property aim values 2006 of the product properties 1904 are those which the product 1216 needs to have in order for it to be ideal for its intended end use. The objective in running process 1212 is to manufacture products 1216 having product properties 1904 which are exactly at the product property aim value(s) 2006.

The following simple example of a process 1212 is presented merely for purposes of illustration. The example process 1212 is the baking of a cake. Raw materials 1222 (such as flour, milk, baking powder, lemon flavoring, etc.) are processed in a baking process 1212 under (controlled) process conditions 1906. Examples of the (controlled) process conditions 1906 are: mix batter until uniform, bake the batter in a pan at a preset oven temperature for a preset time, remove the baked cake from the pan, and allow the removed cake to cool to room temperature.

The product 1216 produced in this example is a cake having desired product properties 1904. For example, these desired product properties 1904 can be a cake that is fully cooked but not burned, brown on the outside, yellow on the inside, having a suitable lemon flavoring, etc.

Returning now to the general case, the actual product properties 1904 of product 1216 produced in a process 1212 are determined by the combination of all of the process conditions 1906 of process 1212 and the raw materials 1222 that are utilized. Process conditions 1906 can be, for example, the properties of the raw materials 1222, the speed at which process 1212 runs (also called the production rate of the process 1212), the process conditions 1906 in each step or stage of the process 1212 (such as temperature, pressure, etc.), the duration of each step or stage, and so on.
B. Controlling Process Conditions
Figure 20 shows a more detailed block diagram of the various aspects of the manufacturing of products 1216 using process 1212. Figures 19 and 20 should be referred to in connection with the following description.
To effectively operate process 1212, the process conditions 1906 must be maintained at one or more process condition setpoint(s) or aim value(s) (called regulatory controller setpoint(s) in the example of Figure 14) 1404 so that the product 1216 produced will have the product properties 1904 matching the desired product property aim value(s) 2006. This task can be divided into three parts or aspects for purposes of explanation.
In the first part or aspect, the manufacturer must set (step 2008) initial settings of the process condition setpoint(s) (or aim value(s)) 1404 in order for the process 1212 to produce a product 1216 having the desired product property aim values 2006. Referring back to the example set forth above, this would be analogous to deciding to set the temperature of the oven to a particular setting before beginning the baking of the cake batter.
The second step or aspect involves measurement and adjustment of the process 1212. Specifically, process conditions 1906 must be measured to produce process condition measurement(s) 1224. The process condition measurement(s) 1224 must be used to generate adjustment(s) 1208 (called controller output data in the example of Figure 12) to controllable process state(s) 2002 so as to hold the process conditions 1906 as close as possible to process condition setpoint 1404. Referring again to the example above, this is analogous to the way the oven measures the temperature and turns the heating element on or off so as to maintain the temperature of the oven at the desired temperature value.
The third stage or aspect involves holding product property measurement(s) of the product properties 1904 as close as possible to the product property aim value(s) 2006. This involves producing product property measurement(s) 1304 based on the product properties 1904 of the product 1216. From these measurements, adjustment(s) 1402 to the process condition setpoint(s) 1404 must be made so as to maintain the process condition(s) 1906. Referring again to the example above, this would be analogous to measuring how well the cake is baked. This could be done, for example, by sticking a toothpick into the cake and adjusting the temperature during the baking step so that the toothpick eventually comes out clean.
It should be understood that the previous description is intended only to show the general conditions of process control and the problems associated with it in terms of producing products of predetermined quality and properties. It can be readily understood that there are many variations and combinations of tasks that are encountered in a given process situation. Often, process control problems can be very complex.
One aspect of a process being controlled is the speed with which the process responds. Although processes may be very complex in their response patterns, it is often helpful to define a time constant for control of a process. The time constant is simply an estimate of how quickly control actions must be carried out in order to effectively control the process.
In recent years, there has been a great push towards the automation of process control. The motivation for this is that such automation results in the manufacture of products of desired product properties where the manufacturing process that is used is too complex, too time-consuming, or both, for people to deal with manually.
Thus, the process control task can be generalized as being made up of five basic steps or stages as follows (an illustrative software sketch of these five steps appears after the list):
(1) the initial setting 2008 of process condition setpoint(s);
(2) producing process condition measurement(s) 1224 of the process condition(s) 1906;
(3) adjusting 1208 controllable process state(s) 2002 in response to the process condition measurement(s) 1224;
(4) producing product property measurement(s) 1304 based on product properties 1904 of the manufactured product 1216; and
(5) adjusting 1402 process condition setpoint(s) 1404 in response to the product property measurements 1304.
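These five steps can be pictured as a single control loop. The following is a minimal sketch only: every name and method on the hypothetical process object is an assumption for illustration, not part of the patent.

```python
# Illustrative sketch of the five process-control steps; the `process`
# interface below is hypothetical.

def control_loop(process, setpoints, aim_values):
    # Step (1): initial setting 2008 of process condition setpoint(s) 1404.
    process.apply_setpoints(setpoints)
    while process.running:
        # Step (2): produce process condition measurement(s) 1224.
        conditions = process.measure_conditions()
        # Step (3): adjust 1208 controllable process state(s) 2002 to hold
        # each condition near its setpoint.
        for name, measured in conditions.items():
            process.adjust_state(name, setpoints[name] - measured)
        # Step (4): produce product property measurement(s) 1304.
        properties = process.measure_product_properties()
        # Step (5): adjust 1402 the setpoint(s) 1404 so that product
        # properties track the product property aim value(s) 2006.
        for name, measured in properties.items():
            setpoints = process.retune_setpoints(
                setpoints, name, aim_values[name] - measured)
```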
The explanation which follows explains the problems associated with meeting and optimizing these five steps.
C. The Measurement Problem
As shown above, the second and fourth steps or aspects of process control involve measurement 1224 of process conditions 1906 and measurement 1304 of product properties 1904. Such measurements are sometimes very difficult, if not impossible, to effectively perform for process control.
For many products, the important product properties 1904 relate to the end use of the product and not to the process conditions 1906 of the process 1212. One illustration of this involves the manufacture of carpet fiber. An important product property 1904 of carpet fiber is how uniformly the fiber accepts the dye applied by the carpet maker. Another example involves the cake example set forth above. An important product property 1904 of a baked cake is how well the cake resists breaking apart when the frosting is applied.
Typically, the measurement of such product properties 1904 is difficult and/or time consuming and/or expensive to make.
An example of this problem can be shown in connection with the carpet fiber example. The ability of the fiber to uniformly accept dye can be measured by a laboratory (lab) in which dye samples of the carpet fiber are used. However, such measurements can be unreliable. For example, it may take a number of tests before a reliable result can be obtained.
Furthermore, such measurements can also be slow. In this example, it may take so long to conduct the dye test that the manufacturing process can significantly change and be producing different product properties 1904 before the lab test results are available for use in controlling the process 1212.
It should be noted, however, that some process condition measurements 1224 are inexpensive, take little time, and are quite reliable. Temperature typically can be measured easily, inexpensively, quickly, and reliably. For example, the temperature of the water in a tank can often be easily measured. But oftentimes process conditions 1906 make such easy measurements much more difficult to achieve. For example, it may be difficult to determine the level of a foaming liquid in a vessel. Moreover, a corrosive process may destroy measurement sensors, such as those used to measure pressure.

Regardless of whether or not measurement of a particular process condition 1906 or product property 1904 is easy or difficult to obtain, such measurement may be vitally important to the effective and necessary control of the process 1212. It can thus be appreciated that it would be preferable if a direct measurement of a specific process condition 1906 and/or product property 1904 could be obtained in an inexpensive, reliable and effective manner, within a short time period.
D. Conventional Computer Models as Predictors of Desired Measurements
As stated above, the direct measurement of the process conditions 1906 and the product properties 1904 is often difficult, if not impossible, to do effectively. One response to this deficiency in process control has been the development of computer models (not shown) as predictors of desired measurements. These computer models are used to create values used to control the process 1212 based on inputs that are not identical to the particular process conditions 1906 and/or product properties 1904 that are critical to the control of the process 1212. In other words, these computer models are used to develop predictions (estimates) of the particular process conditions 1906 or product properties 1904. These predictions are used to adjust the controllable process state 2002 or the process condition setpoint 1404.
Such conventional computer models, as explained below, have limitations. To better understand these limitations and how the present invention overcomes them, a brief description of each of these conventional models is set forth.

1. Fundamental Models
A computer-based fundamental model (not shown) uses known information about the process 1212 to predict desired unknown information, such as process conditions 1906 and product properties 1904. A fundamental model is based on scientific and engineering principles. Such principles include the conservation of material and energy, the equality of forces, and so on. These basic scientific and engineering principles are expressed as equations which are solved mathematically or numerically, usually using a computer program. Once solved, these equations give the desired prediction of unknown information.
Conventional computer fundamental models have significant limitations, such as:
(1) They are difficult to create since the process 1212 must be described at the level of scientific understanding, which is usually very detailed;
(2) Not all processes 1212 are understood in basic engineering and scientific principles in a way that can be computer modeled;
(3) Some product properties 1904 are not adequately described by the results of the computer fundamental models; and
(4) The number of skilled computer model builders is limited, and the cost associated with building such models is thus quite high.
These problems result in computer fundamental models being practical only in some cases where measurement is difficult or impossible to achieve.

2. Empirical Statistical Models
Another conventional approach to solving measurement problems is the use of a computer-based statistical model (not shown).
Such a computer-based statistical model uses known information about the process 1212 to determine desired information that cannot be effectively measured. A statistical model is based on the correlation of measurable process conditions 1906 or product properties 1904 of the process 1212.
To use an example of a computer-based statistical model, assume that it is desired to be able to predict the color of a plastic product 1216. This is very difficult to measure directly, and takes considerable time to perform. In order to build a computer-based statistical model which will produce this desired product property 1904 information, the model builder would need to have a base of experience, including known information and actual measurements of desired information. For example, known information is such things as the temperature at which the plastic is processed. Actual measurements of desired information are the actual measurements of the color of the plastic.
A mathematical relationship (also called an equation) between the known information and the desired unknown information must be created by the developer of the empirical statistical model. The relationship contains one or more constants (which are assigned numerical values) which affect the value of the predicted information from any given known information. A computer program uses many different measurements of known information, with their corresponding actual measurements of desired information, to adjust these constants so that the best possible prediction results are achieved by the empirical statistical model. Such a computer program, for example, can use non-linear regression.
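As an illustration of how such adjustable constants might be fit, the following sketch pairs hypothetical known information (processing temperature) with hypothetical measurements of the desired information (a plastic color score) and adjusts the constants of an assumed polynomial relationship by least squares. The model form, the data values, and the use of NumPy are all assumptions for illustration.

```python
import numpy as np

# Hypothetical paired data: known information (processing temperature)
# and actual measurements of desired information (plastic color score).
temperature = np.array([180.0, 190.0, 200.0, 210.0, 220.0])
color = np.array([3.1, 3.9, 5.2, 6.8, 9.0])

# Assumed model relationship chosen by the developer:
# color ~ c0 + c1*T + c2*T**2, with constants c0, c1, c2 to be adjusted.
X = np.column_stack([np.ones_like(temperature), temperature, temperature**2])
constants, *_ = np.linalg.lstsq(X, color, rcond=None)

# The fitted constants now predict the hard-to-measure property
# from known information alone.
predicted_color = X @ constants
```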
Computer-based statistical models can sometimes predict product properties 1904 which are not well described by computer fundamental models. However, there are significant problems associated with computer statistical models, which include the following:
(1) Computer statistical models require a good design of the model relationships (that is, the equations) or the predictions will be poor;
(2) Statistical methods used to adjust the constants typically are difficult to use;
(3) Good adjustment of the constants cannot always be achieved in such statistical models; and
(4) As is the case with fundamental models, the number of skilled statistical model builders is limited, and thus the cost of creating and maintaining such statistical models is high.
The result of these deficiencies is that computer-based empirical statistical models are practical in only some cases where the process conditions 1906 and/or product properties 1904 cannot be effectively measured.
E. Deficiencies in the Related Art
As set forth above, there are considerable deficiencies in conventional approaches to obtaining desired measurements for the process conditions 1906 and product properties 1904 using conventional direct measurement, computer fundamental models, and computer statistical models. Some of these deficiencies are as follows:
(1) Product properties 1904 are often difficult to measure.

(2) Process conditions 1906 are often difficult to measure.
(3) Determining the initial value or settings of the process conditions 1906 when making a new product 1216 is often difficult.
(4) Conventional computer models work only in a small percentage of cases when used as substitutes for measurements.
These and other deficiencies in the conventional technology are overcome by the system and method of the present invention.
SUMMARY OF THE INVENTION
The present invention is a computer neural network process control system and method. Predicted values of data that cannot be readily measured are produced by a trained neural network using input data at specified intervals. The predicted values are stored in an historical database and supplied to a controller used to control a process for producing a product.
The neural network is configured by a developer who supplies neural network configuration information. The neural network configuration aspect of the present invention allows the developer to very easily configure the neural network using a template approach.
The training of the neural network is accomplished using training input data having associated timestamps. The training input data (provided by, for example, a laboratory or lab) with associated timestamps is stored in and retrieved from the historical database, which also stores input data with associated timestamps. Timestamps are used to determine which training data stored in the historical database is associated with which input data.
The training data is compared with the predicted output data values produced by the neural network from the stored input data whose timestamps correspond to the timestamps of the training data. The error between the output data and the training input data is used to train the network. Once the network has been trained, that is, once the error value is less than an accepted metric, the predicted output data from the neural network is supplied via the historical database to the controller for use in controlling the process producing the product.
As long as the error data is less than an acceptable metric, the output data from the neural network is supplied by the historical database to the controller for use in controlling the process. In this way, the output data is used by the controller for control purposes in place of the actual measurement.
The error between the predicted output data and the training input data is also used to retrain the network at selected intervals. This continuous retraining of the network allows the network to adapt to changes in the process and the product. In this way, the retrained network produces predicted output data which more closely approaches the training input data that is periodically supplied.
When the error data exceeds an acceptable metric, the present invention can cause the predicted output data no longer to be used by the controller. In addition, the present invention can cause retraining to cease, and cause the controller to stop the process. Other control functions can be provided.
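A minimal sketch of this enable/disable decision follows; the function name, the controller and network interfaces, and the scalar error value are illustrative assumptions, not the patent's actual modules.

```python
# All names are hypothetical; the acceptable metric is configured
# by the developer in the actual system.

def evaluate_prediction_quality(error, acceptable_metric, controller, network):
    if abs(error) < acceptable_metric:
        # Error acceptable: keep supplying predicted output data to the
        # controller and keep retraining at the selected intervals.
        controller.use_predicted_output = True
        network.retraining_enabled = True
    else:
        # Error exceeds the metric: stop using the predictions and,
        # optionally, cease retraining and stop the process.
        controller.use_predicted_output = False
        network.retraining_enabled = False
        controller.stop_process()
```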
A modular approach can be utilized for the neural network. In particular, a limited set of neural network module types can be provided. Further, a limited set of neural network procedures and a limited set of neural network parameter storage areas can also be provided. The modular neural network modules can include procedure pointers and parameter pointers, which are used to point to network type procedures and network parameter storage areas. This modular approach greatly increases the robustness of the present invention.
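The pointer idea can be pictured as follows, with plain Python references standing in for pointers; all names are illustrative, and the single weighted sum is only a stand-in for a full network procedure.

```python
# Modules share a limited set of network-type procedures (cf. 1704-1704n)
# and parameter storage areas (cf. 1806-1806n); names are hypothetical.

def feedforward_procedure(parameters, input_data):
    # One shared implementation of a network type; a single weighted
    # sum stands in for a full feedforward pass.
    return sum(w * x for w, x in zip(parameters["weights"], input_data))

PROCEDURES = {"feedforward": feedforward_procedure}
PARAMETER_AREAS = {"area_0": {"weights": [0.5, -0.2, 0.1]}}

class NeuralNetworkModule:
    def __init__(self, procedure_name, parameter_area_name):
        # Procedure pointer (cf. 1710) and parameter pointer (cf. 1802).
        self.procedure = PROCEDURES[procedure_name]
        self.parameters = PARAMETER_AREAS[parameter_area_name]

    def run(self, input_data):
        return self.procedure(self.parameters, input_data)

# Many modules can share one procedure while referencing separate (or
# shared) parameter areas, which keeps the set of module types limited.
module = NeuralNetworkModule("feedforward", "area_0")
result = module.run([1.0, 2.0, 3.0])
```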
Templates are utilized to allow a developer (or user) to configure the neural network. The templates allow the size and arrangement of the network, the parameters which control the operation of the network, and the storage locations of the input data, the output data, the error data, and other values associated with the present invention to be specified by the developer. Moreover, since the templates utilize standard natural language, no computer expertise is needed by the person configuring the network. In this way, a process control expert need not utilize a neural network expert in configuring, re-configuring and using the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS
The present invention as defined by the claims is better understood with the text read in conjunction with the following drawings:
Figure 1 is a high level block diagram of the six broad steps which make up the computer neural net process control system and method of the present invention.
Figure 2 is an intermediate block diagram of the important steps and modules which make up the store input data and training input data step and module 102 of Figure 1.
Figure 3 is an intermediate block diagram of the important steps and modules which make up the configure and train neural network step and module 104 of Figure 1.
Figure 4 is an intermediate block diagram of the important steps and modules which make up the predict output data using neural network step and module 106 of Figure 1.
Figure 5 is an intermediate block diagram of the important steps and modules which make up the "retrain neural network" step and module 108 of Figure 1.
Figure 6 is an intermediate block diagram of the important steps and modules which make up the enable/disable control step and module 110 of Figure 1.
Figure 7 is an intermediate block diagram of the important steps and modules which make up the control process using output data step and module 112 of Figure 1.
Figure 8 is a detailed block diagram of the configure neural network step and module 302 of Figure 3.
Figure 9 is a detailed block diagram of the new training input data? step and module 306 of Figure 3.
Figure 10 is a detailed block diagram of the train neural network step and module 308 of Figure 3.
Figure 11 is a detailed block diagram of the error acceptable? step and module 310 of Figure 3.
Figure 12 is a representation of the architecture of an embodiment of the present invention.
Figure 13 is a representation of the architecture of an embodiment of the present invention having the additional capability of using laboratory values from the historical database 1210.
Figure 14 is an embodiment of controller 1202 of Figures 12 and 13 having a supervisory controller 1408 and a regulatory controller 1406.
Figure 15 shows various embodiments of controller 1202 of Figure 14 used in the architecture of Figure 12.
Figure 16 is a modular version of block 1502 of Figure 15 showing the various different types of modules that can be utilized with a modular neural network 1206.
Figure 17 shows an architecture for block 1502 having a plurality of modular neural networks 1702-1702n with pointers 1710-1710n pointing to a limited set of neural network procedures 1704-1704m.
Figure 18 shows an alternate architecture for block 1502 having a plurality of modular neural networks 1702-1702n with pointers 1710-1710m to a limited set of neural network procedures 1704-1704n, and with parameter pointers 1802-1802n to a limited set of network parameter storage areas 1806-1806n.
Figure 19 is a high level block diagram showing the key aspects of a process 1212 having process conditions 1906 used to produce a product 1216 having product properties 1904 from raw materials 1222.

Figure 20 shows the various steps and parameters which may be used to perform the control of process 1212 to produce products 1216 from raw materials 1222.
Figure 21 shows a representative example of a fully connected feed forward neural network 1206 having an input layer 2104, a middle (hidden) layer 2108, an output layer 2110, and weights 2112 with each connection.
Figure 22 is an exploded block diagram showing the various parameters and aspects that can make up the neural network 1206.
Figure 23 is an exploded block diagram of the input data specification 2204 and the output data specification 2206 of the neural network 1206 of Figure 22.
Figure 24 is an exploded block diagram of the prediction timing control 2212 and the training timing control 2214 of the neural network 1206 of Figure 22.
Figure 25 is an exploded block diagram of various examples and aspects of controller 1202 of Figure 12.
Figure 26 is a representative computer display or "screen" of a preferred embodiment of the present invention showing part of the configuration specification of the neural network block 1206.
Figure 27 is a representative computer display or "screen" of a preferred embodiment of the present invention showing part of the data specification of the neural network block 1206.
Figure 28 is a computer screen which shows a pop-up menu for specifying the data system element of the data specification.
Figure 29 is a computer screen of the preferred embodiment showing in detail the individual items making up the data specification display of Figure 27.

Figure 30 is a detailed block diagram of an embodiment of the enable control step and module 602 of Figure 6.
Figure 31 is a very detailed block diagram of embodiments of steps and modules 802, 804 and 806 of Figure 8.
Figure 32 is a very detailed block diagram of embodiments of steps and modules 808, 810, 812 and 814 of Figure 8.
Figure 33 is a nomenclature diagram showing the present invention at a high level.
Figure 34 shows a representative example of the neural network 1206 of Figure 21 with training capability.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Table of Contents
I. Overview of Neural Networks
    A. Construction of Neural Networks
    B. Prediction
    C. Neural Network Training
    D. Advantages of Neural Networks
II. Brief Overview
III. Use in Combination with Expert Systems
IV. Preferred Method of Operation
    A. Store Input Data and Training Input Data Step and Module 102
    B. Configure and Train Neural Network Step and Module 104
        1. Configure Neural Network Step and Module 302
        2. Wait Training Input Data Interval Step and Module 304
        3. New Training Input Data? Step and Module 306
        4. Train Neural Network Step and Module 308
        5. Error Acceptable? Step and Module 310
    C. Predict Output Data Using Neural Network Step and Module 106
    D. Retrain Neural Network Step or Module 108
    E. Enable/Disable Control Module or Step 110
    F. Control Process Using Output Data Step or Module 112
V. Preferred Structure (Architecture)
VI. User Interface
In describing the preferred embodiment of the present invention, reference will be made to Figure 33. This figure is a nomenclature diagram which shows the various names for elements and actions used in describing the present invention. Figure 33 is not necessarily intended to represent the method of the present invention, nor does it necessarily depict the architecture of the present invention. However, it does provide a reference point by which consistent terms can be used in describing the present invention.
In referring to Figure 33, the boxes indicate elements in the architecture and the labeled arrows indicate actions that are carried out. In addition, words that do not appear in boxes which break arrows represent information or data which is being transmitted from one element in the present invention to another.
As discussed below in greater detail, the present invention essentially utilizes neural nets to provide predicted values of important and not readily obtainable process conditions 1906 and/or product properties 1904 to be used by a controller 1202 to produce controller output data 1208 used to control the process 1212. As shown in Figure 12, a neural network 1206 operates in conjunction with an historical database 1210 which provides input sensor(s) data 1220.
Referring now to Figures 1 and 12, input data and training input data are stored in a historical database with associated timestamps, as indicated by a step or module 102. In parallel, the neural network 1206 is configured and trained in a step or module 104. The neural network 1206 is used to predict output data 1218 using input data 1220, as indicated by a step or module 106. The neural network 1206 is then retrained in a step or module 108, and control using the output data is enabled or disabled in a step or module 110.

In parallel, control of the process using the output data is performed in a step or module 112. Thus, the present invention collects and stores the appropriate data, configures and trains the neural network, uses the neural network to predict output data, and enables control of the process using the predicted output data.
Central to the present invention is the neural network 1206. Various embodiments of the neural network 1206 can be utilized, and are described in detail below.
I. Overview of Neural Networks
In order to fully appreciate the various aspects and benefits produced by the present invention, a good understanding of neural network technology is required. For this reason, the following section discusses neural network technology as applicable to the neural network 1206 of the system and method of the present invention.
Artificial or computer neural networks are computer simulations of a network of interconnected neurons. A biological example of interconnected neurons is the human brain. Neural networks are computer representations of architectures which model the working of the brain. It should be understood that the analogy to the human brain is important and useful in understanding the present invention.
However, neural networks used in neural network 1206 of the present invention are computer simulations (or possibly analog devices) which provide useful predicted values based on input data provided at specified intervals.
Essentially, a neural network 1206 is a hierarchical collection of elements, each of which computes the results of an equation (transfer function or activation function). The equation may include a threshold. Each element equation uses multiple input values, but produces only one output value. The outputs of elements in a lower level (that is, closer to the input data) are provided as inputs to the elements of higher layers. The highest layer produces the output(s).
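In symbols, each element might compute the following (a common form; the text leaves the particular transfer function and the threshold convention open):

```latex
y = f\Big(\sum_{i=1}^{n} w_i x_i - \theta\Big)
```

where the x_i are the element's input values, the w_i are the associated weights 2112, theta is an optional threshold, and y is the element's single output value.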
Referring now to Figure 21, a representative example of the neural network 1206 is shown. It should be noted that the example shown in Figure 21 is merely illustrative of an embodiment of neural network 1206. As discussed below, other embodiments for neural network 1206 can be used.
The embodiment of Figure 21 has an input layer 2104, a middle (hidden) layer 2108, and an output layer 2110. Input layer 2104 includes a layer of input elements 2102 which take their input values from the external input data 1220. This is the known information used to produce the predicted values (output data) at outputs 1218. Even though input layer 2104 is referred to as a layer in the neural network 1206, input layer 2104 does not contain any processing elements; instead, it is a set of storage locations for input values on lines 2120.
The next layer is called the middle or hidden layer 2108. Such a middle layer 2108 is not required, but is usually used. It includes a set of elements 2106. The outputs from inputs 2102 of input layer 2104 are used as inputs by each element 2106. Thus, it can be appreciated that the outputs of the previous layer are used to feed the inputs of the next layer.
Additional middle layers 2108 can be used. Again, they would take the outputs from the previous layer as their inputs. Any number of middle layers 2108 can be utilized.
Output layer 2110 has a set of elements 2106. As their input values, they take the output of elements 2106 of the middle layer 2108. The outputs 1218 of elements 2106 of output layer 2110 are the predicted values (called output data) produced by the neural net 1206 using the input data 1220.

For each input value for each element of each of the layers 2108 and 2110, an adjustable constant called a weight 2112 is defined. For purposes of illustration only, only two weights 2112 are shown. However, each connection between the layers 2104, 2108 and 2110 has an associated weight. Weights determine how much relative effect an input value has on the output value of the element in question.
When each middle element connects to all of the outputs from the previous layer, and each output element connects to all of the outputs from the previous layer, the network is called fully connected. Note that if all elements use output values from elements of a previous layer, the network is a feedforward network. The network of Figure 21 is such a fully connected, feedforward network. Note that if any element uses output values from an element in a later layer, the network is said to have feedback. Most neural networks used for neural network 1206 use the same equation in every element in the network.
A. Construction of Neural Networks
Neural network 1206 is built by specifying the number, arrangement and connection of the elements of which it is made up. In a highly structured embodiment of neural network 1206, the configuration is fairly simple. For example, in a fully connected network with one middle layer (and of course including one input and one output layer), and no feedback, the number of connections and consequently the number of weights is fixed by the number of elements in each layer. Such is the case in the example shown in Figure 21. Since the same equation is usually used in all elements, for this type of network we need to know only the number of elements in each layer. This determines the number of weights and hence the total storage needed to build the network. The modular aspect of the present invention of Figure 16 takes advantage of this way of simplifying the specification of a neural network. Note that more complex networks require more configuration information, and therefore more storage.
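For example, under these assumptions (fully connected, one middle layer, no feedback) the weight count follows directly from the layer sizes; the sizes below are arbitrary examples, not values from the patent.

```python
# Layer sizes are arbitrary examples; only the counting rule matters.
n_inputs, n_middle, n_outputs = 4, 6, 2

# Fully connected, one middle layer, no feedback: every middle element
# connects to every input, and every output element to every middle element.
n_weights = n_inputs * n_middle + n_middle * n_outputs  # 4*6 + 6*2 = 36
```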
The present invention contemplates other types of neural network configurations for use with neural network 1206. All that is required for neural network 1206 is that the neural network be able to be trained and retrained so as to provide the needed predicted values utilized in the process control.
B. Prediction
Referring now to Figure 21, a representative embodiment of a feed forward neural network will now be described. This is only illustrative of one way in which a neural network can function.
Input data 1220 is provided to input storage locations called inputs 2102. Middle layer elements 2106 each retrieve the input values from all of the inputs 2102 in the input layer 2104. Each element has a weight 2112 associated with each input value. Each element 2106 multiplies each input value 2102 times its associated weight 2112, and sums these values for all of the inputs. This sum is then used as input to an equation (also called a transfer function or activation function) to produce an output or activation for that element. The processing for elements 2106 in the middle or hidden layer 2108 can be performed in parallel, or they can be performed sequentially.
In the neural network with only one middle layer as shown in Figure 21, the output values or activations would then be computed. For each output element 2106, the output values or activations from each of the middle elements 2106 are retrieved. Each output or activation is multiplied by its associated weight 2112, and these values are summed. This sum is then used as input to an equation which produces as its result the output data 1218. Thus, using input data 1220, a neural network 1206 produces predicted values of output data 1218.
Equivalent function can be achieved using analog means.
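A minimal sketch of this prediction pass follows, assuming a sigmoid transfer function (the text does not fix a particular equation) and illustrative layer sizes and weights.

```python
import math

def transfer(s):
    # Illustrative transfer (activation) function: the sigmoid.
    return 1.0 / (1.0 + math.exp(-s))

def forward_pass(input_data, middle_weights, output_weights):
    # Each middle element 2106: multiply each input value by its
    # associated weight 2112, sum, and apply the transfer function.
    middle_out = [transfer(sum(w * x for w, x in zip(row, input_data)))
                  for row in middle_weights]
    # Each output element: same computation on the middle-layer
    # activations; the results are the predicted output data 1218.
    return [transfer(sum(w * h for w, h in zip(row, middle_out)))
            for row in output_weights]

# Example with 3 inputs, 2 middle elements, and 1 output.
prediction = forward_pass(
    [0.5, 0.1, 0.9],
    middle_weights=[[0.2, -0.4, 0.7], [0.5, 0.3, -0.1]],
    output_weights=[[0.6, -0.2]],
)
```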
C. Neural Network Training
The weights 2112 used in neural network 1206 are adjustable constants which determine (for any given neural network configuration) the values of the predicted output data for given input data. Neural networks are superior to conventional statistical models because neural networks can adjust these weights automatically. Thus, neural networks are capable of building the structure of the relationship (or model) between the input data 1220 and the output data 1218 by adjusting the weights 2112. While a conventional statistical model requires the developer to define the equation(s) in which adjustable constant(s) will be used, the neural network 1206 builds the equivalent of the equation(s) automatically.
Referring now to Figure 34, the present invention contemplates various approaches for training neural network 1206. One suitable approach is back propagation. Back propagation uses the error between the predicted output data 1218 and the associated training input data 1306 as provided by the training set (not shown) to determine how much to adjust the weights 2112 in the network 1206. In effect, the error between the predicted output data values and the associated training input data values is propagated back through the output layer 2110 and through the middle layer 2108. This accounts for the name back propagation.
The correct output data values are called training input data values.

The neural network 1206 is trained by presenting it with a training set(s), which is the actual history of known input data values and the associated correct output data values. As described below, the present invention uses the historical database with its associated timestamps to automatically create a training set(s).
To train the network, the newly configured neural network is usually initialized by assigning random values to all of its weights 2112. Referring now to Figure 34, a representative embodiment of a neural network 1206 as configured for training purposes is shown. During training, the neural network 1206 uses its input data 1220 to produce predicted output data 1218 as described above under Section I.B., Prediction.
These predicted output data values 1218 are used in combination with training input data 1306 to produce error data 3404. These error data values 3404 are then propagated back through the network through the output elements 2106 and used in accordance with the equations or functions present in those elements to adjust the weights 2112 between the output layer 2110 and the middle or hidden layer 2108.
According to the back propagation method, which is illustrative of training methods that can be used for the neural network 1206, an error value for each element 2106 in the middle or hidden layer 2108 is computed by summing the errors of the output elements 2106, each multiplied by its associated weight 2112 on the connection between the middle element 2106 in the middle layer 2108 and the corresponding output element in the output layer 2110. This estimate of the error for each middle (hidden) layer element is then used in the same manner to adjust the weights 2112 between the input layer 2104 and the middle (hidden) layer 2108.
It can thus be seen that the error between the output data 1218 and the training input data 1306 is propagated back through the network 1206 to adjust the weights 2112 so that the error is reduced. More detail can be found in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, by David E. Rumelhart and James L. McClelland, The MIT Press, Cambridge, Massachusetts, USA, 1986, and Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises, by James L. McClelland and David E. Rumelhart, The MIT Press, Cambridge, MA, 1988, which are incorporated herein by reference.
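A compact sketch of one back propagation update, consistent with the description above and matching the two weight layers of the prediction sketch, follows; the sigmoid-derivative terms and the learning rate are conventional choices, not specified by the text.

```python
def backprop_step(inputs, middle_out, outputs, targets,
                  middle_weights, output_weights, rate=0.1):
    # Error terms for output elements: (training value - prediction),
    # scaled by the sigmoid derivative o*(1-o) of each activation.
    out_delta = [(t - o) * o * (1.0 - o) for t, o in zip(targets, outputs)]
    # Error estimate for each middle element 2106: the output errors,
    # each multiplied by the weight 2112 on its connection to that
    # middle element, summed, then scaled by the derivative.
    mid_delta = [h * (1.0 - h) *
                 sum(d * row[j] for d, row in zip(out_delta, output_weights))
                 for j, h in enumerate(middle_out)]
    # Adjust weights between the middle layer 2108 and output layer 2110...
    for d, row in zip(out_delta, output_weights):
        for j, h in enumerate(middle_out):
            row[j] += rate * d * h
    # ...then between the input layer 2104 and the middle layer 2108.
    for d, row in zip(mid_delta, middle_weights):
        for i, x in enumerate(inputs):
            row[i] += rate * d * x
```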
D. Advantages of Neural Networks
Neural networks are superior to computer statistical models because neural networks do not require the developer of the neural network model to create the equations which relate the known input data and training values to the desired predicted values (output data). In other words, neural network 1206 learns the relationships automatically in the training step 104.
However, it should be noted that neural network 1206 requires the collection of training input data with its associated input data, also called a training set. The training set must be collected and properly formatted. The conventional approach for doing this is to create a disk file on a computer on which the neural network runs.
In the present invention, in contrast, this is done automatically using an historical database 1210 (Figure 12). This eliminates the errors and the time associated with the conventional approach. This also significantly improves the effectiveness of the training function, since it can be performed much more frequently.

II. Brief Overview
Referring to Figures 1 and 12, the present invention is a computer neural network system and method which produces predicted output data values 1218 using a trained network supplied with input data 1220 at a specified interval. The predicted data 1218 is supplied via an historical database 1210 to a controller 1202, which controls a process 1212 which produces a product 1216. In this way, the process conditions 1906 and product properties 1904 (Figures 19 and 20) are maintained at a desired quality level, even though important ones of them cannot be effectively measured directly, or modeled using conventional fundamental or conventional statistical approaches.
The present invention can be configured by a developer using a neural network configuration step and module 104. Various parameters of the neural network can be specified by the developer by using natural language without knowledge of specialized computer syntax and training. In this way, the present invention allows an expert in the process being measured to configure the present invention without the use of a neural network expert.
Referring also to Figure 34, the neural network is automatically trained on-line using input data 1220 and associated training input data 1306 having timestamps (for example, from clock 1230). The input data and associated training input data are stored in an historical database 1210, which supplies this data 1220, 1306 to the neural network 1206 for training at specified intervals.
The (predicted) output data value 1218 produced by the neural network is stored in the historical database. The stored output data value 1218 is supplied to the controller 1202 for controlling the process as long as the error data 1504 between the output data 1218 and the training input data 1306 is below an acceptable metric.
The error data 1504 is also used for automatically retraining the neural network. This retraining typically occurs while the neural network is providing the controller via the historical database with the output data. The retraining of the neural network results in the output data approaching the training input data as much as possible over the operation of the process. In this way, the present invention can effectively adapt to changes in the process, which can occur in a commercial application.
In configuring the neural network, as shown in Figure 22, data pointers 2204, 2206 are specified. A template approach, as shown in Figures 26 and 27, is used to assist the developer in configuring the neural network without having to perform any actual programming.
The present invention is an on-line process control system and method. The term "on-line" indicates that the data used in the present invention is collected directly from the data acquisition systems which generate this data. An on-line system may have several eharacteristics. One characteristic is the processing of.data as i<he data is generated. This may also be called real-time' operation. Real-time operation in general demands that data be detected, processed and acted upon fast enough to effectively respond to the situation. In a process control context, real time means that the data can be responded to fast enough to keep the proeess in the desired .
control state.
6 PCT/'1.J993/05259 - ~~ ~ ~. ~s~ -30-In contrast, off-line~methods can also be used. .In_off-line methods, the data being used was. generated at some point in the past and there is no attempt to respond in a, way that can effect the situation. It should be understood that while the preferred embodiment of the present invention uses an on-line approach, alternate embodiments. can substitute";o.ff-line approaches in various steps or modules:.:. ,-' III. _Use in Combination Faith ~xpert Systems The above description of neural networks and neural networks as used in the present invention, combined with the description of the problem of making measurements in a process control environment given in the background section, illustrate that neural. networks add_.a;unique: and powerful eapability to process control systems. They' allow the inexpensive creation of predictions of measurements that are difficult or impossible to obtain. This capability opens up a . new realm of possibilities for improving quality control in manufacturing processes. As used in the present invention, neural networks serve as a source of input data to be used by controllers of various types in controlling the process.
Expert systems provide a completely separate and _ completely complimentary capability for process control systems. Expert systems are essentially decision-making programs which base their decisions on process knowledge which is typically represented in the form of if-then rules. Each rule in an expert system makes a small statement of truth, relating something that is known or could be known about the process to something that can be inferred from that knowledge.
I3y combining the applicable rules, an expert. system can reach conclusions or make decisions which mimic the decision-making of human experts.

WU 92/02866 P~T/IJ591/fl5259 _3I_ ~~~~~ ~l~
The systems and methods described in several of the United States patents and patent applications incorporated by reference above use expert systems in a control system architecture and method to add this decision-making capability to process control systems. As described in these patents and .
patent applications, expert systems provide a very advantageous function in the implementation of process control systems. ~ .
The present invention adds a different capability of substituting neural networks for measurement s which are difficult to obtain. The advantages of the present invention are both consistent with and complimentary to the capabilities provided in the above-noted patents and patent applications using expert systems. In fact, the combination of neural network capability with expert system capability in a control system provides even greater benefits that either capability provided alone. For example, a process control problem may have a difficult measurement and also require the use of decision-making teehniques in structuring or implementing the 20. control response. By combining neural network and expert system capabilities in a single control application, greater results can be achieved than using either technique alone.
It should thus be understood that while the present invention relates primarily to the use of neural networks for process control, it can very advantageously be combined with the expert system inventions described in the above-noted.
patents and patent applications to give even greater process control problem solving capability. As described below, when implemented in the modular proeess control system architecture, neural network functions are easily combined with expert system functions and other control functions to build such integrated process control applications. Thus, while the present invention can be used alone, it provides ~.vo 9ziozass ~critjs~noszs~
_ ~ ~~~,'y'' even greater value when used in combination with the expert.
system inventions in the. above-,noted patents and ~paten~t applications.
IV. Preferred Method of Operation
The preferred method of operation of the present invention stores input data and training data, configures and trains a neural network, predicts output data using the neural network, retrains the neural network, enables or disables control using the output data, and controls the process using output data. As shown in Figure 1, more than one step or module is carried out in parallel in the method of the present invention. As indicated by the divergent order pointer 120, the first two steps or modules in the present invention are carried out in parallel. First, in a step or module 102, input data and training input data are stored in the historical database with associated timestamps. In parallel, the neural network is configured and trained in a step 104. Next, two series of steps or modules are carried out in parallel as indicated by the order pointer 122. First, in a step or module 106, the neural network is used to predict output data using input data stored in the historical database. Next, in a step or module 108, the neural network is retrained using training input data stored in the historical database. Next, in a step or module 110, control using the output data is enabled or disabled. In parallel, in a step or module 112, control of the process using the output data is carried out when enabled by step or module 110.
A. Store Input Data and Training Input Data Step and Module 102

As shown in Figure 1, an order pointer 120 indicates that a step 102 and a step 104 are performed in parallel.

Referring now to step 102, it is denominated as the store input data and training input data step and module. Figure 2 shows step and module 102 in more detail.
Referring now to Figures 1 and 2, step and module 102 has the function of storing input data 1220 and storing training input data 1306. Both types of data are stored in an historical database 1210 (see Figure 12 and related structure diagrams), for example. Each stored input data and training input data entry in historical database 1210 utilizes an associated timestamp. The associated timestamp allows the system and method of the present invention to determine the relative time that the particular measurement or predicted value or measured value was taken, produced or derived.
A representative example of step and module 102 is shown in Figure 2, which is described as follows. The order pointer 120, as shown in Figure 2, indicates that input data 1220 and training input data 1306 are stored in parallel in the historical database 1210. Specifically, input data from sensors 1226 (see Figures 12 and 13) are produced by sampling at specific time intervals the sensor signal 1224 provided at the output of the sensor 1226. This sampling produces an input data value or number or signal. Each of these is called an input data 1220 as used in this application. The input data is stored with an associated timestamp in the historical database 1210, as indicated by a step and module 202. The associated timestamp that is stored in the historical database with the input data indicates the time at which the input data was produced, derived, calculated, etc.
A step or module 204 shows that the next input data value is stored by step 202 after a specified input data storage interval has lapsed or timed out. This input data storage interval realized by step and module 204 can be set at any specific value. Typically, it is selected based on the characteristics of the process being controlled.
As shown in Figure 2, in addition to the sampling and storing of input data at specified input data storage intervals, training input data 1306 is also being stored.
Specifically, as shown by step and module 206, training input data is stored with associated timestamps in the historical database 1210. Again, the associated timestamps utilized with the stored training input data indicate the relative time at which the training input data was derived, produced or obtained. It should be understood that this usually is the time when the process condition or product property actually existed in the process or product. In other words, since it typically takes a relatively long period of time to produce the training input data (because lab analysis and the like usually has to be performed), it is more accurate to use a timestamp which indicates the actual time when the measured state existed in the process rather than to indicate when the actual training input data was entered into the historical database. This produces a much closer correlation between the training input data 1306 and the associated input data 1220.
This close correlation is needed, as is discussed in detail below, in order to more effectively train and control the system and method of the present invention.
The training input data is stored in the historical database 1210 in accordance with a specified training input data storage interval, as indicated by a step and module 208. While this can be a fixed time period, it typically is not. More typically, it is a time interval which is dictated by when the training input data is actually produced by the laboratory or other mechanism utilized to produce the training input data 1306. As is discussed in detail herein, this oftentimes takes a variable amount of time to accomplish depending upon the process, the mechanisms being used to produce the training data, and other variables associated both with the process and with the measurement/analysis process utilized to produce the training input data.
What is important to understand here is that the specified input data storage interval is usually considerably shorter than the specified training input data storage interval of step and module 204.
As can be seen, step and module 102 thus results in the historical database 1210 receiving values of input data and training input data with associated timestamps. These values are stored for use by the system and method of the present invention in accordance with the steps and modules discussed in detail below.
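A minimal sketch of this dual storage activity follows, assuming a hypothetical append-style store `db` and a sensor read function; neither name comes from the patent. The key point it illustrates is the timestamp convention: input data is stamped at sampling time, while training input data is stamped with the time the measured state existed in the process.

```python
import time
from collections import namedtuple

# Hypothetical lab record: sample_time is when the measured state
# existed in the process, not when the lab value was entered.
LabResult = namedtuple("LabResult", ["sample_time", "value"])

def store_input_data(db, read_sensor, storage_interval_s):
    """Steps/modules 202-204: sample the sensor signal at the specified
    input data storage interval, storing each value with a timestamp."""
    while True:
        db.append(("input", time.time(), read_sensor()))
        time.sleep(storage_interval_s)

def store_training_input_data(db, lab_result):
    """Steps/modules 206-208: store the lab value under the timestamp of
    the measured state, keeping it correlated with the input data."""
    db.append(("training", lab_result.sample_time, lab_result.value))
```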
B. Configure and Train Neural Network Step and Module 104

As shown in Figure 1, the order pointer 120 shows that a configure and train neural network step and module 104 is performed in parallel with the store input data and training input data step and module 102. The purpose of step and module 104 is to configure and train the neural network 1206 (see Figure 12).
Specifically, the order pointer 120 indicates that the step and module 104 plus all of its subsequent steps and modules are performed in parallel to the step and module 102.
Figure 3 shows a representative example of the step and module 104. As shown in Figure 3, this representative embodiment is made up of five steps and modules 302, 304, 306, 308 and 310.
Referring now to Figure 3, an order pointer 120 shows that the first step and module of this representative embodiment is a configure neural network step and module 302.

Configure neural network step and module 302 is used to set up the structure and parameters of the neural network 1206 that is utilized by the system and method of the present invention. As discussed below in detail, the actual steps and modules utilized to set up the structure and parameters of neural network 1206 are shown in Figure 8.
After the neural network 1206 has been configured in step and module 302, an order pointer 312 indicates that a wait training data interval step and module 304 occurs or is utilized. The wait training data interval step and module 304 specifies how frequently the historical database 1210 will be looked at to determine if there is any new training data to be utilized for training of the neural network 1206. It should be noted that the training data interval of step and module 304 is not the same as the specified training input data storage interval of step and module 206 of Figure 2. Any desired value for the training data interval can be utilized for step and module 304.
An order pointer 314 indicates that the next step and module is a new training input data? step and module 306. This step and module 306 is utilized after the lapse of the training data interval specified by step and module 304. The purpose of step and module 306 is to examine the historical database 1210 to determine if new training data has been stored in the historical database since the last time the historical database 1210 was examined for new training data. The presence of new training data will permit the system and method of the present invention to train the neural network 1206 if other parameters/conditions are met. Figure 9, discussed below, shows a specific embodiment for the step and module 306.
An order pointer 318 indicates that if step and module 306 indicates that new training data is not present in the historical database 1210, the step and module 306 returns the operation of step and module 104 to the step and module 304. In contrast, if new training data is present in the historical database 1210, the step and module 306, as indicated by an order pointer 316, causes the step and module 104 to move to a train neural network step and module 308. Train neural network step and module 308 is the actual training of the neural network 1206 using the new training data retrieved from the historical database 1210. Figure 10, discussed below in detail, shows a representative embodiment of the train neural network step and module 308.
After the neural network has been trained in step and module 308, the step and module 104, as indicated by an order pointer 320, moves to an error acceptable? step and module 310. Error acceptable? step and module 310 determines whether the error data 1504 produced by the neural network 1206 is within an acceptable metric, indicating that the neural network 1206 is providing output data 1218 that is close enough to the training input data 1306 to permit the use of the output data 1218 from the neural network 1206. In other words, an acceptable error indicates that the neural network 1206 has been "trained" as training is specified by the user of the system and method of the present invention. A representative example of the error acceptable? step and module 310 is shown in Figure 11, which is discussed in detail below.

If an unacceptable error is determined by error acceptable? step and module 310, an order pointer 322 indicates that the step and module 104 returns to the wait training data interval step and module 304. In other words, this means that the step and module 104 has not completed training the neural network 1206. Because the neural network 1206 has not yet been trained, training must continue before the system and method of the present invention can move to a step and module 106 discussed below.

In contrast, if the error acceptable? step and module 310 determines that an acceptable error from the neural network 1206 has been obtained, then the step and module 104 has trained neural network 1206. Since the neural network 1206 has now been trained, step 104 allows the system and method of the present invention to move to the steps and methods 106 and 112 discussed below.
The specific embodiments for step and module 104 are now discussed.
1. Configure Neural Network Step and Module 302

Referring now to Figure 8, a representative embodiment of the configure neural network step and module 302 is shown. This step and module allows the user of the present invention to both configure and re-configure the neural network.
Referring now to Figure 8, the order pointer 120 indicates that the first step and module is a specify training and prediction timing control step and module 802. Step and module 802 allows the person configuring the system and method of the present invention to specify the training interval(s) and the prediction timing interval(s) of the neural network 1206.
Figure 31 shows a representative embodiment of the step and module 802. Referring now to Figure 31, step and module 802 can be made up of four steps and modules 3102, 3104, 3106, and 3108. Step and module 3102 is a specify training timing method step and module. The specify training timing method step and module 3102 allows the user configuring the present invention to specify the method or procedure that will be followed to determine when the neural network 1206 will be trained. A representative example of this is when all of the training data has been updated. Another example is the lapse of a fixed time interval. Other methods and procedures can be utilized.
An order pointer indicates that a specify training timing parameters step and module 3104 is then carried out by the user of the present invention. This step and module 3104 allows for any needed training timing parameters to be specified. It should be realized that the method or procedure of step and module 3102 can result in zero or more training timing parameters, each of which has a value. This value could be a time value, a module number (in the modular embodiment of the present invention of Figure 16), or a data pointer. In other words, the user can configure the present invention so that considerable flexibility can be obtained in how training of the neural network 1206 occurs based on the method or procedure of step and module 3102.
An order pointer indicates that once the training timing parameter(s) 3104 have been specified, a specify prediction timing method step and module 3106 is configured by the user of the present invention. This step and module 3106 specifies the method or procedure that will be used by the neural network 1206 to determine when to predict output data values 1218 after it has been trained. This is in contrast to the actual training of the neural network 1206. Representative examples of methods or procedures for step and module 3106 are execute at a fixed time interval, execute after the execution of a specific module, or execute after a specific data value is updated. Other methods and procedures can be used.
An order indicator in Figure 31 shows that a specify prediction timing parameters step and module 3108 is then carried out by the user of the present invention. Any needed prediction timing parameters for the method or procedure of step or module 3106 can be specified. For example, the time interval can be specified as a parameter for the execute at a specific time interval method or procedure. Another example is the specification of a module identifier when the execute after the execution of a particular module method or procedure is specified. Another example is a data pointer when the updating of a data value method or procedure is used. Other operation timing parameters can be used.
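The timing control of steps and modules 3102-3108 amounts to two method selections plus their parameters. A hedged sketch of such a configuration record follows, with illustrative key names and values that are not the patent's own format:

```python
# Illustrative configuration record for steps/modules 3102-3108.
timing_control = {
    # 3102: when to train, e.g. when all training data has been updated,
    # or when a fixed time interval has lapsed.
    "training_timing_method": "all_training_data_updated",
    # 3104: zero or more parameters; each value could be a time value,
    # a module number (modular embodiment of Figure 16), or a data pointer.
    "training_timing_params": {},
    # 3106: when to predict after training.
    "prediction_timing_method": "fixed_time_interval",
    # 3108: e.g. the interval for the fixed-time-interval method.
    "prediction_timing_params": {"interval_s": 60.0},
}
```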
Referring again to Figure 8, after the specify training and prediction timing control step and module 802 has been specified, a specify neural network size step and module 804 is carried out. This step and module 804 allows the user to specify the size and structure of the neural network 1206 that is used by the present invention.
Specifically, referring to Figure 31 again, a representative example of how the neural network size can be specified by step and module 804 is shown. An order pointer indicates that a specify number of inputs step and module 3110 allows the user to indicate the number of inputs that the neural network 1206 will have. Note that the source of the input data for the specified number of inputs has not yet been fixed by the user in the step and module 3110. Only the actual number of inputs has been specified in the step and module 3110.
Once the number of inputs has been specified in step and module 3110, the user can specify the number of middle (hidden) layer elements in the neural network 1206 by using a step or method 3112. By middle elements it is meant that one or more internal layers 2108 of the neural network can be specified by the user. The present invention contemplates a neural network having zero or more middle layers 2108. Typically, one middle layer is used; however, two or more middle layers are contemplated.


An order pointer indicates that once the number of middle elements has been specified in step and module 3112, the number of outputs 2106 of the neural network 1206 can be specified as indicated by a step or module 3114. Note that where the outputs of the neural network 1206 are to be stored is not specified in step or module 3114. Instead, only the number of outputs is specified in this step of the present invention.
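A minimal sketch of the size specification of steps and modules 3110-3114, assuming a plain feedforward shape; the dictionary keys are illustrative only:

```python
# Illustrative size record for steps/modules 3110-3114.
network_size = {
    "n_inputs": 8,         # 3110: number only; data sources are fixed later
    "middle_layers": [5],  # 3112: zero or more middle (hidden) layers 2108
    "n_outputs": 1,        # 3114: number only; storage locations come later
}

def layer_sizes(size):
    """Full layer-size list for a feedforward network of this shape."""
    return [size["n_inputs"], *size["middle_layers"], size["n_outputs"]]
```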
As discussed herein, the present invention contemplates any form of presently known or future developed configuration for the structure of the neural network 1206. Thus, steps or modules 3110, 3112, and 3114 can be modified so as to allow the user to specify these different configurations for the neural network 1206.

Referring again to Figure 8, once the neural network size has been specified in step and module 804, the user can specify the training and prediction modes in a step and module 806. Step and module 806 allows both the training and prediction modes to be specified. It also allows for controlling the storage of the data produced in the training and prediction modes. It also allows for data coordination to be used in training mode.
A representative example of the specify training and prediction modes step and module 806 is shown in Figure 31. It is made up of steps and modules 3116, 3118, and 3120.
As shown, an order pointer indicates that the user can specify prediction and train modes in a step and module 3116.
These are yes/no or on/off settings. Since the system and method of the present invention is in the train mode at this stage in its operation, step and module 3116 typically goes to its default setting of train mode only. However, it should be understood that the present invention contemplates allowing the user to independently control the prediction or train modes.
When prediction mode is enabled, or "on," the neural network 1206 will predict output data values 1218 using retrieved input data values 1220, as described below. When training mode is enabled, or "on," the neural network 1206 will monitor the historical database 1210 for new training data and will train using the training data, as described below.
An order pointer indicates that once the prediction and train modes have been specified in the step and module 3116, the user can specify prediction and train storage modes in a step and module 3118. These are on/off, yes/no values. They allow the user to specify whether the output data produced in the prediction and/or train modes will be stored for possible later use. In some situations, the user will specify that they will not be stored, and in such a situation they will be discarded after the prediction or train mode has occurred. Examples of situations where storage may not be needed are as follows.
First, if the error acceptable metric value in the train mode indicates that the output data is poor and retraining is necessary, there may be no reason to keep the output data.
Another example is in the prediction mode, where the output data is not stored but is only used. Other situations may arise where no storage is warranted.
An order pointer indicates that a specify training data coordination mode step and module 3120 is then specified by the user. Oftentimes, training input data 1306 must be correlated in some manner with input data 1220. This step and module 3120 allows the user to deal with the relatively long time period required to produce training input data 1306 from when the measured state(s) existed in the process. First, the user can specify whether the most recent input data will be used with the training data, or whether prior input data will be used with the training data. If the user specifies that prior input data is to be used, the method of determining the time of the prior input data can be specified in this step and module 3120.

Referring again to Figure 8, once the specify training and prediction modes step and module 806 has been completed by the user, steps and modules 808, 810, 812 and 814 are carried out. Specifically, the user follows a specify input data step and module 808, a specify output data step and module 810, a specify training input data step and module 812, and a specify error data step and module 814. Essentially, these four steps and modules 808-814 allow the user to specify the source and destination of input and output data for both the (run) prediction and training modes, and the storage location of the error data determined in the training mode.
Figure 32 shows a representative embodiment used for all of the steps and modules 808-814 as follows.
Steps and modules 3202, 3204 and 3206 essentially are directed to specifying the data location for the data being specified by the user. In contrast, steps and modules 3208-3216 may be optional in that they allow the user to specify certain options or sanity checks that can be performed on the data, as discussed below in more detail.
Turning first to specifying the storage location of the data being specified, a step or module 3202 is called specify data system. Typically, in a chemical plant, there is more than one computer system utilized with a process being controlled. Step or module 3202 allows the user to specify which computer system(s) contains the data or storage location that is being specified.
Once the data system has been specified, the user can specify the data type using a specify data type step and module 3204. The data type indicates which of the many types of data and/or storage modes is desired. Examples are current (most recent) values of measurements, historical values, time averaged values, setpoint values, limits, etc. After the data type has been specified, the user can specify a data item number or identifier using a step or module 3206. The data item number or identifier indicates which of the many instances of the specified data type in the specified data system is desired. Examples are the measurement number, the control loop number, the control tag name, etc. These three steps and modules 3202-3206 thus allow the user to specify the source or destination of the data (used/produced by the neural network) being specified.
Once this has been specified, the user can specify the following additional parameters. Specifically, where data is being specified which is time varying, the user can specify the oldest time interval boundary using a step and module 3208, and can specify the newest time interval boundary using a step and module 3210. For example, these boundaries can be utilized where a time weighted average of a specified data value is needed. Alternatively, the user can specify one particular time when the data value being specified is an historical data point value.
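Steps and modules 3202-3210 can be pictured as one data specification record per input, output, training input, or error data item. The field names and example values below are illustrative assumptions, not the patent's format:

```python
# Illustrative data specification record for steps/modules 3202-3210.
data_spec = {
    "system": "plant_computer_1",  # 3202: which computer system holds the data
    "type": "current_value",       # 3204: current, historical, time-averaged, ...
    "item": "FIC-101",             # 3206: measurement number, loop number, or tag
    "oldest_time": None,           # 3208: oldest time interval boundary (optional)
    "newest_time": None,           # 3210: newest time interval boundary (optional)
}
```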
Sanity checks on the data being specified can be specified by the user using steps and modules 3212, 3214 and 3216 as follows. Specifically, the user can specify a high limit value using a step and module 3212, and can specify a low limit value using a step and module 3214. Since sensors, for example, sometimes fail, this sanity check allows the user to prevent the system and method of the present invention from using false data from a failed sensor. Other examples of faulty data can also be detected by setting these limits.
The high and low limit values can be used for scaling the input data. Neural networks are typically trained and operated using input, output and training input data scaled within a fixed range. Using the high and low limit values allows this scaling to be accomplished so that the scaled values use most of the range. Typical ranges are 0 to 1 and -1 to 1.

In addition, the user often knows that certain values will normally change a certain amount over a specific time interval. Thus, changes which exceed these limits can be used as an additional sanity check. This can be accomplished by the user specifying a maximum change amount in step and module 3216.
Sanity checks can be used in the method of the present invention to prevent erroneous training, prediction, and control. Whenever any data value fails to pass the sanity checks, the data may be clamped at the limit(s), or the operation/control may be disabled. These tests significantly increase the robustness of the present invention.
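A sketch of these limit and change checks, together with the scaling they support, might read as follows; the function signatures are assumptions, not the patent's interfaces:

```python
def sanity_check(value, prev_value, low, high, max_change):
    """Steps/modules 3212-3216: clamp out-of-limit values and flag values
    whose change since the previous sample exceeds the configured maximum.
    The caller may instead disable operation/control when ok is False."""
    ok = (low <= value <= high) and abs(value - prev_value) <= max_change
    clamped = min(max(value, low), high)
    return clamped, ok

def scale(value, low, high, lo=-1.0, hi=1.0):
    """Scale a raw value into the fixed network range using the configured
    high/low limits; typical target ranges are 0 to 1 or -1 to 1."""
    return lo + (hi - lo) * (value - low) / (high - low)
```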
It should be noted that these steps and modules in Figure 32 apply to the input, output, training input, and error data steps and modules 808, 810, 812 and 814.
When the neural network is fully configured, the weights are normally set to random values in their allowed ranges (-1 to 1 is commonly used as a weight range). This can be done automatically, or it can be performed on demand by the user (for example, using softkey 2616 in Figure 26).
2. Wait Training Input Data Interval Step and Module 304

Referring again to Figure 3, the wait training data interval step and module 304 is now described in greater detail.
Typically, the wait training input data interval is much shorter than the time period (interval) when training input data becomes available. This wait training input data interval determines how often the training input data will be checked to determine whether new training input data has been received. Obviously, the more frequently the training input data is checked, the shorter the time interval will be from when new training input data becomes available to when retraining has occurred.

It should be noted that the configuration for the neural network 1206 and specifying its wait training input data interval is done by the user. This interval may be inherent in the software system and method which contains the neural network of the present invention. Preferably, it is specifically defined by the entire software system and method of the present invention. Now the neural network 1206 is being trained.
3. New Training Input Data? Step and Module 306

An order pointer 314 indicates that once the wait training input data interval 304 has elapsed, the new training input data? step or module 306 occurs.
Figure 9 shows a representative embodiment of the new training input data? step and module 306. Referring now to Figure 9, a representative example of determining whether new training input data has been received is shown. A retrieve current training input timestamp from historical database step and module 902 first retrieves from the historical database 1210 the current training input data timestamp(s). As indicated by an order pointer, a compare current training input data timestamp to stored training input data timestamp step and module 904 compares the current training input data timestamp(s) with a saved training input data timestamp(s).
Note that when the system and method of the present invention is first started, an initialization value must be used for the saved training input data timestamp. If the current training input data timestamp is the same as the saved training input data timestamp, this indicates that new training input data does not exist. This situation of no new training input data is indicated by an order pointer 318.
This step and module 904 functions to determine whether any new training input data is available for use in training the neural network. It should be understood that, in various embodiments of the present invention, the presence of new training input data may be detected (determined) in alternate ways. One specific example is where only one storage location is available for training input data and the associated timestamp. In this case, detecting (determining) the presence of new training input data can be carried out by saving internally in the neural network the associated timestamp of the training input data from the last time the training input data was checked, and periodically retrieving the timestamp from the storage location for the training input data and comparing it to the internally saved value of the timestamp. Other distributions and combinations of storage locations for timestamps and/or data values can be used in detecting (determining) the presence of new training input data.
However, if the comparison of step and module 904 indicates that the current training input data timestamp is different from the saved training input data timestamp, this indicates that new training input data has been received (detected). This new training input data timestamp is saved by a save current training input data timestamp step and module 906. After this current timestamp of training input data has been saved, the new training data? step and module 306 has been completed, and the present invention moves to the train neural network step and module 308 of Figure 3 as indicated by the order pointer.
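The timestamp comparison of steps and modules 902-906 reduces to a few lines. In this sketch the historical database accessor is an assumed API, and the saved timestamp is kept in a small mutable record:

```python
def new_training_input_data(db, saved):
    """Steps/modules 902-906 (db API assumed, not the patent's)."""
    current = db.latest_training_timestamp()  # 902: current timestamp(s)
    if current == saved["timestamp"]:         # 904: same -> no new data
        return False                          # order pointer 318
    saved["timestamp"] = current              # 906: save the new timestamp
    return True                               # proceed to train (module 308)

# Usage: initialize with a sentinel value at startup, as the text notes.
saved = {"timestamp": None}
```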

4. Train Neural Network Step and Module 308

Referring again to Figure 3, the train neural network step and module 308 is the step and module where the neural network 1206 is trained. Figure 10 shows a representative embodiment of the train neural network step and module 308.
Referring now to step and module 308 shown in Figure 10, an order pointer 316 indicates that a retrieve current training input data from historical database step and module 1002 occurs. In step and module 1002, one or more current training input data values are retrieved from the historical database 1210. The number of current training input data values that is retrieved is equal to the number of outputs 2106 of the neural network 1206 that is being trained. The training input data is normally scaled. This scaling can use the high and low limit values specified in the configure and train neural network step 104.
An order pointer shows that a choose training input data time step and module 1004 is next carried out. Typically, when there are two or more current training input data values that are retrieved, the data time (as indicated by their associated timestamps) for them is different. The reason for this is that typically the sampling schedule used to produce the training input data is different for the various training input data. Thus, current training input data often has different associated timestamps. In order to resolve these differences, certain assumptions have to be made. In certain situations, the average between the timestamps is used. Alternately, the timestamp of one of the current training input data could be used. Other approaches also can be employed.
Once the training input data time has been chosen in step and module 1004, the input data at the training input data time is retrieved from the historical database 1210 as indicated by a step and module 1006. The input data is normally scaled. This scaling can use the high and low limit values specified in the configure and train neural network step 104. Thereafter, the neural net 1206 predicts output data from the retrieved input data, as indicated by a step and module 406.
The predicted output data from the neural network 1206 is then stored in the historical database 1210, as indicated by a step and module 408. The output data is normally produced in a scaled form, since all the input and training input data is scaled. In this case, the output data must be de-scaled. This de-scaling can use the high and low limit values specified in the configure and train neural network step 104.
Thereafter, error data is computed using the output data from the neural network 1206 and the training input data, as indicated by a step and module 1012. It should be noted that the term error data 1504 as used in step and module 1012 is a set of error data values for all of the predicted outputs 2106 from the neural network 1206. However, the present invention also contemplates using a global or cumulative error data for evaluating whether the predicted output data values are acceptable.
After the error data 1504 has been computed (calculated) in the step and module 1012, the neural network 1206 is retrained using the error data 1504 and/or the training input data 1306. The present invention contemplates any method of training the neural network 1206.
After the training step and module 1014 has been completed, the error data 1504 is stored in the historical database 1210 in a step and module 1016. It should be noted that the error data 1504 shown here is the individual data for each output 2106. These stored error data 1504 provide a historical record of the error performance for each output 2106 of the neural network 1206.
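One training pass of steps and modules 1002-1016 might be sketched as follows. The `db` and `net` interfaces, and the record fields, are assumptions for illustration only; `scale` and `descale` stand for the limit-based scaling described above:

```python
def train_cycle(db, net, scale, descale):
    """One pass through steps/modules 1002-1016 (interfaces assumed)."""
    training = db.current_training_input_data()          # 1002: one per output 2106
    t = choose_time([r.timestamp for r in training])     # 1004
    x = [scale(v) for v in db.input_data_at(t)]          # 1006: scaled inputs
    y = net.predict(x)                                   # 406: scaled outputs
    db.store_output([descale(v) for v in y], t)          # 408: de-scaled, stored
    targets = [scale(r.value) for r in training]         # scaled training targets
    error = [yt - yp for yt, yp in zip(targets, y)]      # 1012: per-output error 1504
    net.retrain(x, targets)                              # 1014: any training method
    db.store_error(error, t)                             # 1016: error history
    return error

def choose_time(timestamps):
    # One way to resolve differing timestamps (1004): use their average.
    return sum(timestamps) / len(timestamps)
```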
The sequence of steps described above is the preferred embodiment used when the neural network 1206 can be effectively trained using a single presentation of the training set created for each new training input data 1306. However, in using certain training methods or for certain applications, the neural network 1206 may require many presentations of training sets to be adequately (acceptable metric) trained. In this case, two alternate approaches can be used to train the neural network 1206.
In the first approach, the neural network 1206 can save the training sets (that is, the training input data and the associated input data which is retrieved in step and module 308) in a database of training sets, which can then be repeatedly presented to the neural network 1206 to train the neural network. The user might be able to configure the number of training sets to be saved. As new training data becomes available, new training sets are constructed and saved. When the specified number of training sets has been accumulated (in a "stack"), the next training set created based on new lab data would "bump" the oldest training set out of the stack. This oldest training set would be discarded. Conventional neural network training creates training sets all at once, off-line, and would keep using all the training sets created.
A second (or "stack") approach which can be used is to maintain .a time history of input data and training input data in the historical database 1210, and to search the historical database 1210, locating training input data and constructing the eorresponding training set by retrieving the associated input data.' VJO 92/t12866 PC?/US~1/05259 It should be understood that the combination of the neural netovork 1206 and the. historical database . 1210 containing both the input data and the training input data with their associated timestamps provides a .very powerful platform for building, training and using the neural-,network 1206. The present invention contemplates various other modes of using the data in the historical database 1210 and the neural network 1206 to prepare training sets for training the neural network 1206:
5. Error Acceptable? Step and Module 310

Referring again to Figure 3, once the neural network 1206 has been trained in step and module 308, a step and module 310 of determining whether an acceptable error exists occurs. Figure 11 shows a representative embodiment of the error acceptable? step and module 310.
Referring now to Figure 11, an order pointer 320 indicates that a compute global error using saved global error step and module 1102 occurs. The term global error as used herein means the error over all the outputs 2106 and/or over two or more training sets (cycles) of the neural network 1206. The global error reduces the effects of variation in the error from one training set (cycle) to the next. One cause for the variation is the inherent variation in lab data tests used to generate the training input data.
Once the global error has been computed (estimated) in the step and module 1102, it is saved in a step and module 1104. The global error may be saved internally in the neural network 1206, or it may be stored in the historical database 1210. Storing the global error in the historical database 1210 provides an historical record of the overall performance of the neural network 1206.

Thereafter, if an appropriate history of global error is available (as would be the case in retraining), a step and module 1106 can be used to determine if the global error is statistically different from zero. This step and module 1106 determines whether a sequence of global error values falls within the expected range of variation around the expected (desired) value of zero, or whether the global error is statistically significantly different from zero. This step and module 1106 can be important when the training input data used to compute the global error has significant random variability. If the neural network 1206 is making accurate predictions, the random variability in the training input data (for example, caused by lab variation) will cause random variation of the global error around zero. This step and module 1106 reduces the tendency to incorrectly classify as not acceptable the predicted outputs of the neural network 1206.
If the global error is not statistically different from zero, then the global error is acceptable, and the present invention moves to an order pointer 122. An acceptable error indicated by order pointer 122 means that the neural network 1206 is trained. This completes step and module 104.
However, if the global error is statistically different from zero, the present invention in the retrain mode moves to a step and module 1108, which is called training input data statistically valid?. (Note that step and module 1108 is not needed in the training mode of step and module 104. In the training mode, a global error statistically different from zero moves directly to an order pointer 322.) If the training input data in the retraining mode is not statistically valid, this indicates that the acceptability of the global error cannot be determined, and the present invention moves to the order pointer 122. However, if the training input data is statistically valid, this indicates that the error is not acceptable, and the present invention moves back to the wait training input data interval step and module 304, as indicated in Figure 3.

The steps and modules described here for determining whether the global error is acceptable constitute one example of implementing a global error acceptable metric. It should be understood that different process characteristics, different sampling frequencies, and different measurement techniques (for process conditions and product properties) may indicate alternate methods of determining whether the error is acceptable. The present invention contemplates any method of creating an error acceptable metric.
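As one concrete (and purely illustrative) instance of such a metric, a sequence of saved global error values can be compared with zero through its mean and standard error; the threshold and test form below are assumptions, not the patent's prescription:

```python
from math import sqrt
from statistics import mean, stdev

def global_error_acceptable(global_errors, z=1.96):
    """Step/module 1106, sketched as a simple z-style test: the error is
    acceptable when the mean global error is not statistically
    distinguishable from zero given the observed variation."""
    if not global_errors:
        return True            # no history yet; nothing to reject
    if len(global_errors) < 2:
        return abs(global_errors[0]) < 1e-9
    m, s = mean(global_errors), stdev(global_errors)
    if s == 0.0:
        return m == 0.0
    return abs(m) <= z * s / sqrt(len(global_errors))
```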
Thus, it has been seen that the present invention in step and module 104 configures and trains the neural network 1206 for use in the present invention.
C. Predict Output Data Using Neural Network Step and Module 106

Referring again to Figure 1, the order pointer 122 indicates that there are two parallel paths that the present invention uses after the configure and train neural network step and module 104. One of the paths, which the predict output data using neural network step and module 106 described below is part of, is used for predicting output data using the neural network 1206, for retraining the neural network 1206 using these predicted output data, and for disabling control of the controlled process when the (global) error from the neural network 1206 exceeds a specified error acceptable metric (criterion). The other path is the actual control of the process using the predicted output data from the neural network 1206.

Turning now to the predict output data using neural network step and module 106, this step and module 106 uses the neural network 1206 to produce output data for use in control of the process and for retraining the neural network 1206. Figure 4 shows a representative embodiment of the step and module 106.
Turning now to Figure 4, a wait specified prediction interval step or module 402 utilizes the method or procedure specified by the user in steps or modules 3106 and 3108 for determining when to retrieve input data. Once the specified prediction interval has elapsed, the present invention moves to a retrieve input data at current time from historical database step or module 404. The input data is retrieved at the current time. That is, the most recent value available for each input data value is retrieved from the historical database 1210.
The neural network 1206 then predicts output data from the retrieved input data, as indicated by a step and module 406. This output data is used for process control, retraining and control purposes as discussed below in subsequent sections. Prediction is done using any presently known or future developed approach. For example, prediction can be done as specified above in Section I.B.
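Under the same assumed interfaces used earlier, the prediction path of steps and modules 402-406 (with storage per step and module 408) is a short loop:

```python
import time

def predict_loop(db, net, interval_s):
    """Steps/modules 402-408 (db and net interfaces assumed)."""
    while True:
        time.sleep(interval_s)                   # 402: wait prediction interval
        x = db.input_data_at_current_time()      # 404: most recent input values
        y = net.predict(x)                       # 406: predict output data 1218
        db.store_output(y, time.time())          # stored for control and retraining
```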
D. Retrain Neural Network Step or Module 108

Referring again to Figure 1, once the predicted output data has been produced by the neural network 1206, a retrain neural network step or module 108 is used.
Retraining of the neural network 1206 occurs when new training input data becomes available. Figure 5 shows a representative embodiment of the retrain neural network step or module 108.


Referring now to Figure 5, an order pointer 124 shows that a new training input data? step or module 306 determines if new training input data has become available. Figure 9 shows a representative embodiment of the new training input data? step or module 306. Step or module 306 was described above in connection with Figure 3; for this reason, it is not described again here.
As indicated by an order pointer 126, if new training data is not present, the present invention returns to the predict output data using neural network step or module 106, as shown in Figure 1.
If new training input data is present, the neural network 1206 is retrained, as indicated by a module or step 308. A representative example of module or step 308 is shown in Figure 10. Since training of the neural network is the same as retraining, and has been described in connection with Figure 3, module or step 308 is not discussed in detail here.
Once the neural network 1206 has been retrained, an order pointer 128 causes the present invention to move to an enable/disable control step or module 110 discussed below.
E. Enable/Disable Control Module or Step 110

Referring again to Figure 1, once the neural network 1206 has been retrained, as indicated by the step or module 108, the present invention moves to an enable/disable control step or module 110. The purpose of the enable/disable control step or module 110 is to prevent the control of the process using output data (predicted values) produced by the neural network 1206 when the error is not acceptable ("poor").
A representative example of the enable/disable control step or module 110 is shown in Figure 6. Referring now to Figure 6, the function of module 110 is to enable control of the controlled process if the error is acceptable, and to disable control if the error is unacceptable. As shown in Figure 6, an order pointer 128 moves the present invention to an error acceptable? step or module 310. If the error between the training input data and the predicted output data is unacceptable, control of the controlled process is disabled by a disable control step and module 604. The disable control step and module 604 sets a flag (indicator) which can be examined by the control process using output data step and module 112 indicating that the output data should not be used for control.

Figure 30 shows a representative embodiment of the enable control step and module 602. Referring now to Figure 30, an order pointer 142 causes the present invention first to move to an output data indicates safety or operability problems?
step or module 3002. If the output data does not indicate a safety or operability problem, this indicates that the process 1212 can continue to operate safely. This is indicated by the fact that the present invention moves to the enable control using output data step or module 3006.
In contrast, if the output data does indicate a safety or operability problem, the present invention recommends that the process being controlled be shut down, as indicated by a recommend process shutdown step and module 3004. This recommendation to the operator of the process 1212 can be made using any suitable approach. An example is a screen display or an alarm indicator. This safety feature allows the present invention to prevent the controlled process 1212 from reaching a critical situation.
If the output data does not indicate safety or operability problems in step and module 3002, or after the recommendation to shut down the process has been made in step and module 3004, the present invention moves to the enable control using output data step and module 3006. This step and module 3006 sets a flag (indicator) which can be examined by step and module 112, indicating that the output data should be used to control the process.
Thus, it can be appreciated that the enable/disable control step or module 110 provides to the present invention the function of (1) allowing control of the process 1212 using the output data in step or module 112, (2) preventing the use of the output data in controlling the process 1212, but allowing the process 1212 to continue to operate, or (3) shutting down the process 1212 for safety reasons.
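A compact sketch of this gating logic, with an illustrative flag name and helper functions that are not part of the patent text:

```python
def enable_disable_control(error_acceptable, safety_problem, state):
    """Modules 310/602/604 and 3002-3006 (flag name illustrative)."""
    if not error_acceptable:
        state["control_enabled"] = False   # 604: output data must not be used
        return
    if safety_problem:                     # 3002: safety/operability check
        recommend_shutdown()               # 3004: operator recommendation
    state["control_enabled"] = True        # 3006: output data may be used

def recommend_shutdown():
    # E.g. a screen display or an alarm indicator.
    print("ALARM: output data indicates a safety/operability problem; "
          "process shutdown recommended")
```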
F. Control Process Using Output Data Step or Module 112

Referring again to Figure 1, the order pointer 122 indicates that the control of the process using the output data from the neural network 1206 runs in parallel with the prediction of output data using the neural network 1206, the retraining of the neural network 1206, and the enable/disable control of the process 1212.
Figure 7 shows a representative embodiment of the control process using output data step and module 112. Referring now to Figure 7, the order pointer 122 indicates that the present invention first moves to a wait controller interval step or module 702. The interval at which the controller operates can be any preselected value. This interval can be a time value, an event, or the occurrence of a data value. Other interval control methods or procedures can be used.
Once the controller interval has occurred, as indicated by the order pointer, the present invention moves to a control enabled? step or module 704. If control has been disabled by the enable/disable control step or module 110, the present invention does not control the process 1212 using the output data. This is indicated by the order pointer marked "NO" from the control enabled? step or module 704.


If control has been enabled, the present invention moves to the retrieve output data from historical database step or module 706. This step or module shows that the output data 1218 (see Figure 12) produced by the neural network 1206 and stored in the historical database 1210 is retrieved (1214) and used by the controller 1202 to compute controller output data 1208 for control of the process 1212.
This control by the controller 1202 of the process 1212 is indicated by an effectively control process using controller to compute controller output step or module 708 of Figure 7.
Thus, it can be appreciated that the present invention effectively controls the process using the output data from the neural network 1206. It should be understood that the control of the process 1212 can be any presently known or future developed approach, including the architecture shown in Figures 15 and 16.
Alternatively, when the output data from the neural network 1206 is determined to be unacceptable, the process 1212 can continue to be controlled by the controller 1202 without the use of the output data.
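The control path of steps and modules 702-708 then reads the flag set above; again the interfaces are assumed for illustration:

```python
import time

def control_loop(db, controller, state, interval_s):
    """Steps/modules 702-708 (controller and db interfaces assumed)."""
    while True:
        time.sleep(interval_s)                 # 702: wait controller interval
        if not state["control_enabled"]:       # 704: control enabled?
            continue                           # "NO": do not use the output data
        y = db.retrieve_output_data()          # 706: output data 1218 via 1214
        u = controller.compute_output(y)       # 708: controller output data 1208
        controller.send_to_actuators(u)
```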
V. Preferred Structure (Architecture)

Discussed above in Section IV has been the preferred method of operation of the present invention. Discussed in this Section are the preferred structures (architecture) of the present invention. However, it should be understood that in the description set forth above, the modular structure (architecture) of the present invention was also discussed in connection with the operation. Thus, certain portions of the structure of the present invention have inherently been described in connection with the description set forth above in Section IV.

The preferred embodiment of the present invention comprises one or more software systems. In this context, a software system is a collection of one or more executable software programs, and one or more storage areas, for example, RAM or disk. In general terms, a software system should be understood to comprise a fully functional software embodiment of a function, which can be added to an existing computer system to provide new function to that computer system.
Software systems generally are constructed in a layered fashion. In a layered system, a lowest level software system is usually the computer operating system which enables the hardware to execute software instructions. Additional layers of software systems may provide, for example, historical database capability. This historical database system provides a foundation layer on which additional software systems can be built. For example, a neural network software system can be layered on top of the historical database. Also, a supervisory control software system can be layered on top of the historical database system.
A software system is thus understood to be a software implementation of a function which can be assembled in a layered fashion to produce a computer system providing new functionality. Also, in general, the interface provided by one software system to another software system is well-defined. It should be understood in the context of the present invention that delineations between software systems are representative of the preferred implementation. However, the present invention may be implemented using any combination or separation of software systems.
Figure 12 shows a preferred embodiment of the structure of the present invention. Referring now to Figure 12, the process 1212 being controlled receives the raw materials 1222 and produces the product 1216. Sensors 1226 (of any suitable type) provide sensor signals 1221, 1224, which are supplied to the historical database 1210 for storage with associated timestamps. It should be noted that any suitable type of sensor 1226 can be employed which provides sensor signals 1221, 1224.

The historical database 1210 stores the sensor signals 1224 that are supplied to it with associated timestamps as provided by a clock 1230. In addition, as described below, the historical database 1210 also stores output data 1218 from the neural network 1206. This output data 1218 also has associated timestamps provided by the neural network 1206.
Any suitable type of historical database 1210 can be employed. A historical database is generally discussed in Hale and Sellars, "Historical Data Recording for Process Computers," 77 Chem. Eng'g Progress 38 (AIChE, New York, 1981) (which is hereby incorporated by reference).
The historical database 1210 that is used must be capable of storing the sensor input data 1224 with associated timestamps, and the predicted output data 1218 from the neural network 1206 with associated timestamps. Typically, the historical database 1210 will store the sensor data 1224 in a compressed fashion to reduce storage space requirements, and will store sampled (lab) data 1304 in uncompressed form.
Often, the historical database 1210 will be present in a chemical plant in the existing process control system. The present invention can utilize this historical database to achieve the improved process control obtained by the present invention.

A historical database is a special type of database in which at least some of the data is stored with associated timestamps. Usually the timestamps can be referenced in retrieving (obtaining) data from a historical database.


The historical database 1210 can be implemented as a stand-alone software system which forms a foundation layer on which other software systems, such as the neural network 1206, can be layered. Such a foundation layer historical database system can support many functions in a process control environment. For example, the historical database can serve as a foundation for software which provides graphical displays of historical process data for use by a plant operator. An historical database can also provide data to data analysis and display software which can be used by engineers for analyzing the operation of the process 1212. Such a foundation layer historical database system will often contain a large number of sensor data inputs, possibly a large number of laboratory data inputs, and may also contain a fairly long time history for these inputs.

It should be understood, however, that the present invention requires a very limited subset of the functions of the historical database 1210. Specifically, the present invention requires the ability to store at least one training data value with the timestamp which indicates an associated input data value, and the ability to store such an associated input data value. In certain circumstances where, for example, a historical database foundation layer system does not exist, it may be desirable to implement the essential historical database functions as part of the neural network software. By integrating the essential historical database capabilities into the neural network software, the present invention can be implemented in a single software system. It should be understood that the various divisions among software systems used to describe the present invention are only illustrative in describing the best mode as currently practiced. Any division or combination among various software systems of the steps and elements of the present invention may be used.

The historical database 1210, as used in the present invention, can be implemented using a number of methods. For example, the historical database can be built as a random access memory (RAM) database. The historical database 1210 can also be implemented as a disk-based database, or as a combination of RAM and disk databases. If an analog neural network 1206 is used in the present invention, the historical database 1210 could be implemented using a physical storage device. The present invention contemplates any computer or analog means of performing the functions of the historical database 1210.
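The limited subset of historical database functions the text requires (timestamped storage and time-referenced retrieval) fits in a few lines of RAM-based Python; this class is a sketch of that subset, not the patent's implementation:

```python
import bisect

class HistoricalDatabase:
    """Minimal RAM sketch of historical database 1210's required subset."""
    def __init__(self):
        self._series = {}   # name -> sorted list of (timestamp, value)

    def store(self, name, timestamp, value):
        bisect.insort(self._series.setdefault(name, []), (timestamp, value))

    def value_at(self, name, timestamp):
        """Most recent stored value at or before the given timestamp."""
        points = self._series.get(name, [])
        i = bisect.bisect_right(points, (timestamp, float("inf")))
        return points[i - 1][1] if i else None
```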
The neural network 1206 retrieves input data 1220 with associated timestamps. The neural network 1206 uses this retrieved input data 1220 to predict output data 1218. The output data 1218 with associated timestamps is supplied to the historical database 1210 for storage.
A representative embodiment of the neural network 1206 is described above in Section I. It should be understood that neural networks, as used in the present invention, can be implemented in any way. For example, the preferred embodiment uses a software implementation of a neural network 1206. It should be understood, however, that any form of implementing a neural network 1206 can be used in the present invention, including physical analog forms. Specifically, as described below, the neural network may be implemented as a software module in a modular neural network control system.
It should also be understood with regard to the present invention that software and computer embodiments are only one possible way of implementing the various elements in the systems and methods. As mentioned above, the neural network 1206 may be implemented in analog or digital form and also, for example, the controller 1202 may also be implemented in analog or digital form. It should be understood, with respect to the method steps as described above for the functioning of the systems as described in this section, that operations such as computing (which imply the operation of a digital computer) may also be carried out in analog equivalents or by other methods.

Returning again to Figure 12, the output data 1214 with associated timestamps stored in the historical database 1210 is supplied by a path 1214 to the controller 1202. This output data 1214 is used by the controller 1202 to generate controller output data 1208 sent to an actuator(s) 1228 used to control a controllable process state 2002 of the process 1212. Representative examples of controller 1202 are discussed below.
The shaded box shown in Figure 12 indicates that the neural network 1206 and the historical database 1210 may, in a variant of the present invention, be implemented as a single software system. This single software system could be delivered to a computer installation in which no historical database previously existed, to provide the functions of the present invention. Alternately, a neural network configuration function (or program) 1204 could also be included in this software system.
Two additional aspects of the architecture and structure shown in Figure 12 are as follows. First, it should be noted that the controller 1202 may also be provided with input data 1221 from sensors 1220. This input data is provided directly to controller 1202 from these sensor(s).
Second, the neural network configuration module 1204 is connected in a bi-directional path configuration with the neural network 1206. The neural network configuration module 1204 is used by the user (developer) to configure and control the neural network 1206 in a fashion as discussed above in connection with the step and module 104 (Figure 1), or in connection with the user interface discussion contained below.
Turning now to Figure 13, an alternate preferred embodiment of the structure and architecture of the present invention is shown. Only differences between the embodiment of Figure 12 and that of Figure 13 are discussed here. These differences are as follows.
A laboratory ("lab") 1307 is supplied with samples 1302.
These samples 1302 could be physical specimens or some type of data from an analytical test or reading. Regardless of the form, the lab takes this material/data and utilizes it to produce actual measurements 1304, which are supplied to the historical database 1210 with associated timestamps. The values 1304 are stored in the historical database 1210 with their associated timestamps.
Thus, the historical database 1210 also now contains actual test results or actual lab results in addition to sensor input data. It should be understood that a laboratory is illustrative of a source of actual measurements 1304 which are useful as training input data. Other sources are encompassed by the present invention. Laboratory data can be electronic data, printed data, or data exchanged over any communications link.

The second difference in this embodiment is that the neural network 1206 is supplied with the lab data 1304 and associated timestamps stored in the historical database 1210. Another addition to the architecture of Figure 12 is error data 1504 (Figure 15) supplied by the neural network 1206 with associated timestamps to the historical database 1210 for storage.

Thus, it can be appreciated that the embodiment of Figure 13 allows the present invention to utilize lab data 1304 as training input data 1306 to train the neural network.
Turning now to Figure 14, a representative embodiment of the controller 1202 is shown. The embodiment utilizes a regulatory controller 1406 for regulatory control of the process 1212. Any type of regulatory controller which provides such regulatory control is contemplated. There are many commercially available embodiments for such a regulatory controller. Typically, the present invention would be implemented using regulatory controllers already in place. In other words, the present invention can be integrated into existing process control systems.

In addition to the regulatory controller 1406, the embodiment shown in Figure 14 also includes a supervisory controller 1408. The supervisory controller 1408 computes supervisory controller output data, computed in accordance with the predicted output data 1214. In other words, the supervisory controller 1408 utilizes the predicted output data 1214 from the neural network 1206 to produce supervisory controller output data 1402.

The supervisory controller output data 1402 is supplied to the regulatory controller 1406 for changing the regulatory controller setpoint 1404 (or other parameter of regulatory controller 1406). In other words, the supervisory controller output data 1402 is used for changing the regulatory controller setpoint 1404 so as to change the regulatory control provided by the regulatory controller 1406.
Any suitable type of supervisory controller 1408 can be employed by the present invention, including commercially available embodiments. The only limitation is that the supervisory controller 1408 be able to use the predicted output data 1214 to compute the supervisory controller output data 1402 used for changing the regulatory controller setpoint (parameter) 1404.
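As a hedged illustration of this supervisory relationship, the following Python sketch (all names and tuning values hypothetical) computes a new regulatory controller setpoint from the predicted output data:

    def supervisory_step(predicted_output, target, setpoint,
                         gain=0.1, setpoint_low=0.0, setpoint_high=100.0):
        """One supervisory control step: move the regulatory controller's
        setpoint so the predicted product property approaches its target.
        The gain and setpoint limits are illustrative tuning parameters."""
        error = target - predicted_output
        new_setpoint = setpoint + gain * error
        # Respect the setpoint high/low limits of the regulatory controller.
        return max(setpoint_low, min(setpoint_high, new_setpoint))

In the arrangement of Figure 14, the regulatory controller then holds the process at this setpoint in the usual way; the supervisory layer does not manipulate the actuator directly.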
The present invention contemplates the supervisory controller 1408 being in a software and hardware system which is physically separate from the regulatory controller 1406.
For example, in many chemical processes, the regulatory controller 1406 is implemented as a digital distributed control system (DCS). These digital distributed control systems provide a very high level of robustness and reliability for regulating the process 1212. The supervisory controller 1408, in contrast, may be implemented on a host-based computer, such as a VAX (VAX is a trademark of DIGITAL EQUIPMENT CORPORATION, Maynard, Massachusetts).
Referring now to Figure 15, a more detailed embodiment of the present invention is shown. In this embodiment, the supervisory controller 1408 is separated from the regulatory controller 1406. The three shaded boxes shown in Figure 15 suggest various ways in which the functions of the supervisory controller 1408, the neural network configuration program 1204, the neural network 1206 and the historical database 1210 can be implemented. For example, the box labeled 1502 shows how the supervisory controller 1408 and the neural network 1206 can be implemented together in a single software system.
This software system may take the form of a modular system as described below in Figure 16. Alternately, the neural network configuration program 1204 may be included as part of the software system. These various software system groupings are indicative of various ways in which the present invention can be implemented. However, it should be understood that any combination of functions into various software systems can be used to implement the present invention.
Referring now to Figure 16, a representative embodiment 1502 of the neural network 1206 combined with the supervisory controller 1408 is shown. This embodiment is called a modular supervisory controller approach. The modular architecture that is shown illustrates that the present invention contemplates the use of various types of modules which can be implemented by the user (developer) in configuring neural network(s) 1206 in combination with supervisory control functions so as to achieve superior process control operation.

Several modules that can be implemented by the user of the present invention are shown in the embodiment of Figure 16. Specifically, in addition to the neural network module 1206, the modular embodiment of Figure 16 also includes a feedback control module 1602, a feedforward control module 1604, an expert system module 1606, a cusum (cumulative summation) module 1608, a Shewhart module 1610, a user program module 1612, and a batch event module 1614. Each of these can be selected by the user. The user can implement more than one of each of these in configuring the present invention.
Moreover, additional types of modules can be utilized.
The intent of the embodiment shown in Figure 16 is to illustrate three concepts. First, the present invention can utilize a modular approach which will ease user configuration of applications of the present invention. Second, the modular approach allows for much more complicated systems to be configured since the modules act as basic building blocks which can be manipulated and used independently of each other.
Third, the modular approach shows that the present invention can be integrated into other process control systems. In other words, the present invention can be implemented into the system and method of the United States patents and patent applications which are incorporated herein by reference as noted above.

Specifically, this modular approach allows the neural network capability of the present invention to be integrated with the expert system capability described in the above-noted patents and patent applications. As described above, this enables the neural network capabilities of the present invention to be easily integrated with other standard control functions such as statistical tests and feedback and feedforward control. However, even greater function can be achieved by combining the neural network capabilities of the present invention, as implemented in this modular embodiment, with the expert system capabilities of the above-noted patent applications, also implemented in the modular embodiment.
This easy combination and use of standard control functions, neural network functions, and expert system functions allows a very high level of capability to be achieved in solving process control problems.
The modular approach to building neural networks gives two principal benefits. First, the specification needed from the user is greatly simplified so that only data is required to specify the configuration and function of the neural network. Second, the modular approach allows for much easier integration of neural network functions with other related control functions, such as feedback control, feedforward control, etc.
In contrast to a programming approach to building a neural network, a modular approach provides a partial definition beforehand of the function to be provided by the neural network module. The predefined function for the module determines the procedures that need to be followed to carry out the module function, and it determines any procedures that need to be followed to verify the proper configuration of the module. The particular function will define the data requirements to complete the specification of the neural network module. The specifications for a modular neural network would be comprised of configuration information which defines the size, connectivity and behavior of the neural network in general, and the data interactions of the neural network which define the source and location of data that will be used and created by the network.

Two approaches can be used to simplify the user configuration of neural networks. First, a limited set of procedures can be prepared and implemented in the modular neural network software. These predefined functions will by nature define the specifications needed to make these procedures work as a neural network module. For example, the creation of a neural network module which is fully connected, has one hidden or middle layer, and has no feedback would require the specification of the number of inputs, the number of middle elements, and the number of outputs. It would not require the specification for the connections between the inputs, the outputs and elements. Thus, the user input required to specify such a module is greatly simplified. This predefined procedure approach is the preferred method of implementing the modular neural network.
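The following Python sketch (using NumPy; all names hypothetical) suggests what such a predefined procedure might look like: the user supplies only the three counts, and the full connectivity, single hidden layer, and absence of feedback are implied by the module type.

    import numpy as np

    def make_feedforward_module(n_inputs, n_middle, n_outputs, seed=0):
        """Predefined procedure: fully connected network with one hidden
        (middle) layer and no feedback. Only the three sizes are given;
        the individual connections need not be enumerated by the user."""
        rng = np.random.default_rng(seed)
        w1 = rng.normal(scale=0.1, size=(n_inputs, n_middle))
        w2 = rng.normal(scale=0.1, size=(n_middle, n_outputs))

        def predict(x):
            hidden = np.tanh(np.asarray(x) @ w1)  # sigmoidal transfer function
            return hidden @ w2

        return predict, (w1, w2)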
A second approach that could be used to provide modular neural network function is to allow a limited set of natural language expressions to be used to define the neural network. In such an implementation, the user or developer would be permitted to enter, through typing or other means, natural language definitions for the neural network. For example, the user may enter text which might read, for example, "I want a fully connected feedforward neural network." These user inputs can be parsed, searching for specification combinations of terms, or their equivalents, which would allow the specific configuration information to be extracted from the restricted natural language input.

By parsing the total user input provided in this method, the complete specification for a neural network module could be obtained. Once this information is known, two approaches could be used to generate a runnable module.

The first approach would be to search for a predefined procedure matching the configuration information provided by the restricted natural language input. This would be useful where users tend to specify the same basic neural network functions for many problems.
A second approach could provide for much more flexible creation of neural network function. In this approach, the specifications obtained by parsing the natural language input could be used to generate a neural network procedure by actually generating runnable or compilable code. In this approach, the neural network functions would be defined in relatively small increments as opposed to the approach of providing a complete predefined neural network function.
This approach may combine, for example, a small function which is able to obtain input data and populate a set of inputs. By combining a number of such small functional pieces and generating code which reflects and incorporates the user specifications, a complete neural network procedure could be generated.
This approach could optionally include the ability to query the user for specifications which have been neglected or omitted in the restricted natural language input. Thus, for example, if the user neglected to specify the number of outputs in the network, the user could be prompted for this information and the system could generate an additional line of user specification reflecting the answer to the query.
The parsing and code generation in this approach use pre-defined, small sub-functions of the overall neural network function. A given key word (term) corresponds to a certain sub-function of the overall neural network function. Each sub-function has a corresponding set of key words (terms) and associated key words and numeric values. Taken together, each key word and associated key words and values constitute a symbolic specification of the neural network sub-function.
The collection of all the symbolic specifications makes up a symbolic specification of the entire neural network function.
The parsing step processes the substantially natural language input. It removes the unnecessary natural language words, and groups the remaining key words and numeric values into symbolic specifications of neural network subfunctions.
One way to implement parsing is to break the input into sentences and clauses bounded by periods and commas, and restrict the specification to a single subfunction per clause.
Each clause is searched for key words, numeric values, and associated key words. The remaining words are discarded. A
given key word (term) corresponds to a certain sub-function of the overall neural network function.
Or, key words can have relational tag words, like "in," "with," etc., which can indicate the relation of one key word to another. For example, in the specification "3 nodes in the hidden layer," the word "in" relates "3 nodes" to "hidden layer," so that a hidden layer sub-function specification is indicated. Using such relational tag words, multiple sub-function specifications could be processed in the same clause.
Key words can be defined to have equivalents. For example, the user might be allowed, in an embodiment of this aspect of the invention, to specify the transfer function (activation function) used in the elements (nodes) in the network. Thus the key word might be "activation function" and an equivalent might be "transfer function." This key word corresponds to a set of pre-defined sub-functions which implement various kinds of transfer functions in the neural network elements. The specific data that might be allowed in combination with this term might be, for example, the term "sigmoidal" or the word "threshold." These specific data, combined with the key word, indicate which of the sub-functions should be used to provide the activation function capability in the neural network when it is constructed.

Another example might be the key word "nodes," which might have an equivalent "elements." The associated data would be an integer number which indicates the number of nodes in a given layer. In this particular case, it might be advantageous to look for the numeric data in combination with the word or term "in" and the key word "hidden layer," etc.
In combination, these might specify the number of nodes in the middle layer. Thus, it can be seen that various levels of flexibility in the substantially natural language specification can be provided. Increasing levels of flexibility require more detailed and extensive specification of key words and associated data with their associated key words.
In contrast, the key word "fully connected" might have no associated input. By itself, it conveys the entire meaning.
The neural network itself is constructed, using this method, by processing the specifications, as parsed from the substantially natural language input, probably in a predefined order, and generating the fully functional procedure code for the neural network from the procedural sub-function code fragments.
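Continuing the sketch above (again entirely hypothetical), code generation can be little more than concatenating pre-defined source fragments, one per parsed sub-function specification, in a fixed order:

    # Hypothetical source fragments, one per sub-function specification.
    FRAGMENTS = {
        "input_layer":  "n_inputs = {0}",
        "hidden_layer": "n_middle = {0}",
        "output_layer": "n_outputs = {0}",
    }

    def generate_procedure(specs):
        """Emit compilable source by filling each fragment with its parsed
        numeric value and concatenating the results in a predefined order."""
        lines = ["def build():",
                 "    n_inputs = n_middle = n_outputs = 1  # defaults"]
        for spec in specs:
            for sub in spec["subfunctions"]:
                if sub in FRAGMENTS and spec["values"]:
                    lines.append("    " + FRAGMENTS[sub].format(spec["values"][0]))
        lines.append("    return (n_inputs, n_middle, n_outputs)")
        return "\n".join(lines)

    source = generate_procedure([{"subfunctions": ["hidden_layer"], "values": [3]}])
    exec(compile(source, "<generated>", "exec"))  # defines build()

The defaults line also shows where the query-the-user step described above could substitute an answered value for a neglected specification.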
The other major advantage of a modular approach is the ease of integration with other functions in the application (problem) domain. For example, in the process control domain, it may be desirable or productive to combine the functions of a neural network with other more standard control functions such as statistical tests, feedback control, etc. The implementation of neural networks as modular neural networks in a larger control system can greatly simplify this kind of implementation.

The incorporation of modular neural networks into a modular control system is beneficial because it makes it easy to create and use neural network predictions in a control application. However, the application of modular neural networks in a control system is different from the control functions that are typically found in a control system. For example, the control functions described in some of the United States patents and patent applications incorporated by reference above generally rely on the current information for their actions, and they do not generally define their function in terms of past data. In order to make a neural network function effectively in a modular control system, some means is needed to train and operate the neural network using data which is not generally available by retrieving current data values. The systems and methods of the present invention, as described above, provide this essential capability which allows a modular neural network function to be implemented in a modular control system.
A modular neural network has several characteristics which significantly ease its integration with other control functions. First, the execution of neural network functions, prediction and/or training, is easily coordinated in time with other control functions. The timing and sequencing capabilities of a modular implementation of a neural network provide this capability. Also, when implemented as a modular function, neural networks can make their results readily accessible to other control functions that may need them. This can be done, for example, without needing to store the neural network outputs in an external system such as a historical database.


Modular neural networks can run either synchronized or unsynchronized with other functions in the control system. Any number of neural networks can be created within the same control application, or in different control applications, within the control system. This may significantly facilitate the use of neural networks to make predictions of output data where several small neural networks may be more easily or rapidly trained than a single large neural network. Modular neural networks also provide a consistent specification and user interface so that a user trained to use the modular neural network control system can address many control problems without learning new software.
An extension of the modular concept is the specification of data using pointers. Here again, the user (developer) is offered the easy specification of a number of data retrieval or data storage functions by simply selecting the function desired and specifying the data needed to implement the function. For example, the retrieval of a time-weighted average from the historical database is one such predefined function. By selecting a data type such as a time-weighted average, the user (developer) need only specify the specific measurement desired and the starting and ending time boundaries, and the predefined retrieval function will use the appropriate code or function to retrieve the data. This significantly simplifies the user's access to data which may reside in a number of different process data systems. By contrast, without the modular approach, the user would have to be skilled in the programming techniques needed to write the calls to retrieve the data from these data systems.
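A sketch of such a predefined retrieval function follows, assuming samples held as sorted (time, value) pairs with timestamps in seconds and a zero-order hold between samples; these are assumptions of the illustration, not requirements of the invention.

    def time_weighted_average(samples, t_start, t_end):
        """Predefined retrieval function: time-weighted average of one
        measurement between a starting and an ending time boundary.
        `samples` must cover the interval and be sorted by timestamp."""
        area, prev_t, prev_v = 0.0, None, None
        for t, v in samples:
            t = max(t_start, min(t_end, t))      # clip to the boundaries
            if prev_t is not None and t > prev_t:
                area += prev_v * (t - prev_t)    # hold last value between samples
            prev_t, prev_v = t, v
        if prev_t is not None and prev_t < t_end:
            area += prev_v * (t_end - prev_t)    # hold last value to the end
        return area / (t_end - t_start)

    time_weighted_average([(0, 10.0), (60, 20.0)], 0, 120)  # -> 15.0

The user selects the data type and supplies only the measurement and the two time boundaries; a function body such as this is the "appropriate code" invoked on the user's behalf.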
A further development of the modular approach of the present invention is shown in Figure 17. Figure 17 shows the neural network 1206 in a modular form.


Referring now to Figure 17, a specific software embodiment of the modular form of the present invention is shown. In this modular embodiment, a limited set of neural network module types 1702 is provided. Each neural network module type 1702 allows the user to create and configure a neural network module implementing a specific type of neural network. Different types of neural networks may have different connectivity, different numbers of layers of elements, different training methods and so forth. For each neural network module type, the user may create and configure neural network modules. Three specific instances of neural network modules are shown as 1702', 1702'', and 1702'''.
In this modular software embodiment, neural network modules are implemented as data storage areas which contain a procedure pointer 1710', 1710'', 1710''' to procedures which carry out the functions of the neural network type used for that module. The neural network procedures 1706' and 1706'' are contained in a limited set of neural network procedures 1704. The procedures 1706', 1706'' correspond one to one with the neural network types contained in the limited set of neural network types 1702.

In this modular software embodiment, many neural network modules may be created which use the same neural network procedure. In this case, the multiple modules each contain a procedure pointer to the same neural network procedure 1706' or 1706''. In this way, many modular neural networks can be implemented without duplicating the procedure or code needed to execute or carry out the neural network functions.
Referring now to Figure 18, a more specific software embodiment of the modular neural network is shown. This embodiment is of particular value when the neural network modules are implemented in the same modular software system as modules performing other functions such as statistical tests or feedback control.

Because neural networks can use a large number of inputs and outputs with associated error values and training input data values, and also because neural networks can require a large number of weight values which need to be stored, neural network modules may have significantly greater storage requirements than other module types in the control system. In this case, it is advantageous to store neural network parameters in a separate neural network parameter storage area 1804. This structure means that modules implementing functions other than neural network functions need not reserve unused storage sufficient for neural networks.

In this modular software embodiment, each instance of a modular neural network 1702' and 1702'' contains two pointers. The first pointers 1710' and 1710'' are the procedure pointers described above in reference to Figure 17. Each neural network module also contains a second pointer, parameter pointers 1802' and 1802'', which point to a storage area 1806', 1806'' for network parameters in a neural network parameter storage area 1804. Only neural network modules need contain the parameter pointers 1802', 1802'' to the neural network parameter storage area 1804. Other module types such as control modules which do not require such extensive storage need not have the storage allocated via the parameter pointer 1802.
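The two-pointer structure of Figures 17 and 18 can be suggested with a short Python sketch (all names hypothetical): procedures live in one shared, limited set, and weight storage is allocated only for neural network modules.

    import numpy as np

    def feedforward_procedure(params, x):
        """One member of the limited set of neural network procedures."""
        hidden = np.tanh(np.asarray(x) @ params["w1"])
        return hidden @ params["w2"]

    PROCEDURES = {"feedforward": feedforward_procedure}  # one copy of each procedure
    PARAMETER_STORAGE = {}   # separate parameter storage area (cf. 1804)

    class NeuralNetworkModule:
        def __init__(self, name, procedure_name, params):
            self.procedure = PROCEDURES[procedure_name]  # procedure pointer
            self.param_key = name                        # parameter pointer
            PARAMETER_STORAGE[name] = params             # only NN modules allocate here

        def run(self, x):
            return self.procedure(PARAMETER_STORAGE[self.param_key], x)

Two modules created with the same procedure name share one copy of the procedure code while keeping distinct weights, and module types other than neural networks simply never allocate entries in the parameter storage area.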
Figure 22 shows representative aspects of the architecture of the neural network 1206. The representation in Figure 22 is particularly relevant in connection with the modular neural network approach shown in Figures 16, 17 and 18 discussed above.

Referring now to Figure 22, the components to make and use a representative embodiment of the neural network 1206 are shown in an exploded format.
The neural network 1206 must contain a neural network model. As stated above, the present invention contemplates all presently available and future developed neural network models and architectures. As shown in Figure 22, the neural network model 2202 can have a fully connected 2220 aspect, or a no feedback 2222 aspect. These are just examples. Other aspects or architectures for the neural network model 2202 are contemplated.
The neural network 1206 must have access to input data and training input data and access to locations in which it can store output data and error data. The preferred embodiment of the present invention uses an on-line approach.
In this approach, the data is not kept in the neural network 1206. Instead, data pointers are kept in the neural network which point to data storage locations in a separate software system. These data pointers, also called data specifications, can take a number of forms and can be used to point to data used for a number of purposes.
For example, input data pointer 2204 and output data pointer 2206 must be specified. As shown in the exploded view, the pointer can point to or use a particular data source system 2224 for the data, a data type 2226, and a data item pointer 2228.
Neural network 1206 must also have a data retrieval function 2208 and a data storage function 2210. Examples of these functions are callable routines 2230, disk access 2232, and network access 2234. These are merely examples of the aspects of retrieval and storage functions.
Neural network 1206 must also have prediction timing and training timing. These are specified by prediction timing control 2212 and training timing control 2214. One way to implement this is to use a timing method 2236 and its associated timing parameters 2238. Referring now to Figure 24, examples of timing method 2236 include a fixed time interval 2402, new data entry 2404, after another module 2406, on program request 2408, on expert system request 2410, when all training data updates 2412, and batch sequence methods 2414. These are designed to allow the training and function of the neural network 1206 to be controlled by time, data, completion of modules, or other methods or procedures. The examples are merely illustrative in this regard.
Figure 24 also shows examples of the timing parameters 2238. Such examples include the time interval 2416, the module specification 2420, and the sequence specification 2422. Another example is the data item specification (pointer) 2418. As is shown in Figure 24, examples of the data item specification include specifying the data source system 2224, the data type 2226, and the data item pointer 2228 which have been described above.
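Two of the listed timing methods might be sketched as follows (Python, hypothetical names); note that each method carries only the timing parameters it actually needs:

    import time

    class FixedIntervalTimer:
        """Timing method 'fixed time interval' (cf. 2402): trigger when
        the interval, its one timing parameter, has elapsed."""
        def __init__(self, interval_seconds):
            self.interval = interval_seconds
            self.last_run = None

        def should_trigger(self, now=None):
            now = time.monotonic() if now is None else now
            if self.last_run is None or now - self.last_run >= self.interval:
                self.last_run = now
                return True
            return False

    class NewDataTimer:
        """Timing method 'new data entry' (cf. 2404): trigger when the
        watched data item carries a timestamp not yet seen."""
        def __init__(self):
            self.last_stamp = None

        def should_trigger(self, latest_stamp):
            if latest_stamp != self.last_stamp:
                self.last_stamp = latest_stamp
                return True
            return False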
Referring again to Figure 22, training data coordination, as discussed previously, may also be required in many applications. Examples of approaches that can be used for such coordination are shown. One is to use all current values, as represented by reference numeral 2240. Another is to use current training input data values and the input data at the earliest training input data time, as indicated by reference numeral 2242. Another approach is to use the current training input data values with the input data from the latest train time, as indicated by reference numeral 2244.
Again, these are merely examples, and should not be construed as limiting in terms of the type of coordination of training data that can be utilized by the present invention.
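One such coordination approach might be sketched as follows (Python; the database interface is an assumption of this illustration, not a prescription): the newest training input data value is paired with input data retrieved at the training value's timestamp, so that inputs and lab result describe the same process conditions.

    def coordinate_training_data(db, input_tags, training_tag):
        """Pair the newest training input data value with input data
        bearing matching timestamps. `db` is assumed to offer
        latest(tag) -> (timestamp, value) and retrieve(tag, timestamp)."""
        t_train, y_train = db.latest(training_tag)
        x = [db.retrieve(tag, t_train) for tag in input_tags]
        return x, y_train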

The neural network 1206 also needs to be trained, as discussed above. As stated previously, any presently available or future developed training method is contemplated by the present invention. The training method also may be somewhat dictated by the architecture of the neural network model that is used. Examples of aspects of training methods include back propagation 2246, generalized delta 2248, and gradient descent 2250, all of which are well known in the art.
In this regard, reference is made to the article series entitled "Neural Networks Primer," by Maureen Caudill, AI Expert, December 1987 (Part I), February 1988 (Part II), June 1988 (Part III), August 1988 (Part IV), and November 1988 (Part V), all of which are incorporated by reference.
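For concreteness, one back propagation / gradient descent training step for the one-hidden-layer network sketched earlier might read as follows (Python with NumPy; the learning constant corresponds to the training constant entered in the template described in Section III below):

    import numpy as np

    def train_step(w1, w2, x, target, learning_constant=0.1):
        """One gradient descent (generalized delta rule) update for a
        network with tanh middle elements and linear outputs. The error
        data is the training input data minus the output data."""
        x = np.asarray(x, dtype=float)
        hidden = np.tanh(x @ w1)
        y = hidden @ w2
        error = np.asarray(target, dtype=float) - y          # cf. error data 1504
        delta_hidden = (w2 @ error) * (1.0 - hidden ** 2)    # tanh derivative
        w2 += learning_constant * np.outer(hidden, error)    # back propagate
        w1 += learning_constant * np.outer(x, delta_hidden)
        return error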
Referring now to Figure 23, examples of the data source system 2224, the data type 2226, and the data item pointer 2228 are shown for purposes of illustration.
With respect to data source system 2224, examples are an historical database 1210, a distributed control system 1202, a programmable controller 2302, and a networked single loop controller 2304. These are merely illustrative.
Any data source system can be utilized by the present invention. It should also be understood that such source system could either be a storage device or an actual measuring or calculating device. All that is required is that a source of data be specified to provide the neural network 1206 with the input data 1220 that is needed to produce the output data 1218. The present invention contemplates more than one data source system used by the same neural network 1206.
The neural network 1206 needs to know the data type that is being specified. This is particularly important in an historical database 1210 since it can provide more than one type of data. Several examples are shown in Figure 23 as follows: current value 2306, historical value 2308, time-weighted average 2310, controller setpoint 2312, and controller adjustment amount 2314. Other types are contemplated.
Finally, the data item pointer 2228 must be specified. The examples shown are a loop number 2316, a variable number 2318, a measurement number 2320, and a loop tag I.D. 2322. Again, these are merely examples for illustration purposes, since the present invention contemplates any type of data item pointer 2228.
It is thus seen that neural network 1206 can be constructed so as to obtain desired input data 1220 or to provide output data 1218 in any intended fashion. In the preferred embodiment of the present invention, this is all done through menu selection by the user (developer) using a software based system on a computer platform.
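A compact rendering of this three-part data specification in Python (hypothetical names) is:

    from dataclasses import dataclass

    @dataclass
    class DataPointer:
        """Three-part data pointer: where the data lives (data source
        system 2224), what kind it is (data type 2226), and which item
        is meant (data item pointer 2228)."""
        source_system: str   # e.g. "historical_database" or "dcs"
        data_type: str       # e.g. "current_value", "time_weighted_average"
        data_item: str       # e.g. a loop number, variable number, or tag I.D.

    def resolve(pointer, systems):
        """Fetch a value through a pointer. `systems` maps source system
        names to objects exposing get(data_type, data_item)."""
        return systems[pointer.source_system].get(pointer.data_type,
                                                  pointer.data_item)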
The construction of the controller 1202 is shown in Figure 25 in an exploded format. Again, this is merely for purposes of illustration. First, the controller 1202 must be implemented on some hardware platform 2502. Examples of hardware platforms 2502 include pneumatic single loop controller 2514, electronic single loop controller 2516, networked single loop controller 2518, programmable loop controller 2520, distributed control system 2522, and programmable logic controller 2524. Again, these are merely examples for illustration. Any type of hardware platform 2502 is contemplated by the present invention.
In addition to the hardware platform 2502, the controller 1202, 1406, 1408 needs to implement or utilize an algorithm 2504. Any type of algorithm 2504 can be used. Examples shown include: proportional (P) 2526; proportional, integral (PI) 2528; proportional, integral, derivative (PID) 2530; internal model 2532; adaptive 2534; and non-linear 2536. These are merely illustrative of feedback algorithms. However, the present invention also contemplates feedforward or other algorithm approaches.
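For illustration, a textbook discrete PID algorithm, one of the feedback algorithm types listed above, might be sketched as follows (Python; the gains and limits anticipate the parameters 2506 discussed next):

    class PID:
        """Discrete proportional-integral-derivative feedback algorithm."""
        def __init__(self, kp, ki, kd, out_low=0.0, out_high=100.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.out_low, self.out_high = out_low, out_high
            self.integral = 0.0
            self.prev_error = None

        def step(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt
            derivative = 0.0 if self.prev_error is None \
                else (error - self.prev_error) / dt
            self.prev_error = error
            output = (self.kp * error + self.ki * self.integral
                      + self.kd * derivative)
            # Respect the output high/low limits.
            return max(self.out_low, min(self.out_high, output))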
The controller 1202 also inherently includes parameters 2506. These parameters are utilized by the algorithm 2504. Examples shown include setpoint 1404, proportional gain 2538, integral gain 2540, derivative gain 2542, output high limit 2544, output low limit 2546, setpoint high limit 2548, and setpoint low limit 2550.
The controller 1202 also needs some means for timing its operation. One way to do this is to use a timing means 2508. Timing means 2508, for example, can use a timing method 2236 with associated timing parameters 2238, as previously described. Again, these are merely illustrative.
The controller 1202 also needs to utilize one or more input signals 2510, and to provide one or more output signals 2512. These can take the form of pressure signals 2552, voltage signals 2554, amperage (current) signals 2556, or digital values 2558. In other words, input and output signals can be in either analog or digital format.
III. User Interface

The present invention utilizes a template and menu driven user interface 2600, 2700 which allows the user to configure, reconfigure and operate the present invention. This approach makes the present invention very user friendly. It also eliminates the need for the user to perform any computer programming, since the configuration, reconfiguration and operation of the present invention is carried out in a template and menu format not requiring any actual computer programming expertise or knowledge.
The system and method of the present invention utilizes templates. These templates define certain specified fields that must be addressed by the user in order to configure, reconfigure and operate the present invention. The templates tend to guide the user in using the present invention.
Representative examples of templates for the menu driven system of the present invention are shown in Figures 26-29. These are merely for purposes of illustration.

The preferred embodiment of the present invention uses a two-template specification 2600, 2700 for a neural network module. Referring now to Figure 26, the first template 2600 in this set of two templates is shown. This template 2600 specifies general characteristics of how the neural network 1206 will operate. The portion of the screen within a box labeled 2620, for example, shows how timing options are specified for the neural network module 1206. As previously described, more than one timing option may be provided. This template 2600 provides a training timing option under the label "train" and a prediction timing control specification under the label "run." The timing methods shown in boldface type are chosen from a pop-up menu of various timing methods that are implemented in the preferred embodiment. The parameters needed for the timing method which is chosen are entered in the shaded blocks under the heading "Time Interval and Key Block." These parameters are specified only for timing methods for which they are required. Not all timing methods require parameters, and not all timing methods that require parameters require all the parameters.
In a box labeled 2606 bearing the heading "Mode and Store Predicted Outputs," the prediction and training functions of the neural network module can be controlled. By putting a check in the box next to either the train or the run designation under "Mode," the training and/or prediction functions of the neural network module 1206 are enabled. By putting a check in the box next to the "when training" and "when running" labels, the storage of predicted output data 1218 can be enabled when the neural network 1206 is training and when the neural network 1206 is predicting (running), respectively.
The size of the neural network 1206 is specified in a box labeled 2622 bearing the heading "network size." In this embodiment of a neural network module 1206, there are three layers only, and the user may specify how many elements or nodes are to be used in each layer. In the preferred embodiment, the number of inputs, outputs and middle nodes is limited to some predefined value.
The coordination of input data with training data is controlled using a checkbox labeled 2608. By checking this box, the user can specify that input data 1220 is to be retrieved such that the timestamps on the input data 1220 correspond with the timestamps on the training input data 1306. The training or learning constant can be entered in a field 2610. This constant determines how aggressively the weights in the neural network 1206 are adjusted when there is an error 1504 between the output data 1218 and the training input data 1306.
The user, by pressing a keypad softkey labeled "dataspec page" 2624, may call up the second template 2700 in the neural network module specification. This template 2700 is shown in Figure 27. This template 2700 allows the user to specify (1) the data inputs 1220, 1306, and (2) the outputs 1218, 1504 that are to be used by the neural network module. A data specification box 2702, 2704, 2706, and 2708 is provided for each of the network inputs 1220, network training inputs 1306, the network outputs 1218, and the summed error output, respectively. These correspond to the input data, the training input data, the output data, and the error data.
These four boxes use the same data specification methods.

Within each data specification box, the data pointers and parameters are specified. In the preferred embodiment, the data specification comprises a three-part data pointer as described above. In addition, various time boundaries and constraint limits can be specified depending on the data type specified.
Figure 29 shows the various elements which make up the data specification black. These include a data 'title 2902, an indication as to whether the block is scrollable 2906, and an indication of the number of the specification in a scrollable region 2904. The box also contains arrow pointers 2~0 indicating that additional data specifications exist in the list either above or below the displayed specification. These pointers 2922 'and 2932 are displayed as a small arrow when other data is present. Otherwise, they are blank.
The items making up the actual data specification are:
a data system 2224, a data type 2226, a data item pointer or number 2228, a name and units label for the data specification 2908, a label 2924, a time boundary 2926 for the oldest time interval boundary, a label 2928, a time specification 2930 'far the newest time interval boundary, a label 2910, a high limit 2912 for the data value, a label 2914, a low limit value 2916 for the low limit on the data value, a label 2918, and a value 2920 for the maximum allowed change in the data value.

'~1~~ 92/02~6b PCT/U591/05259 ----' _ -85-The data specification shown in Figure:,29 is representative of the preferred mode, of implementing the present; invention. However,; it should be understood that various other modifications of the data specification could be used to giv a more or .less ,flexibility depending on~ the complexity needed ,to address the various data sources which may be present., The, present . invention contemplates any ' ' variation on this data specification method.
Although the foregoing refers to ,particular preferred embodiments, it will be understood that the present invention is not sa limited. It will occur to those'of ordinarily skill in the art that various modifications may be made to the disclosed embodiments, and that such modifications are intended to be within the scope of the present invention.

Claims (41)

CLAIMS:
1. A computer neural network process control method for controlling a process for producing a product having at least one product property, comprising the steps of:
operating the process with one or more sensors connected to sense process conditions and produce at least one process condition measurement for each sensor;
predicting with a neural network first output data using said at least one process condition measurement as input data by summing at least two weighted inputs to an element of said neural network;
controlling an actuator with a supervisory and/or regulatory process controller by computing controller output data using said first output data as controller input data in place of a sensor input data and/or a product property input data; and changing a controllable process state, using said actuator, in accordance with said controller output data.
2. The computer neural network process control method of claim 1, further comprising the steps:
detecting the presence of a new product property measurement;
retrieving input data associated in time with said new product property measurement;
predicting, using said neural network, second output data from said input data;
computing error data from said new product measurement and said second output data; and enabling control of the process using a third output data when said error data is less than a metric.
3. The computer neural network process control method of claim 2, further comprising the step of using said new product property measurement as controller input data in place of said first output data when said error is equal to or greater than said metric.
4. The computer neural network process control method of claim 1, further comprising the steps of:
detecting either the completion of said predicting step by said neural network or the presence of new output data; and initiating said controlling step upon detection by said detecting step.
5. The computer neural network process control method of claim 1, further comprising the steps of:
storing said at least one process condition measurement in an historical database with an associated timestamp; and retrieving said at least one process condition measurement from said historical database for use by said predicting step.
6. The computer neural network process control method of claim 5, wherein said retrieving step precedes said predicting step.
7. The computer neural network process control method of claim 5, further comprising the steps of:
sampling the process and generating a product property measurement and a second associated timestamp;
storing said product property measurement in said historical database with said second associated timestamp; and training the neural network by adjusting weights of said neural network in accordance with said product property measurement and said at least one process condition measurement.
8. The computer neural network process control method of claim 7, wherein said operating step is followed by said training step.
9. The computer neural network process control method of claim 7, further comprising the steps of:
detecting the presence of a second product property measurement having a third associated timestamp; and retraining said neural network by repeating said training step when said second product property measurement is detected.
10. The computer neural network process control method of claim 9, wherein said detecting step further comprises the step of detecting the presence of said second product property measurement by comparing said second associated timestamp with said third associated timestamp.
11. The computer neural network process control method of claim 9, wherein said retraining step further comprises the step of stopping said controlling step when an error measure exceeds a metric.
12. The computer neural network process control method of claim 7, wherein said training step further comprises the steps of:

retrieving said product property measurement from said historical database with said second associated timestamp, as a first training input data;

selecting an associated timestamp value in accordance with said second associated timestamp and retrieving from said historical database, as a second input data, said at least one process condition measurement having said associated timestamp value; and adjusting weights of said neural network in accordance with said first training input data and said second input data.
13. A computer neural network process control method for controlling a process for producing a product having at least one product property, comprising the steps of:

operating the process with one or more sensors connected to sense process conditions and produce at least one process condition measurement for each sensor;

storing said at least one process condition measurement in an historical database with at least one associated timestamp;

sampling the process and generating a product property measurement and a second associated timestamp; and storing said product property measurement in said historical database with said second associated timestamp;

retrieving said at least one process condition measurement from said historical database for use by said predicting step;
and running a modular neural network process control system, comprising the steps of:

running a module timing and sequencing means and independently triggering, in accordance with respective module timing specifications, a predicting submodule and a training submodule of a neural network module;

training a neural network, using said training submodule of said neural network module, when triggered by said module timing and sequencing means, by adjusting weights of said neural network in accordance with said product property measurement and said at least one process condition measurement; and predicting, using said predicting submodule of said neural network module, first output data using said at least one process condition measurement as input data, when triggered by said module timing and sequencing means;

controlling an actuator with a supervisory and/or regulatory process controller by computing controller output data using said first output data as controller input data in place of a sensor input data and/or a product property input data; and changing a controllable process state, using said actuator, in accordance with said controller output data.
14. The computer neural network process control method of claim 13, wherein said training step comprising the steps of:

retrieving said product property measurement from said historical database with said second associated timestamp, as a first training input data;

selecting an associated timestamp value in accordance with said second associated timestamp and retrieving from said historical database, as a second input data, said at least one process condition measurement having said associated timestamp value; and adjusting weights of said neural network in accordance with said first training input data and said second input data.
15. A computer neural network process control method for controlling a process for producing a product having at least one product property, comprising the steps of:

operating the process with one or more sensors connected to sense process conditions and produce at least one process condition measurement for each sensor;

running a modular neural network process control system, comprising the steps of:

running a module timing and sequencing means and triggering, in accordance with module timing specifications, a neural network module; and predicting, using said neural network module, first output data using said at least one process condition measurement as input data, when triggered by said module timing and sequencing means by summing at least two weighted inputs to an element of said neural network;

controlling an actuator with a supervisory and/or regulatory process controller by computing controller output data using said first output data as controller input data in place of a sensor input data and/or a product property input data; and changing a controllable process state, using said actuator, in accordance with said controller output data.
16. The computer neural network process control method of claim 15, further comprising the steps of:

storing said at least one process condition measurement in an historical database with an associated timestamp; and retrieving said at least one process condition measurement from said historical database for use by said predicting step.
17. The computer neural network process control method of claim 16, wherein said predicting step is preceded by said retrieving step.
18. The computer neural network process control method of claim 15, further comprising the step of triggering, in accordance with respective module timing specifications, said supervisory and/or regulatory controller module; and wherein said controlling step is triggered by said module timing and sequencing means.
19. The computer neural network process control method of claim 15, further comprising the steps of:

storing said at least one process condition measurement in an historical database with an associated timestamp; and retrieving said at least one process condition measurement from said historical database for use by said predicting step.
20. The computer neural network process control method of claim 19, wherein said retrieving step precedes said predicting step.
21. A computer neural network process control method for controlling a process for producing a product having at least one product property, comprising the steps of:

operating the process with one or more sensors connected to sense process conditions and produce at least one process condition measurement for each sensor;

storing said at least one process condition measurement in an historical database with an associated timestamp;
sampling the process and generating a product property measurement and a second associated timestamp;

storing said product property measurement in said historical database with said second associated timestamp;

retrieving said at least one process condition measurement from said historical database for use by said predicting step;

running a modular neural network process control system, comprising the steps of:

running a module timing and sequencing means and independently triggering, in accordance with respective module timing specifications a predicting submodule and a training submodule of a neural network module and a supervisory and/or regulatory controller module;

training a neural network, using said training submodule of said neural network module, when triggered by said module timing and sequencing means, by adjusting weights of said neural network in accordance with said product property measurement and said at least one process condition measurement;

predicting, using said predicting submodule of said neural network module, first output data using said at least one process condition measurement as input data, when triggered by said module timing and sequencing means; and controlling an actuator by computing with said supervisory and/or regulatory controller module controller output data using said first output data as controller input data in place of a sensor input data and/or a product property input data, when triggered by said module timing and sequencing means; and changing a controllable process state, using said actuator, in accordance with said controller output data.
22. The computer neural network process control method of claim 21, wherein said training step further comprising the steps of:

retrieving said product property measurement from said historical database with said second associated timestamp, as a first training input data;

selecting an associated timestamp value in accordance with said second associated timestamp and retrieving from said historical database, as a second input data, said at least one process condition measurement having said associated timestamp value; and adjusting weights of said neural network in accordance with said first training input data and said second input data.
23. A computer neural network process control system for controlling a process for producing a product having at least one product property, comprising:

(a) a sensor, for generating a process condition measurement;
(b) a neural network having predicting means for predicting first output data in accordance with input data;
(c) connection means for providing said process condition measurement to said predicting means for use as said input data;
(d) a supervisory and/or regulatory controller for computing a controller output data in accordance with a controller input data, connected to use said first output data as said controller input data in place of a sensor input data and/or a product property input data; and (e) an actuator, connected to use said controller output data, for changing a controllable process state in accordance with said controller output data.
24. The computer neural network process control system of claim 23, wherein said connection means comprises an historical database for storing and providing said process condition measurement with an associated time stamp.
25. The computer neural network process control system of claim 24, further comprising:
laboratory means, for generating a product property measurement;

wherein said historical database is further connected to store and provide said product property measurement with an associated time stamp; and wherein said neural network further comprises training means, connected to use said product property measurement provided by said historical database as training input data, for adjusting weights of said neural network.
26. The computer neural network process control system of claim 23, wherein said neural network comprises a modular neural network process control system, said modular neural network process control system comprising:

at least one module having at least one neural network module containing said predicting means; and module timing and sequencing means, responsive to module data specifications, having triggering means for initiating predicting by said predicting means of said neural network module.
27. The computer neural network process control system of claim 26, wherein said at least one module further comprises at least one controller module, comprising said supervisory and/or regulatory controller; and wherein said triggering means further functions for independently initiating controlling by said supervisory and/or regulatory controller module.
28. The computer neural network process control system of claim 27, wherein said controller module comprises a feedback control module supervising a regulatory controller.
29. The computer neural network process control system of claim 27, wherein said controllable process state directly or indirectly affects the product property; and wherein said at least one module further comprises a feedforward control module connected to directly or indirectly control the process by changing said controllable process state or a second controllable process state affecting the product property.
30. The computer neural network process control system of claim 27, wherein said controllable process state directly or indirectly affects the product property; and wherein said at least one module further comprises a statistical test module, connected to provide statistical data for directly or indirectly controlling the process by changing said controllable process state or a second controllable process state affecting the product property.
31. The computer neural network process control system of claim 26, wherein said connection means comprises an historical database, for storing and providing said process condition measurement with an associated timestamp.
32. The computer neural network process control system of claim 27, wherein said at least one module further comprises at least one controller module having said supervisory and/or regulatory controller; and wherein said triggering means further functions for independently initiating controlling by said at least one controller module.
33. The computer neural network process control system of claim 31, further comprising:
laboratory means for generating a product property measurement;
wherein said historical database is further connected to store and provide said product property measurement with an associated timestamp;
wherein said neural network further comprises training means, connected to use said product property measurement provided by said historical database as training input data, for training said at least one neural network module; and
wherein said triggering means further functions for independently initiating training by said training means.
34. The computer neural network process control system of claim 33, wherein said at least one module further comprises at least one controller module having said supervisory and/or regulatory controller; and wherein said triggering means further functions for independently initiating controlling by said at least one controller module.
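Claims 32 through 34 emphasize that the triggering means initiates predicting, training, and controlling independently of one another. A toy trigger table showing that independence (the tags and activity names are assumptions):

```python
# Hypothetical trigger table: each arriving datum fires only its own
# activity, so training does not block predicting or controlling.
TRIGGERS = {
    "reactor_temp":  ["predict"],          # new sensor value  -> predict
    "viscosity_lab": ["train"],            # new lab value     -> train
    "prediction":    ["control"],          # new prediction    -> control
}

def on_new_data(tag):
    for activity in TRIGGERS.get(tag, []):
        print(f"{tag!r} arrived: independently initiating {activity}")

on_new_data("viscosity_lab")               # training fires on its own,
on_new_data("reactor_temp")                # without a control cycle
```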
35. The computer neural network process control system of claim 26, further comprising a user interface providing a template for entering a size specification of said neural network module and/or a connectivity specification of said neural network module and/or a specification of a source of said input data, wherein said neural network module operates in accordance with said specification(s).
36. The computer neural network process control system of claim 35, wherein said template comprises data pointers for specifying data to be used by said neural network module.
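Claims 35 and 36 describe a template-based user interface whose entries include data pointers naming where each input comes from. A hypothetical template record illustrating the idea; none of these field names come from the patent:

```python
# Hypothetical template record for configuring a neural network module.
nn_template = {
    "size": {"inputs": 2, "hidden": 5, "outputs": 1},   # claim 35: size spec
    "connectivity": "fully-connected",                  # claim 35: connectivity
    "data_pointers": {                                  # claim 36: data pointers
        "input_1":  ("historical_db", "reactor_temp"),
        "input_2":  ("historical_db", "feed_flow"),
        "training": ("historical_db", "viscosity_lab"),
        "output":   ("controller",    "setpoint"),
    },
}
print(nn_template["data_pointers"]["training"])
```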
37. The computer neural network process control system of claim 26, further comprising a user interface for configuring said neural network module using a limited set of natural language format specifications.
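Claim 37's "limited set of natural language format specifications" could be realized as a small fixed grammar of sentences mapped onto configuration fields. A sketch assuming one such grammar; the patterns and keys are invented:

```python
import re

# A guessed "limited set" of natural-language specifications.
SPEC_PATTERNS = {
    r"use (\d+) hidden neurons":         ("hidden", int),
    r"train when (\w+) has a new value": ("train_on", str),
    r"predict every (\d+) seconds":      ("interval_s", int),
}

def parse_spec(sentence):
    for pattern, (key, cast) in SPEC_PATTERNS.items():
        m = re.fullmatch(pattern, sentence.strip().lower())
        if m:
            return {key: cast(m.group(1))}
    raise ValueError(f"not in the limited specification set: {sentence!r}")

config = {}
for line in ["Use 5 hidden neurons",
             "Train when viscosity_lab has a new value",
             "Predict every 30 seconds"]:
    config.update(parse_spec(line))
print(config)  # {'hidden': 5, 'train_on': 'viscosity_lab', 'interval_s': 30}
```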
38. The computer neural network process control system of claim 26, wherein each of said at least one module(s) further comprises:
first storage means for storing module timing and sequencing specifications;
second storage means for storing a pointer to one of a limited set of standard module procedures; and
third storage means for storing parameters for limiting the functions of said standard module procedures.
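Claim 38 gives each module three storage means: timing and sequencing specifications, a pointer into a limited set of standard procedures, and parameters that limit what the chosen procedure may do. A compact sketch of such a module record (all names hypothetical):

```python
# The "limited set" of standard module procedures, as plain callables.
STANDARD_PROCEDURES = {
    "predict": lambda p: f"predict with at most {p['max_inputs']} inputs",
    "train":   lambda p: f"train for up to {p['max_epochs']} epochs",
}

module_record = {
    "timing":    {"period_s": 30, "phase_s": 5},  # first storage means
    "procedure": "train",                         # second: procedure pointer
    "params":    {"max_epochs": 100},             # third: limiting parameters
}

proc = STANDARD_PROCEDURES[module_record["procedure"]]
print(proc(module_record["params"]))              # train for up to 100 epochs
```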
39. The computer neural network process control system of claim 23, wherein said neural network comprises a software system running on a digital computer.
40. The computer neural network process control system of claim 23, wherein said neural network comprises a dedicated neural network integrated circuit.
41. The computer neural network process control system of claim 23, wherein said neural network comprises an analog neural network.
CA002066458A 1990-08-03 1991-07-25 Computer neural network process measurement and control system and method Expired - Lifetime CA2066458C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US07/563,095 1990-08-03
US07/563,095 US5282261A (en) 1990-08-03 1990-08-03 Neural network process measurement and control
PCT/US1991/005259 WO1992002866A1 (en) 1990-08-03 1991-07-25 Computer neural network process measurement and control system and method

Publications (2)

Publication Number Publication Date
CA2066458A1 CA2066458A1 (en) 1992-02-04
CA2066458C true CA2066458C (en) 2003-04-22

Family

ID=24249095

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002066458A Expired - Lifetime CA2066458C (en) 1990-08-03 1991-07-25 Computer neural network process measurement and control system and method

Country Status (5)

Country Link
US (1) US5282261A (en)
EP (1) EP0495044B1 (en)
CA (1) CA2066458C (en)
DE (1) DE69130253T2 (en)
WO (1) WO1992002866A1 (en)

Families Citing this family (233)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3334807B2 (en) * 1991-07-25 2002-10-15 株式会社日立製作所 Pattern classification method and apparatus using neural network
US5396415A (en) * 1992-01-31 1995-03-07 Honeywell Inc. Neuro-PID controller
AU658066B2 (en) * 1992-09-10 1995-03-30 Deere & Company Neural network based control system
US5477444A (en) * 1992-09-14 1995-12-19 Bhat; Naveen V. Control system using an adaptive neural network for target and path optimization for a multivariable, nonlinear process
US5513098A (en) * 1993-06-04 1996-04-30 The Johns Hopkins University Method for model-free control of general discrete-time systems
US5668717A (en) * 1993-06-04 1997-09-16 The Johns Hopkins University Method and apparatus for model-free optimal signal timing for system-wide traffic control
CA2129510C (en) * 1993-10-21 1999-04-13 Sasisekharan Raguram Automatic temporospatial pattern analysis and prediction in a telecommunications network using rule induction
DE4336588C2 (en) * 1993-10-27 1999-07-15 Eurocopter Deutschland Procedure for determining the individual lifespan of an aircraft
US5493631A (en) * 1993-11-17 1996-02-20 Northrop Grumman Corporation Stabilized adaptive neural network based control system
US5444820A (en) * 1993-12-09 1995-08-22 Long Island Lighting Company Adaptive system and method for predicting response times in a service environment
US5694524A (en) * 1994-02-15 1997-12-02 R. R. Donnelley & Sons Company System and method for identifying conditions leading to a particular result in a multi-variant system
US6507832B1 (en) 1994-02-15 2003-01-14 R.R. Donnelley & Sons Company Using ink temperature gain to identify causes of web breaks in a printing system
US6336106B1 (en) 1994-02-15 2002-01-01 R.R. Donnelley & Sons Company System and method for partitioning a real-valued attribute exhibiting windowed data characteristics
US6098063A (en) * 1994-02-15 2000-08-01 R. R. Donnelley & Sons Device and method for identifying causes of web breaks in a printing system on web manufacturing attributes
US5486999A (en) * 1994-04-20 1996-01-23 Mebane; Andrew H. Apparatus and method for categorizing health care utilization
DE19518804A1 (en) * 1994-05-27 1995-12-21 Fraunhofer Ges Forschung Process control
EP0704775A1 (en) * 1994-08-22 1996-04-03 Zellweger Luwa Ag Method and apparatus for estimating relevant quantity in the manufacture of textile products
US5566065A (en) * 1994-11-01 1996-10-15 The Foxboro Company Method and apparatus for controlling multivariable nonlinear processes
US5570282A (en) * 1994-11-01 1996-10-29 The Foxboro Company Multivariable nonlinear process controller
US5704011A (en) * 1994-11-01 1997-12-30 The Foxboro Company Method and apparatus for providing multivariable nonlinear control
AT404074B (en) * 1994-11-10 1998-08-25 Kuerzl Hans Dr METHOD AND DEVICE FOR DETERMINING PRODUCT-SPECIFIC VALUES FOR EXPENDITURE, WASTE AND EMISSIONS WITH SIMULTANEOUS PRODUCTION OF DIFFERENT PRODUCTS
US5630159A (en) * 1994-12-29 1997-05-13 Motorola, Inc. Method and apparatus for personal attribute selection having delay management method and apparatus for preference establishment when preferences in a donor device are unavailable
CA2165277C (en) * 1994-12-29 1999-09-21 William Frank Zancho Method and apparatus for personal attribute selection and management using prediction
US5659667A (en) * 1995-01-17 1997-08-19 The Regents Of The University Of California Office Of Technology Transfer Adaptive model predictive process control using neural networks
US5649064A (en) * 1995-05-19 1997-07-15 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration System and method for modeling the flow performance features of an object
US5812992A (en) * 1995-05-24 1998-09-22 David Sarnoff Research Center Inc. Method and system for training a neural network with adaptive weight updating and adaptive pruning in principal component space
DE19519627C2 (en) * 1995-05-29 1999-04-29 Siemens Ag Process for optimizing the process control of production processes
US5943660A (en) * 1995-06-28 1999-08-24 Board Of Regents The University Of Texas System Method for feedback linearization of neural networks and neural network incorporating same
US6092919A (en) 1995-08-01 2000-07-25 Guided Systems Technologies, Inc. System and method for adaptive control of uncertain nonlinear processes
US5946471A (en) * 1995-08-10 1999-08-31 University Of Cincinnati Method and apparatus for emulating laboratory instruments at remote stations configured by a network controller
US5654903A (en) * 1995-11-07 1997-08-05 Lucent Technologies Inc. Method and apparatus for real time monitoring of wafer attributes in a plasma etch process
US5746511A (en) * 1996-01-03 1998-05-05 Rosemount Inc. Temperature transmitter with on-line calibration using johnson noise
JP3412384B2 (en) * 1996-03-13 2003-06-03 株式会社日立製作所 Control model construction support device
US6017143A (en) * 1996-03-28 2000-01-25 Rosemount Inc. Device in a process system for detecting events
US6907383B2 (en) 1996-03-28 2005-06-14 Rosemount Inc. Flow diagnostic system
US6539267B1 (en) 1996-03-28 2003-03-25 Rosemount Inc. Device in a process system for determining statistical parameter
US8290721B2 (en) * 1996-03-28 2012-10-16 Rosemount Inc. Flow measurement diagnostics
US6654697B1 (en) 1996-03-28 2003-11-25 Rosemount Inc. Flow measurement with diagnostics
US7949495B2 (en) * 1996-03-28 2011-05-24 Rosemount, Inc. Process variable transmitter with diagnostics
US7254518B2 (en) * 1996-03-28 2007-08-07 Rosemount Inc. Pressure transmitter with diagnostics
US7630861B2 (en) * 1996-03-28 2009-12-08 Rosemount Inc. Dedicated process diagnostic device
US7085610B2 (en) 1996-03-28 2006-08-01 Fisher-Rosemount Systems, Inc. Root cause diagnostics
US6438430B1 (en) * 1996-05-06 2002-08-20 Pavilion Technologies, Inc. Kiln thermal and combustion control
US8311673B2 (en) * 1996-05-06 2012-11-13 Rockwell Automation Technologies, Inc. Method and apparatus for minimizing error in dynamic and steady-state processes for prediction, control, and optimization
US7610108B2 (en) * 1996-05-06 2009-10-27 Rockwell Automation Technologies, Inc. Method and apparatus for attenuating error in dynamic and steady-state processes for prediction, control, and optimization
US7418301B2 (en) * 1996-05-06 2008-08-26 Pavilion Technologies, Inc. Method and apparatus for approximating gains in dynamic and steady-state processes for prediction, control, and optimization
US6493596B1 (en) 1996-05-06 2002-12-10 Pavilion Technologies, Inc. Method and apparatus for controlling a non-linear mill
US7058617B1 (en) * 1996-05-06 2006-06-06 Pavilion Technologies, Inc. Method and apparatus for training a system model with gain constraints
US7149590B2 (en) 1996-05-06 2006-12-12 Pavilion Technologies, Inc. Kiln control and upset recovery using a model predictive control in series with forward chaining
US5933345A (en) * 1996-05-06 1999-08-03 Pavilion Technologies, Inc. Method and apparatus for dynamic and steady state modeling over a desired path between two end points
US5822220A (en) * 1996-09-03 1998-10-13 Fisher-Rosemount Systems, Inc. Process for controlling the efficiency of the causticizing process
US6236365B1 (en) * 1996-09-09 2001-05-22 Tracbeam, Llc Location of a mobile station using a plurality of commercial wireless infrastructures
US9134398B2 (en) 1996-09-09 2015-09-15 Tracbeam Llc Wireless location using network centric location estimators
US7714778B2 (en) * 1997-08-20 2010-05-11 Tracbeam Llc Wireless location gateway and applications therefor
WO1998010307A1 (en) * 1996-09-09 1998-03-12 Dennis Jay Dupray Location of a mobile station
US6249252B1 (en) 1996-09-09 2001-06-19 Tracbeam Llc Wireless location using multiple location estimators
US7274332B1 (en) 1996-09-09 2007-09-25 Tracbeam Llc Multiple evaluators for evaluation of a plurality of conditions
US7903029B2 (en) 1996-09-09 2011-03-08 Tracbeam Llc Wireless location routing applications and architecture therefor
US6601005B1 (en) 1996-11-07 2003-07-29 Rosemount Inc. Process device diagnostics using process variable sensor signal
US5956663A (en) * 1996-11-07 1999-09-21 Rosemount, Inc. Signal processing technique which separates signal components in a sensor for sensor diagnostics
US6754601B1 (en) 1996-11-07 2004-06-22 Rosemount Inc. Diagnostics for resistive elements of process devices
US6434504B1 (en) 1996-11-07 2002-08-13 Rosemount Inc. Resistance based process control device diagnostics
US5828567A (en) * 1996-11-07 1998-10-27 Rosemount Inc. Diagnostics for resistance based transmitter
US6519546B1 (en) 1996-11-07 2003-02-11 Rosemount Inc. Auto correcting temperature transmitter with resistance based sensor
US6449574B1 (en) 1996-11-07 2002-09-10 Micro Motion, Inc. Resistance based process control device diagnostics
DE69714606T9 (en) * 1996-12-31 2004-09-09 Rosemount Inc., Eden Prairie DEVICE FOR CHECKING A CONTROL SIGNAL COMING FROM A PLANT IN A PROCESS CONTROL
CA2230882C (en) * 1997-03-14 2004-08-17 Dubai Aluminium Company Limited Intelligent control of aluminium reduction cells using predictive and pattern recognition techniques
US5831524A (en) * 1997-04-29 1998-11-03 Pittway Corporation System and method for dynamic adjustment of filtering in an alarm system
US20020128990A1 (en) * 1997-05-01 2002-09-12 Kaminskas Paul A. Control methodology and apparatus for reducing delamination in a book binding system
CA2306767C (en) 1997-10-13 2007-05-01 Rosemount Inc. Communication technique for field devices in industrial processes
US6674867B2 (en) * 1997-10-15 2004-01-06 Belltone Electronics Corporation Neurofuzzy based device for programmable hearing aids
US6216119B1 (en) 1997-11-19 2001-04-10 Netuitive, Inc. Multi-kernel neural network concurrent learning, monitoring, and forecasting system
AU1799099A (en) 1997-11-26 1999-06-15 Government of The United States of America, as represented by The Secretary Department of Health & Human Services, The National Institutes of Health, The System and method for intelligent quality control of a process
US6850874B1 (en) * 1998-04-17 2005-02-01 United Technologies Corporation Method and apparatus for predicting a characteristic of a product attribute formed by a machining process using a model of the process
US6229439B1 (en) 1998-07-22 2001-05-08 Pittway Corporation System and method of filtering
US6493691B1 (en) * 1998-08-07 2002-12-10 Siemens Ag Assembly of interconnected computing elements, method for computer-assisted determination of a dynamics which is the base of a dynamic process, and method for computer-assisted training of an assembly of interconnected elements
US7308322B1 (en) * 1998-09-29 2007-12-11 Rockwell Automation Technologies, Inc. Motorized system integrated control and diagnostics using vibration, pressure, temperature, speed, and/or current analysis
US7539549B1 (en) 1999-09-28 2009-05-26 Rockwell Automation Technologies, Inc. Motorized system integrated control and diagnostics using vibration, pressure, temperature, speed, and/or current analysis
US6222456B1 (en) 1998-10-01 2001-04-24 Pittway Corporation Detector with variable sample rate
BR9803848A (en) * 1998-10-08 2000-10-31 Opp Petroquimica S A Inline system for inference of physical and chemical properties, inline system for inference of process variables, and inline control system
US8135413B2 (en) * 1998-11-24 2012-03-13 Tracbeam Llc Platform and applications for wireless location and other complex services
US20030146871A1 (en) * 1998-11-24 2003-08-07 Tracbeam Llc Wireless location using signal direction and time difference of arrival
US6611775B1 (en) 1998-12-10 2003-08-26 Rosemount Inc. Electrode leakage diagnostics in a magnetic flow meter
US6615149B1 (en) 1998-12-10 2003-09-02 Rosemount Inc. Spectral diagnostics in a magnetic flow meter
US6202007B1 (en) 1999-02-19 2001-03-13 John A. Spicer Exact stability integration in network designs
US6298454B1 (en) 1999-02-22 2001-10-02 Fisher-Rosemount Systems, Inc. Diagnostics in a process control system
US6975219B2 (en) * 2001-03-01 2005-12-13 Fisher-Rosemount Systems, Inc. Enhanced hart device alerts in a process control system
US8044793B2 (en) * 2001-03-01 2011-10-25 Fisher-Rosemount Systems, Inc. Integrated device alerts in a process control system
US6633782B1 (en) 1999-02-22 2003-10-14 Fisher-Rosemount Systems, Inc. Diagnostic expert in a process control system
US7562135B2 (en) 2000-05-23 2009-07-14 Fisher-Rosemount Systems, Inc. Enhanced fieldbus device alerts in a process control system
US7206646B2 (en) 1999-02-22 2007-04-17 Fisher-Rosemount Systems, Inc. Method and apparatus for performing a function in a plant using process performance monitoring with process equipment monitoring and control
US6356191B1 (en) 1999-06-17 2002-03-12 Rosemount Inc. Error compensation for a process fluid temperature transmitter
US7010459B2 (en) * 1999-06-25 2006-03-07 Rosemount Inc. Process device diagnostics using process variable sensor signal
AU5780300A (en) 1999-07-01 2001-01-22 Rosemount Inc. Low power two-wire self validating temperature transmitter
US6505517B1 (en) 1999-07-23 2003-01-14 Rosemount Inc. High accuracy signal processing for magnetic flowmeter
US6701274B1 (en) 1999-08-27 2004-03-02 Rosemount Inc. Prediction of error magnitude in a pressure transmitter
EP1286735A1 (en) 1999-09-24 2003-03-05 Dennis Jay Dupray Geographically constrained network services
US6556145B1 (en) 1999-09-24 2003-04-29 Rosemount Inc. Two-wire fluid temperature transmitter with thermocouple diagnostics
BR9906022A (en) 1999-12-30 2001-09-25 Opp Petroquimica S A Process for the controlled production of polyethylene and its copolymers
US6618631B1 (en) 2000-04-25 2003-09-09 Georgia Tech Research Corporation Adaptive control system having hedge unit and related apparatus and methods
US6922706B1 (en) * 2000-04-27 2005-07-26 International Business Machines Corporation Data mining techniques for enhancing shelf-space management
US10684350B2 (en) 2000-06-02 2020-06-16 Tracbeam Llc Services and applications for a communications network
US9875492B2 (en) 2001-05-22 2018-01-23 Dennis J. Dupray Real estate transaction system
US10641861B2 (en) 2000-06-02 2020-05-05 Dennis J. Dupray Services and applications for a communications network
US20020019722A1 (en) * 2000-07-19 2002-02-14 Wim Hupkes On-line calibration process
US6735484B1 (en) 2000-09-20 2004-05-11 Fargo Electronics, Inc. Printer with a process diagnostics system for detecting events
US7092863B2 (en) * 2000-12-26 2006-08-15 Insyst Ltd. Model predictive control (MPC) system using DOE based model
US8073967B2 (en) 2002-04-15 2011-12-06 Fisher-Rosemount Systems, Inc. Web services-based communications for use with process control systems
US6795798B2 (en) 2001-03-01 2004-09-21 Fisher-Rosemount Systems, Inc. Remote analysis of process control plant data
US6965806B2 (en) * 2001-03-01 2005-11-15 Fisher-Rosemount Systems Inc. Automatic work order/parts order generation and tracking
US7720727B2 (en) * 2001-03-01 2010-05-18 Fisher-Rosemount Systems, Inc. Economic calculations in process control system
US7389204B2 (en) * 2001-03-01 2008-06-17 Fisher-Rosemount Systems, Inc. Data presentation system for abnormal situation prevention in a process plant
US6954713B2 (en) 2001-03-01 2005-10-11 Fisher-Rosemount Systems, Inc. Cavitation detection in a process plant
DE60206884T2 (en) 2001-03-01 2006-07-27 Fisher-Rosemount Systems, Inc., Austin Sharing of data in the process plant
US6970003B2 (en) 2001-03-05 2005-11-29 Rosemount Inc. Electronics board life prediction of microprocessor-based transmitters
US6629059B2 (en) 2001-05-14 2003-09-30 Fisher-Rosemount Systems, Inc. Hand held diagnostic and communication device with automatic bus detection
US8082096B2 (en) 2001-05-22 2011-12-20 Tracbeam Llc Wireless location routing applications and architecture therefor
US20020191102A1 (en) * 2001-05-31 2002-12-19 Casio Computer Co., Ltd. Light emitting device, camera with light emitting device, and image pickup method
US7162534B2 (en) * 2001-07-10 2007-01-09 Fisher-Rosemount Systems, Inc. Transactional data communications for process control systems
US6665651B2 (en) 2001-07-18 2003-12-16 Colorado State University Research Foundation Control system and technique employing reinforcement learning having stability and learning phases
US6772036B2 (en) 2001-08-30 2004-08-03 Fisher-Rosemount Systems, Inc. Control system using process model
US7117045B2 (en) * 2001-09-08 2006-10-03 Colorado State University Research Foundation Combined proportional plus integral (PI) and neural network (NN) controller
US6944616B2 (en) * 2001-11-28 2005-09-13 Pavilion Technologies, Inc. System and method for historical database training of support vector machines
EP1408384B1 (en) * 2002-10-09 2006-05-17 STMicroelectronics S.r.l. An arrangement for controlling operation of a physical system, like for instance fuel cells in electric vehicles
US7600234B2 (en) * 2002-12-10 2009-10-06 Fisher-Rosemount Systems, Inc. Method for launching applications
US7493310B2 (en) 2002-12-30 2009-02-17 Fisher-Rosemount Systems, Inc. Data visualization within an integrated asset data system for a process plant
US8935298B2 (en) 2002-12-30 2015-01-13 Fisher-Rosemount Systems, Inc. Integrated navigational tree importation and generation in a process plant
US7152072B2 (en) 2003-01-08 2006-12-19 Fisher-Rosemount Systems Inc. Methods and apparatus for importing device data into a database system used in a process plant
US20040158474A1 (en) * 2003-02-06 2004-08-12 Karschnia Robert J. Service facility for providing remote diagnostic and maintenance services to a process plant
US7953842B2 (en) 2003-02-19 2011-05-31 Fisher-Rosemount Systems, Inc. Open network-based data acquisition, aggregation and optimization for use with process control systems
US7103427B2 (en) * 2003-02-28 2006-09-05 Fisher-Rosemount Systems, Inc. Delivery of process plant notifications
US6915235B2 (en) * 2003-03-13 2005-07-05 Csi Technology, Inc. Generation of data indicative of machine operational condition
US7634384B2 (en) 2003-03-18 2009-12-15 Fisher-Rosemount Systems, Inc. Asset optimization reporting in a process plant
US20040230328A1 (en) * 2003-03-21 2004-11-18 Steve Armstrong Remote data visualization within an asset data system for a process plant
US6736089B1 (en) * 2003-06-05 2004-05-18 Neuco, Inc. Method and system for sootblowing optimization
US7194320B2 (en) * 2003-06-05 2007-03-20 Neuco, Inc. Method for implementing indirect controller
US7299415B2 (en) * 2003-06-16 2007-11-20 Fisher-Rosemount Systems, Inc. Method and apparatus for providing help information in multiple formats
WO2005010522A2 (en) * 2003-07-18 2005-02-03 Rosemount Inc. Process diagnostics
US7402635B2 (en) * 2003-07-22 2008-07-22 Fina Technology, Inc. Process for preparing polyethylene
US7018800B2 (en) * 2003-08-07 2006-03-28 Rosemount Inc. Process device with quiescent current diagnostics
US7627441B2 (en) * 2003-09-30 2009-12-01 Rosemount Inc. Process device with vibration based diagnostics
US7523667B2 (en) * 2003-12-23 2009-04-28 Rosemount Inc. Diagnostics of impulse piping in an industrial process
US8214271B2 (en) * 2004-02-04 2012-07-03 Neuco, Inc. System and method for assigning credit to process inputs
US7030747B2 (en) * 2004-02-26 2006-04-18 Fisher-Rosemount Systems, Inc. Method and system for integrated alarms in a process control system
US7079984B2 (en) * 2004-03-03 2006-07-18 Fisher-Rosemount Systems, Inc. Abnormal situation prevention in a process plant
US7676287B2 (en) * 2004-03-03 2010-03-09 Fisher-Rosemount Systems, Inc. Configuration system and method for abnormal situation prevention in a process plant
TWI231481B (en) * 2004-03-11 2005-04-21 Quanta Comp Inc Electronic apparatus
US7515977B2 (en) * 2004-03-30 2009-04-07 Fisher-Rosemount Systems, Inc. Integrated configuration system for use in a process plant
US6920799B1 (en) 2004-04-15 2005-07-26 Rosemount Inc. Magnetic flow meter with reference electrode
US20050267709A1 (en) * 2004-05-28 2005-12-01 Fisher-Rosemount Systems, Inc. System and method for detecting an abnormal situation associated with a heater
US7536274B2 (en) * 2004-05-28 2009-05-19 Fisher-Rosemount Systems, Inc. System and method for detecting an abnormal situation associated with a heater
CN1969239B (en) 2004-06-12 2011-08-03 费舍-柔斯芒特系统股份有限公司 System and method for detecting an abnormal situation associated with a process gain of a control loop
US7634417B2 (en) * 2004-08-27 2009-12-15 Alstom Technology Ltd. Cost based control of air pollution control
US7522963B2 (en) * 2004-08-27 2009-04-21 Alstom Technology Ltd Optimized air pollution control
US7500437B2 (en) * 2004-08-27 2009-03-10 Neuco, Inc. Method and system for SCR optimization
US7117046B2 (en) * 2004-08-27 2006-10-03 Alstom Technology Ltd. Cascaded control of an average value of a process parameter to a desired value
US7536232B2 (en) * 2004-08-27 2009-05-19 Alstom Technology Ltd Model predictive control of air pollution control processes
US20060047607A1 (en) * 2004-08-27 2006-03-02 Boyden Scott A Maximizing profit and minimizing losses in controlling air pollution
US20060052902A1 (en) * 2004-08-27 2006-03-09 Neuco, Inc. Method and system for SNCR optimization
US7323036B2 (en) * 2004-08-27 2008-01-29 Alstom Technology Ltd Maximizing regulatory credits in controlling air pollution
US7113835B2 (en) * 2004-08-27 2006-09-26 Alstom Technology Ltd. Control of rolling or moving average values of air pollution control emissions to a desired value
US7181654B2 (en) * 2004-09-17 2007-02-20 Fisher-Rosemount Systems, Inc. System and method for detecting an abnormal situation associated with a reactor
US7333861B2 (en) * 2004-10-25 2008-02-19 Neuco, Inc. Method and system for calculating marginal cost curves using plant control models
US7584024B2 (en) * 2005-02-08 2009-09-01 Pegasus Technologies, Inc. Method and apparatus for optimizing operation of a power generating plant using artificial intelligence techniques
US8768664B2 (en) * 2005-03-18 2014-07-01 CMC Solutions, LLC. Predictive emissions monitoring using a statistical hybrid model
US7421348B2 (en) * 2005-03-18 2008-09-02 Swanson Brian G Predictive emissions monitoring method
US9201420B2 (en) 2005-04-08 2015-12-01 Rosemount, Inc. Method and apparatus for performing a function in a process plant using monitoring data with criticality evaluation data
US8005647B2 (en) 2005-04-08 2011-08-23 Rosemount, Inc. Method and apparatus for monitoring and performing corrective measures in a process plant using monitoring data with corrective measures data
US8112565B2 (en) * 2005-06-08 2012-02-07 Fisher-Rosemount Systems, Inc. Multi-protocol field device interface with automatic bus detection
WO2007001252A1 (en) * 2005-06-13 2007-01-04 Carnegie Mellon University Apparatuses, systems, and methods utilizing adaptive control
US7272531B2 (en) * 2005-09-20 2007-09-18 Fisher-Rosemount Systems, Inc. Aggregation of asset use indices within a process plant
US20070068225A1 (en) * 2005-09-29 2007-03-29 Brown Gregory C Leak detector for process valve
US8644961B2 (en) 2005-12-12 2014-02-04 Neuco Inc. Model based control and estimation of mercury emissions
US7912676B2 (en) * 2006-07-25 2011-03-22 Fisher-Rosemount Systems, Inc. Method and system for detecting abnormal operation in a process plant
US7657399B2 (en) * 2006-07-25 2010-02-02 Fisher-Rosemount Systems, Inc. Methods and systems for detecting deviation of a process variable from expected values
US8606544B2 (en) 2006-07-25 2013-12-10 Fisher-Rosemount Systems, Inc. Methods and systems for detecting deviation of a process variable from expected values
US8145358B2 (en) * 2006-07-25 2012-03-27 Fisher-Rosemount Systems, Inc. Method and system for detecting abnormal operation of a level regulatory control loop
US7496414B2 (en) * 2006-09-13 2009-02-24 Rockwell Automation Technologies, Inc. Dynamic controller utilizing a hybrid model
US7953501B2 (en) 2006-09-25 2011-05-31 Fisher-Rosemount Systems, Inc. Industrial process control loop monitor
US8788070B2 (en) * 2006-09-26 2014-07-22 Rosemount Inc. Automatic field device service adviser
EP2392982B1 (en) * 2006-09-28 2015-03-25 Fisher-Rosemount Systems, Inc. Abnormal situation prevention in a heat exchanger
CN101517377B (en) 2006-09-29 2012-05-09 罗斯蒙德公司 Magnetic flowmeter with verification
US8014880B2 (en) * 2006-09-29 2011-09-06 Fisher-Rosemount Systems, Inc. On-line multivariate analysis in a distributed process control system
US20080188972A1 (en) * 2006-10-11 2008-08-07 Fisher-Rosemount Systems, Inc. Method and System for Detecting Faults in a Process Plant
US8032341B2 (en) * 2007-01-04 2011-10-04 Fisher-Rosemount Systems, Inc. Modeling a process using a composite model comprising a plurality of regression models
US8032340B2 (en) 2007-01-04 2011-10-04 Fisher-Rosemount Systems, Inc. Method and system for modeling a process variable in a process plant
US7827006B2 (en) * 2007-01-31 2010-11-02 Fisher-Rosemount Systems, Inc. Heat exchanger fouling detection
CN101636698A (en) * 2007-03-19 2010-01-27 陶氏环球技术公司 Inferential sensors developed using three-dimensional PARETO-FRONT genetic programming
US10410145B2 (en) * 2007-05-15 2019-09-10 Fisher-Rosemount Systems, Inc. Automatic maintenance estimation in a plant environment
US8898036B2 (en) * 2007-08-06 2014-11-25 Rosemount Inc. Process variable transmitter with acceleration sensor
US7869887B2 (en) * 2007-08-06 2011-01-11 Rockwell Automation Technologies, Inc. Discoverable services
US8301676B2 (en) * 2007-08-23 2012-10-30 Fisher-Rosemount Systems, Inc. Field device with capability of calculating digital filter coefficients
US7702401B2 (en) 2007-09-05 2010-04-20 Fisher-Rosemount Systems, Inc. System for preserving and displaying process control data associated with an abnormal situation
US9323247B2 (en) 2007-09-14 2016-04-26 Fisher-Rosemount Systems, Inc. Personalized plant asset data representation and search system
US7590511B2 (en) * 2007-09-25 2009-09-15 Rosemount Inc. Field device for digital process control loop diagnostics
US8340824B2 (en) 2007-10-05 2012-12-25 Neuco, Inc. Sootblowing optimization for improved boiler performance
US8055479B2 (en) 2007-10-10 2011-11-08 Fisher-Rosemount Systems, Inc. Simplified algorithm for abnormal situation prevention in load following applications including plugged line diagnostics in a dynamic process
US20100063829A1 (en) * 2008-09-08 2010-03-11 Dupray Dennis J Real estate transaction system
US7921734B2 (en) * 2009-05-12 2011-04-12 Rosemount Inc. System to detect poor process ground connections
US8666556B2 (en) 2009-12-10 2014-03-04 Alcon Research, Ltd. Systems and methods for dynamic feedforward
PL2385656T3 (en) * 2010-05-06 2013-05-31 Deutsche Telekom Ag Method and system for controlling data communication within a network
EP2572293A4 (en) 2010-05-19 2013-12-04 Univ California Neural processing unit
US8821524B2 (en) 2010-05-27 2014-09-02 Alcon Research, Ltd. Feedback control of on/off pneumatic actuators
US9538493B2 (en) 2010-08-23 2017-01-03 Finetrak, Llc Locating a mobile station and applications therefor
US8457767B2 (en) 2010-12-31 2013-06-04 Brad Radl System and method for real-time industrial process modeling
US9207670B2 (en) 2011-03-21 2015-12-08 Rosemount Inc. Degrading sensor detection implemented within a transmitter
US9927788B2 (en) 2011-05-19 2018-03-27 Fisher-Rosemount Systems, Inc. Software lockout coordination between a process control system and an asset management system
US8521670B2 (en) 2011-05-25 2013-08-27 HGST Netherlands B.V. Artificial neural network application for magnetic core width prediction and modeling for magnetic disk drive manufacture
US8631577B2 (en) 2011-07-22 2014-01-21 Pratt & Whitney Canada Corp. Method of fabricating integrally bladed rotor and stator vane assembly
US8904636B2 (en) 2011-07-22 2014-12-09 Pratt & Whitney Canada Corp. Method of fabricating integrally bladed rotor using surface positioning in relation to surface priority
US8788083B2 (en) 2011-07-22 2014-07-22 Pratt & Whitney Canada Corp. Compensation for process variables in a numerically-controlled machining operation
US8844132B2 (en) 2011-07-22 2014-09-30 Pratt & Whitney Canada Corp. Method of machining using an automatic tool path generator adapted to individual blade surfaces on an integrally bladed rotor
US9060841B2 (en) 2011-08-31 2015-06-23 Alcon Research, Ltd. Enhanced flow vitrectomy probe
US10070990B2 (en) 2011-12-08 2018-09-11 Alcon Research, Ltd. Optimized pneumatic drive lines
US9529348B2 (en) 2012-01-24 2016-12-27 Emerson Process Management Power & Water Solutions, Inc. Method and apparatus for deploying industrial plant simulators using cloud computing technologies
US9052240B2 (en) 2012-06-29 2015-06-09 Rosemount Inc. Industrial process temperature transmitter with sensor stress diagnostics
US9082078B2 (en) 2012-07-27 2015-07-14 The Intellisis Corporation Neural processing engine and architecture using the same
US9207129B2 (en) 2012-09-27 2015-12-08 Rosemount Inc. Process variable transmitter with EMF detection and correction
US9602122B2 (en) 2012-09-28 2017-03-21 Rosemount Inc. Process variable measurement noise diagnostic
US9185057B2 (en) 2012-12-05 2015-11-10 The Intellisis Corporation Smart memory
US9027035B2 (en) * 2012-12-17 2015-05-05 Itron, Inc. Non real-time metrology data management
US9395713B2 (en) * 2014-05-05 2016-07-19 IP Research LLC Method and system of protection of technological equipment
US9552327B2 (en) 2015-01-29 2017-01-24 Knuedge Incorporated Memory controller for a network on a chip device
US10061531B2 (en) 2015-01-29 2018-08-28 Knuedge Incorporated Uniform system wide addressing for a computing system
US10027583B2 (en) 2016-03-22 2018-07-17 Knuedge Incorporated Chained packet sequences in a network on a chip architecture
US20170311095A1 (en) * 2016-04-20 2017-10-26 Starkey Laboratories, Inc. Neural network-driven feedback cancellation
DE102016108053A1 (en) 2016-04-29 2017-11-02 Khs Gmbh Method for optimizing the filling of a container
US10346049B2 (en) 2016-04-29 2019-07-09 Friday Harbor Llc Distributed contiguous reads in a network on a chip architecture
WO2018068858A1 (en) * 2016-10-13 2018-04-19 Huawei Technologies Co., Ltd. Method and device in a wireless communication network for downlink power control
JP6450724B2 (en) * 2016-10-18 2019-01-09 ファナック株式会社 Machine learning device and machining system for learning setting values of machining program of machine tool
US11041644B2 (en) * 2018-05-16 2021-06-22 Distech Controls Inc. Method and environment controller using a neural network for bypassing a legacy environment control software module
KR102607366B1 (en) * 2018-05-18 2023-11-29 삼성전자주식회사 Air conditioner and method for contolling the same
US10992763B2 (en) 2018-08-21 2021-04-27 Bank Of America Corporation Dynamic interaction optimization and cross channel profile determination through online machine learning

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3878379A (en) * 1972-08-14 1975-04-15 Allied Chem Polymer intrinsic viscosity control
US3836834A (en) * 1973-11-13 1974-09-17 Atomic Energy Commission Machine protection system
JPS643705A (en) * 1987-06-26 1989-01-09 Toshiba Corp Process controller
WO1989003092A1 (en) * 1987-09-30 1989-04-06 E.I. Du Pont De Nemours And Company Expert system with process control
US4979126A (en) * 1988-03-30 1990-12-18 Ai Ware Incorporated Neural network with non-linear transformations
US5111531A (en) * 1990-01-08 1992-05-05 Automation Technology, Inc. Process control using neural network
US5142665A (en) * 1990-02-20 1992-08-25 International Business Machines Corporation Neural network shell for application programs

Also Published As

Publication number Publication date
DE69130253T2 (en) 1999-05-20
US5282261A (en) 1994-01-25
EP0495044A1 (en) 1992-07-22
WO1992002866A1 (en) 1992-02-20
CA2066458A1 (en) 1992-02-04
EP0495044B1 (en) 1998-09-23
DE69130253D1 (en) 1998-10-29

Similar Documents

Publication Publication Date Title
CA2066458C (en) Computer neural network process measurement and control system and method
EP0498880B1 (en) Neural network/expert system process control system and method
US5224203A (en) On-line process control neural network using data pointers
CA2066279C (en) On-line process control neural network using data pointers
EP0495046B1 (en) On-line training neural network for process control
CA2066278C (en) Computer neural network supervisory process control system and method
US5197114A (en) Computer neural network regulatory process control system and method
US7054847B2 (en) System and method for on-line training of a support vector machine
US6944616B2 (en) System and method for historical database training of support vector machines
Juuso Integration of intelligent systems in development of smart adaptive systems
US7599897B2 (en) Training a support vector machine with process constraints
WO1992002896A1 (en) Modular neural network process control system with natural language configuration
Ruiz et al. On-line process fault detection and diagnosis in plants with recycle
Hentea Architecture and design issues in a hybrid knowledge-based expert system for intelligent quality control
Babuska Fuzzy modeling: principles, methods and applications

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed
MKEC Expiry (correction)

Effective date: 20121202