|Publication number||US5671335 A|
|Publication type||Grant|
|Application number||US 08/707,191|
|Publication date||Sep 23, 1997|
|Filing date||Jan 3, 1994|
|Priority date||May 23, 1991|
|Fee status||Lapsed|
|Publication number||08707191, 707191, US 5671335 A, US 5671335A, US-A-5671335, US5671335 A, US5671335A|
|Inventors||Gerald Wesley Davis, Michael L. Gasperi|
|Original assignee||Allen-Bradley Company, Inc.|
|External links: USPTO, USPTO Assignment, Espacenet|
This application is a continuation of application Ser. No. 08/018,904, filed Feb. 8, 1993, now abandoned, which is a continuation of application Ser. No. 07/704,766, filed May 23, 1991, now abandoned.
1. Field of the Invention
This invention relates to neural network computer architectures and specifically to a method of using a neural network to initialize inputs to complex processes so as to produce a target process output.
2. Background Art
Neural networks are computing devices inspired by biological models and distinguished from other computing devices by an architecture which employs a number of highly interconnected elemental "neurons". Each neuron comprises a summing junction that receives signals from other neurons, weights each signal by a weighting value, and sums the weighted signals together. The summing junction is ordinarily followed by a compressor or "squashing" function (typically a logistic curve) that compresses the output of the summing junction into a predetermined range, ordinarily from zero to one. The neuron's inputs are the inputs to the summing junction, and the neuron's output, termed an "activation", is the output of the compressor.
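The elemental neuron just described (a weighted summing junction followed by a logistic compressor) can be sketched in a few lines. This is an illustrative sketch only; the function and variable names are ours, not part of the patent:

```python
import math

def neuron_activation(inputs, weights):
    """One elemental neuron: each incoming signal is scaled by its
    weighting value, the weighted signals are combined at the summing
    junction, and the sum is compressed into (0, 1) by a logistic
    "squashing" function."""
    s = sum(w * x for w, x in zip(weights, inputs))  # summing junction
    return 1.0 / (1.0 + math.exp(-s))                # logistic compressor
```

With all-zero inputs the sum is zero and the activation sits at the middle of the range, 0.5; large positive or negative sums saturate toward one or zero.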
The inputs of each neuron may be connected to the outputs of many other neurons and the neuron's activation may be connected, in turn, to the inputs of still other neurons. In a "feedforward" neural net architecture, inputs to the network are received by a first layer of neurons whose activations feed the inputs of a second layer of neurons and so on, for as many layers as desired. The final layer provides the output of the network.
In a "recurrent" neural network architecture, inputs are received by a single layer of neurons and the activations of those neurons are fed back as inputs to that single layer to produce new activations during a "propagation".
Both types of neural network architectures may be realized through programs running on conventional von Neumann architecture digital computers. Alternatively, neural networks may be constructed with dedicated analog or digital circuitry; for example, by using analog summing junctions and function generators to construct each neuron, as is generally understood in the art.
In operation, the neural network receives an input or a set of inputs and produces an output or a set of outputs dependent on the inputs and on the weighting values assigned to each neuron's inputs. With the appropriate selection of the weighting values, a variety of computational processes may be performed.
The relationship between the weighting values and the computational process is extremely complex, and the weighting values are ordinarily determined by a teaching procedure. With the teaching procedure, a teaching set of corresponding inputs and target outputs is presented to the neural network, and error values are generated which are used to modify an initial set of weighting values. This process is repeated until the generated error values are acceptably low, at which point the weighting values may be fixed.
Although the teaching method of programming neural networks appears cumbersome when compared with the programming of a conventional von Neumann computer, because many inputs and outputs must be presented in teaching the neural network, the advantage of the teaching method is that the mechanics of the computational process need not be understood. This makes neural network computers ideal for use in modeling applications where inputs and outputs are available but the underlying mathematical process is not known.
One particular modeling application for which neural networks may be useful is that of modeling a complex industrial process. Specifically, a manufacturing process may have a number of inputs, and an output that is ultimately a finished product or product component. As an example, the process may be the injection molding of plastic. In this case the inputs might be the mold temperatures and the pressure and speed with which the plastic is injected. The outputs might be the measured qualities of the product, such as dimensions, surface finish, or strength, for example.
During normal operation of the manufacturing process, the inputs and outputs are related to each other in a complex but stable manner, dictated by the physics of the process. A neural network may be taught this relationship by using actual but historical input and output values of the manufacturing process for teaching.
In one application of a neural network, after the neural network has been taught, it is presented with trial inputs and the resulting trial outputs are examined to verify whether the desired product qualities will be produced with those trial inputs. Trial and error may be used to verify the required process inputs with the trained neural network without the waste and expense that may be involved in performing the same tests with the actual process.
Unfortunately, the trial and error procedure is difficult and time-consuming. Further, for multivariable nonlinear processes, where the adjustment of the input values is a complex function of the error between the target output and the trial output, trial and error techniques may fail altogether because the error provides virtually no indication of the next set of inputs to try.
The present invention provides a method of using a neural network, trained to a complex process, to produce a set of input values for that process that will produce a target set of outputs of that process. Specifically, the network is trained to the process using a historical teaching set of corresponding inputs and outputs of the process. This teaching establishes the weights of the interconnections between the neurons of the neural network. A trial input is then presented to the network, and this input is forward-propagated to produce an output value. The difference between the output of the network and the target output is back-propagated to compute an input error value for the input neurons of the network, and this input error value is used to move the trial input closer to the desired input.
Accordingly, it is one object of the invention to eliminate the need for a trial and error methodology to determine optimal input values to a process. This is especially important in complex multivariable processes where the error in the output may provide very little indication of which inputs need to be modified, and in which direction the modification should be made. The back-propagation of the output error value in the neural network provides an indication of the modification needed to each of the values of the trial input.
The process of forward-propagating a trial input, back-propagating an output error value, and modifying the trial input may be repeated and the error between the network output and the target output monitored.
It is thus another object of the invention to provide an iterative means for correcting a trial input to be arbitrarily close to the input value needed to achieve the target output.
Other objects and advantages besides those discussed above shall be apparent to those experienced in the art from the description of a preferred embodiment of the invention which follows. In the description, reference is made to the accompanying drawings, which form a part hereof, and which illustrate one example of the invention. Such example, however, is not exhaustive of the various alternative forms of the invention, and therefore reference is made to the claims which follow the description for determining the scope of the invention.
FIG. 1 is a block diagram showing the use of a neural network in the method of the present invention;
FIG. 2 is a schematic representation of an injection molding machine as may be modeled by a neural network of FIG. 1 according to the present invention;
FIG. 3 is a block diagram of the process inputs and process outputs of the injection molding machine of FIG. 2;
FIG. 4 is a flow chart showing generally the method of FIG. 1;
FIG. 5 is a schematic representation of a feedforward neural network suitable for practice of the method of FIGS. 1 and 4;
FIG. 6 is a detailed schematic representation of a single neuron of the neural network of FIG. 5 showing the operation of the neuron during forward-propagation;
FIG. 7 is a detailed flow chart of the training step shown in FIGS. 4 and 6;
FIG. 8 is a detailed schematic representation of a single neuron in the last layer of the neural network of FIG. 5 showing the operation of the neuron during back-propagation;
FIG. 9 is a detailed schematic representation of a single neuron not in the last layer of the neural network of FIG. 5 showing the operation of neuron during back-propagation;
FIG. 10 is a detailed flow chart of the input modifying step of FIG. 4;
FIG. 11 is a schematic representation of a von Neumann architecture computer suitable for practice of the present invention; and
FIG. 12 is a schematic representation of the memory of the computer of FIG. 11 showing the storage of neuron states as memory values.
Referring to FIG. 1, the present invention provides a method of selecting input values for a complex process to produce a desired, target output. The method employs a neural network 10 trained to the complex process, as will be described. A trial input 12 is provided to the neural network 10, and that input 12 is forward-propagated, as indicated by block 14, by the neural network 10, to produce a trial output 16. This trial output 16 is compared to a target output 18 to create an output error value 20 which is back-propagated, as shown by process block 22, and the back-propagated error is used to modify the trial input 12. The modified trial input 12 is then forward-propagated again and the process described above is repeated until the output error value 20 is reduced below a predetermined minimum. Each of these steps will be described in further detail below.
Collecting a Teaching Set
Referring to FIG. 2, the neural network 10 may be trained to model a complex multi-input process, such as injection molding, on an injection molding machine 24. It will be recognized that injection molding is merely illustrative of one of the many processes to which the present invention may be applicable.
In injection molding, a thermoplastic material 26, usually in the form of pellets, is received into a hopper 28 and fed from the hopper into an injection barrel 30 by means of an auger (not shown) within that barrel 30. The rotation of the auger within the barrel 30 "plasticates" or melts the thermoplastic material 26 and forces the thermoplastic material 26 toward a nozzle 32 on one end of the barrel 30 abutting a mold 34. After a sufficient amount of melted thermoplastic material 26 has entered barrel 30, the auger ceases rotation and is used as a ram to inject the molten thermoplastic material 26 into the mold 34. The speed and pressure of the injection may be controlled. After the mold 34 has filled, the auger stops and the thermoplastic material 26 is allowed to cool momentarily in the mold 34. The temperature of the mold 34 may be controlled by means of flowing coolant 36. Finally, the mold halves are separated and a finished part of the molded thermoplastic material (not shown) is ejected.
Referring to FIG. 3, the quality of the part, and in particular, the dimensional accuracy of the part, will depend on the process inputs 38 to the injection molding process 40, including: the injection speed and pressure, the temperature of the mold halves, and the time allowed for the part to cool. The quality of the part may be quantified by process outputs 42 from the injection molding process 40, the process outputs 42 being part dimensions, for example.
It will be understood that other process inputs, such as plastic type and dryness, for example, and other process outputs, such as surface finish, might be used instead, and that the process inputs 38 and outputs 42 are provided only by way of example.
Referring to FIG. 4, in preparation for teaching the neural network 10 (of FIG. 1) to emulate the injection molding process 40 (of FIG. 3), a teaching set is collected, per process block 44, the teaching set comprising a number of elements, each element including a different process input 38 and its corresponding process output 42. Each process input 38 and process output 42 will, in turn, comprise a number of input and output values, as has been described. In the examples of FIGS. 2 and 3, the teaching set is established from the process inputs 38 and outputs 42 by repeatedly molding parts with the injection molding machine 24 under a variety of different process inputs 38. The input settings become the teaching inputs ti(n,i) and the measurements of the resulting parts become the corresponding teaching outputs to(n,i). In the teaching inputs ti(n,i) and teaching outputs to(n,i), n is an index referring to the element of the teaching set derived from a particular part molding and i is an index of the input and output values of that element.
The values of the elements of the teaching set are "normalized" to a range of between 0 and 1 for computational convenience by an appropriate scaling and offset factor.
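The normalization step can be illustrated concretely: each teaching value is mapped into [0, 1] by a scaling and offset derived from its expected range. This is a hypothetical helper of our own; the patent does not prescribe a particular routine:

```python
def normalize(values, lo, hi):
    """Map raw process values into the range [0, 1], using the range
    limits (lo, hi) as the scaling and offset factors."""
    return [(v - lo) / (hi - lo) for v in values]

def denormalize(values, lo, hi):
    """Invert the scaling to recover engineering units."""
    return [lo + v * (hi - lo) for v in values]
```

The inverse mapping is needed later, when an optimized (normalized) input is converted back to actual machine settings.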
As shown by process block 46, the neural network 10 is trained to this teaching set so as to be able to emulate the injection molding process 40. The training of a neural network 10 to a teaching set, as described in detail below, is generally understood in the art; see also "Parallel Distributed Processing: Explorations in the Microstructure of Cognition," Vols. 1 and 2, by David Rumelhart et al., MIT Press, 1989, hereby incorporated by reference.
A Feedforward Neural Network
Referring to FIG. 5, a feedforward neural network 10 suitable for training with the above-described teaching set has three columns of neurons 48-52 divided, by column, among input neurons 48, hidden neurons 50, and output neurons 52. For simplicity, the illustrated feedforward neural network 10 comprises five input neurons 48, five hidden neurons 50, and four output neurons 52. However, it will be understood from the following discussion that the method of the present invention is applicable generally to neural networks with different numbers of neurons and layers.
Each neuron of layers 48 and 50 is connected to the succeeding layer along a multitude of interconnections 54, thus forming the "network" of the neural network 10. The neurons 48-52 may be conveniently identified by their layer and their number within that layer, where the input neurons 48 form the first layer and the output neurons 52 form the last layer. A given neuron may thus be identified as n(k,i), where k is a layer number from 1 to nk, nk is the total number of layers, i is the number of the neuron within that layer from 1 to ni, and ni is the number of neurons in that layer.
The input neurons 48 are simply points at which the neural network 10 receives an input, and thus have only a single input and a single output equal to that input. The hidden and output neurons 50 and 52, on the other hand, have multiple inputs and an output that derives from a combination of those inputs. The output neurons 52 provide the output values of the neural network 10.
During the operation of the neural network 10, signals are transmitted between layers of neurons along the interconnections 54. Associated with each interconnection 54 is a weight which scales the signal transmitted on that interconnection according to the weight's value. The weights may be identified by the identification of the neurons 48-52 flanking the interconnection 54, so that a given weight will be designated as w(k,i,j), where k and i are the layer and number of the neuron 48-52 receiving the signal along the interconnection 54 and j is the number of the neuron 48-52 transmitting the signal along the interconnection 54. The layer of the transmitting neuron 48-52 will be apparent from context.
Referring to FIGS. 5, 6 and 7, the first step in teaching the neural network 10 is initializing each of the weights w(k,i,j) of the network 10 to a random value between 0 and 1, as shown by process block 56 of FIG. 7. The weights w(k,i,j) will be modified during the learning process and embody the "learning" of the process. Thus, this randomization is akin to "erasing" the neural network 10 prior to learning.
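The randomization of process block 56 amounts to filling every weight w(k,i,j) with a value between 0 and 1. A sketch, using a nested-list layout for the weights that is our own illustrative choice:

```python
import random

def init_weights(layer_sizes, seed=None):
    """Initialize each weight w(k,i,j) to a random value between 0 and 1,
    "erasing" any prior learning.  layer_sizes lists the neuron count of
    each layer, input layer first; w[k][i][j] is the weight into neuron i
    of layer k+1 from neuron j of layer k."""
    rng = random.Random(seed)
    return [[[rng.random() for j in range(layer_sizes[k])]
             for i in range(layer_sizes[k + 1])]
            for k in range(len(layer_sizes) - 1)]
```

For the network of FIG. 5, init_weights([5, 5, 4]) would produce one weight matrix between the input and hidden layers and one between the hidden and output layers.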
The second step, as shown by process block 58 of FIG. 7, is the forward-propagation of an input ti(n,i) from the teaching set through the network 10. The input values of the first element of the teaching set, as normalized, are presented to the input neurons 48, and these values are forward-propagated through each of the layers as activation values z(k,i) associated with each neuron n(k,i). For example, referring to FIG. 6, the first layer of hidden neurons 50, after the input neurons 48, will receive the output of each input neuron 48 along an interconnection 54 at an input node 60. Each input node 60 scales the signal along its associated interconnection 54 by the weight w(k,i,j) for that interconnection 54.
The weighted inputs from these input nodes 60 are then summed together at a summing junction 62 and the resulting sum, s(k,i) (for neuron n(k,i)), is compressed to a value between zero and one by a logistic-based compressor 64 or other compressing function as is known in the art. This compressed sum becomes the activation value z(k,i) of the neuron n(k,i).
Specifically, the activation z(k,i) of each neuron 50 and 52 is determined according to the following formula:
z(k,i) = logistic(s(k,i)) (1)

where:

s(k,i) = Σj w(k,i,j)·z(k-1,j) (2)

logistic(s) = 1/(1+e^-s) (3)

In equation (2), j is an index variable ranging from 1 to ni for the previous layer, and z(1,i) is the teaching input ti(n,i) associated with input neuron n(1,i) and teaching set element n.
This forward-propagation process is repeated for each of the layers until the activations z(nk,i) for the output neurons 52 are determined for that teaching input ti(n,i). The activations z(nk,i) of the output neurons 52 are the outputs of the network 10.
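The layer-by-layer forward-propagation of equations (1) and (2) reduces to one weighted-sum-and-squash per neuron. A minimal sketch, in which weights[k][i][j] denotes the weight into neuron i of layer k+1 from neuron j of layer k (an illustrative layout of our own):

```python
import math

def logistic(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(weights, ti):
    """Forward-propagate one input through every layer: each neuron's
    activation is the logistic of the weighted sum of the previous
    layer's activations.  Returns the activations of all layers; the
    last entry holds the network outputs z(nk,i)."""
    z = [list(ti)]  # input neurons simply pass their values through
    for layer in weights:
        z.append([logistic(sum(w * a for w, a in zip(row, z[-1])))
                  for row in layer])
    return z
```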
Forward-propagation may be thought of as the normal operating mode of the neural network 10. An input is provided to the network 10 and an output is produced by forward-propagation, analogous to the output of a conventional von Neumann computer. The "program" run by a neural network computer 10 is effectively the values of its weights w(k,i,j).
When a neural network 10 is used to simulate a physical process, the weights w(k,i,j) hold the mechanical properties of the process or its "physics", and the activations z(k,i) hold the present state of the process within its cycle. The determination of the proper weights, w(k,i,j) for the neural network 10, then, is key to using the neural network 10 to model a particular process.
Determining the values of the weights w(k,i,j) so that the neural network 10 may model a particular process involves repeatedly forward-propagating the input values of the teaching set and back-propagating the error between the output of the neural network 10 and the output values of the teaching set. Referring to FIGS. 5, 7, 8 and 9, the back-propagation process, shown as process block 64 of FIG. 7, determines an error value δ(k,i) for each neuron 50-52 starting at the last layer nk.
As shown in FIG. 8, the activation at each output neuron 52 is subtracted from the teaching set output to(n,i) associated with that output neuron n(nk,i) at summing junction 66. This difference δ(nk,i), termed the "output error value", is multiplied by a function 68 of the activation z(nk,i) of that neuron, as will be described below, by scaling junction 70. The output error value δ(nk,i) from each output neuron 52 is then transmitted to the previous layer of neurons 50 along the interconnections 54.
Referring to FIG. 9, these output error values δ(nk,i) from the output neurons 52 are received by the preceding layer of hidden neurons 50, where they are weighted at input nodes 72 by the same weights w(k,i,j) as used in the forward-propagation along those particular interconnections 54. The weighted output error values are then summed at summing junction 72 and multiplied by function 68 of the activation of that neuron by scaling junction 70 to produce the error value δ(k,i) for that neuron. This process is repeated for each hidden layer 50 until the errors δ(k,i) have been computed for each neuron layer up to the input neurons 48.
Specifically, error values δ(k,i) are computed for each layer as given by the following formula:

δ(k,i) = z'(s(k,i))·Σj w(k+1,j,i)·δ(k+1,j) (4a)

where z' is function 68 and is the derivative of the squashing function 64 with respect to s(k,i) for the neurons of layers 50 and 52. In the preferred embodiment, the squashing function z(k,i) is the logistic described in equation (3), whose derivative z' is the computationally convenient z(k,i)·(1-z(k,i)), and thus equation (4a) becomes:

δ(k,i) = z(k,i)·(1-z(k,i))·Σj w(k+1,j,i)·δ(k+1,j) (4b)

where δ(nk,i) is the output error value, i.e., the difference between the output value of the teaching set to(n,i) and the activation z(nk,i). Once the error values δ(k,i) for each of the layers 2 to nk and each of the neurons 1 to ni are determined, the weights w(k,i,j) are adjusted per process block 74 of FIG. 7.
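The back-propagation just described can be sketched as follows. Here the output difference is also scaled by the logistic derivative z·(1-z), per the scaling junction of FIG. 8, and the weight layout weights[k][i][j] (into neuron i of layer k+1 from neuron j of layer k) is an illustrative assumption of ours:

```python
def backprop_deltas(weights, z, target):
    """Compute error values delta(k,i) for every layer, starting at the
    output layer and working backward.  z is the list of per-layer
    activations from forward propagation; the logistic derivative is
    the computationally convenient z*(1-z)."""
    deltas = [None] * len(z)
    out = z[-1]
    # Output layer: difference from the teaching output, scaled by z(1-z).
    deltas[-1] = [o * (1.0 - o) * (t - o) for o, t in zip(out, target)]
    # Earlier layers: weighted sum of the next layer's errors, scaled by z(1-z).
    for k in range(len(z) - 2, -1, -1):
        layer_w = weights[k]  # weights into layer k+1
        deltas[k] = [z[k][i] * (1.0 - z[k][i]) *
                     sum(layer_w[j][i] * deltas[k + 1][j]
                         for j in range(len(layer_w)))
                     for i in range(len(z[k]))]
    return deltas
```

The loop here runs all the way down to the input layer; the teaching process needs only layers 2 to nk, but the optimization step described later also uses the input-layer values δ(1,i).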
The error values δ(k,i) are used to modify the weights w(k,i,j), and through many repetitions of this forward and back-propagation with different elements n of the teaching set, the network 10 is taught, by modification of its weights, to respond as the process 40 would respond.
The modification of the weights, per process block 74 of FIG. 7, is performed by determining a correction value dw for each weight for each interconnection 54 according to the following formula.
dw(k,i,j) = ε·δ(k+1,i)·z(k,j) + momentum·lastdw(k,i,j) (5)

where lastdw is the value of dw immediately after the previous modification of the weights.
The new weight is determined by adjusting the previous weight value by the corresponding value of dw, or:

w(k,i,j) = lastw(k,i,j) + dw(k,i,j) (6)

where lastw(k,i,j) is the weight value w(k,i,j) immediately after the previous modification of the weights.
The factor ε in equation (5) is a learning rate which, along with the momentum factor, adjusts how fast the weights are modified. Preferably, ε and momentum are each less than one; in one embodiment they may be 0.1 and 0.95, respectively.
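Equations (5) and (6) together give a learning-rate-plus-momentum update. A sketch, mutating the weights and the stored lastdw values in place (an implementation choice of ours, not prescribed by the patent):

```python
def update_weights(weights, deltas, z, last_dw, lr=0.1, momentum=0.95):
    """Adjust every weight per dw = lr * delta(receiving neuron)
    * z(transmitting neuron) + momentum * lastdw, then w := w + dw.
    last_dw is updated in place so the next call sees this dw."""
    for k, layer in enumerate(weights):
        for i, row in enumerate(layer):
            for j in range(len(row)):
                dw = (lr * deltas[k + 1][i] * z[k][j]
                      + momentum * last_dw[k][i][j])
                last_dw[k][i][j] = dw
                row[j] += dw
```

The momentum term carries a fraction of the previous correction forward, smoothing the trajectory of the weights over successive teaching-set elements.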
As is shown in process block 76, new teaching set inputs ti(n+1,i) and outputs to(n+1,i) are obtained. If the last element of the teaching inputs has been used, the teaching set index n is set to zero and the teaching set elements are repeated.
At process block 78, the errors δ(nk,i) previously obtained in preparation for back-propagation are checked to see whether the network 10 has been taught sufficiently. Preferably, the squares of the errors δ(nk,i) are summed over all the output neurons for all the teaching set elements n. If this sum is below a predetermined value, the teaching process is done; if not, process blocks 64-76 are repeated.
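The stopping test of process block 78 is a sum-of-squared-errors comparison, sketched below (names are illustrative):

```python
def teaching_done(output_errors, threshold):
    """Square the output errors delta(nk,i), sum them over all output
    neurons and all teaching-set elements n, and compare the total
    against the predetermined error threshold."""
    sse = sum(d * d for element in output_errors for d in element)
    return sse < threshold
```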
When the teaching is complete, the weights w(k,i,j) are saved for use in optimizing an input to a target output as will now be described.
Optimizing an Input to a Target Output
Referring again to FIGS. 2 and 4, once the network 10 has been trained to the particular process, such as the injection molding process 40, the network 10 may be used to deduce the proper inputs to the process, such as the process inputs to the injection molding machine 24, given a desired output target value tg(i). This output target value tg(i) is determined by the user and is dependent on the desired characteristics of the product.
An arbitrary trial input value tr(i) is presented to the inputs of the neural network 10 per process block 80 of FIG. 4. The trial input may be a random input, within the normalization range of 0 and 1 described above, or may be any non-optimal input value presently in practice. The trial input, tr(i), is then modified, per process block 82, using the trained network 10 as will be described below.
Referring to FIG. 10, the optimization process begins, per process block 84 with an initialization of the network. First, the weights of the networks w(k,i,j) are set to those values previously determined by the training process of process blocks 56-76 shown in FIG. 7. Second, the trial input is presented to the input of the network 10.
In process block 86, the trial input tr(i) is forward-propagated through the network 10 to produce a trial output z(nk,i) in a manner identical to that of process block 58 of FIG. 7. An error value for the output neurons 52 is determined per equation (4) above, using the target output tg(i) from which the activation z(nk,i) is subtracted.
At decision block 88, the error values δ(nk,i) for each output neuron 52 are squared and summed, and this sum is compared to an error threshold to determine whether the trial output z(nk,i) is acceptably close to the target output. The error threshold is determined by the user and will depend on the requirements of the product. For example, in the injection molding process 40, the error threshold will be determined by the required dimensional accuracy of the parts.
If the error δ(nk,i) is less than the error threshold, then process block 82 is exited and the trial input tr(i) becomes the indicated optimal input. On the other hand, as will be more typical for the first few iterations of the optimization process 82, if the error is significant, the program proceeds to process block 90.
At process block 90, the output error values δ(nk,i) are back-propagated for each neuron in a manner similar to that of process block 64 of FIG. 7 except that the back-propagation is performed for layers k=nk to 1 so as to include the input layer of neurons 48.
Trial Input Adjustment
At process block 92, the back-propagated error values δ(k,i) are used to modify not the weights w(k,i,j), as was done at process block 74 during the teaching process 46, but rather the trial input tr(i). The process of modifying the trial input tr(i) is analogous to the modification of the weights performed during the teaching process 46. That is, a correction value dtr is computed for each input neuron 48 according to the following formula:

dtr(i) = δ(1,i) (7)

where δ(1,i) is the error value at the input neurons 48 and will be termed the "input error value".
The new trial input tr(i) is determined by adjusting the previous trial input by the corresponding value of dtr, or:

tr(i) = lasttr(i) + ε'·dtr(i) (8)

where lasttr(i) is the value of the trial input tr(i) immediately after its previous modification.
The factor ε' in equation 8 is a correction rate which adjusts how fast the trial input is modified. Preferably, ε' is less than one and may be 0.1.
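The trial-input correction just described mirrors the weight update but leaves the trained weights frozen. A sketch, assuming the back-propagated input error values δ(1,i) are available as a list (names are illustrative):

```python
def update_trial_input(tr, input_deltas, rate=0.1):
    """Nudge each trial input value by its back-propagated input error
    value, scaled by the correction rate (the factor called
    epsilon-prime in the text); the trained weights are untouched."""
    return [t + rate * d for t, d in zip(tr, input_deltas)]
```

Repeating forward-propagation, back-propagation, and this update drives the trial input toward values whose predicted output matches the target.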
Process blocks 86-92 are repeated until the desired error threshold is reached. At this point, the current trial input, tr(i) becomes the optimized input.
Referring to FIGS. 11 and 12, the operation of the neural network 10 is well adapted for realization on a conventional von Neumann digital computer 100. With such an implementation, the activations z(k,i) for each neuron 48-52 are held as stored values in computer memory, and the weighting, summation and compression operations are simply mathematical subroutines performed by well understood computer hardware. In the preferred embodiment, the neural network and the teaching method described above are run on an IBM AT personal computer using the MS-DOS operating system and running a program written in the Microsoft "C" computer language.
Referring particularly to FIG. 11, a schematic representation of a von Neumann architecture computer 100, well known in the art, is comprised of a microprocessor 110 connected through a high speed bus 118 to a co-processor 112 used to provide rapid arithmetic calculations for the neuron computations described above. Also connected to the bus 118 are a read only memory (ROM) 114 holding an operating system for the computer and a random access memory (RAM) 116 holding the neural network implementation program described above, as well as variables representing the states of the various inputs and outputs to the neurons. A port 122 attached to the bus 118 and communicating with the microprocessor 110 permits the input and output of data to and from a terminal 124 which is also used in programming the computer.
A second port 120 communicates with the microprocessor 110 via the bus 118 and includes an analog-to-digital converter and a digital-to-analog converter for interfacing the network 10 directly to the process being emulated to obtain a teaching set directly, for those circumstances where this direct interface is practical.
Referring to FIG. 12, the RAM 116 holds the values of the neurons' activations z(k,i), described above, in the form of a matrix variable, as is understood in the art. Similarly, the sums s(k,i), the weights w(k,i,j), the error values δ(k,i), the values dw(k,i,j), and the values ε and momentum are stored in RAM 116. Also, RAM 116 holds the teaching inputs and outputs ti(n,i) and to(n,i), the trial input value tr(i) and output target tg(i), and the correction value dtr(i), all referred to above.
The above description has been that of a preferred embodiment of the present invention. It will occur to those who practice the art that many modifications may be made without departing from the spirit and scope of the invention. For example, the above-described optimization will work with forms of neural networks other than the feedforward architecture described herein. In order to apprise the public of the various embodiments that may fall within the scope of the invention, the following claims are made.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US3950733 *||Jun 6, 1974||Apr 13, 1976||Nestor Associates||Information processing system|
|US4918618 *||Apr 11, 1988||Apr 17, 1990||Analog Intelligence Corporation||Discrete weight neural network|
|US5052043 *||May 7, 1990||Sep 24, 1991||Eastman Kodak Company||Neural network with back propagation controlled through an output confidence measure|
|US5056037 *||Dec 28, 1989||Oct 8, 1991||The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration||Analog hardware for learning neural networks|
|US5107454 *||Aug 14, 1989||Apr 21, 1992||Agency Of Industrial Science And Technology||Pattern associative memory system|
|US5129039 *||Jul 10, 1991||Jul 7, 1992||Sony Corporation||Recurrent neural network with variable size intermediate layer|
|1||Chapter 1 of Applied Optimal Control by Arthur E. Bryson et al.|
|2||Generic Constraints On Underspecified Target Trajectories by Michael I. Jordan.|
|3||Neural Networks For Control, pp. 34-37, edited by W. Thomas Miller, III, et al., Massachusetts Institute of Technology, 1990.|
|4||"Neural Networks, Part 2," Wasserman et al., IEEE Expert, 1988.|
|5||Neurocontrol And Fuzzy Logic: Connections and Designs by Paul J. Werbos.|
|6||Neurocontrol and Related Techniques, Chapter 22, by Paul J. Werbos.|
|7||Supervised Learning And Systems With Excess Degrees of Freedom, pp. 5-6, Michael I. Jordan, Massachusetts Institute of Technology, May 1988.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US5914884 *||Jan. 2, 1997||Jun. 22, 1999||General Electric Company||Method for evaluating moldability characteristics of a plastic resin in an injection molding process|
|US5956663 *||Mar. 26, 1998||Sep. 21, 1999||Rosemount, Inc.||Signal processing technique which separates signal components in a sensor for sensor diagnostics|
|US6017143 *||Mar. 28, 1996||Jan. 25, 2000||Rosemount Inc.||Device in a process system for detecting events|
|US6047220 *||Dec. 29, 1997||Apr. 4, 2000||Rosemount Inc.||Device in a process system for validating a control signal from a field device|
|US6119047 *||Nov. 10, 1997||Sep. 12, 2000||Rosemount Inc.||Transmitter with software for determining when to initiate diagnostics|
|US6298454||Feb. 22, 1999||Oct. 2, 2001||Fisher-Rosemount Systems, Inc.||Diagnostics in a process control system|
|US6356191||Jun. 17, 1999||Mar. 12, 2002||Rosemount Inc.||Error compensation for a process fluid temperature transmitter|
|US6370448||Oct. 12, 1998||Apr. 9, 2002||Rosemount Inc.||Communication technique for field devices in industrial processes|
|US6397114||May 3, 1999||May 28, 2002||Rosemount Inc.||Device in a process system for detecting events|
|US6434504||Aug. 6, 1999||Aug. 13, 2002||Rosemount Inc.||Resistance based process control device diagnostics|
|US6449574||Jul. 14, 2000||Sep. 10, 2002||Micro Motion, Inc.||Resistance based process control device diagnostics|
|US6473710||Jun. 29, 2000||Oct. 29, 2002||Rosemount Inc.||Low power two-wire self validating temperature transmitter|
|US6505517||Jul. 23, 1999||Jan. 14, 2003||Rosemount Inc.||High accuracy signal processing for magnetic flowmeter|
|US6519546||Oct. 19, 1998||Feb. 11, 2003||Rosemount Inc.||Auto correcting temperature transmitter with resistance based sensor|
|US6532392||Jul. 28, 2000||Mar. 11, 2003||Rosemount Inc.||Transmitter with software for determining when to initiate diagnostics|
|US6539267||May 4, 2000||Mar. 25, 2003||Rosemount Inc.||Device in a process system for determining statistical parameter|
|US6556145||Sep. 24, 1999||Apr. 29, 2003||Rosemount Inc.||Two-wire fluid temperature transmitter with thermocouple diagnostics|
|US6557118||Mar. 8, 2001||Apr. 29, 2003||Fisher Rosemount Systems Inc.||Diagnostics in a process control system|
|US6594603||Sep. 30, 1999||Jul. 15, 2003||Rosemount Inc.||Resistive element diagnostics for process devices|
|US6601005||Jun. 25, 1999||Jul. 29, 2003||Rosemount Inc.||Process device diagnostics using process variable sensor signal|
|US6611775||May 23, 2000||Aug. 26, 2003||Rosemount Inc.||Electrode leakage diagnostics in a magnetic flow meter|
|US6615090||Feb. 7, 2000||Sep. 2, 2003||Fisher-Rosemont Systems, Inc.||Diagnostics in a process control system which uses multi-variable control techniques|
|US6615149||May 23, 2000||Sep. 2, 2003||Rosemount Inc.||Spectral diagnostics in a magnetic flow meter|
|US6629059||Mar. 12, 2002||Sep. 30, 2003||Fisher-Rosemount Systems, Inc.||Hand held diagnostic and communication device with automatic bus detection|
|US6633782||Feb. 7, 2000||Oct. 14, 2003||Fisher-Rosemount Systems, Inc.||Diagnostic expert in a process control system|
|US6654697||Aug. 27, 1999||Nov. 25, 2003||Rosemount Inc.||Flow measurement with diagnostics|
|US6701274||Aug. 27, 1999||Mar. 2, 2004||Rosemount Inc.||Prediction of error magnitude in a pressure transmitter|
|US6708160 *||Apr. 6, 2000||Mar. 16, 2004||Paul J. Werbos||Object nets|
|US6735484||Sep. 20, 2000||May 11, 2004||Fargo Electronics, Inc.||Printer with a process diagnostics system for detecting events|
|US6754601||Sep. 30, 1999||Jun. 22, 2004||Rosemount Inc.||Diagnostics for resistive elements of process devices|
|US6772036||Aug. 30, 2001||Aug. 3, 2004||Fisher-Rosemount Systems, Inc.||Control system using process model|
|US6792388||Apr. 4, 2001||Sep. 14, 2004||Liqum Oy||Method and system for monitoring and analyzing a paper manufacturing process|
|US6839608 *||Apr. 22, 2002||Jan. 4, 2005||Bayer Aktiengesellschaft||Hybrid model and method for determining mechanical properties and processing properties of an injection-molded part|
|US6845289 *||Apr. 22, 2002||Jan. 18, 2005||Bayer Aktiengesellschaft||Hybrid model and method for determining manufacturing properties of an injection-molded part|
|US6915172||Aug. 23, 2002||Jul. 5, 2005||General Electric||Method, system and storage medium for enhancing process control|
|US7216005 *||Mar. 28, 2006||May 8, 2007||Nissei Plastic Industrial Co., Ltd.||Control apparatus for injection molding machine|
|US7412426 *||Jun. 21, 2004||Aug. 12, 2008||Neuramatix Sdn. Bhd.||Neural networks with learning and expression capability|
|US7702401||Sep. 5, 2007||Apr. 20, 2010||Fisher-Rosemount Systems, Inc.||System for preserving and displaying process control data associated with an abnormal situation|
|US7750642||Sep. 28, 2007||Jul. 6, 2010||Rosemount Inc.||Magnetic flowmeter with verification|
|US7778946||Jul. 9, 2008||Aug. 17, 2010||Neuramatix SDN.BHD.||Neural networks with learning and expression capability|
|US7896636 *||Oct. 18, 2007||Mar. 1, 2011||Nissei Plastic Industrial Co., Ltd.||Support apparatus of injection-molding machine|
|US7921734||May 12, 2009||Apr. 12, 2011||Rosemount Inc.||System to detect poor process ground connections|
|US7940189||Sep. 26, 2006||May 10, 2011||Rosemount Inc.||Leak detector for process valve|
|US7949495||Aug. 17, 2005||May 24, 2011||Rosemount, Inc.||Process variable transmitter with diagnostics|
|US7953501||Sep. 25, 2006||May 31, 2011||Fisher-Rosemount Systems, Inc.||Industrial process control loop monitor|
|US8005647||Sep. 30, 2005||Aug. 23, 2011||Rosemount, Inc.||Method and apparatus for monitoring and performing corrective measures in a process plant using monitoring data with corrective measures data|
|US8044793||Mar. 22, 2002||Oct. 25, 2011||Fisher-Rosemount Systems, Inc.||Integrated device alerts in a process control system|
|US8055479||Oct. 10, 2007||Nov. 8, 2011||Fisher-Rosemount Systems, Inc.||Simplified algorithm for abnormal situation prevention in load following applications including plugged line diagnostics in a dynamic process|
|US8060240 *||Oct. 18, 2007||Nov. 15, 2011||Nissei Plastic Industrial Co., Ltd||Injection molding control method|
|US8073967||Apr. 15, 2002||Dec. 6, 2011||Fisher-Rosemount Systems, Inc.||Web services-based communications for use with process control systems|
|US8112565||Jun. 6, 2006||Feb. 7, 2012||Fisher-Rosemount Systems, Inc.||Multi-protocol field device interface with automatic bus detection|
|US8290721||Aug. 14, 2006||Oct. 16, 2012||Rosemount Inc.||Flow measurement diagnostics|
|US8301676||Aug. 23, 2007||Oct. 30, 2012||Fisher-Rosemount Systems, Inc.||Field device with capability of calculating digital filter coefficients|
|US8340789 *||Oct. 21, 2011||Dec. 25, 2012||Powitec Intelligent Technologies Gmbh||System for monitoring and optimizing controllers for process performance|
|US8417595||May 13, 2010||Apr. 9, 2013||Fisher-Rosemount Systems, Inc.||Economic calculations in a process control system|
|US8620779||May 13, 2010||Dec. 31, 2013||Fisher-Rosemount Systems, Inc.||Economic calculations in a process control system|
|US8712731||Sep. 23, 2011||Apr. 29, 2014||Fisher-Rosemount Systems, Inc.||Simplified algorithm for abnormal situation prevention in load following applications including plugged line diagnostics in a dynamic process|
|US8788070||Sep. 26, 2006||Jul. 22, 2014||Rosemount Inc.||Automatic field device service adviser|
|US8898036||Aug. 6, 2007||Nov. 25, 2014||Rosemount Inc.||Process variable transmitter with acceleration sensor|
|US9052240||Jun. 29, 2012||Jun. 9, 2015||Rosemount Inc.||Industrial process temperature transmitter with sensor stress diagnostics|
|US9094470||Nov. 7, 2011||Jul. 28, 2015||Fisher-Rosemount Systems, Inc.||Web services-based communications for use with process control systems|
|US9201420||Sep. 30, 2005||Dec. 1, 2015||Rosemount, Inc.||Method and apparatus for performing a function in a process plant using monitoring data with criticality evaluation data|
|US9207129||Sep. 27, 2012||Dec. 8, 2015||Rosemount Inc.||Process variable transmitter with EMF detection and correction|
|US9207670||Sep. 19, 2011||Dec. 8, 2015||Rosemount Inc.||Degrading sensor detection implemented within a transmitter|
|US9602122||Sep. 28, 2012||Mar. 21, 2017||Rosemount Inc.||Process variable measurement noise diagnostic|
|US9760651||Jun. 16, 2015||Sep. 12, 2017||Fisher-Rosemount Systems, Inc.||Web services-based communications for use with process control systems|
|US20010051858 *||Dec. 15, 1999||Dec. 13, 2001||Jui-Ming Liang||Method of setting parameters for injection molding machines|
|US20030014152 *||Apr. 22, 2002||Jan. 16, 2003||Klaus Salewski||Hybrid model and method for determining manufacturing properties of an injection-molded part|
|US20030050728 *||Apr. 22, 2002||Mar. 13, 2003||Bahman Sarabi||Hybrid model and method for determining mechanical properties and processing properties of an injection-molded part|
|US20030114940 *||Jan. 28, 2003||Jun. 19, 2003||Einar Brose||Method for the remote diagnosis of a technological process|
|US20030139904 *||Apr. 4, 2001||Jul. 24, 2003||Sakari Laitinen-Vellonen||Method and system for monitoring and analyzing a paper manufacturing process|
|US20040165061 *||Feb. 18, 2004||Aug. 26, 2004||Jasinschi Radu S.||Method and apparatus for generating metadata for classifying and searching video databases based on 3-D camera motion|
|US20060149692 *||Jun. 21, 2004||Jul. 6, 2006||Hercus Robert G||Neural networks with learning and expression capability|
|US20060224540 *||Mar. 28, 2006||Oct. 5, 2006||Nissei Plastic Industrial Co., Ltd.||Control apparatus for injection molding machine|
|US20080099943 *||Oct. 18, 2007||May 1, 2008||Nissei Plastic Industrial Co., Ltd.||Injection molding control method|
|US20080102147 *||Oct. 18, 2007||May 1, 2008||Nissei Plastic Industrial Co., Ltd.||Support apparatus of injection-molding machine|
|US20090119236 *||Jul. 9, 2008||May 7, 2009||Robert George Hercus||Neural networks with learning and expression capability|
|US20120065746 *||Oct. 21, 2011||Mar. 15, 2012||Powitec Intelligent Technologies Gmbh||Control system|
|CN100454314C||Dec. 24, 2004||Jan. 21, 2009||Yamaha Motor Co., Ltd.||Multiobjective optimization apparatus, multiobjective optimization method and multiobjective optimization program|
|DE19824838A1 *||Jun. 4, 1998||Dec. 9, 1999||Leybold Systems Gmbh||Process for producing crystals|
|WO1999063134A1 *||Jun. 1, 1998||Dec. 9, 1999||Leybold Systems Gmbh||Device and method for growing crystals|
|WO2001075222A2 *||Apr. 4, 2001||Oct. 11, 2001||Liqum Oy||Method and system for monitoring and analyzing a paper manufacturing process|
|WO2001075222A3 *||Apr. 4, 2001||Dec. 13, 2001||Laitinen Vellonen Sakari||Method and system for monitoring and analyzing a paper manufacturing process|
|WO2002010866A3 *||Jul. 13, 2001||Apr. 25, 2002||Siemens Ag||Method for the remote diagnosis of a technological process|
|U.S. Classification||706/25, 706/31|
|International Classification||G06N3/08, G05B13/02|
|Cooperative Classification||G05B13/027, G06N3/084|
|European Classification||G05B13/02C1, G06N3/08B|
|Feb. 10, 1998||CC||Certificate of correction|
|Apr. 17, 2001||REMI||Maintenance fee reminder mailed|
|Sep. 23, 2001||LAPS||Lapse for failure to pay maintenance fees|
|Nov. 27, 2001||FP||Expired due to failure to pay maintenance fee|
Effective date: 20010923