WO1991002322A1 - Pattern propagation neural network - Google Patents

Pattern propagation neural network

Info

Publication number
WO1991002322A1
Authority
WO
WIPO (PCT)
Prior art keywords
output
neurons
neural network
neuron
desired output
Application number
PCT/US1990/004483
Other languages
French (fr)
Inventor
Patrick F. Castelaz
Original Assignee
Hughes Aircraft Company
Application filed by Hughes Aircraft Company
Publication of WO1991002322A1 publication Critical patent/WO1991002322A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

A pattern propagation neural network (30) is trained by presenting a desired output (22) to inner neurons (16) as well as output neurons (14). Weighted connections (18) between the neurons (12, 14, 16) are adapted during training to reduce the difference between the desired output (22) and the actual output of inner (16) and output (14) neurons. The training method employed by the pattern propagation neural network (30) reduces the required training time and improves the ability of the network (30) to solve problems.

Description

PATTERN PROPAGATION NEURAL NETWORK
BACKGROUND OF THE INVENTION
1. Technical Field
This invention relates to feature extraction and pattern recognition devices, and in particular, to a neural network that can internally develop, or "learn", the algorithms required for identification of features in a signal.
2. Discussion
The ability to recognize patterns is a major part of the development of artificial systems that match the perceptual abilities of biological systems. Speech and visual pattern recognition are two areas in which conventional computers are seriously deficient. In an effort to develop artificial systems that can perform these and other tasks, a number of signal processing techniques have been developed to extract features from signals. These techniques typically involve extensive preprocessing. Such preprocessing may require, for example, measuring pulse width, amplitude, rise and fall times, frequency, etc. Once these features are extracted, they can be matched with stored patterns for classification and identification of the signal. The software required to accomplish these steps is often complex and is time consuming to develop. Moreover, conventional digital signal processors are not able to tolerate certain variations in the input signal, such as changes in orientation of a visual pattern, or differences in speakers, as in the case of speech recognition.
In recent years it has been realized that conventional Von Neumann computers, which operate serially, bear little resemblance to the parallel processing that takes place in biological systems such as the brain. It is not surprising, therefore, that conventional signal processing techniques should fail to adequately perform the tasks involved in human perception. Consequently, new methods based on neural models of the brain are being developed to perform perceptual tasks. These systems are known variously as neural networks, neuromorphic systems, learning machines, parallel distributed processors, self-organizing systems, or adaptive logic systems. Whatever the name, these models utilize numerous nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural networks. Each computational element or "neuron" is connected via weights or "synapses" that typically are adapted during training to improve performance. Thus, these systems exhibit self-learning by changing their synaptic weights until the correct output is achieved in response to a particular input. Once trained, neural nets are capable of recognizing a target signal and producing a desired output even where the input is incomplete or hidden in background noise. Also, neural nets exhibit greater robustness, or fault tolerance, than Von Neumann sequential computers because there are many more processing nodes, each with primarily local connections. Damage to a few nodes, or links, need not impair overall performance significantly.
There is a wide variety of neural net models utilizing various topologies, neuron characteristics, and training or "learning" rules. Learning rules specify an internal set of weights and indicate how weights should be adapted during use, or training, to improve performance. By way of illustration, some of these neural net models include the Perceptron, described in U.S. Patent No. 3,287,649 issued to F. Rosenblatt; the Hopfield Net, described in U.S. Patent Nos. 4,660,166 and 4,719,591 issued to J. Hopfield; the Hamming Net and Kohonen self-organizing maps, described in R. Lippmann, "An Introduction to Computing with Neural Nets", IEEE ASSP Magazine, April 1987, pages 4-22; and the Generalized Delta Rule for Multilayered Perceptrons, described in Rumelhart, Hinton, and Williams, "Learning Internal Representations by Error Propagation", in D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press (1986).
One impediment to the practical application of neural networks has been the relatively extensive training required to train the networks to solve certain problems. Another limitation is the relatively large size and complexity of the network required to solve certain complex problems. In particular, a large number of neurons and interconnects may be required, which makes the system difficult to manufacture, expensive, bulky, etc.
Thus it would be desirable to provide a neural network that requires less time for training than conventional neural networks. Further, it would be desirable to provide a neural network that can solve a particular problem with reduced complexity in terms of the required number of neurons and interconnects.
SUMMARY OF THE INVENTION
In accordance with the teachings of the present invention, a Pattern Propagation Neural Network (PPNN) is provided which is trained by presenting a training input to its input neurons and also a desired output to its output neurons. In addition, the same desired output is presented to inner neurons as well. In this way, the Pattern Propagation Neural Network "learns" and adapts to the desired output in response to an input, not only at the output neurons, but at inner neurons as well.
BRIEF DESCRIPTION OF THE DRAWINGS
The various advantages of the present invention will become apparent to those skilled in the art after reading the following specification and by reference to the drawings in which:
FIG. 1 is an illustration of a neural network training procedure of the Prior Art;
FIG. 2 is an illustration of the training technique employed by the present invention; and
FIG. 3 is a flow chart of the steps of the training technique for a neural network in accordance with the present invention.
FIG. 4 is an illustration of an alternate embodiment of the neural network of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENT
In accordance with the teachings of the present invention, a method and apparatus is provided for training a neural network. One of the objectives of the invention is to provide a neural network apparatus and technique which reduces the amount of training required for the network to solve a particular problem.
Referring now to FIG. 1, a drawing of a conventional neural network 10 is shown. The neural network 10 comprises a plurality of rows of individual processors, or "neurons", arranged in a configuration of the general class known as a multilayer perceptron. It should be noted that while the preferred embodiment of the present invention is applied to a multilayer perceptron, the techniques of the present invention may be employed on other neural network architectures as well. In a multilayer perceptron 10, such as the one shown in FIG. 1, the neurons are arranged in three or more layers. Each neuron produces an output that is some predetermined function of its input. The first, or input, layer comprises neurons that are called input neurons 12, and the neurons in the last layer are called output neurons 14. These neurons 12, 14 may be constructed from a variety of conventional digital or analog devices. For example, circuits employing op amps may be used for the neurons 12, 14. One or more inner layers comprise additional neurons, called inner, or hidden, neurons 16. While only a small number of neurons are shown in each layer in FIG. 1, it will be understood that any number of neurons may be employed depending on the complexity of the problem to be solved. As is characteristic of the multilayer perceptron, each neuron in each layer is connected to each neuron in each adjacent layer. That is, each input neuron 12 is connected to each hidden neuron 16 in the adjacent layer. Likewise, each hidden neuron 16 is connected to each neuron in the next adjacent layer. This next layer may comprise additional hidden neurons 16 or the next layer may comprise the output neurons 14. It should be noted that in a perceptron, neurons are not typically connected to other neurons in the same layer, although they may be.
Each of the connections 18 between the neurons is a weighted, "synaptic", connection. These synaptic connections 18 may be implemented with variable resistances, with amplifiers having variable gains, with FET connection control devices utilizing capacitors, or with other suitable circuits. The synaptic connections 18 are capable of reducing or increasing the strength of the connection between the neurons. While the connections 18 are shown with single lines, it will be understood that two individual lines may also be employed to provide signal transmission in two directions, as required during the training procedure. The value of the connection strength of each connection 18 may vary from some predetermined maximum value to zero. When the weight is zero there is, in effect, no connection between the two neurons. The actual effect of the synaptic weight in situ may represent both positive and negative strengths.
The process of training the neural network 10 to recognize a particular signal requires adjusting the strengths of each synaptic connection 18 in a repetitive fashion until the desired output is produced in response to a particular input. More specifically, in accordance with the preferred embodiment the present invention utilizes the technique called backward error propagation. The back propagation training technique is described in more detail in the above-mentioned articles by Rumelhart and Lippmann, which are incorporated herein by reference. During training, in accordance with the back propagation technique, a signal containing a known, or target, waveform 20 is fed to the input neurons 12. This input signal may comprise a series of binary or continuous valued inputs supplied to the input neurons 12. For example, if the neural network 10 is to be used as a speech recognizer, the inputs might be the output envelope values from a filter bank spectral analyzer sampled at one time instant, and the classes identified by the output signal might represent different vowels. In an image classifier the inputs might be the gray scale level of each pixel for a picture and the classes identified might represent different objects. Whatever the nature of the input signal 20, in response to this input a particular output is produced at the output neurons 14 that is a function of both the processing by each neuron and the weighted value of each synaptic connection 18.
In conventional training algorithms such as the one shown in FIG. 2, the output of the output neurons 14 is compared to a desired output and the difference between the actual and desired output is computed. The process of comparing the desired output with the actual output is indicated by the shading in the output neurons 14. A desired output 22 is shown comprising a binary value for each output neuron 14. It should be noted that the desired output may instead be a continuous value within a predetermined range. Based on the difference between the actual and desired output, an error signal is produced which is used to adjust the synaptic connections 18 in a way that reduces the value of the error. The error signal for each output neuron 14 is then transmitted to each neuron in the preceding hidden layer, and the weights in the next layer of synaptic connections 18 are adjusted in response to these error signals calculated for those layers. The new error signals are a function of the sum of the previous error signals and the previous weights.
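For illustration only (this sketch is not part of the original disclosure), the conventional rule just described can be written in NumPy roughly as follows; the function and array names are assumptions, and a single hidden layer of sigmoid neurons is assumed:

```python
import numpy as np

# Conventional backward error propagation for a single hidden layer: the error is
# computed only at the output neurons and then passed backward through the weights.
# y_out, y_hid are actual activations, d is the desired output, and W_out holds the
# hidden-to-output weights (shape: n_out x n_hidden).
def conventional_deltas(y_out, y_hid, d, W_out):
    delta_out = (d - y_out) * y_out * (1.0 - y_out)            # error signal at the output layer
    delta_hid = y_hid * (1.0 - y_hid) * (W_out.T @ delta_out)  # hidden error: weighted sum of output errors
    return delta_out, delta_hid
```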
In a conventional back propagation training algorithm, the above procedure is repeated until the error signal is reduced to an acceptable level, and the neural network 10 produces the desired output in response to the training input 20. Once trained, a signal to be identified or classified is fed to the input neurons 12 in a way similar to the manner in which the training signal 20 was introduced. The signal to be identified may or may not contain the training signal 20 or it may contain a degraded or noisy version of the training signal 20. If the training signal 20 is present in some form, the trained neural network 10 will respond with the output that corresponds to that training signal 20 during training. If the training signal 20 is not present, a different response, or no response, will be produced.
It will be recalled that during the above discussion of the backward error propagation technique the actual output of the output neurons 14 is compared to the desired output 22, as indicated by the shading in the output neurons 14. Once the error signal is computed based on this difference, the error signal is propagated to each preceding neuron, altered only by the neurons and the weights of the synaptic connections 18 through which it travels. Thus, the desired output is only presented to output neurons 14. This approach has generally been thought to be preferred because the desired output is actually only desired at the output neurons 14. However, the applicant has discovered that, surprisingly, learning and performance of the neural network is greatly improved if the desired output is presented, not only at the output neurons 14, but at the inner neurons 16 as well.
Referring now to FIG. 2, a Pattern Propagation Neural Network 30 (PPNN) in accordance with the present invention is shown. The PPNN 30 is trained by presenting the desired output 22 to the inner neurons 16 as well as the output neurons 14, as indicated by the shading for all of these neurons in FIG. 2. When presented with the desired output 22, the inner neurons 16, as well as the output neurons 14, generate error signals that are a function of the difference between the actual output of each neuron and the desired output 22. Thus, the PPNN 30 will adjust weights in a way which tends to change outputs at the inner layers as well as the output layer to more closely approximate the desired output 22. It is notable that prior approaches to training neural nets are based on an assumption that it was only desirable for output neurons 14 to produce the desired output 22, and that the inner neurons 16 should be producing some other output in order to best achieve the desired output by the output neurons 14.
The technique for training the PPNN 30 in accordance with the preferred embodiment of the present invention will now be described in more detail. Referring to FIG. 2, the input 12, inner 16 and output 14 neurons all comprise similar processing units having one or more inputs and producing a single output signal. Each neuron produces an output that is a continuous differentiable nonlinear or semilinear function of its input. It is preferred that this function, called an activation function, should be a sigmoid logistic nonlinearity of the general form:
(1)   $y_{ij} = \dfrac{1}{1 + e^{-\left(\sum_k w_{jk}\, y_{(i-1)k} + \theta_j\right)}}$

where $y_{ij}$ is the output of neuron j in layer i, $\sum_k w_{jk}\, y_{(i-1)k}$ is the sum of the inputs to neuron j from the previous layer, $y_{(i-1)k}$ is the output of each neuron k in the previous layer, $w_{jk}$ is the weight associated with each synaptic connection 18 between neuron k in the previous layer and neuron j, and $\theta_j$ is a bias similar in function to a threshold.
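As an illustrative sketch only (not part of the original disclosure), equation (1) can be evaluated layer by layer in NumPy as follows; the layer sizes, weight ranges, and names are assumptions:

```python
import numpy as np

def layer_output(y_prev, W, theta):
    # Equation (1): y_j = 1 / (1 + exp(-(sum_k w_jk * y_prev_k + theta_j)))
    return 1.0 / (1.0 + np.exp(-(W @ y_prev + theta)))

# Hypothetical 3-input, 2-hidden, 3-output network: propagate one input forward.
rng = np.random.default_rng(0)
x = np.array([0.2, 0.9, 0.4])
W_hid, b_hid = rng.uniform(-0.1, 0.1, (2, 3)), np.zeros(2)
W_out, b_out = rng.uniform(-0.1, 0.1, (3, 2)), np.zeros(3)
y_hid = layer_output(x, W_hid, b_hid)
y_out = layer_output(y_hid, W_out, b_out)
```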
During training, the activation function generally remains the same for each neuron but the weights of each synaptic connection 18 are modified. It may be possible, however, to also modify the threshold θ. Thus, the patterns of connectivity are modified as a function of experience. The weights on each synaptic connection 18 are modified according to:

(2)   $\Delta w_{jk} = \eta\, \delta_j\, y_k$

where $\delta_j$ is an error signal available to the neuron receiving input along that line, $y_k$ is the output of the neuron sending activation along that line (or is an input), and $\eta$ is a constant of proportionality, also called the learning rate.
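A minimal sketch of equation (2), assuming NumPy arrays and an illustrative learning rate; the names are hypothetical and not part of the original disclosure:

```python
import numpy as np

def weight_update(delta, y_prev, eta=0.5):
    # Equation (2): delta_w[j, k] = eta * delta[j] * y_prev[k], formed as an outer
    # product so the whole layer is updated at once; eta = 0.5 is an assumed value.
    return eta * np.outer(delta, y_prev)
```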
The determination of the error signal $\delta_j$ starts with the output units. First, a training signal 20 is transmitted to the input neurons 12. This will cause a signal to be propagated through the PPNN 30 until an output signal is produced. This output is then compared with the output that is desired. This is accomplished by presenting a desired output 22 to the output 14 as well as inner 16 neurons, by means of line 23. For example, a binary output such as the one 22 shown in FIG. 2 may be desired where, in response to a particular training signal 20, certain ones of the output neurons 14 are "on", and the others are "off". It should be noted that the activation function cannot reach the extreme values of one or zero without infinitely large weights, so that, as a practical matter, where the desired outputs are zero or one, values of, for example, 0.1 and 0.9 can be used as target values. The actual output produced by each output neuron 14 and for each inner neuron 16 is compared with the desired output and the error signal is calculated from this difference. This calculation may be performed inside or external to the neurons 14, 16. For all neurons 14, 16:
(3)   $\delta_j = (d_j - y_j)\, y_j\, (1 - y_j)$

where $d_j$ is the desired output for neuron j and $y_j$ is the actual output. It should be noted that equation (3) is used for output neurons 14 as well as inner neurons 16 in accordance with the present invention. In conventional back propagation techniques, equation (3) is used only for the output neurons 14, and the error signal $\delta_j$ is computed for inner neurons 16 without use of the desired output $d_j$ in the calculation.
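A minimal sketch of equation (3) as the PPNN applies it, assuming NumPy arrays; unlike conventional back propagation, the same function would be called for inner neurons as well as output neurons (the names are hypothetical):

```python
import numpy as np

def ppnn_delta(y_actual, d_desired):
    # Equation (3), applied identically to output and inner neurons:
    # delta_j = (d_j - y_j) * y_j * (1 - y_j)
    return (d_desired - y_actual) * y_actual * (1.0 - y_actual)
```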
From equation (2) it can be seen that the learning rate $\eta$ will affect how much the weights are changed each time the error signal $\delta_j$ is propagated. The larger $\eta$, the larger the changes in the weights and the faster the learning rate. If, however, the learning rate is made too large, the system can oscillate. Oscillation can be avoided even with large learning rates by using a momentum term $\alpha$. For example:
(4)   $\Delta w_{jk}(n+1) = \eta\, \delta_j\, y_k + \alpha\, \Delta w_{jk}(n)$

where $0 < \alpha < 1$ and $\Delta w_{jk}(n)$ is the value of the previous weight change for that synaptic connection 18. The constant $\alpha$ determines the effect of past weight changes on the current direction of movement in weight space, providing a kind of momentum in weight space that effectively filters out high frequency variations.
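A minimal sketch of equation (4), assuming NumPy arrays and illustrative values for the learning rate and momentum constant (not taken from the patent):

```python
import numpy as np

def momentum_update(delta, y_prev, prev_dw, eta=0.5, alpha=0.9):
    # Equation (4): delta_w(n+1) = eta * delta_j * y_k + alpha * delta_w(n), 0 < alpha < 1.
    return eta * np.outer(delta, y_prev) + alpha * prev_dw
```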
FIG. 4 is an illustration of an alternative embodiment of the PPNN 30 having only a single layer of inner neurons 16. In addition, there is a different number of output neurons 14 than inner neurons 16. In this case, more than one inner neuron 16 may be presented with the portion of the desired output 22 that is presented to a given output neuron 14. A summary of the PPNN 30 training algorithm is shown in FIG. 3. First, the weights w and neuron offsets, such as θ, are set to small random values (Step 34). The training signal 20 is then presented to the input neurons 12 (Step 36). After the training signal 20 is propagated through each layer of neurons, an output value is generated for each output neuron as a result of the outputs and weights of all the neurons. Next, the actual output is compared to the desired output 22 for each output neuron 14, and the error signal δ in equation (3) is computed (Step 40). The error signal is then compared to a preset threshold (Step 42). If the error is larger than the tolerance, the error signal is used to determine a new value for the weight of the connective synapse 18 for each output neuron 14 in accordance with equation (2) or (4) (Step 44).
Next, in decision diamond 46, the PPNN 30 determines if the next layer is the input layer. If it is, the training signal 20 is again presented and the weights adjusted repeatedly until the error is reduced to an acceptable level (Steps 36-44). If, in Step 46, the next layer is not the input layer, then the actual output of each neuron in the next layer is compared to the desired output and an error signal in accordance with equation (3) is generated (Step 48). Next, this error signal is used to adjust all the weights in the current layer (Step 44). Steps 44 through 48 are repeated until the input layer is reached, at which point the training steps (Steps 36 through 48) are repeated. When the error signal at the output layer is finally smaller than the preset tolerance (Step 42), the training procedure for that training signal 20 is complete (Step 50).
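The flow of FIG. 3 can be sketched end to end as follows. This is an illustrative NumPy reading of the steps, not the patent's implementation: the network size, learning rate, tolerance, and the mapping of the desired output onto a hidden layer of different width are all assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_ppnn(x, d, n_hidden=4, eta=0.5, tol=0.05, max_epochs=20000, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = x.size, d.size
    # Step 34: weights and offsets start as small random values.
    W1 = rng.uniform(-0.1, 0.1, (n_hidden, n_in)); b1 = rng.uniform(-0.1, 0.1, n_hidden)
    W2 = rng.uniform(-0.1, 0.1, (n_out, n_hidden)); b2 = rng.uniform(-0.1, 0.1, n_out)
    # Assumed mapping of the desired output onto a hidden layer of different width
    # (cf. FIG. 4): each inner neuron is given one element of the desired output.
    d_hidden = d[np.arange(n_hidden) % n_out]
    for _ in range(max_epochs):
        # Steps 36-38: present the training signal and propagate it through the layers.
        y_hid = sigmoid(W1 @ x + b1)
        y_out = sigmoid(W2 @ y_hid + b2)
        # Step 40: error signal at the output layer, equation (3).
        delta_out = (d - y_out) * y_out * (1.0 - y_out)
        # Steps 42 and 50: stop once the output error is within the preset tolerance.
        if np.max(np.abs(d - y_out)) < tol:
            break
        # Step 44: adjust the output-layer weights, equation (2).
        W2 += eta * np.outer(delta_out, y_hid); b2 += eta * delta_out
        # Steps 46-48: the preceding layer is not the input layer, so its actual output
        # is also compared with the desired output (the PPNN step) and its weights adjusted.
        delta_hid = (d_hidden - y_hid) * y_hid * (1.0 - y_hid)
        W1 += eta * np.outer(delta_hid, x); b1 += eta * delta_hid
    return W1, b1, W2, b2

# Example: one training signal and a binary-style target, using 0.1/0.9 as the text suggests.
x = np.array([0.1, 0.9, 0.3, 0.7])
d = np.array([0.9, 0.1, 0.9])
W1, b1, W2, b2 = train_ppnn(x, d)
```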
The PPNN 30 may then be retrained with a new training signal. Once training for all the training signals is complete, an unknown signal may be presented to the input neurons 12 (Step 52). After the signal is propagated through the network, the output neurons 14 will produce an output signal. If the training signal 20 is present in some form in the unknown signal input, the PPNN 30 will produce the desired output 22 to correctly identify or classify the input (Step 54).
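A short sketch of the recognition phase (Steps 52-54), assuming the trained weights are available as NumPy arrays; the 0.5 decision threshold is an assumption, not taken from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify(signal, W1, b1, W2, b2, threshold=0.5):
    # Steps 52-54: propagate an unknown signal through the trained, now-fixed weights
    # and read the output neurons; activations above the threshold count as "on".
    y_out = sigmoid(W2 @ sigmoid(W1 @ signal + b1) + b2)
    return (y_out > threshold).astype(int)
```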
It should be noted that beyond solving one and two dimensional problems as mentioned above, the PPNN 30 is adaptable to multi-dimensional problems such as predetection data fusion, natural language processing, real time synthetic expert (not requiring an expert) systems, multi-dimensional optimization classes of problems, and other classical pattern recognition problems including associative memory applications. It will be appreciated that the basic components of the PPNN 30 may be implemented via software, or with conventional analog or digital electrical circuits, as well as with analog VLSI circuitry. Also, optical devices may be used for some or all of the functions of the PPNN 30. An optical embodiment has been made feasible due to recent advances in such areas as holographic storage, phase conjugate optics, and wavefront modulation and mixing. In addition, once the PPNN 30 has been trained to recognize a particular waveform, the PPNN could then be reproduced an unlimited number of times by making an exact copy of the trained PPNN 30 having the same but fixed synaptic weight values as the trained PPNN 30. In this way, mass production of PPNN's 30 is possible without repeating the training process.
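For the software embodiment, the "exact copy" idea can be sketched as saving the trained weights once and loading them into fixed-weight copies; the file format and function names here are assumptions:

```python
import numpy as np

# Duplicate a trained network in software: save the learned synaptic weights once,
# then load them into any number of fixed-weight copies without repeating training.
def save_trained(path, W1, b1, W2, b2):
    # path should end in ".npz"
    np.savez(path, W1=W1, b1=b1, W2=W2, b2=b2)

def load_copy(path):
    data = np.load(path)
    return data["W1"], data["b1"], data["W2"], data["b2"]
```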
From the foregoing description it can be appreciated that the present invention provides a high speed pattern propagation neural network 30 that is capable of self-learning and can be implemented with noncomplex, low cost components and without software. It is more fault tolerant than conventional signal processors and can perform target identification in a robust manner. The PPNN 30 can be re-trained for whole new classes of targets. The learning procedure of the PPNN 30, in accordance with the present invention, decreases the training time required, and reduces the network complexity required to solve certain problems. Those skilled in the art can appreciate that other advantages can be obtained from the use of this invention and that modifications can be made without departing from the true spirit of the invention after studying the specification, drawings and following claims.

Claims

CLAIMS
What is Claimed is:
1. A neural network (30) having at least three layers of neurons (12, 14, 16) , including an input layer adapted to receive input signals (20) , one or more inner layers, and an output layer adapted to produce an output, a plurality of connective synapses (18) providing a weighted coupling between said neurons, said neural network (30) being capable of adapting to produce a desired output (22) in response to an input by changing the value of said synaptic weights (18) , characterized by: said neurons (12, 14, 16) including a means for changing said weights (14, 16) to produce said desired output during a training procedure; means for presenting said desired output to both output neurons and inner neurons during training (23) , whereby after a plurality of said training procedures, said neural network (30) will respond with said desired output (22) to a new input signal that is similar to said training input (20) .
2. The neural network (30) of Claim 1 wherein said means for changing said weights (18) further comprises: means for computing the difference between said desired output and the actual output of output neurons and inner neurons during training (12, 14, 16); and means for adjusting said weights so as to minimize the difference between said desired output and the actual output (12, 14, 16) .
3. The neural network of Claim 1 wherein said desired output (22) is a binary signal produced by selected ones of said output neurons (14).
4. The neural network of Claim 1 wherein said desired output is a continuous valued signal produced by selected ones of said output neurons.
5. The neural network of Claim 1 wherein said neurons (12, 14, 16) in a given layer produce an output that is a sigmoid nonlinear function which takes the form $y_{ij} = \frac{1}{1 + e^{-(\sum_k w_{jk}\, y_{(i-1)k} - \theta_j)}}$, where $y_{(i-1)k}$ is the output of each neuron in the previous layer to which the neuron is connected, $w_{jk}$ is the weight associated with each synapse connecting the neurons in the previous layer to the neuron in the given layer, and $\theta_j$ is a fixed threshold.
6. The neural network (30) of Claim 2 wherein said means for computing the difference between said desired output and the actual output of output neurons and inner neurons (12, 14, 16) generates an error term taking the form:

$\delta_j = (d_j - y_j)\, y_j\, (1 - y_j)$

where $y_j$ is the actual output of the neuron, and $d_j$ is the desired output for the neuron.
7. The neural network of Claim 2 wherein said means for adjusting weights to minimize the difference between said desired output and the actual output (12, 14, 16) comprises a means for adjusting synaptic weights by an amount $\Delta w$ that is calculated according to

$\Delta w = \eta\, \delta_j\, y_i$

where $\eta$ is a gain term and $y_i$ is the output of the neuron sending a signal along that synaptic connection.
PCT/US1990/004483 1989-08-11 1990-08-09 Pattern propagation neural network WO1991002322A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US39269089A 1989-08-11 1989-08-11
US392,690 1989-08-11

Publications (1)

Publication Number Publication Date
WO1991002322A1 true WO1991002322A1 (en) 1991-02-21

Family

ID=23551617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1990/004483 WO1991002322A1 (en) 1989-08-11 1990-08-09 Pattern propagation neural network

Country Status (3)

Country Link
EP (1) EP0438573A1 (en)
JP (1) JPH04501327A (en)
WO (1) WO1991002322A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4208727A1 (en) * 1991-03-25 1992-10-01 Atr Interpreting Telephony Res Learning process for neural network - comparing sample signals and categorising for use in backward learning process
WO1997004400A1 (en) * 1995-07-24 1997-02-06 The Commonwealth Of Australia Selective attention adaptive resonance theory
DE19653554A1 (en) * 1996-12-20 1998-06-25 Siemens Nixdorf Advanced Techn Neural network training method
EP0782083A3 (en) * 1995-12-27 1998-12-23 Kabushiki Kaisha Toshiba Data processing system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10546242B2 (en) 2017-03-03 2020-01-28 General Electric Company Image analysis neural network systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
IEEE First International Conference on Neural Networks, San Diego, California, 21-24 July 1987, A.D. McAULAY: "Engineering Design Neural Networks using Split Inversion Learning", pages IV-635-IV-642 see page IV-637, lines 6-32; page IV-638, lines 1-25; page IV-639, lines 18-28; figure 1 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4208727A1 (en) * 1991-03-25 1992-10-01 Atr Interpreting Telephony Res Learning process for neural network - comparing sample signals and categorising for use in backward learning process
WO1997004400A1 (en) * 1995-07-24 1997-02-06 The Commonwealth Of Australia Selective attention adaptive resonance theory
EP0782083A3 (en) * 1995-12-27 1998-12-23 Kabushiki Kaisha Toshiba Data processing system
US5983210A (en) * 1995-12-27 1999-11-09 Kabushiki Kaisha Toshiba Data processing system, system-build system, and system-build method
DE19653554A1 (en) * 1996-12-20 1998-06-25 Siemens Nixdorf Advanced Techn Neural network training method

Also Published As

Publication number Publication date
EP0438573A1 (en) 1991-07-31
JPH04501327A (en) 1992-03-05

Similar Documents

Publication Publication Date Title
US5003490A (en) Neural network signal processor
Singh et al. A study on single and multi-layer perceptron neural network
US5150323A (en) Adaptive network for in-band signal separation
Pal et al. Multilayer perceptron, fuzzy sets, classifiaction
US5402522A (en) Dynamically stable associative learning neural system
Uhrig Introduction to artificial neural networks
US6038338A (en) Hybrid neural network for pattern recognition
US5588091A (en) Dynamically stable associative learning neural network system
EP0591415A1 (en) Sparse comparison neural network
Uwechue et al. Human face recognition using third-order synthetic neural networks
WO1990014631A1 (en) Dynamically stable associative learning neural system
GB2245401A (en) Neural network signal processor
WO1991002323A1 (en) Adaptive network for classifying time-varying data
WO2005022343A2 (en) System and methods for incrementally augmenting a classifier
CA2002681A1 (en) Neural network signal processor
WO1991002322A1 (en) Pattern propagation neural network
JPH0581227A (en) Neuron system network signal processor and method of processing signal
Arbib The artificial neuron
KR20210146002A (en) Method and apparatus for training multi-layer spiking neural network
AU620959B2 (en) Neural network signal processor
Namarvar et al. The Gauss-Newton learning method for a generalized dynamic synapse neural network
Yu et al. Pattern classification and recognition based on morphology and neural networks
Munakata Neural networks: Fundamentals and the backpropagation model
Ritter et al. Noise tolerant dendritic lattice associative memories
Zaknich Introduction to Neural Networks

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB IT LU NL SE

WWE Wipo information: entry into national phase

Ref document number: 1990912378

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1990912378

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1990912378

Country of ref document: EP