EP0411341A2 - Neural network - Google Patents

Neural network

Info

Publication number
EP0411341A2
Authority
EP
European Patent Office
Prior art keywords
neural
inductance
processing system
outputting
data processing
Prior art date
Legal status
Withdrawn
Application number
EP90112956A
Other languages
German (de)
French (fr)
Other versions
EP0411341A3 (en)
Inventor
Sunao Takatori (c/o Yozan Inc.)
Ryohei Kumagai (c/o Yozan Inc.)
Koji Matsumoto (c/o Yozan Inc.)
Makoto Yamamoto (c/o Yozan Inc.)
Current Assignee
Yozan Inc
Sharp Corp
Original Assignee
Yozan Inc
Sharp Corp
Priority date
Filing date
Publication date
Application filed by Yozan Inc and Sharp Corp
Publication of EP0411341A2
Publication of EP0411341A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Abstract

A data processing system including a plurality of neural layers, characterized in that each neural layer is divided into a plurality of groups and that neurons in one group of one layer are connected only with neurons in the corresponding group of adjacent layers, whereby independent neural cells are constructed, each of which comprises the corresponding groups of neurons.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a data processing system based on the concept of a neural network.
  • A neural network for such a data processing system is organized by arranging neuron models (hereafter called "neurons") 1, shown in Fig.11, in parallel as shown in Fig.12.
  • Output data DO is determined by comparing the sum of the weighted input data with threshold ϑ. Data input from outside, DI1, DI2, DI3, ... DIn, are multiplied by weights W1, W2, W3, ... Wn, respectively. Various comparison schemes are possible; for example:
    output data DO becomes "1" when the sum is greater than or equal to threshold ϑ, and
    output data DO becomes "0" when the sum is smaller than threshold ϑ.
  • A neural network is constructed by connecting neural layers in series, while each neural layer is constructed by arranging neurons in parallel. Conventionally, there was no established theory for the construction of a neural network. Usually, following the perceptron proposed by Rosenblatt, a neural network is constructed with 3 layers, each consisting of as many neurons as the number of input data.
  • Therefore, the correlation between the data processing to be performed by a neural network and the structure of that network has never been clarified. Whether a constructed neural network can accomplish the expected objective cannot be evaluated until the network is actually tested.
  • As for a neuron, which is the component of the neural network, it is not so difficult to realize the McCulloch-Pitts model by a digital circuit. The McCulloch-Pitts model outputs a normalized digital signal, as shown in the following formula:
    φ[ ( Σi wi Ai ) - ϑ ]
    where, φ: normalizing function,
    wi: weight of the i-th synapse,
    Ai: input to the i-th synapse,
    ϑ: threshold of the neuron.
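  • For illustration only, the thresholded weighted sum above can be sketched in a few lines of Python; the weights, inputs and threshold in this sketch are assumed example values, not values taken from the patent:

    # Minimal sketch of the McCulloch-Pitts neuron described above.
    def mcculloch_pitts(inputs, weights, threshold):
        """Return 1 if the weighted sum reaches the threshold, else 0."""
        s = sum(w * a for w, a in zip(weights, inputs))
        return 1 if s >= threshold else 0

    # Example: a 2-input AND gate realized as a single neuron.
    print(mcculloch_pitts([1, 1], [1, 1], 1.5))  # -> 1
    print(mcculloch_pitts([1, 0], [1, 1], 1.5))  # -> 0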
  • However, rather large circuits are necessary for calculations including multiplications, so it is difficult to construct a large-scale neural network within the circuitry limits of ICs.
  • Attempts to construct neural networks from analog circuits are suggested in United States Patents No. 4,660,166, No. 4,719,591 and No. 4,731,747.
  • The neural networks disclosed in these patents are structured to be controlled by means of variable resistances at the inputs of operational amplifiers, each of which is connected to all other amplifiers in order to evaluate the energy formula below:
    E = -(1/2) Σi Σj Tij Vi Vj - Σi Ii Vi
    where Tij is the connection strength between amplifiers i and j, Vi the output of amplifier i, and Ii its external input (the standard Hopfield energy function).
  • The neural network suggested there is effective for calculating the minima and maxima of variables given by functions equivalent to the formula above. It is used, for example, to solve the traveling salesman problem.
  • Such a neural network lacks, however, the function of outputting a digital value with normalization, that is, compression against a threshold. It cannot realize the function of an organism's neural network for information compression, integration and approximation. Therefore, the most advantageous characteristics of a neural network cannot be obtained by the above construction; that is, the improvement and associative conversion of an input pattern cannot be performed.
  • SUMMARY OF THE INVENTION
  • The present invention solves the above problems of the prior art and has an object to provide a data processing system capable of executing the intended data processing without fail.
  • The present invention has a further object to provide a data processing system as an integrated circuit with a normalizing function.
  • A data processing system according to the present invention is characterized in that:
    the number of neural layers is equal to the number corresponding to the abstraction difference between output data and input data, and
    at least 2 neural layers are divided into a plurality of neural cells, each of which includes at least one neuron, and a neuron included in one neural cell is not connected to neurons of other neural cells.
  • A data processing system according to the present invention includes a resonance system for inputting which is connected to a switching circuit and is driven by a plurality of resonance systems for outputting. It realizes the function of normalization through the breaking characteristic of the switching circuit.
  • BRIEF DESCRIPTION OF THE DRAWINGS
    • Fig.1 shows a schematic diagram of an embodiment of a data processing system including neural cells, according to the present invention;
    • Fig.2 shows a schematic diagram of the layers of the above neural cell;
    • Fig.3 shows a schematic diagram indicating the structure of a neural cell and the structure of a neural layer for the following processing;
    • Fig.4 shows a schematic diagram indicating the edge extraction processing system in the above neural cell;
    • Fig.5 shows a schematic diagram indicating the corner extraction processing system in the above neural cell;
    • Fig.6 shows a schematic diagram indicating an example of a modification of Fig.4;
    • Fig.7 shows a schematic diagram indicating the synthetic processing system at the stage following the neural processing systems;
    • Fig.8 shows a schematic diagram indicating a pseudo-corner of a figure;
    • Fig.9 shows a schematic diagram indicating the relationship between neurons by their two-dimensional location;
    • Fig.10 shows a graph indicating the relationship between neurons by distance and weight;
    • Fig.11 shows a schematic diagram indicating a neuron model;
    • Fig.12 shows a schematic diagram indicating an example of a neural layer;
    • Fig.13 shows the circuit with the structure of basic unit in the first embodiment of a data processing system according to the present invention;
    • Fig.14 is a diagram showing voltage between base and emitter of transistor in the first embodiment;
    • Fig.15 is a diagram showing current of emitter of transistor in the first embodiment;
    • Fig.16 is a diagram showing induced electromotive force of closed circuit for outputting in the first embodiment;
    • Fig.17 is a diagram showing a connection of normalizing circuits;
    • Fig.18 is a perspective view of inductance in an IC;
    • Fig.19 is a perspective view of another inductance;
    • Fig.20 shows a circuit of a form of variable resistance in an IC.
    Preferred Embodiment of the Present Invention
  • Hereinafter, an embodiment of the data processing system according to the present invention is described with referring to the attached drawings.
  • As shown in Fig.1, the data processing system consists of a plurality of neural cells NC, each of which is formed into a hexagon so that a honeycomb structure is constructed as a whole.
  • In general, an image is processed after it is divided into a plurality of unit areas. Here, a unit area is a square or rectangular area, such as a 3 x 3 pixel area. The form of such a unit area results from the performance of hardware that scans an image along horizontal scan lines. The most preferable form of unit area, however, is one whose periphery touches adjacent unit areas under equivalent conditions. The honeycomb structure provides the best processing condition in this respect. Setting up such a preferable unit area is possible because the structure of the neural layers can be set up independently from the hardware for scanning an image. Accordingly, processing can be optimized by setting up the unit area on the neural network side rather than in the input system.
  • Binary data is input to the data processing system through an input system (not shown). As shown in Fig.1, a triangular figure F is input to the data processing system.
  • Each neural cell is composed of a plurality of neural layers (Fig.2), each of which is constructed by arranging a plurality of neurons N in parallel. The structure of a neural cell is shown in Fig.2. In this embodiment, a neuron NR of each neural layer is connected with all neurons of the adjacent neural layer. Input data is processed by the neural layers successively so as to obtain the final output, the output of the nth neural layer being the input of the (n+1)th neural layer. Fig.3 schematically shows neural cells of neural layers 11, 12, 13 and 14 and a typical relationship between a neural cell and the following neural layer 20. In Fig.3, each of neural layers 11, 12, 13 and 14 is divided into hexagonal neural cells NC. That is, a neural cell spans a plurality of neural layers 11, 12, 13 and 14, and a large number of neurons are included in each neural layer. Neurons are connected with one another only within a neural cell; neurons belonging to different neural cells are never connected with each other. Accordingly, the neural cells NC are not connected with one another, so data transmission is performed only inside each neural cell, individually. A neural cell NC spanning only 2 neural layers is also acceptable. A connectivity sketch is given below.
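  • In matrix terms this cell restriction makes every layer-to-layer weight matrix block-diagonal. A minimal sketch, with assumed cell sizes that are not taken from the patent:

    # Block-diagonal 0/1 mask: neurons connect from one layer to the
    # next only within their own neural cell.
    import numpy as np

    def cell_mask(cell_sizes):
        """Return an n x n mask that zeroes all cross-cell weights."""
        n = sum(cell_sizes)
        mask = np.zeros((n, n))
        start = 0
        for size in cell_sizes:
            mask[start:start + size, start:start + size] = 1.0
            start += size
        return mask

    # Three cells of 4 neurons each; cross-cell weights are forced to 0.
    mask = cell_mask([4, 4, 4])
    weights = np.random.randn(12, 12) * mask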
  • A data processing system can be furnished with the intended processing function by learning. Hereinafter, the performance of simple recognition of a geometrical figure is described.
  • An embodiment of a neural layer for performing edge extraction is shown in Fig.4. Neurons A to I, corresponding to a 3x3 convolution, are shown in Fig.4; each neuron outputs "1" for an input with a high brightness value. When the outputs of neurons A to I are denoted A to I, the existence of edges, excluding isolated points, can be described by the following logical formula (¬X denoting the complement of X):
    E (A+B+C+D+F+G+H+I) (¬A+¬B+¬C+¬D+¬F+¬G+¬H+¬I) = 1      (1)
  • Outputs A to I are input to neuron N1, and outputs A to D and F to I are input to neuron N2. Then, the outputs from neurons N1 and N2 are input to neuron N3. Here, the weights and thresholds for neurons N1 to N3 are set up as shown in Tables 1 to 3, as one example. Table 1
    Weight and threshold for neuron N1
    Weight with respect to output A W1A -1
    Weight with respect to output B W1B -1
    Weight with respect to output C W1C -1
    Weight with respect to output D W1D -1
    Weight with respect to output E W1E 9
    Weight with respect to output F W1F -1
    Weight with respect to output G W1G -1
    Weight with respect to output H W1H -1
    Weight with respect to output I W1I -1
    Threshold ϑ1 0.5
    Table 2
    Weight and threshold for neuron N2
    Weight with respect to output A W2A 1
    Weight with respect to output B W2B 1
    Weight with respect to output C W2C 1
    Weight with respect to output D W2D 1
    Weight with respect to output E W2E 1
    Weight with respect to output F W2F 1
    Weight with respect to output G W2G 1
    Weight with respect to output H W2H 1
    Weight with respect to output I W2I 1
    Threshold ϑ2 0.5
    Table 3
    Weight and threshold for neuron N3
    Weight with respect to output N1 W3N1 1
    Weight with respect to output N2 W3N2 1
    Threshold ϑ 1.5
  • Here, neuron N1 performs the processing according to: E (¬A + ¬B + ¬C + ¬D + ¬F + ¬G + ¬H + ¬I)      (2)
  • Neuron N2 performs the processing according to: A + B + C + D + F + G + H + I      (3)
  • On the other hand, neuron N3 performs the AND of the above formulas (2) and (3).
  • Therefore, neuron N3 outputs "1" when the edge of a figure falls on neuron E. A runnable sketch of this stage follows.
  • Hereinafter, a neural network which performs corner extraction is described according to Fig.5. The input of this neural network is the output of neuron N3 of Fig.4. In Fig.5, the output of neuron N3 corresponding to each of neurons A to I is denoted A′ to I′. The logical formula (4) for extracting a corner is as follows:
    E′ (A′B′ + A′C′ + A′D′ + A′F′ + A′G′ + A′H′ + B′C′ + B′D′ + B′F′ + B′G′ + B′I′ + C′D′ + C′F′ + C′H′ + C′I′ + D′G′ + D′H′ + D′I′ + F′G′ + F′H′ + F′I′ + G′H′ + G′I′ + H′I′) = 1      (4)
  • Neurons N401 to N424 and N425 are prepared for this processing. Outputs A′ to D′ and F′ to I′ are input to neurons N401 to N424 in the combinations shown in Table 4; the weights and thresholds for these inputs are as follows: Table 4
    Neuron Input Weight Threshold
    N401 A′,B′ 1 1.5
    N402 A′,C′ 1 1.5
    N403 A′,D′ 1 1.5
    N404 A′,F′ 1 1.5
    N405 A′,G′ 1 1.5
    N406 A′,H′ 1 1.5
    N407 B′,C′ 1 1.5
    N408 B′,D′ 1 1.5
    N409 B′,F′ 1 1.5
    N410 B′,G′ 1 1.5
    N411 B′,I′ 1 1.5
    N412 C′,D′ 1 1.5
    N413 C′,F′ 1 1.5
    N414 C′,H′ 1 1.5
    N415 C′,I′ 1 1.5
    N416 D′,G′ 1 1.5
    N417 D′,H′ 1 1.5
    N418 D′,I′ 1 1.5
    N419 F′,G′ 1 1.5
    N420 F′,H′ 1 1.5
    N421 F′,I′ 1 1.5
    N422 G′,H′ 1 1.5
    N423 G′,I′ 1 1.5
    N424 H′,I′ 1 1.5
  • The outputs of N401 to N424 are input to N425; the weight and threshold for this stage are shown in Table 5. Table 5
    Neuron Weight Threshold
    N401 ∼ N424 1 1.5
  • This is equivalent to OR logic.
  • Furthermore, the output of neuron N425 and E′ are input to neuron N426; the weights and threshold are shown in Table 6. Table 6
    Neuron Weight Threshold
    N425 1 1.5
    E′ 1
  • This is equivalent to AND logic.
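  • The two-stage logic of Tables 4 to 6 can be sketched as follows. Note that the four "opposite" pairs A′I′, B′H′, C′G′ and D′F′, which indicate a straight line through E, are deliberately absent from Table 4. One assumption: Table 5 prints threshold 1.5, but since N425 is stated to be equivalent to OR logic, 0.5 is used here:

    # Corner-extraction stage of Fig.5: pairwise ANDs, an OR, then
    # an AND with the center edge signal E'.
    PAIRS = ["AB", "AC", "AD", "AF", "AG", "AH", "BC", "BD", "BF",
             "BG", "BI", "CD", "CF", "CH", "CI", "DG", "DH", "DI",
             "FG", "FH", "FI", "GH", "GI", "HI"]

    def step(s, theta):
        return 1 if s >= theta else 0

    def corner_at_center(e):
        """e: dict mapping 'A'..'I' to the 0/1 edge outputs of Fig.4."""
        n4 = [step(e[p[0]] + e[p[1]], 1.5) for p in PAIRS]  # Table 4 ANDs
        n425 = step(sum(n4), 0.5)                           # OR over pairs
        return step(n425 + e['E'], 1.5)                     # AND with E'

    # An edge turning at E (in from F, out through H) is a corner:
    print(corner_at_center(dict(A=0, B=0, C=0, D=0, E=1, F=1, G=0, H=1, I=0)))  # -> 1
    # A straight edge D-E-F is not (the pair D'F' is excluded):
    print(corner_at_center(dict(A=0, B=0, C=0, D=1, E=1, F=1, G=0, H=0, I=0)))  # -> 0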
  • Although the setup of the above weights is performed automatically by the learning of the data processing system, and an optimized association can be obtained by appropriate learning, a high neuron efficiency can be obtained by deliberately choosing the layer into which each data item is input. For example, a neural network with the same function can be constructed (Fig.6) by inputting data E′ to the first neural layer, together with A′ to D′ and F′ to I′, of the neural network in Fig.5. In this case the number of layers can be reduced by 1; on the other hand, the number of connection lines increases greatly, as the number of synapses of the first neural layer increases by 24. Since input E′ in formula (4) acts on the result of the logic operation (A′B′ + ...), input E′ can be considered to have the same abstraction level as the result inside the parentheses. Therefore, neuron efficiency can be improved by inputting each data item into the layer corresponding to its abstraction level.
  • Here, the concept of degree, corresponding to the abstraction level of data, is introduced with the following definitions:
    • 1. The processing of a single neural layer raises the degree by 1.
    • 2. Inputs to the same neural layer are of the same degree.
  • According to the above definitions, 4 layers (when the constructions of Fig.4 and Fig.6 are adopted) or 5 layers (when the constructions of Fig.4 and Fig.5 are adopted) are necessary for corner extraction, and the degree of the final output becomes 4 or 5 when the degree of the image data (the input to neurons A to I) is taken to be 0.
  • According to the above processing, a determination of whether corners and edges have been extracted is output for each neural cell. As shown in Fig.7, this determination output is input to neural layer group INT for unification. The data processed at this neural layer group INT is of a higher degree than that of said neural cells. Neural layer group INT is composed of plural processing systems P1 to Pn, and each processing system is constructed from a plural number of neural layers. Processing systems P1 to Pn are classified by the shape of a figure, for example: P1: triangle, P2: rectangle, ..., Pn: a polygon with (n+2) corners.
  • The output from each neural cell is a signal showing the existence of edges (hereinafter called "ES") and corners (hereinafter called "CS") inside the neural cell. A pseudo-corner X1 or X2 may appear on a side of a figure, as shown in Fig.8, due to noise or to tolerances of the figure itself, even for a single triangle.
  • The processing system for a triangle removes such false corner signals and enhances the actual corners A, B and C, so as to output their coordinate values. An embodiment of a neural network which performs the enhancement of corners A, B and C and the removal of pseudo-corners is shown in Fig.9.
  • Neurons corresponding to each neural cell are included in the neural network of Fig.9. Each neuron is connected to all neural cells NC, like neuron 21 in Fig.3. The weight of each neuron is:
    maximal for the input CS of the corresponding neural cell,
    negative with the largest absolute value for the neural cells immediately surrounding the corresponding neural cell, and
    increasing (toward zero) as the distance from the corresponding neural cell becomes larger. This relationship is shown in two dimensions in Fig.10.
  • According to the above structure, the corner signal of a pseudo-corner appearing during processing is weakened, and the corner signals of corners A, B and C are enhanced. The graph in Fig.10 shows a second-degree approximation curve of the relationship between distance and weight. Needless to say, any monotonically increasing curve may also be adopted; a sketch of such a profile follows.
  • In the Perceptrons introduced by Rosenblatt, a structure for performing edge enhancement of a firing pattern is proposed, by giving a control-type connection to the feedback path from the reaction layer to the integration layer. However, there is no suggestion whatsoever of a feedforward control-type connection for the corner enhancement of a specific figure.
  • The data given to each processing system of neural layer group INT is data abstracted in the neural cells and can be said to be data of a high degree. It is possible to input data to neural layer group INT directly when an image processing system capable of extracting corner and edge data is used as the input system. When image data and characteristic data (corners, for example) are mixed, neuron efficiency can be heightened by inputting the high-degree data, such as corners, into a later neural layer. The minimum number of neural layers needed to achieve the expected results can be calculated as the difference of degrees between the final data to be output and the first data input to the neural layers.
  • Therefore, the present invention has advantages below.
    • (1) A predetermined processing is reliably performed according to the following principle: the higher the abstraction of the output data, the larger the number of neural layers.
    • (2) The neural cells are separate, and data input to a neural cell is output after being processed in parallel; that is, each cell's data is processed independently of the others. Therefore, a predetermined processing is reliably performed, because heterogeneous data is never processed by one processing system; the input data of the data processing system is processed individually after being classified with respect to its sort or character.
  • Hereinafter, an embodiment of the data processing system according to the present invention is described with reference to the attached drawings. The data processing system has as its basic unit the circuit shown in Fig. 13. The circuit in Fig. 13 comprises normalizing circuit 101 and closed circuits 102 and 102′ for outputting. Normalizing circuit 101 is a resonance system comprising a closed circuit 103 for inputting, inductance LI for inputting, capacitance CI for outputting and resistance RI. Induced electromotive force EI is generated in closed circuit 103 for inputting by mutual induction between closed circuit 103 and closed circuits 102. Capacitance CI for outputting is charged by the induced current II generated by the induced electromotive force EI in closed circuit 103; consequently, a potential difference appears across the terminals of CI. The switching circuit includes a transistor TR, whose base and collector are connected to the opposite terminals of capacitance CI, respectively. When the forward voltage Vbe between the base and emitter of transistor TR exceeds the breaking range of TR, TR becomes conductive. A collector voltage Vcc is applied to the collector of TR, and the emitter of transistor TR is earthed through inductance 105 for outputting. Closed circuit 103 for inputting and transistor TR are connected with each other through diodes D1 and D2, which commutate in the forward direction so as to protect transistor TR from reverse bias. When transistor TR is conductive, emitter current Ie flows through inductance 105 for outputting.
  • Inductance 105 is magnetically coupled with a plurality of closed circuits 102′ for outputting. Induced electromotive force EO is generated in each closed circuit 102′ by mutual induction between inductance 105 and each closed circuit 102′.
  • As shown in Fig. 13, closed circuits 102 and 102′ comprise inductance LOI for inputting, inductance LOO for outputting and variable resistance r. The induced electromotive force EO is generated by mutual induction between inductance LOI for inputting and inductance 105 for outputting of normalizing circuit 101. Output current IO is generated by the induced electromotive force EO in closed circuit 102′ for outputting; simultaneously, the induced electromotive force EI is generated by mutual induction between inductance LOO for outputting and inductance LI for inputting in closed circuit 103 for inputting. The potential difference Vci generated in capacitance CI by the induced current II is as follows:
    Vci = (1/CI) ∫ II dt      (1)
    The electromotive force generated in each closed circuit 102 for outputting is denoted Vi (i = 1 to n) and is assumed to have the form of formula (2):
    Vi = Voi sin(ωt + Si)      (2)
    where ω is the resonance angular frequency common to all closed circuits 102 and 102′ for outputting and to closed circuit 103 for inputting. That is, all closed circuits have a common resonance frequency. Because of this common resonance frequency, attenuation is minimized during the transmission of the electrical signal from closed circuits 102 for outputting to normalizing circuit 101 and from normalizing circuit 101 to closed circuits 102′ for outputting. Denoting by ri the resistance value of variable resistance r in the i-th closed circuit for outputting, the current IOi generated in the closed circuit for outputting can be calculated from formula (2) through the circuit equation (3):
    (LOI + LOO) dIOi/dt + ri IOi = Vi      (3)
    IOi = {Voi/√(ri² + ω²(LOI + LOO)²)} sin(ωt - ϑi)      (4)
    ϑi = -tan⁻¹ {ω(LOI + LOO)/ri}      (5)
  • Induced electromotive force EI in closed circuit 103 for inputting is expressed by formula (6), where M is the mutual inductance between inductance LOO for outputting in closed circuit 102 for outputting and inductance LI for inputting in closed circuit 103 for inputting, and Γ = Co εo µo (Co: speed of light in vacuum, εo: dielectric constant of vacuum in rationalized units, µo: magnetic permeability of vacuum in rationalized units).
    [formula (6), reproduced only as an image in the original]
    Therefore, the current of formula (9) is generated in the closed circuit for inputting.
    [formula (9), reproduced only as an image in the original]
    Substituting (9) into (1), formula (10) is obtained.
    [formula (10), reproduced only as an image in the original]
    Accordingly, Vbe is expressed as in formula (11).
    [formula (11), reproduced only as an image in the original]
  • Supposing that Vbe has the characteristic shown in Fig. 14 (solid line) and that the breaking range of transistor TR is at the level of the broken line in Fig. 14, emitter current Ie is generated only while Vbe exceeds the broken line, as shown in Fig. 15.
  • The induced electromotive force below is generated in each closed circuit 102′ by emitter current Ie.
    [formula reproduced only as an image in the original]
    Fig. 16 shows the induced electromotive force EO produced by the emitter current of Fig. 15. A numerical sketch of this thresholding behavior follows.
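  • The normalizing (threshold) behavior of Figs. 14 to 16 can be sketched numerically; every parameter below is an assumed example value, not a value from the patent:

    # Vbe oscillates at the common resonance frequency; emitter current
    # Ie flows only while Vbe exceeds the transistor's breaking level.
    import math

    OMEGA = 2 * math.pi * 1e6   # resonance angular frequency (assumed)
    V_BREAK = 0.6               # breaking level of TR (assumed, volts)
    GAIN = 0.05                 # Ie per volt above V_BREAK (assumed)

    def vbe(t, amplitude=1.0):
        return amplitude * math.sin(OMEGA * t)

    def emitter_current(t):
        v = vbe(t)
        return GAIN * (v - V_BREAK) if v > V_BREAK else 0.0

    # Ie is a clipped, rectified copy of Vbe: zero below the breaking
    # level, rising above it -- the normalizing function of the circuit.
    for k in range(8):
        t = k / 8 * 1e-6        # one resonance period, 8 samples
        print(f"t={t:.2e}  Vbe={vbe(t):+.2f}  Ie={emitter_current(t):.4f}")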
  • As described above, normalizing circuit 101 can execute threshold processing based on its switching characteristic. As shown in Fig. 17, the outputs of a normalizing circuit 101 can be input to other normalizing circuits 101′ through the closed circuits 102 (102′) for outputting, and simultaneously the outputs of other normalizing circuits 101″ can be input to normalizing circuit 101 through the closed circuits 102 (102′) for outputting. Therefore, a network can be constructed, and the weight of a synapse in the network can be set by adjusting the variable resistor r in the closed circuit 102 (102′) for outputting.
  • When the inductances of normalizing circuit 101 and of closed circuits 102 and 102′ for outputting are constructed in an IC, the constructions of Fig. 18 or Fig. 19 can be applied.
  • In Fig. 18, conductor 108 is formed by:
    • i) forming spiral conductor 106 on the first insulator 105, and
    • ii) penetrating the first insulator 105 from the center of conductor 106, passing along the second insulator 107, and returning again onto the first insulator 105.
  • Current can be generated in conductor 106, and an inductance can be realized by such a structure.
  • In Fig. 19, conductor 110 and hooked conductor 111 are formed. Conductor 110 is formed by:
    • i) forming hooked conductor 109 on the first insulator 105, and
    • ii) penetrating the first insulator 105 from an end of conductor 109 so as to reach the second insulator 107.
    Hooked conductor 111 is connected to conductor 110. A circular circuit is constructed by hooked conductors 109 and 111 in such a structure, and an inductance can be realized. A laterally extended inductance can be realized by connecting conductors in the order 109, 111, 109′, 111′, ..., generating a plurality of hooked conductors (109, 109′, ...) on insulator 105 and a plurality of hooked conductors (111, 111′, ...) on insulator 107. Variable resistance r can be realized by the structure of Fig. 20, in which resistances are connected in parallel through diode d and a transistor.
    Advantages of the Present Invention
  • As mentioned above, the data processing system according to the present invention has the advantages of efficiency as well as the capability of achieving the expected results, since:
    the number of neural layers is equal to the number corresponding to the abstraction difference between output data and input data, and
    at least 2 neural layers are divided into a plurality of neural cells, each of which includes at least one neuron, and a neuron included in one neural cell is not connected to neurons of other neural cells; therefore, the predetermined processing is reliably performed, because heterogeneous data are never processed in one neural cell but are processed in parallel and independently from one another.
  • Furthermore, the data processing system according to the present invention drives a resonance system for inputting connected to a switching circuit, by which a resonance system for outputting is driven. Since a neural network with the function of normalization can be realized by an analog circuit, a practical number of neurons can be provided.

Claims (9)

1. A data processing system including a plurality of neural layers characterized in that each neural layer is divided into a plurality of groups and that neurons in one of said groups in one layer are connected only with neurons in the corresponding group of adjacent layers, whereby independent neural cells are constructed, each of which comprises corresponding groups of neurons.
2. A data processing system according to Claim 1, wherein each neural cell on the most input side has a hexagonal shape, whereby a honeycomb construction is formed.
3. A data processing system according to Claim 1, wherein the number of neural layers is equal to the difference of abstraction degree of input and output data.
4. A data processing system comprising:
A plurality of normalizing circuits, each of which comprises:
A closed circuit for inputting including an inductance for inputting and a capacitance for outputting, a switching circuit driven by a potential difference of said capacitance for outputting, and an inductance for outputting energized when said switching circuit is closed; and
A plurality of closed circuits for outputting, each of which comprises a variable resistance, an inductance for inputting and an inductance for outputting and has a resonance frequency substantially equal to that of said closed circuit for inputting, said inductance for outputting in said normalizing circuits being magnetically coupled with said inductance for inputting in said closed circuits for outputting, and said inductance for outputting in said closed circuits for outputting being coupled with said inductance for inputting in one of said normalizing circuits.
5. A data processing system according to Claim 4, wherein said variable resistance has a forward direction corresponding to an induced current generated by a current passing through said switching circuit, and comprises a plurality of resistances connected in parallel with one another through a switching element.
6. A data processing system according to Claim 5, wherein said switching element comprises a transistor of the same forward direction as that of said variable resistance.
7. A data processing system according to Claim 5, wherein said switching element comprises a transistor.
8. A data processing system according to Claim 4, wherein each said inductance is constructed with a spiral conductive line formed on an insulating body, one end of said spiral being led through said insulating body to another insulating body.
9. A data processing system according to Claim 4, wherein each said inductance is constructed with hooked conductive lines formed on two parallelly extending insulating bodies, said conductive lines on one of said insulating bodies being connected with said conductive lines on the other said insulating body, so that loops are formed by the conductive lines.
EP19900112956 1989-07-10 1990-07-06 Neural network Withdrawn EP0411341A3 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP178377/89 1989-07-10
JP17837789 1989-07-10
JP20722089 1989-08-10
JP207220/89 1989-08-10

Publications (2)

Publication Number Publication Date
EP0411341A2 true EP0411341A2 (en) 1991-02-06
EP0411341A3 EP0411341A3 (en) 1992-05-13

Family

ID=26498563

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19900112956 Withdrawn EP0411341A3 (en) 1989-07-10 1990-07-06 Neural network

Country Status (3)

Country Link
US (2) US5463717A (en)
EP (1) EP0411341A3 (en)
KR (1) KR910003516A (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002358523A (en) * 2001-05-31 2002-12-13 Canon Inc Device and method for recognizing and processing pattern, and image input device
US8378777B2 (en) 2008-07-29 2013-02-19 Cooper Technologies Company Magnetic electrical device
US8310332B2 (en) * 2008-10-08 2012-11-13 Cooper Technologies Company High current amorphous powder core inductor
US8941457B2 (en) * 2006-09-12 2015-01-27 Cooper Technologies Company Miniature power inductor and methods of manufacture
US9589716B2 (en) 2006-09-12 2017-03-07 Cooper Technologies Company Laminated magnetic component and manufacture with soft magnetic powder polymer composite sheets
US8466764B2 (en) 2006-09-12 2013-06-18 Cooper Technologies Company Low profile layered coil and cores for magnetic components
US7791445B2 (en) * 2006-09-12 2010-09-07 Cooper Technologies Company Low profile layered coil and cores for magnetic components
US8279037B2 (en) * 2008-07-11 2012-10-02 Cooper Technologies Company Magnetic components and methods of manufacturing the same
US9558881B2 (en) 2008-07-11 2017-01-31 Cooper Technologies Company High current power inductor
US9859043B2 (en) 2008-07-11 2018-01-02 Cooper Technologies Company Magnetic components and methods of manufacturing the same
US8659379B2 (en) 2008-07-11 2014-02-25 Cooper Technologies Company Magnetic components and methods of manufacturing the same
US10192162B2 (en) * 2015-05-21 2019-01-29 Google Llc Vector computation unit in a neural network processor

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2065280A (en) * 1933-05-13 1936-12-22 Koros Ladislaus Arrangement for deriving currents
US2538500A (en) * 1945-09-19 1951-01-16 Bess Leon Coincidence circuit
US2712065A (en) * 1951-08-30 1955-06-28 Robert D Elbourn Gate circuitry for electronic computers
US2785305A (en) * 1952-06-28 1957-03-12 Rca Corp Signal responsive circuit
US2798156A (en) * 1953-12-17 1957-07-02 Burroughs Corp Digit pulse counter
US2901605A (en) * 1953-12-18 1959-08-25 Electronique & Automatisme Sa Improvements in/or relating to electric pulse reshaping circuits
CA608373A (en) * 1954-06-08 1960-11-08 George F. Pittman, Jr. Control apparatus
US2943791A (en) * 1954-12-28 1960-07-05 Ibm Binary adder using transformer logical circuits
GB827658A (en) * 1955-04-25 1960-02-10 Eiichi Goto Improvements in or relating to electric multiplication circuits
DE1074298B (en) * 1955-08-15 1960-01-28 Sperry Rand Corporation, New York, N. Y. (V. St. A.) Logical circuit with controllable magnetic transformers
US2820897A (en) * 1955-08-29 1958-01-21 Control Company Inc Comp Universal gating package
BE558700A (en) * 1956-06-28
US2941722A (en) * 1956-08-07 1960-06-21 Roland L Van Allen Single quadrant analogue computing means
US2808990A (en) * 1956-10-31 1957-10-08 Roland L Van Allen Polarity responsive voltage computing means
US2934271A (en) * 1957-01-28 1960-04-26 Honeywell Regulator Co Adding and subtracting apparatus
GB896413A (en) * 1957-08-21 1962-05-16 Emi Ltd Improvements relating to serial adding circuits
US3021440A (en) * 1959-12-31 1962-02-13 Ibm Cryogenic circuit with output threshold varied by input current
US3250918A (en) * 1961-08-28 1966-05-10 Rca Corp Electrical neuron circuits
US3247366A (en) * 1962-05-22 1966-04-19 Gen Electric Four-quadrant multiplier
GB1089541A (en) * 1963-04-11 1967-11-01 English Electric Co Ltd Logical electric circuits
US3351773A (en) * 1963-05-31 1967-11-07 Mc Donnell Douglas Corp Electronic circuit for simulating certain characteristics of a biological neuron
DE1251379B (en) * 1963-12-06 1967-10-05 Radio Corporation oi America, New York NY (V St A) Inductive cryotron switch
US3383500A (en) * 1965-03-24 1968-05-14 Gen Magnetics Inc Analog computer circuits for multiplying, dividing and root-taking with magnetic amplifier in a feed-back loop
US3571918A (en) * 1969-03-28 1971-03-23 Texas Instruments Inc Integrated circuits and fabrication thereof
US3765082A (en) * 1972-09-20 1973-10-16 San Fernando Electric Mfg Method of making an inductor chip
JPS54182848U (en) * 1978-06-16 1979-12-25
GB2045540B (en) * 1978-12-28 1983-08-03 Tdk Electronics Co Ltd Electrical inductive device
JPS59189212U (en) * 1983-05-18 1984-12-15 株式会社村田製作所 chip type inductor
FR2558315B1 (en) * 1984-01-16 1986-06-06 Dassault Electronique ELECTRONIC PULSE AMPLIFIER DEVICE, IN PARTICULAR FOR HIGH VOLTAGE OUTPUT
US4660166A (en) * 1985-01-22 1987-04-21 Bell Telephone Laboratories, Incorporated Electronic network for collective decision based on large number of connections between signals
JPH0634236B2 (en) * 1985-11-02 1994-05-02 日本放送協会 Hierarchical information processing method
US4719591A (en) * 1985-11-07 1988-01-12 American Telephone And Telegraph Company, At&T Bell Labs. Optimization network for the decomposition of signals
US4731747A (en) * 1986-04-14 1988-03-15 American Telephone And Telegraph Company, At&T Bell Laboratories Highly parallel computation network with normalized speed of response
US4771247A (en) * 1987-09-24 1988-09-13 General Electric Company MMIC (monolithic microwave integrated circuit) low noise amplifier
US4941122A (en) * 1989-01-12 1990-07-10 Recognition Equipment Incorp. Neural network image processing system
US5195171A (en) * 1989-04-05 1993-03-16 Yozan, Inc. Data processing system
US5553196A (en) * 1989-04-05 1996-09-03 Yozan, Inc. Method for processing data using a neural network having a number of layers equal to an abstraction degree of the pattern to be processed
JP2940933B2 (en) * 1989-05-20 1999-08-25 株式会社リコー Pattern recognition method
US5371835A (en) * 1990-02-02 1994-12-06 Kabushikikaisha Wacom Inductively coupled neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB923251A (en) * 1958-05-06 1963-04-10 Bailey Meters Controls Ltd Improvements in calculating networks
US3691400A (en) * 1967-12-13 1972-09-12 Ltv Aerospace Corp Unijunction transistor artificial neuron

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
ARCHIV FUR TECHNISCHES MESSEN UND INDUSTRIELLE MESSTECHNIK no. 375, April 1967, MUNCHEN, DE pages 37 - 41; I. HORVAT ET AL.: "Genaue potentialfreie summierung der messwerte eines fernmess-systems mittels des zerhackerverfahrens", page 37, right-hand column, line 17 - page 38, left-hand column, line 23; figure 2. *
AUTOMATIC CONTROL vol. 17, no. 3, October 1962, NEW YORK, US pages 16 - 19; N. SUSSMAN: "Magnetic modulators: characteristics and applications", page 19, line 18 - line 26; figure 8. *
COMPUTER, vol. 21, no. 3, March 1988, LONG BEACH, US pages 52 - 63; J.HUTCHINSON, "Computing motion using analog and binary resistive networks", page 61, left-hand column, line 34 - line 46; figure 8. *
IEEE ACOUSTICS, SPEECH, AND SIGNAL PROCESSING MAGAZINE. April 1987, NEW YORK US pages 4 - 21; R.P. LIPPMANN: "An introduction to computing with neural nets", page 15, right column, line 34 - page 16, right-hand column, line 47; figures 14,15. *
IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS vol. 2 of 3, May 1989, PORTLAND, US pages 774 - 777; F.Y. SHIH ET AL.: "Image morphological operations by neural circuits", page 774, left-hand column, line 1 - page 777, line 14; figures 3-6. *
IEEE TRANSACTIONS ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING. vol. 36, no. 7, July 1988, NEW YORK US pages 1180 - 1190; VIDAL: "Implementing neural nets with programmable logic", page 1184, right-hand column, line 35 - line 55; figure 3. *
NEURAL NETWORKS vol. 1, no. 2, 1988, ELMSFORD, US pages 119 - 130; K. FUKUSHIMA: "Neocognitron : a hierarchical neural network capable of visual pattern recognition", page 123, right-hand column, line 8 - page 124, left-hand column, line 6; figure 6. *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5371835A (en) * 1990-02-02 1994-12-06 Kabushikikaisha Wacom Inductively coupled neural network
EP0443208A2 (en) * 1990-02-20 1991-08-28 Kabushiki Kaisha Wacom Inductively coupled neural network
EP0443208A3 (en) * 1990-02-20 1991-10-16 Kabushiki Kaisha Wacom Inductively coupled neural network
US5297232A (en) * 1991-10-30 1994-03-22 Westinghouse Electric Corp. Wireless neural network and a wireless neural processing element

Also Published As

Publication number Publication date
US5463717A (en) 1995-10-31
EP0411341A3 (en) 1992-05-13
KR910003516A (en) 1991-02-27
US5664069A (en) 1997-09-02

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): BE DE FR GB IT NL SE

17P Request for examination filed

Effective date: 19911204

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): BE DE FR GB IT NL SE

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: YOZAN INC.

Owner name: SHARP KABUSHIKI KAISHA

17Q First examination report despatched

Effective date: 19950127

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19990202