WO2012006468A1 - Methods and systems for replaceable synaptic weight storage in neuro-processors - Google Patents

Methods and systems for replaceable synaptic weight storage in neuro-processors

Info

Publication number
WO2012006468A1
Authority
WO
WIPO (PCT)
Prior art keywords
neuro
processor chip
weights
removable memory
memory
Prior art date
Application number
PCT/US2011/043254
Other languages
French (fr)
Inventor
Vladimir Aparin
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Priority to JP2013518841A priority Critical patent/JP2013534017A/en
Priority to CN201180033657.2A priority patent/CN102971754B/en
Priority to KR1020137003298A priority patent/KR101466251B1/en
Priority to EP11733755.0A priority patent/EP2591449A1/en
Publication of WO2012006468A1 publication Critical patent/WO2012006468A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means


Abstract

Certain embodiments of the present disclosure support techniques for storing synaptic weights separately from a neuro-processor chip into a replaceable storage. The replaceable synaptic memory gives a unique functionality to the neuro-processor and improves its flexibility for supporting a large variety of applications. In addition, the replaceable synaptic storage can provide more choices for the type of memory used, and might decrease the area and implementation cost of the overall neuro-processor chip.

Description

METHODS AND SYSTEMS FOR REPLACEABLE SYNAPTIC WEIGHT STORAGE IN NEURO-PROCESSORS
BACKGROUND
Field
[0001] Certain embodiments of the present disclosure generally relate to neural system engineering and, more particularly, to a method for storing synaptic weights separately from a neuro-processor chip into replaceable storage.
Background
[0002] Functionality of a neuro-processor depends on synaptic weights, which control strengths of connections between neurons. The synaptic weights are typically stored in non-volatile, on-chip memory in order to preserve the processor functionality after being powered down.
[0003] Having this memory on the same chip with the neuro-processor limits the neuro-processor functionality and flexibility. In addition, the on-chip synaptic memory limits choices for the type of non-volatile memory that can be utilized and increases the area and implementation cost of the overall chip.
SUMMARY
[0004] Certain embodiments of the present disclosure provide an electrical circuit. The electrical circuit generally includes a neuro-processor chip with a plurality of neuron circuits and synapses, wherein each synapse connects a pair of the neuron circuits, and a removable memory connected to the neuro-processor chip storing weights of the synapses, wherein the weights determine a function of the neuro-processor chip.
[0005] Certain embodiments of the present disclosure provide a method for implementing a neural system. The method generally includes using a removable memory to store weights of synapses, wherein each synapse connects two of a plurality of neuron circuits of a neuro-processor chip, and wherein the weights determine a function of the neuro-processor chip, and connecting the removable memory to the neuro-processor chip.
[0006] Certain embodiments of the present disclosure provide an apparatus for implementing a neural system. The apparatus generally includes means for using a removable memory to store weights of synapses, wherein each synapse connects two of a plurality of neuron circuits of a neuro-processor chip, and wherein the weights determine a function of the neuro-processor chip, and means for connecting the removable memory to the neuro-processor chip.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] So that the manner in which the above-recited features of the present disclosure can be understood in detail, a more particular description, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only certain typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the description may admit to other equally effective embodiments.
[0008] FIG. 1 illustrates an example neural system in accordance with certain embodiments of the present disclosure.
[0009] FIG. 2 illustrates an example of a neuro-processor interfaced with an external synaptic weight memory in accordance with certain embodiments of the present disclosure.
[0010] FIG. 3 illustrates example operations for implementing the synaptic weight memory external to the neuro-processor in accordance with certain embodiments of the present disclosure.
[0011] FIG. 3A illustrates example components capable of performing the operations illustrated in FIG. 3.
[0012] FIG. 4 illustrates examples of non-volatile memories that may be used for implementing the external synaptic weight memory in accordance with certain embodiments of the present disclosure.
DETAILED DESCRIPTION
[0013] Various embodiments of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of the disclosure is intended to cover any embodiment of the disclosure disclosed herein, whether implemented independently of or combined with any other embodiment of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the embodiments set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various embodiments of the disclosure set forth herein. It should be understood that any embodiment of the disclosure disclosed herein may be embodied by one or more elements of a claim.
[0014] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0015] Although particular embodiments are described herein, many variations and permutations of these embodiments fall within the scope of the disclosure. Although some benefits and advantages of the preferred embodiments are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses or objectives. Rather, embodiments of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks and protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred embodiments. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
Exemplary Neural System
[0016] FIG. 1 illustrates an example neural system 100 with multiple levels of neurons in accordance with certain embodiments of the present disclosure. The neural system 100 may comprise a level of neurons 102 connected to another level of neurons 106 through a network of synapse connections 104. For simplicity, only two levels of neurons are illustrated in FIG. 1, although more levels of neurons may exist in a typical neural system.
[0017] As illustrated in FIG. 1, each neuron in the level 102 may receive an input signal 108 that may be generated by a plurality of neurons of a previous level (not shown in FIG. 1). The signal 108 may represent an input current of the level 102 neuron. This current may be accumulated on the neuron membrane to charge a membrane potential. When the membrane potential reaches its threshold level, the neuron may fire and generate an output spike to be transferred to the next level of neurons (e.g., the level 106).
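The membrane behaviour described in the preceding paragraph can be illustrated with a small integrate-and-fire sketch. The following Python fragment is not part of the disclosure; the threshold, leak and time-step values are arbitrary assumptions chosen only to show current accumulation and threshold firing.

```python
# Minimal integrate-and-fire neuron sketch: input current is accumulated
# on the membrane until the potential crosses a threshold, at which point
# the neuron emits a spike and resets. All constants are illustrative.

def run_neuron(input_current, threshold=1.0, leak=0.98, dt=1.0):
    """Return a list of 0/1 spike outputs, one per time step."""
    v = 0.0
    spikes = []
    for i_in in input_current:
        v = leak * v + i_in * dt      # integrate the input current
        if v >= threshold:            # membrane potential reached threshold
            spikes.append(1)          # fire an output spike ...
            v = 0.0                   # ... and reset the membrane
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    print(run_neuron([0.3, 0.3, 0.3, 0.5, 0.1, 0.9]))
```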
[0018] The transfer of spikes from one level of neurons to another may be achieved through the network of synaptic connections (or simply "synapses") 104, as illustrated in FIG. 1. The synapses 104 may receive output signals (i.e., spikes) from the level 102 neurons, scale those signals according to adjustable synaptic weights w1, ..., wP (where P is the total number of synaptic connections between the neurons of levels 102 and 106), and combine the scaled signals as input signals of the level 106 neurons. Every neuron in the level 106 may generate an output spike 110 based on the corresponding combined input signal. The output spikes 110 may be then transferred to another level of neurons using another network of synaptic connections (not shown in FIG. 1).
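A minimal sketch of this level-to-level transfer follows: output spikes of level 102 are scaled by the synaptic weights of the network 104 and combined into input signals for the level 106 neurons. The network size and weight values below are illustrative assumptions, not figures from the disclosure.

```python
# Sketch of the synapse network 104: output spikes of level 102 are scaled
# by adjustable synaptic weights and combined into input signals of the
# level 106 neurons. Sizes and weights below are arbitrary examples.

def propagate(spikes_102, weights):
    """weights[j][i] scales the spike of pre-neuron i into post-neuron j."""
    return [sum(w_ji * s_i for w_ji, s_i in zip(row, spikes_102))
            for row in weights]

spikes_102 = [1, 0, 1]                       # output spikes of level 102
weights = [[0.2, 0.5, 0.1],                  # synaptic weights onto neuron 0
           [0.7, 0.0, 0.3],                  # ... onto neuron 1
           [0.4, 0.9, 0.4],                  # ... onto neuron 2
           [0.1, 0.2, 0.8]]                  # ... onto neuron 3
print(propagate(spikes_102, weights))        # combined inputs of level 106
```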
[0019] The neural system 100 may be emulated by a neuro-processor and utilized in a large range of applications, such as pattern recognition, machine learning and motor control. Each neuron of the neural system 100 may be implemented as a neuron circuit within the neuro-processor chip. The neuron membrane charged to the threshold level to initiate the output spike may be implemented within the neuron circuit as a capacitor which integrates an electrical current that flows through it. To substantially reduce the area of the neuron circuit, a nanometer feature-sized memristor element may be utilized as the integrating device instead of the capacitor. By applying this approach, efficient implementation of very large-scale neural system hardware may be possible.
[0020] Functionality of the neuro-processor that emulates the neural system 100 may depend on weights of synaptic connections, which may control strengths of connections between neurons. The synaptic weights may be stored in a non-volatile memory in order to preserve functionality of the processor after being powered down. However, having this memory on the same chip with the neuro-processor may limit the processor functionality and flexibility. In addition, the on-chip synaptic memory may limit choices for the type of non-volatile memory being utilized, and may increase the area and implementation cost of the overall chip.
[0021] Certain embodiments of the present disclosure support implementation of the synaptic weight memory on a separate external chip from the main neuro-processor chip. The synaptic weight memory may be packaged separately from the neuro-processor chip as a replaceable removable memory. This may provide diverse functionalities to the neuro-processor, wherein a particular functionality may be based on synaptic weights stored in a removable memory currently attached to the neuro-processor.
Exemplary Neuromorphic Architecture with External Synaptic Memory
[0022] FIG. 2 illustrates an example of a neuromorphic architecture 200 in accordance with certain embodiments of the present disclosure. A synaptic memory 206 may be implemented as a separate and external removable memory, which may be connected to a neuro-processor 202 through an interface circuit 204. The neuro-processor 202 may emulate the neural system 100 illustrated in FIG. 1. It may comprise a large number of neuron circuits and synaptic connections. The interface 204 may comprise a bus connecting the neuro-processor chip 202 and the external synaptic removable memory 206. The interface bus may be designed to carry the synaptic weight data in both directions, as well as commands such as "memory write," "memory read," and "address."
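The bus behaviour described for the interface 204 can be pictured as a small command set: an address together with a "memory read" or "memory write" command, with weight data flowing in either direction. The class and method names in the following sketch are hypothetical and are used only to illustrate that protocol.

```python
# Illustrative model of the interface 204: it forwards "memory read" and
# "memory write" commands, together with an address, between the
# neuro-processor and the removable synaptic memory. Names are hypothetical.

class RemovableSynapticMemory:
    def __init__(self, n_words):
        self.words = [0] * n_words           # one word per synaptic weight

class WeightBusInterface:
    def __init__(self, memory):
        self.memory = memory

    def write(self, address, weight):        # "memory write" + "address"
        self.memory.words[address] = weight

    def read(self, address):                 # "memory read" + "address"
        return self.memory.words[address]

bus = WeightBusInterface(RemovableSynapticMemory(n_words=1_000_000))
bus.write(42, 0b1011011001)                  # store a 10-bit weight
print(bus.read(42))                          # load it back into the processor
```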
[0023] For supporting neural system engineering applications, the neuro-processor 202 may typically comprise, for example, about 10,000 neuron circuits with about 100 synapses per neuron, which brings the total number of synapses in the neuro-processor 202 to approximately 10⁶. The strength of each synaptic connection may be associated with a weight represented with a certain number of bits according to a desired precision. Typically, up to ten bits may be required per synaptic weight to provide sufficient precision for a large variety of applications. If, for example, every weight is represented with ten bits, then a memory of approximately 10 Mbits may be required to store the synaptic weights for the neuro-processor with approximately 10⁶ synapses.
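The capacity estimate of the preceding paragraph can be reproduced with a few lines of arithmetic, using the example figures given there (10,000 neurons, 100 synapses per neuron, ten bits per weight).

```python
# Reproduce the example capacity estimate: ~10^6 synapses at 10 bits each
# requires on the order of 10 Mbits of synaptic weight storage.

neurons = 10_000
synapses_per_neuron = 100
bits_per_weight = 10

total_synapses = neurons * synapses_per_neuron            # 1,000,000
total_bits = total_synapses * bits_per_weight             # 10,000,000
print(f"{total_synapses:,} synapses -> {total_bits / 1e6:.0f} Mbits")
```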
[0024] The number of neurons and synapses within a neuro-processor is expected to increase substantially in the near future for supporting even more complex neural system engineering applications. The required size of the synaptic weight memory may then be much larger than 10 Mbits. Implementation of the large synaptic memory as a removable memory external to the neuro-processor may provide more efficient die utilization of both the neuro-processor and the synaptic memory. In addition, fabrication processes of the neuro-processor and memory may be uniquely tailored to the needs of these separate chips in order to provide better performance and lower cost.
[0025] As aforementioned, functionality of the neuro-processor 202 may depend on weights of the synaptic connections between the neuron circuits. For the neuro-processor 202 to be able to perform a particular application, training of the synaptic weights may first need to be performed within the neuro-processor. During the training process, the synaptic weights may be stored to and loaded from the external memory 206 through the interface 204. Once the learning process is finished, all trained synaptic weights may be fully stored into the external memory chip 206.
[0026] For many applications, the weight-training process within the neuro-processor may last a long time. However, once the trained synaptic weights are fully stored in the external removable memory 206, they may be quickly replicated to another removable memory. In this way, it may be possible to simply "clone" the functionality of the neuro-processor 202 from one memory chip to another. The time- and power-consuming weight-training process within another neuro-processor chip may then be fully avoided, and the other neuro-processor chip may be able to execute the same function as the neuro-processor 202 without performing the weight-training.
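The cloning described above amounts to copying the trained weight contents of one removable memory into another; a second neuro-processor reading the copy then executes the same function without retraining. The sketch below is illustrative only; the helper name and the dummy weight values are assumptions.

```python
# Sketch of "cloning" a trained function: the weights stored in one removable
# memory are replicated into a second removable memory, which can then be
# attached to a different neuro-processor chip. Names are hypothetical.

def train_weights(n_synapses):
    """Stand-in for the slow on-chip training; returns trained weights."""
    return [(7 * i) % 1024 for i in range(n_synapses)]    # dummy 10-bit values

removable_memory_a = train_weights(1_000_000)             # trained once, slowly
removable_memory_b = list(removable_memory_a)             # fast bulk copy / clone

assert removable_memory_b == removable_memory_a           # identical function
print("clone complete:", len(removable_memory_b), "weights copied")
```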
[0027] In one embodiment of the present disclosure, the external memory 206 may be implemented as a replaceable removable memory. The same neuro-processor 202 may have different functionalities depending on the synaptic removable memory attached to it. The replaceable removable memory may be shared between users, and a library of different functionalities (i.e., different weight values of the same synapses) may be stored in different removable memories. These synaptic removable memories with diverse functionalities may be designed fully independently from the neuro-processor 202.
[0028] In another embodiment of the present disclosure, a local working memory with temporary data (e.g., with a portion of the synaptic weights) may be implemented within the neuro-processor chip 202 to provide faster processor operations. The local memory may be also utilized during the aforementioned weight-training process. On the other hand, a permanent memory comprising all trained synaptic weights fully determining the processor functionality may be external and implemented as the separate memory chip 206.
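One way to picture the split between the on-chip working memory and the external permanent memory 206 is a small read-through cache in front of the removable store. The cache capacity and the FIFO eviction policy in the following sketch are illustrative assumptions, not details taken from the disclosure.

```python
# Sketch of the on-chip local working memory holding a portion of the weights,
# backed by the external removable memory 206 that holds all of them.
# The FIFO eviction policy and cache size are illustrative assumptions.

from collections import OrderedDict

class WeightCache:
    def __init__(self, external_weights, capacity=1024):
        self.external = external_weights     # full weight table (chip 206)
        self.local = OrderedDict()           # on-chip working memory
        self.capacity = capacity

    def get(self, synapse_id):
        if synapse_id not in self.local:     # miss: fetch over interface 204
            if len(self.local) >= self.capacity:
                self.local.popitem(last=False)            # evict oldest entry
            self.local[synapse_id] = self.external[synapse_id]
        return self.local[synapse_id]

external = {i: i % 1024 for i in range(1_000_000)}
cache = WeightCache(external)
print(cache.get(12345), cache.get(12345))    # second access served on-chip
```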
[0029] FIG. 3 illustrates example operations 300 for implementing a synaptic removable memory external to a neuro-processor chip in accordance with certain embodiments of the present disclosure. At 302, a removable memory may be connected to the neuro-processor chip. At 304, the removable memory may be used to store synapse weights, wherein each synapse may connect two of a plurality of neuron circuits of the neuro-processor chip, and wherein the weights may define, at least in part, a function of the neuro-processor chip.
Exemplary Implementation of Neuro-Processor and Synaptic Memory Chip
[0030] Implementation details related to the neuro-processor chip 202 and the external synaptic memory chip 206 are presented in the following text. The implementation estimates are based on the exemplary case in which the neuro-processor 202 may comprise approximately 10⁴ neurons for supporting various present-day neural system applications.
[0031] An implementation area of one neuron circuit may be in the order of 32 × 32 μm² for today's complementary metal-oxide-semiconductor (CMOS) technologies, if a memristor element is utilized as the integrating device instead of a capacitor to mimic the neuron membrane. This neuron circuit implementation may result in an area cost of approximately 10 mm² for all neurons within the neuro-processor chip 202.
[0032] Typically, there may be about 100 synapses per neuron, which may correspond to approximately 10⁶ synapses for the exemplary processor comprising 10⁴ neuron circuits. The implementation area per synapse may be in the order of 10 × 10 μm² for today's CMOS technologies, if each synapse is implemented based on the nanometer feature-sized memristor element. This may result in an area cost of approximately 100 mm² for all synapses within the exemplary neuro-processor 202 comprising 10⁴ neuron circuits. Therefore, a total die area of the neuro-processor chip 202 may be approximately equal to 110 mm² (e.g., a die area of 10.5 mm × 10.5 mm).
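The area figures of paragraphs [0031] and [0032] follow from straightforward arithmetic, reproduced below with the example numbers (32 μm × 32 μm per neuron circuit, 10 μm × 10 μm per synapse).

```python
# Reproduce the die-area estimate: ~10 mm^2 for 10^4 neuron circuits plus
# ~100 mm^2 for 10^6 memristor-based synapses, roughly 110 mm^2 in total.

um2_per_mm2 = 1e6

neurons, synapses = 10_000, 1_000_000
neuron_area_mm2 = neurons * (32 * 32) / um2_per_mm2       # ~10.2 mm^2
synapse_area_mm2 = synapses * (10 * 10) / um2_per_mm2     # 100 mm^2
total = neuron_area_mm2 + synapse_area_mm2

print(f"total die area ~{total:.0f} mm^2 "
      f"(about {total ** 0.5:.1f} mm x {total ** 0.5:.1f} mm)")
```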
[0033] The fastest firing rate of a neuron may be equal to one spike per 5 ms. A maximum of about 10% of all neurons (or approximately 1000 neuron circuits in this exemplary case) may spike simultaneously in any given 5 ms time period. Therefore, a maximum of 10⁵ synaptic weights may need to be read every 5 ms from the synaptic weight memory 206 through the interface 204 into the neuro-processor 202. In other words, one synaptic weight may need to be read every 50 ns, if only one synaptic weight at a time may be loaded from the external memory 206 to the neuro-processor 202.
[0034] On the other hand, a memory write time may be determined based on the number of eligible synapses that may need to be updated when a reward signal arrives. In the worst-case scenario, the memory write time may be equal to a memory read time. As aforementioned, the synaptic memory chip 206 may typically be required to store approximately 10⁶ synaptic weights. If, for example, six bits are utilized per synaptic weight, then a total storage capacity of 6 Mbits may be required.
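The read-rate requirement of paragraph [0033] and the storage figure of paragraph [0034] can likewise be checked numerically; the 5 ms spike period, the 10% simultaneous-firing assumption and the six-bit weights are the values given in the text.

```python
# Check the bandwidth and capacity figures: 10% of 10^4 neurons spiking per
# 5 ms window, 100 synapses each, implies one weight read roughly every 50 ns;
# 10^6 weights at 6 bits each need about 6 Mbits of storage.

neurons = 10_000
synapses_per_neuron = 100
spike_period_s = 5e-3                 # fastest firing: one spike per 5 ms
active_fraction = 0.10                # at most ~10% of neurons fire together

weights_per_period = int(neurons * active_fraction) * synapses_per_neuron
read_interval_ns = spike_period_s / weights_per_period * 1e9
capacity_mbits = neurons * synapses_per_neuron * 6 / 1e6

print(f"{weights_per_period:,} weight reads per 5 ms "
      f"-> one read every {read_interval_ns:.0f} ns; "
      f"storage needed: {capacity_mbits:.0f} Mbits")
```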
[0035] Magneto-resistive Random Access Memory (MRAM) and Resistive Random Access Memory (RRAM) represent today's fastest non-volatile memories. These memories may allow read/write times of less than 10 ns and capacities greater than 6 or 10 Mbits, which makes them suitable for use as external synaptic weight memories.
[0036] FIG. 4 illustrates a graph 400 with examples of non-volatile memories that may be used for the external synaptic memory 206 in accordance with certain embodiments of the present disclosure. Wide choices of non-volatile memory types include flash, ferroelectric, magnetic tunnel junction, spin-transfer torque devices, phase change memories, resistive/memristive switches, and so on. All these choices may represent possible candidates for the external synaptic memory 206.
[0037] A portion 402 of the graph 400 may correspond to an operational region of a local working on-chip memory, which may store a portion of synaptic weights for faster processor operations. It can be observed from FIG. 4 that a Ferroelectric Random Access Memory (FeRAM), a Magneto-resistive Random Access Memory (MRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM) and a Phase-change Random Access Memory (PRAM) may represent possible candidates for the working on-chip memory. On the other hand, as illustrated in FIG. 4, a Resistive Random Access Memory (RRAM) may be too large and expensive to be utilized as the local on-chip working memory.
[0038] It should be noted that PRAM, FeRAM and MRAM memories are all non-volatile memories that do not require data to be erased before writing operations. However, RRAM is a non-volatile memory that requires erasing before a write operation. On the other hand, DRAM and SRAM represent examples of volatile memories.
[0039] A portion 404 of the graph 400 may correspond to an operational region of an external memory for storing all synaptic weights associated with an application executed by a neuro-processor interfaced with the external memory. It can be observed from FIG. 4 that a NAND flash memory, a NOR flash memory and a PRAM may be possible choices for the external synaptic memory. While the NAND flash memories and NOR flash memories are non-volatile memories that may require data to be erased before writing, the PRAM is an example of a non-volatile RAM that does not require erasing before writing.
[0040] The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in Figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. For example, operations 300 illustrated in FIG. 3 correspond to components 300A, 302A and 304A illustrated in FIG. 3A.
[0041] As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" may include resolving, selecting, choosing, establishing and the like.
[0042] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
[0043] The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0044] The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
[0045] The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
[0046] The functions described may be implemented in hardware, software, firmware or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a computer-readable medium. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
[0047] Thus, certain embodiments may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain embodiments, the computer program product may include packaging material.
[0048] Software or instructions may also be transmitted over a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio and microwave are included in the definition of transmission medium.
[0049] Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.
[0050] It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
[0051] While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
What is claimed is:

Claims

1. An electrical circuit, comprising:
a neuro-processor chip with a plurality of neuron circuits and at least one synapse, wherein the at least one synapse connects a pair of neuron circuits; and
a removable memory connected to the neuro-processor chip storing weights of the at least one synapse, wherein the weights define, at least in part, a function of the neuro-processor chip.
2. The electrical circuit of claim 1, wherein:
the removable memory is connected to the neuro-processor chip via an interface circuit, and
the interface circuit carries the weights from the neuro-processor chip to the removable memory and from the removable memory to the neuro-processor chip.
3. The electrical circuit of claim 1, wherein the neuro-processor chip comprises a local memory for storing at least a portion of the weights.
4. The electrical circuit of claim 1, wherein the weights are trained for the pair of neuron circuits before being stored in the removable memory.
5. The electrical circuit of claim 4, wherein:
values of the trained weights are replicated and stored in another removable memory connected to another neuro-processor chip, and
the other neuro-processor chip executes the function of the neuro-processor chip based at least in part on the values of the weights.
6. The electrical circuit of claim 1, wherein:
the removable memory is replaced with another removable memory that stores different values of the weights than the removable memory, and
the values of the weights define, at least in part, another function of the neuro- processor chip.
7. The electrical circuit of claim 1, wherein the removable memory comprises a non-volatile memory device.
8. A method for implementing a neural system, comprising:
connecting a removable memory to a neuro-processor chip; and
storing synapse weights on the removable memory, wherein a synapse connects two of a plurality of neuron circuits of a neuro-processor chip, and wherein the weights define, at least in part, a function of the neuro-processor chip.
9. The method of claim 8, further comprising:
connecting the removable memory to the neuro-processor chip using an interface circuit, and
transferring the weights from the neuro-processor chip to the removable memory, and from the removable memory to the neuro-processor chip, via the interface circuit.
10. The method of claim 8, further comprising:
storing at least a portion of the synapse weights on a local memory within the neuro-processor chip.
11. The method of claim 8, further comprising:
training the weights for the two neuron circuits; and
storing the trained weights in the removable memory.
12. The method of claim 11, further comprising:
replicating values of the trained weights to another removable memory connected to another neuro-processor chip, wherein
the other neuro-processor chip executes the function of the neuro-processor chip based at least in part on the values of the weights.
13. The method of claim 8, further comprising:
replacing the removable memory with another removable memory that stores different values of the weights than the removable memory, wherein the values of the weights define, at least in part, another function of the neuro- processor chip.
14. The method of claim 8, wherein the removable memory comprises a non-volatile memory device.
15. An apparatus for implementing a neural system, comprising:
means for connecting a removable memory to a neuro-processor chip; and means for storing synapse weights on the removable memory, wherein a synapse connects two of a plurality of neuron circuits of a neuro-processor chip, and wherein the weights define, at least in part, a function of the neuro-processor chip.
16. The apparatus of claim 15, further comprising:
means for connecting the removable memory to the neuro-processor chip using an interface circuit, and
means for transferring the weights from the neuro-processor chip to the removable memory, and from the removable memory to the neuro-processor chip, via the interface circuit.
17. The apparatus of claim 15, further comprising:
means for storing at least a portion of the synapse weights on a local memory within the neuro-processor chip.
18. The apparatus of claim 15, further comprising:
means for training the weights for the two neuron circuits; and
means for storing the trained weights in the removable memory.
19. The apparatus of claim 18, further comprising:
means for replicating values of the trained weights to another removable memory connected to another neuro-processor chip, wherein
the other neuro-processor chip executes the function of the neuro-processor chip based at least in part on the values of the weights.
20. The apparatus of claim 15, further comprising:
means for replacing the removable memory with another removable memory that stores different values of the weights than the removable memory, wherein
the values of the weights define, at least in part, another function of the neuro- processor chip.
21. The apparatus of claim 15, wherein the removable memory comprises a nonvolatile memory device.
PCT/US2011/043254 2010-07-07 2011-07-07 Methods and systems for replaceable synaptic weight storage in neuro-processors WO2012006468A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2013518841A JP2013534017A (en) 2010-07-07 2011-07-07 Method and system for interchangeable synaptic load storage in a neuroprocessor
CN201180033657.2A CN102971754B (en) 2010-07-07 2011-07-07 For the method and system that the alternative synapse weight in neuron processor stores
KR1020137003298A KR101466251B1 (en) 2010-07-07 2011-07-07 Methods, apparatuses and computer-readable medium for implementing a neural system, and an electrical circuit thereby
EP11733755.0A EP2591449A1 (en) 2010-07-07 2011-07-07 Methods and systems for replaceable synaptic weight storage in neuro-processors

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/831,484 US8676734B2 (en) 2010-07-07 2010-07-07 Methods and systems for replaceable synaptic weight storage in neuro-processors
US12/831,484 2010-07-07

Publications (1)

Publication Number Publication Date
WO2012006468A1 true WO2012006468A1 (en) 2012-01-12

Family

ID=44584777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/043254 WO2012006468A1 (en) 2010-07-07 2011-07-07 Methods and systems for replaceable synaptic weight storage in neuro-processors

Country Status (6)

Country Link
US (1) US8676734B2 (en)
EP (1) EP2591449A1 (en)
JP (3) JP2013534017A (en)
KR (1) KR101466251B1 (en)
CN (1) CN102971754B (en)
WO (1) WO2012006468A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104854602A (en) * 2012-12-03 2015-08-19 Hrl实验室有限责任公司 Generating messages from the firing of pre-synaptic neurons
US10339041B2 (en) 2013-10-11 2019-07-02 Qualcomm Incorporated Shared memory architecture for a neural simulator
US10387298B2 (en) 2017-04-04 2019-08-20 Hailo Technologies Ltd Artificial neural network incorporating emphasis and focus techniques
US11221929B1 (en) 2020-09-29 2022-01-11 Hailo Technologies Ltd. Data stream fault detection mechanism in an artificial neural network processor
US11238334B2 (en) 2017-04-04 2022-02-01 Hailo Technologies Ltd. System and method of input alignment for efficient vector operations in an artificial neural network
US11237894B1 (en) 2020-09-29 2022-02-01 Hailo Technologies Ltd. Layer control unit instruction addressing safety mechanism in an artificial neural network processor
US11263077B1 (en) 2020-09-29 2022-03-01 Hailo Technologies Ltd. Neural network intermediate results safety mechanism in an artificial neural network processor
US11544545B2 (en) 2017-04-04 2023-01-03 Hailo Technologies Ltd. Structured activation based sparsity in an artificial neural network
US11551028B2 (en) 2017-04-04 2023-01-10 Hailo Technologies Ltd. Structured weight based sparsity in an artificial neural network
US11615297B2 (en) 2017-04-04 2023-03-28 Hailo Technologies Ltd. Structured weight based sparsity in an artificial neural network compiler
US11811421B2 (en) 2020-09-29 2023-11-07 Hailo Technologies Ltd. Weights safety mechanism in an artificial neural network processor

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8977583B2 (en) * 2012-03-29 2015-03-10 International Business Machines Corporation Synaptic, dendritic, somatic, and axonal plasticity in a network of neural cores using a plastic multi-stage crossbar switching
CN104809501B (en) * 2014-01-24 2018-05-01 清华大学 Computer system based on a brain-like coprocessor
CN104809498B (en) * 2014-01-24 2018-02-13 清华大学 Brain-like coprocessor based on a neuromorphic circuit
US20150324691A1 (en) * 2014-05-07 2015-11-12 Seagate Technology Llc Neural network connections using nonvolatile memory devices
EP3035249B1 (en) * 2014-12-19 2019-11-27 Intel Corporation Method and apparatus for distributed and cooperative computation in artificial neural networks
US11100397B2 (en) 2015-05-21 2021-08-24 Rochester Institute Of Technology Method and apparatus for training memristive learning systems
US11226840B2 (en) 2015-10-08 2022-01-18 Shanghai Zhaoxin Semiconductor Co., Ltd. Neural network unit that interrupts processing core upon condition
US10664751B2 (en) 2016-12-01 2020-05-26 Via Alliance Semiconductor Co., Ltd. Processor with memory array operable as either cache memory or neural network unit memory
US11216720B2 (en) 2015-10-08 2022-01-04 Shanghai Zhaoxin Semiconductor Co., Ltd. Neural network unit that manages power consumption based on memory accesses per period
US11221872B2 (en) 2015-10-08 2022-01-11 Shanghai Zhaoxin Semiconductor Co., Ltd. Neural network unit that interrupts processing core upon condition
US10725934B2 (en) 2015-10-08 2020-07-28 Shanghai Zhaoxin Semiconductor Co., Ltd. Processor with selective data storage (of accelerator) operable as either victim cache data storage or accelerator memory and having victim cache tags in lower level cache wherein evicted cache line is stored in said data storage when said data storage is in a first mode and said cache line is stored in system memory rather then said data store when said data storage is in a second mode
US11029949B2 (en) 2015-10-08 2021-06-08 Shanghai Zhaoxin Semiconductor Co., Ltd. Neural network unit
CN114037068A (en) * 2016-03-21 2022-02-11 杭州海存信息技术有限公司 Neural computation circuit using three-dimensional memory to store activation function look-up table
US11270193B2 (en) 2016-09-30 2022-03-08 International Business Machines Corporation Scalable stream synaptic supercomputer for extreme throughput neural networks
US10489702B2 (en) * 2016-10-14 2019-11-26 Intel Corporation Hybrid compression scheme for efficient storage of synaptic weights in hardware neuromorphic cores
CN108073982B (en) * 2016-11-18 2020-01-03 上海磁宇信息科技有限公司 Brain-like computing system
US10423876B2 (en) * 2016-12-01 2019-09-24 Via Alliance Semiconductor Co., Ltd. Processor with memory array operable as either victim cache or neural network unit memory
US11580373B2 (en) 2017-01-20 2023-02-14 International Business Machines Corporation System, method and article of manufacture for synchronization-free transmittal of neuron values in a hardware artificial neural networks
US10197971B1 (en) 2017-08-02 2019-02-05 International Business Machines Corporation Integrated optical circuit for holographic information processing
JP6914342B2 (en) * 2017-09-07 2021-08-04 パナソニック株式会社 Neural network arithmetic circuit using semiconductor memory element
US11074499B2 (en) 2017-11-20 2021-07-27 International Business Machines Corporation Synaptic weight transfer between conductance pairs with polarity inversion for reducing fixed device asymmetries
WO2019141902A1 (en) * 2018-01-17 2019-07-25 Nokia Technologies Oy An apparatus, a method and a computer program for running a neural network
CN109886416A (en) * 2019-02-01 2019-06-14 京微齐力(北京)科技有限公司 System-on-chip (SoC) integrating an AI module, and machine learning method
CN112684977A (en) * 2019-10-18 2021-04-20 旺宏电子股份有限公司 Memory device and in-memory computing method thereof
US11270759B2 (en) 2019-10-21 2022-03-08 Samsung Electronics Co., Ltd. Flash memory device and computing device including flash memory cells
KR20210047413A 2019-10-21 2021-04-30 삼성전자주식회사 Flash memory device and computing device including flash memory cells
US11264082B2 (en) 2019-10-28 2022-03-01 Samsung Electronics Co., Ltd. Memory device, memory system and autonomous driving apparatus
KR20210050634A (en) 2019-10-28 2021-05-10 삼성전자주식회사 Memory device, memory system and autonomous driving apparatus

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5448682A (en) * 1992-05-30 1995-09-05 Gold Star Electron Co., Ltd. Programmable multilayer neural network

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5214743A (en) * 1989-10-25 1993-05-25 Hitachi, Ltd. Information processing apparatus
US5329611A (en) * 1990-05-22 1994-07-12 International Business Machines Corp. Scalable flow virtual learning neurocomputer
JPH04182769A (en) * 1990-11-17 1992-06-30 Nissan Motor Co Ltd Digital neuro processor
IT1244912B (en) * 1991-01-31 1994-09-13 Texas Instruments Italia Spa Learning system for a neural network whose architecture can be physically inserted into the learning process.
IT1244911B (en) * 1991-01-31 1994-09-13 Texas Instruments Italia Spa Neural network architecture that can be physically inserted into the learning process.
JPH04320223A (en) * 1991-04-19 1992-11-11 Sharp Corp Auto iris device
JPH0561843A (en) * 1991-05-15 1993-03-12 Wacom Co Ltd Neural network device
JPH04346040A (en) * 1991-05-23 1992-12-01 Minolta Camera Co Ltd Colorimeter
JPH0561847A (en) * 1991-09-02 1993-03-12 Toshiba Corp Neural network learning device
US5278945A (en) * 1992-01-10 1994-01-11 American Neuralogical, Inc. Neural processor apparatus
KR0178805B1 (en) * 1992-08-27 1999-05-15 정호선 Multilayer neural net
JPH06250997A (en) * 1993-03-01 1994-09-09 Toshiba Corp High-speed learning device for neural circuit network
JPH07192073A (en) * 1993-12-27 1995-07-28 Rohm Co Ltd Programmable device and device using the programmable device
EP0694854B1 (en) * 1994-07-28 2002-06-05 International Business Machines Corporation Improved neural semiconductor chip architectures and neural networks incorporated therein
JPH1021209A (en) * 1996-06-27 1998-01-23 Shimazu Yasumasa General purpose digital neural computer
JP2004295177A (en) * 2003-03-25 2004-10-21 Seiko Epson Corp Controller of electronic equipment and printer controller
MY138544A (en) * 2003-06-26 2009-06-30 Neuramatix Sdn Bhd Neural networks with learning and expression capability
JP2005098555A (en) * 2003-09-22 2005-04-14 Denso Corp Neural network type air conditioner
WO2006018898A1 (en) * 2004-08-20 2006-02-23 Fujitsu Limited Wireless network system
JP2006085557A (en) * 2004-09-17 2006-03-30 Toshiba Corp Memory management device and data processing device
US7533071B2 (en) * 2005-06-28 2009-05-12 Neurosciences Research Foundation, Inc. Neural modeling and brain-based devices using special purpose processor
US7904398B1 (en) * 2005-10-26 2011-03-08 Dominic John Repici Artificial synapse component using multiple distinct learning means with distinct predetermined learning acquisition times
US20070288410A1 (en) * 2006-06-12 2007-12-13 Benjamin Tomkins System and method of using genetic programming and neural network technologies to enhance spectral data
JP4842742B2 (en) * 2006-09-05 2011-12-21 富士通株式会社 Software management program, software management method, and software management apparatus
US7814038B1 (en) * 2007-12-06 2010-10-12 Dominic John Repici Feedback-tolerant method and device producing weight-adjustment factors for pre-synaptic neurons in artificial neural networks
JP5154666B2 (en) * 2008-03-14 2013-02-27 ヒューレット−パッカード デベロップメント カンパニー エル.ピー. Neuromorphic circuit

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5448682A (en) * 1992-05-30 1995-09-05 Gold Star Electron Co., Ltd. Programmable multilayer neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MISRA J ET AL: "Artificial neural networks in hardware: A survey of two decades of progress", NEUROCOMPUTING, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 74, no. 1-3, 1 December 2010 (2010-12-01), pages 239 - 255, XP027517200, ISSN: 0925-2312, [retrieved on 20101123], DOI: 10.1016/J.NEUCOM.2010.03.021 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104854602A (en) * 2012-12-03 2015-08-19 Hrl实验室有限责任公司 Generating messages from the firing of pre-synaptic neurons
CN104854602B (en) * 2012-12-03 2017-04-19 Hrl实验室有限责任公司 Neural network unit and relevant system and method
US10339041B2 (en) 2013-10-11 2019-07-02 Qualcomm Incorporated Shared memory architecture for a neural simulator
US11675693B2 (en) 2017-04-04 2023-06-13 Hailo Technologies Ltd. Neural network processor incorporating inter-device connectivity
US11216717B2 (en) 2017-04-04 2022-01-04 Hailo Technologies Ltd. Neural network processor incorporating multi-level hierarchical aggregated computing and memory elements
US10387298B2 (en) 2017-04-04 2019-08-20 Hailo Technologies Ltd Artificial neural network incorporating emphasis and focus techniques
US11238334B2 (en) 2017-04-04 2022-02-01 Hailo Technologies Ltd. System and method of input alignment for efficient vector operations in an artificial neural network
US11238331B2 (en) 2017-04-04 2022-02-01 Hailo Technologies Ltd. System and method for augmenting an existing artificial neural network
US11551028B2 (en) 2017-04-04 2023-01-10 Hailo Technologies Ltd. Structured weight based sparsity in an artificial neural network
US11263512B2 (en) 2017-04-04 2022-03-01 Hailo Technologies Ltd. Neural network processor incorporating separate control and data fabric
US11615297B2 (en) 2017-04-04 2023-03-28 Hailo Technologies Ltd. Structured weight based sparsity in an artificial neural network compiler
US11354563B2 2017-04-04 2022-06-07 Hailo Technologies Ltd. Configurable and programmable sliding window based memory access in a neural network processor
US11461614B2 (en) 2017-04-04 2022-10-04 Hailo Technologies Ltd. Data driven quantization optimization of weights and input data in an artificial neural network
US11461615B2 (en) 2017-04-04 2022-10-04 Hailo Technologies Ltd. System and method of memory access of multi-dimensional data
US11514291B2 (en) 2017-04-04 2022-11-29 Hailo Technologies Ltd. Neural network processing element incorporating compute and local memory elements
US11544545B2 (en) 2017-04-04 2023-01-03 Hailo Technologies Ltd. Structured activation based sparsity in an artificial neural network
US11221929B1 (en) 2020-09-29 2022-01-11 Hailo Technologies Ltd. Data stream fault detection mechanism in an artificial neural network processor
US11263077B1 (en) 2020-09-29 2022-03-01 Hailo Technologies Ltd. Neural network intermediate results safety mechanism in an artificial neural network processor
US11237894B1 (en) 2020-09-29 2022-02-01 Hailo Technologies Ltd. Layer control unit instruction addressing safety mechanism in an artificial neural network processor
US11811421B2 (en) 2020-09-29 2023-11-07 Hailo Technologies Ltd. Weights safety mechanism in an artificial neural network processor

Also Published As

Publication number Publication date
JP2013534017A (en) 2013-08-29
CN102971754A (en) 2013-03-13
EP2591449A1 (en) 2013-05-15
KR20130036323A (en) 2013-04-11
US8676734B2 (en) 2014-03-18
JP2018014114A (en) 2018-01-25
CN102971754B (en) 2016-06-22
KR101466251B1 (en) 2014-11-27
US20120011087A1 (en) 2012-01-12
JP2015167019A (en) 2015-09-24

Similar Documents

Publication Publication Date Title
US8676734B2 (en) Methods and systems for replaceable synaptic weight storage in neuro-processors
EP2776988B1 (en) Method and apparatus for using memory in probabilistic manner to store synaptic weights of neural network
TWI705444B (en) Computing memory architecture
US11361216B2 (en) Neural network circuits having non-volatile synapse arrays
TWI515670B (en) Appartus,system and computer product for reinforcement learning
US9727459B2 (en) Non-volatile, solid-state memory configured to perform logical combination of two or more blocks sharing series-connected bit lines
US11620505B2 (en) Neuromorphic package devices and neuromorphic computing systems
US11880226B2 (en) Digital backed flash refresh
CN110751276A (en) Implementing neural networks with ternary inputs and binary weights in NAND memory arrays
KR102409859B1 (en) Memory cells configured to generate weighted inputs for neural networks
US10761773B2 (en) Resource allocation in memory systems based on operation modes
CN114121086A (en) Bayesian networks in memory
US11694065B2 (en) Spiking neural unit
CN116075832A (en) Artificial neural network training in memory
JP2024035145A (en) IMC circuit, neural network device including IMC circuit, and method of operating IMC circuit

Legal Events

Date Code Title Description

WWE WIPO information: entry into national phase
Ref document number: 201180033657.2
Country of ref document: CN

121 EP: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 11733755
Country of ref document: EP
Kind code of ref document: A1

WWE WIPO information: entry into national phase
Ref document number: 10419/CHENP/2012
Country of ref document: IN

ENP Entry into the national phase
Ref document number: 2013518841
Country of ref document: JP
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE

REEP Request for entry into the European phase
Ref document number: 2011733755
Country of ref document: EP

WWE WIPO information: entry into national phase
Ref document number: 2011733755
Country of ref document: EP

ENP Entry into the national phase
Ref document number: 20137003298
Country of ref document: KR
Kind code of ref document: A