WO1988009995A1 - Pipeline memory structure - Google Patents

Pipeline memory structure

Info

Publication number
WO1988009995A1
WO1988009995A1, PCT/US1988/001267, US8801267W
Authority
WO
WIPO (PCT)
Prior art keywords
memory
data
units
pipeline
address
Prior art date
Application number
PCT/US1988/001267
Other languages
French (fr)
Inventor
Peter Panec
William P. Real
O. James Fiske
Original Assignee
Hughes Aircraft Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hughes Aircraft Company filed Critical Hughes Aircraft Company
Publication of WO1988009995A1 publication Critical patent/WO1988009995A1/en
Priority to KR1019890700185A priority Critical patent/KR890702208A/en
Priority to NO89890416A priority patent/NO890416L/en

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C8/00: Arrangements for selecting an address in a digital store
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30: Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38: Concurrent instruction execution, e.g. pipeline, look ahead
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C7/00: Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/10: Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
    • G11C7/1006: Data managing, e.g. manipulating data before writing or reading out, data bus switches or control circuits therefor
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C7/00: Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/10: Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
    • G11C7/1015: Read-write modes for single port memories, i.e. having either a random port or a serial port
    • G11C7/1039: Read-write modes for single port memories, i.e. having either a random port or a serial port using pipelining techniques, i.e. using latches between functional memory parts, e.g. row/column decoders, I/O buffers, sense amplifiers
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C8/00: Arrangements for selecting an address in a digital store
    • G11C8/12: Group selection circuits, e.g. for memory block selection, chip selection, array selection

Definitions

  • the memory interface unit 120 also includes a hamming code generation circuit 178 and an error detection/correction circuit 180.
  • the hamming code generation circuit 178 is used to generate and append a number of hamming code bits (e.g., 6) to each of the sixteen bit data words being written into the memory arrays.
  • the error detection/correction circuit 180 is used to perform error detection and correction on each data word being read from the memory arrays. Preferably, the error detection/correction circuit will be able to detect the presence of two erroneous bits and correct one of these erroneous bits. If more than one bit error is detected, the error detection/correction circuit 180 will generate an interrupt to the computer.
  • the memory interface unit 120 also preferably includes an address decode circuit 182.
  • the address decode circuit 182 performs partial decoding of the memory address so that the memory addresses may be distributed to the appropriate memory arrays.
  • the same general circuit construction of a memory interface unit may be used in each of the tree structures 122-126.
  • the mode select signals could be used, for example, to hard wire the particular memory interface units to their appropriate function in these tree structures.
  • the memory interface unit may be fabricated using VHSIC half micrometer technology, though other suitable technologies may be used. It should be noted that one of the advantages of the circuit construction for the memory interface unit 120 is that the number of gate delays between pipeline registers or memory interface units can be limited to five or six.
  • the PASRAM memory chip 152 generally includes an 8k x 8 bit RAM array 184.
  • the principles of the present invention are not limited to the particular size of the RAM array.
  • the size of the RAM array could be enlarged to 32k x 8 bits in the appropriate application.
  • the PASRAM memory chip 152 also includes a set of five latches 186-194.
  • the latch 186 is used to receive and hold address, chip select (CS), read, and write information which needs to be decoded in an address and control decode circuit 196. After the address and chip select information are decoded, this information will be transmitted to the latch 188 in a pipeline fashion before the RAM array 184 is accessed.
  • the latch 190 is used to receive and hold data information to be written into the RAM array 184.
  • the latch 192 is used to match up the data information being written into the RAM array 184 with the- address information contained in the latch 188.
  • the PASRAM memory chip 152 includes the latch 194 for receiving and holding data information being read from the RAM array.
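The error handling described in the bullets above (detect two erroneous bits, correct one, interrupt the computer on an uncorrectable error) matches a standard single-error-correct, double-error-detect Hamming scheme with sixteen data bits and six check bits: five positional parity bits plus one overall parity bit. The patent does not give the actual construction; the sketch below is one conventional realization, with all names assumed for illustration.

```python
# Assumed (22, 16) SECDED Hamming construction consistent with the six
# check bits described above; not the patent's actual circuit.

PARITY = (1, 2, 4, 8, 16)                            # parity bit positions
DATA_POS = [p for p in range(1, 22) if p & (p - 1)]  # the 16 non-power-of-two positions

def encode(data16):
    code = [0] * 22                  # code[1..21]; code[0] holds overall parity
    for i, p in enumerate(DATA_POS):
        code[p] = (data16 >> i) & 1
    for p in PARITY:                 # each parity bit makes its group even
        code[p] = sum(code[q] for q in range(1, 22) if q & p) % 2
    code[0] = sum(code) % 2          # overall parity enables double-error detection
    return code

def check(code):
    """Correct a single flipped bit in place; flag a double error
    (the condition on which circuit 180 would interrupt the computer)."""
    syndrome = 0
    for p in PARITY:
        if sum(code[q] for q in range(1, 22) if q & p) % 2:
            syndrome |= p
    overall = sum(code) % 2
    if syndrome == 0 and overall == 0:
        return "ok"
    if overall == 1:                 # single error: syndrome names the position
        code[syndrome] ^= 1          # syndrome 0 means code[0] itself
        return "corrected"
    return "double error"            # uncorrectable: raise the interrupt

def decode(code):
    return sum(code[p] << i for i, p in enumerate(DATA_POS))

word = encode(0xBEEF)
damaged = list(word)
damaged[7] ^= 1                      # flip one stored bit
assert check(damaged) == "corrected" and decode(damaged) == 0xBEEF
```

Because the overall parity distinguishes one flipped bit from two, a single error is repaired silently while a double error is merely reported, which is exactly the division of labor the bullets describe.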

Abstract

A pipeline memory structure having a plurality of randomly accessible memory units (128, 130) and a hierarchical arrangement of data input, data output and address memory interface registers. The data input and address registers are used to distribute data and address information to the memory units from a data input port (112) and an address port (116) of the processor of a computer, while the data output registers are used for collecting data information from the memory units and directing this data information to a data output port (114) of the processor. The data input, the data output and address registers each comprise a plurality of memory interface units (120, 124, 126) which are interconnected to form separate branched-tree structures having a plurality of levels.

Description

PIPELINE MEMORY STRUCTURE
Technical Field
The present invention relates generally to high speed computers having task streaming or instruction streaming architectures, and particularly to a pipeline memory structure for such computers.
Background of the Invention
In a task streaming or instruction streaming computer architecture, multiple processes will be able to share common hardware simultaneously. This is accomplished through a pipeline architecture in which instructions from multiple processes flow through the pipeline such that the instructions for different processes occupy different pipeline stages at a given time. In this way, one physical processor can support multiple concurrent "virtual processors". Examples of such task streaming or instruction streaming computer architectures are disclosed in U.S. Patent No. 4,229,790, issued on October 21, 1980 to Maxwell C. Gilliland, et al.; and in the paper entitled "A Multiple Instruction Stream Processor With Shared Resources" by M. J. Flynn, A. Podvin and K. Shimizu, pp. 251-286, published in Parallel Processor Systems, Technologies and Applications. The above-identified patent and paper are incorporated herein by reference.
As discussed above, a task stream computer architecture is fundamentally that of a pipeline. In order to optimally utilize such a pipeline, it is desirable that the temporal flow of data and instructions through the pipeline be unobstructed. In other words, an optimal pipeline should be designed so that no conflict ever occurs due to two objects or instructions attempting to enter the same pipeline stage at the same time.
Accordingly, in order to further increase the efficiency of a computer having a task stream architecture, it is a principal objective of the present invention to provide a pipeline memory structure which is capable of operating at very high speeds (e.g., 100 MHz).
It is another objective of the present invention to provide a pipeline memory structure which is capable of distributing the drive loads among a large number of pipeline registers. It is another objective of the present invention to provide a pipeline memory structure incorporating contention-free address and data paths.
It is another objective of the present invention to provide an expandable memory structure which is capable of operating at very high speeds and has the other advantages described herein.
It is a further objective of the present invention to provide a pipeline memory structure which minimizes the number of gate delays between pipeline registers.
It is an additional objective of the present invention to provide a pipeline memory structure which exploits the advantages of extremely dense and fast chip technology.
It is yet another objective of the present invention to provide a pipeline memory structure which is capable of being accessed for either sixteen bit or thirty-two bit computer data words.
It is yet a further objective of the present invention to provide a pipeline memory structure which may be randomly accessed, and where read and write operations may be mixed arbitrarily.
It is still another objective of the present invention to provide a pipeline memory structure which is capable of performing error detection and correction functions in the data path.
It is still another objective of the present invention to provide a pipeline memory structure which is capable of utilizing the same register structure for data input, data output and address transmission.
Summary of the Invention
To achieve the foregoing objectives, the present invention provides a pipeline memory structure having a plurality of randomly accessible memory units and a hierarchical arrangement of data input, data output and address memory interface registers. The data input and address memory interface registers are used to distribute data and address information to the memory units from a data input port and an address port of the processor of a computer, while the data output memory interface registers are used for collecting data information from the memory units and directing this data information to a data output port of the
processor. The data input, data output and address memory interface registers each comprise a plurality of memory interface units which are interconnected to form separate branched-tree structures having a plurality of levels. In one embodiment of the present invention, each of the memory interface units contained in the data input, data output and address registers has a substantially identical circuit construction. Preferably, each of the memory interface units will be capable of generating and appending hamming code information to the data information being written into the memory units, and of performing error detection and correction on the data information being read from the memory units.
Additional advantages and features of the present invention will become apparent from a reading of the detailed description of the preferred embodiment which makes reference to the following set of drawings:
Brief Description of the Drawings
Figure 1 is a block diagram of a pipeline memory structure according to the present invention.
Figure 2 is block diagram of the memory blocks shown in Figure 1.
Figure 3 is a block diagram of the memory interface units shown in Figure 1.
Figure 4 is a block diagram of the PASRAM memory chip shown in Figure 2.
Detailed Description of the Preferred Embodiment
Referring to Figure 1, a block diagram of a pipeline memory structure 110 according to the present invention is shown. The pipeline memory structure 110 is shown to be connected to a data input port 112, a data output port 114, and an address port 116. These ports 112-116 may be provided by a processor of any high speed computer, such as a computer employing a task stream or instruction stream architecture. Thus, for example, the data input port 112 and the data output port 114 may be comprised of one or more separate data buses within the computer architecture. Similarly, the address port 116 may be comprised of the address bus within the computer architecture.
The pipeline memory structure 110 is generally comprised of a plurality of memory blocks 118 and a plurality of memory interface units 120. The memory interface units 120 fan out from the data input port 112, the data output port 114 and the address port 116 to form three distinct hierarchical arrangements of these memory interface units. Each of these hierarchical arrangements of memory interface units may be characterized as tree structures which branch out to form multiple levels of memory interface units which expand in the number of memory interface units as each level gets closer to the row of memory blocks 118.
The data input tree structure 122 is used to distribute data information to the appropriate memory blocks 118 from the data input port 112. Similarly, the address tree structure 124 is used to distribute address information to the appropriate memory blocks 118 from the address port 116. Conversely, the data output tree structure 126 is used to collect data information being read from the appropriate memory blocks 118 and direct this data information to the data output port 114 of the computer.
It is important to note that the block diagram for the pipeline memory structure 110 shown in Figure 1 is only a partial block diagram of the pipeline memory structure. Figure 1 shows only two memory sections or units 128 and 130, which each comprise a row of three memory blocks 118. It should be appreciated that the pipeline memory structure 110 according to the present invention is capable of being expanded with the appropriate tree structures to provide many memory sections or units. Thus, it should be understood that the principles of the present invention are independent of the number of levels in the tree structures 122-126 or the number of memory units 128-130 employed in the memory structure. Additionally, the principles of the present invention are also independent of the number of memory blocks 118 contained in each of the memory sections or units. Furthermore, while each of the memory interface units 120 used in the tree structures 122-126 could be of identical circuit construction, different circuit constructions for these memory interface units could also be employed in the appropriate application. While the tree structures 122-126 are shown to have the identical hierarchical arrangement of memory interface units 120, it may also be possible in the appropriate application to provide different arrangements of the memory interface units 120 in the tree structures 122-126. Furthermore, while each level of the memory interface units 120 in each of the tree structures 122-126 doubles the number of these memory interface units used in the previous level, it should be appreciated that the number of memory interface units branching out from any given memory interface unit may be increased or decreased at any level from the basic arrangement shown in Figure 1. The present invention may also be used with a greater or smaller number of levels.
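The level-by-level doubling can be sketched numerically. The snippet below assumes the Figure 1 arrangement, two memory interface units on the first level with each level doubling thereafter, which the text notes is only one possible branching factor:

```python
# Illustrative model of one tree in the pipeline memory structure,
# assuming the two-wide, doubling fan-out of Figure 1.

def units_per_level(levels):
    """Memory interface units on levels 1..levels of one tree."""
    return [2 ** k for k in range(1, levels + 1)]

def total_units(levels):
    """Total units in one tree. Each unit carries its own output driver,
    so the drive load is spread across every level rather than borne
    by the port alone."""
    return sum(units_per_level(levels))

print(units_per_level(2))  # [2, 4]
print(total_units(2))      # 6
```

The point of the geometry is the drive-load distribution noted later in the description: a port never drives more than its first-level units, and every unit drives only its own children.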
Taking the data input tree structure 122, for example, two memory interface units 132 and 134 are shown to be connected to the data input port 112. As will be discussed more fully in connection with Figure 3, each of the memory interface units 132 and 134 (as well as the other memory interface units 120) includes chip select control circuitry which controls the accessing of these memory interface units. Accordingly, the data word present at the data input port 112 may be transmitted to one or both of the memory interface units 132-134.
Another pair of memory interface units 136 and 138 are connected to the output of the memory interface unit 134. It should be appreciated that additional memory interface units would be connected to the output of the memory interface unit 132 if Figure 1 were to show the entire pipeline memory structure. While the memory interface units 132 and 134 represent one level in the data input tree structure 122, the memory interface units 136 and 138 represent another level in this tree structure. The memory interface unit 136 is used to direct the flow of data to one or more of the memory blocks 118 in the memory section or unit 128. Similarly, the memory interface unit 138 is used to direct the flow of data to one or more of the memory blocks 118 in the memory section or unit 130.
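The chip-select behavior amounts to a routing decision repeated at each level. The sketch below assumes, purely for illustration, that each level examines one high-order address bit to pick a child unit; the patent does not specify the selection logic at this granularity.

```python
# Hypothetical chip-select routing through the tree: one high-order
# address bit per level picks which child memory interface unit latches
# the word. The bit assignment is an assumption, not taken from the patent.

def route(address, levels):
    """Path of child indices from the port down to a memory section."""
    path = []
    for level in range(levels):
        bit = (address >> (levels - 1 - level)) & 1  # next high-order bit
        path.append(bit)
    return path

print(route(0b10, 2))  # [1, 0]: right child at the top level, left child below
```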
Concurrent with this flow of data through the data input tree structure 122, corresponding memory address information will flow through the address tree structure 124. Thus, as a data word is transmitted down through each level of the data input tree structure 122, the address of the memory block 118 into which this data is to be written will be transmitted down through corresponding levels in the address tree structure 124. Accordingly, when a data word reaches the bottom level of the data input tree structure 122, the appropriate memory address will reach the bottom level of memory interface units in the address tree structure 124. Each of the tree structure levels closest to the memory blocks 118 in the data input and address tree structures comprises a different memory interface unit 120 connected to each one of the memory sections. Thus, the memory interface unit 139 in the address tree structure 124 is connected to the memory section 128, while the memory interface unit 140 is connected to the memory section 130.
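This lockstep movement can be modeled as two shift registers clocked together, one per tree. The two-level depth below is an assumption made for the sketch:

```python
# Toy model of the data-input and address trees advancing in lockstep.
# Each list element is one pipeline level (a memory interface unit latch);
# a depth of two levels is assumed for illustration.

LEVELS = 2
data_pipe = [None] * LEVELS
addr_pipe = [None] * LEVELS

def clock(data_in, addr_in):
    """One pipeline clock: every latch hands its word to the next level.
    Returns the (data, address) pair leaving the bottom level; the word
    and its target address always emerge together."""
    data_pipe.insert(0, data_in)
    addr_pipe.insert(0, addr_in)
    return data_pipe.pop(), addr_pipe.pop()

# Stream three writes; no two words ever occupy the same stage at once.
for word, addr in [(0xAAAA, 0x00), (0xBBBB, 0x01), (0xCCCC, 0x02)]:
    print(clock(word, addr))
# After two fill clocks, (0xAAAA, 0x00) emerges intact on the third.
```

Keeping the two pipelines the same depth is what guarantees the property stated above: when a data word reaches the bottom of its tree, its address reaches the bottom of the other.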
Figure 1 also shows that the memory interface unit level closest to the memory blocks 118 in the data output tree structure 126 comprises a different memory interface unit connected to each one of the memory sections. Specifically, a memory interface unit 141 is connected to the memory section 128, while a memory interface unit 142 is connected to the memory section 130. The memory interface units 141 and 142 will collect the data information being read from the memory sections 128 and/or 130, and direct this information to a memory interface unit 144. The memory interface unit 144 is in turn connected to the data output port 114. An additional memory interface unit 146 is also connected to the data output port 114 for collecting data information from other memory sections in the pipeline memory structure.
Referring to Figure 2, a block diagram of two memory blocks 118 is shown. The memory blocks 118 include two identical sets of memory arrays 148 and 248. Each of the memory arrays 148 and 248 includes six pipelined access static random access memory (PASRAM) chips 152. In this embodiment, the PASRAM memory chips 152 are capable of storing eight bit data words. Accordingly, in order to store a sixteen bit data word plus six bits of hamming code, three of these eight bit PASRAM memory chips 152 must be accessed jointly in one of the memory arrays 148, 248. If a thirty-two bit data word plus twelve bits of hamming code were to be stored in the memory blocks 118, three PASRAM memory chips 152 in the memory array 148 would have to be accessed along with three PASRAM memory chips 152 in the memory array 248. The selection of a sixteen bit or thirty-two bit data word is indicated by a 16/32 select signal, and the choice of whether the memory array 148 or the memory array 248 is to be accessed for a sixteen bit word is indicated by the least significant bit of the address.
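The joint-access arithmetic can be checked in a short sketch: sixteen data bits plus six hamming bits form a 22-bit codeword, which three eight bit chips can hold. The byte assignment below is an assumption made for illustration, not the patent's actual bit placement:

```python
# Sketch of spreading a 16-bit data word plus 6 check bits across the
# three jointly accessed 8-bit PASRAM chips of one memory array.
# The exact bit placement is assumed, not specified by the patent.

def split_codeword(data16, check6):
    codeword = (check6 << 16) | data16                   # 22-bit codeword
    return [(codeword >> (8 * i)) & 0xFF for i in range(3)]

def join_codeword(chips):
    codeword = sum(byte << (8 * i) for i, byte in enumerate(chips))
    return codeword & 0xFFFF, codeword >> 16             # (data16, check6)

chips = split_codeword(0xBEEF, 0b101010)
print([hex(b) for b in chips])                           # one byte per chip
assert join_codeword(chips) == (0xBEEF, 0b101010)
```

A thirty-two bit access simply doubles this: three chips in array 148 for one half-word codeword and three chips in array 248 for the other.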
Also shown in Figure 2 are a number of memory interface units 120. These particular memory interface units 138-140, 238-240 correspond to the level in the tree structures which is closest to the memory blocks. When accessing a thirty-two bit data word, the address bus 164 supplies address information to the memory interface unit 140 while the address bus 264 supplies address information to the memory interface unit 240. In addition, the data in the upper half of the data word is supplied to the memory interface unit 138, while the data in the lower half of the data word is supplied to the memory interface unit 238. In a similar fashion, data read from the PASRAM memory chips 152 in the memory array 148 is supplied to the memory interface unit 142, while the data read from the PASRAM memory chips 152 in the memory array 248 is supplied to the memory interface unit 242.
It should be noted that the address buses 164 and 264 shown in Figure 2 each comprise seventeen address lines in this embodiment. Thus, the "/17" designation in the bus lines connected to the memory interface units 140 and 240 indicates that seventeen address lines are provided to both of these memory interface units. This designation for the number of lines is also used for the data input and output buses in Figure 2, as well as for the other buses shown in Figures 3 and 4.
Referring to Figure 3, a block diagram of the memory interface unit 120 is shown. The memory interface unit 120 generally includes a latch 166 and a latch 168. The latch 166 is used to receive and hold the address or data information being transmitted to the memory interface unit. The latch 168 is used to receive and hold various control signals, such as whether the data is to be written into or read from memory, and whether the data word is sixteen or thirty-two bits long. The information from the latch 168, plus two additional control signals (i.e., mode select signals), is directed to a control decode circuit 170 which is used to enable the appropriate chip function. The control decode circuit 170 may also include the provision for a lock-out condition which will block the memory interface unit 120 from outputting the address or data information even if the chip has otherwise been directed by other control signals. The output from the control decode circuit 170 is directed to an output control circuit 176 which is used to drive the transmission of address or data information from the memory interface unit 120 to the next memory interface unit in the tree structure or to a memory block. Accordingly, it will be appreciated that the provision of an output driver in each of the memory interface units 120 will distribute the drive load necessary to access a large memory configuration.
The memory interface unit 120 may also include a maintenance bus interface 172 which is used for testing the memory interface unit during a debugging process. The maintenance bus interface 172 includes a serial input/output line 174 which is used to shift data into or out of the memory interface unit during this process.
The memory interface unit 120 also includes a Hamming code generation circuit 178 and an error detection/correction circuit 180. The Hamming code generation circuit 178 is used to generate and append a number of Hamming code bits (e.g., six) to each of the sixteen bit data words being written into the memory arrays. The error detection/correction circuit 180 is used to perform error detection and correction on each data word being read from the memory arrays. Preferably, the error detection/correction circuit will be able to detect the presence of two erroneous bits and correct one of them. If more than one bit error is detected, the error detection/correction circuit 180 will generate an interrupt to the computer. One technique for detecting and correcting errors using a Hamming code is set forth in the Intel Corporation application note AP-46, entitled "Error Detecting and Correcting Codes, Part 1", pgs. 3-110 through 3-122, 1979. This publication is incorporated herein by reference.
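Six check bits over a sixteen-bit word correspond to a conventional single-error-correcting, double-error-detecting (SEC-DED) Hamming code: five Hamming bits plus one overall parity bit. The following sketch is an editorial illustration of such a code using a standard bit layout; the particular layout is not taken from the disclosure (the Intel application note cited above treats the technique in detail):

```python
CHECK_POS = (1, 2, 4, 8, 16)          # power-of-two positions carry Hamming bits

def _spread(word):
    """Place 16 data bits at the non-power-of-two positions 1..21."""
    code = [0] * 22                   # index 0 unused
    d = 0
    for pos in range(1, 22):
        if pos not in CHECK_POS:
            code[pos] = (word >> d) & 1
            d += 1
    return code

def encode(word):
    """16-bit word -> 6 appended check bits (5 Hamming + 1 overall parity)."""
    code = _spread(word)
    for p in CHECK_POS:               # each check bit covers positions with bit p set
        code[p] = sum(code[i] for i in range(1, 22) if i & p) & 1
    overall = sum(code[1:]) & 1       # parity over the whole 21-bit codeword
    bits = [code[p] for p in CHECK_POS] + [overall]
    return sum(b << i for i, b in enumerate(bits))

def decode(word, check):
    """Return (corrected word, double_error flag) for one read access."""
    code = _spread(word)
    for i, p in enumerate(CHECK_POS):
        code[p] = (check >> i) & 1
    syndrome = 0
    for p in CHECK_POS:
        if sum(code[j] for j in range(1, 22) if j & p) & 1:
            syndrome |= p             # nonzero syndrome names the bad position
    parity_ok = (sum(code[1:]) & 1) == ((check >> 5) & 1)
    if syndrome and parity_ok:
        return word, True             # two-bit error: detect only, raise interrupt
    if syndrome:
        code[syndrome] ^= 1           # single-bit error: flip the bad position
    d, fixed = 0, 0
    for pos in range(1, 22):
        if pos not in CHECK_POS:
            fixed |= code[pos] << d
            d += 1
    return fixed, False

word = 0x1234
check = encode(word)
assert decode(word, check) == (word, False)           # clean read
assert decode(word ^ 0x0040, check) == (word, False)  # single error corrected
assert decode(word ^ 0x0009, check)[1] is True        # double error flagged
```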
The memory interface unit 120 also preferably includes an address decode circuit 182. The address decode circuit 182 performs partial decoding of the memory address so that the memory addresses may be distributed to the appropriate memory arrays. In accordance with one embodiment of the present invention, the same general circuit construction of a memory interface unit may be used in each of the tree structures 122-126. The mode select signals could be used, for example, to hard wire the particular memory interface units to their appropriate function in these tree structures. In order for the memory interface unit 120 to operate at the speeds required in a task stream architecture, the memory interface unit may be fabricated using VHSIC half micrometer technology, though other suitable technologies may be used. It should be noted that one of the advantages of the circuit construction for the memory interface unit 120 is that the number of gate delays between pipeline registers or memory interface units can be limited to five or six.
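The partial decoding may be pictured as each tree level consuming a slice of the seventeen-bit address to steer the access down one branch, while the remaining bits travel on toward the memory arrays. The sketch below is illustrative only; the disclosure does not specify which address bits each level decodes, so the high-order-first assignment is an assumption:

```python
ADDRESS_BITS = 17          # seventeen address lines in this embodiment

def branch_at(address, level, fanout_bits=1):
    """Partial decode: level `level` of the tree examines the next
    high-order address bit(s) to pick one of its branches; lower-order
    bits continue down toward the memory arrays (assumed assignment)."""
    shift = ADDRESS_BITS - fanout_bits * (level + 1)
    return (address >> shift) & ((1 << fanout_bits) - 1)

# Top three levels of a binary tree steering one seventeen-bit address:
assert [branch_at(0b10100000000000000, lv) for lv in range(3)] == [1, 0, 1]
```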
Referring to Figure 4, a block diagram of a PASRAM memory chip 152 is shown. The PASRAM memory chip 152 generally includes an 8k x 8 bit RAM array 184. However, it should be appreciated that the principles of the present invention are not limited to the particular size of the RAM array. Thus, the size of the RAM array could be enlarged to 32k x 8 bits in the appropriate application.
The PASRAM memory chip 152 also includes a set of five latches 186-194. The latch 186 is used to receive and hold address, chip select (CS) , read, and write information which needs to be decoded in an address and control decode circuit 196. After the address and chip select information are decoded, this information will be transmitted to the latch 188 in a pipeline fashion before the RAM arrav 184 is accessed. The latch 190 is used to receive and hold data information to be written into the RAM array 184. The latch 192 is used to match up the data information being written into the RAM array 184 with the- address information contained in the latch 188. Finally, the PASRAM memory chip 152 includes the latch 194 for receiving and holding data information being read frcm the RAM array. These separate latch circuits for the inputs and outputs of the RAM array 184 are used to enable fast intrachip speeds. With this pipelined architecture within the PASRAM memory chip 152 and with the PASRAM memory chip constructed of advanced CMOS technology, this memory circuit should be capable of having an access speed of under ten nanoseconds.
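The five-latch organization lends itself to a small behavioral model. The sketch below is an editorial illustration that follows the latch numbering of Figure 4, but the exact clocking (a latched request, then a decode stage, then the array access) is an assumption, as is the identity pass-through used in place of the decode circuit 196:

```python
class PASRAM:
    """Behavioral model of one pipelined-access static RAM chip."""
    def __init__(self, size=8192):
        self.ram = [0] * size   # 8K x 8 bit RAM array 184
        self.l186 = None        # address/control input latch
        self.l188 = None        # decoded address latch (feeds the array)
        self.l190 = None        # write-data input latch
        self.l192 = None        # write data aligned with the decoded address
        self.l194 = None        # read-data output latch

    def clock(self, addr=None, write=False, data=None):
        """One pipeline clock; returns the contents of output latch 194."""
        # Stage 2: the decoded access held in latches 188/192 reaches the array.
        if self.l188 is not None:
            a, is_write = self.l188
            if is_write:
                self.ram[a] = self.l192
            else:
                self.l194 = self.ram[a]
        # Stage 1: decode (modeled as a pass-through) into latches 188/192.
        self.l188 = self.l186
        self.l192 = self.l190
        # Stage 0: latch the incoming request into latches 186/190.
        self.l186 = (addr, write) if addr is not None else None
        self.l190 = data
        return self.l194

chip = PASRAM()
chip.clock(addr=7, write=True, data=0x5A)   # request latched in 186/190
chip.clock()                                # decoded into 188/192
chip.clock()                                # array written
chip.clock(addr=7)                          # read request latched
chip.clock()                                # decoded
assert chip.clock() == 0x5A                 # data emerges from latch 194
```

The separate input and output latches let a new access enter on every clock while earlier accesses are still in flight, which is the source of the intrachip speed noted above.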
Other features which could be incorporated into this PASRAM chip construction are the use of built-in redundant cells for yield enhancement, and a serial shift link to provide testability and visibility to the internal registers of the circuit. It will be appreciated that the above-disclosed embodiment is well calculated to achieve the aforementioned objectives of the present invention. For example, while the invention as described above uses Hamming codes, other error detection and correction codes may be used. In addition, it is evident that those skilled in the art, once given the benefit of the foregoing disclosure, may now make modifications of the specific embodiment described herein without departing from the spirit of the present invention. Such modifications are to be considered within the scope of the present invention which is limited solely by the scope and spirit of the appended claims.

Claims

WE CLAIM:
1. A pipeline memory structure for a high speed computer having a processor, said structure comprising: a plurality of memory units which may be randomly accessed; a hierarchical arrangement of data input and address register means for distributing data and address information to said memory units from a data input port and an address port of said processor; and a hierarchical arrangement of data output register means for collecting data information from said memory units and directing said data information to a data output port of said processor.
2. The pipeline memory structure according to Claim 1, wherein each of said memory units is a memory block which contains a plurality of RAM arrays.
3. The pipeline memory structure according to Claim 2, wherein each of said RAM arrays includes latch means for holding data and address information, and control means for decoding said address information.
4. The pipeline memory structure according to Claim 3, wherein said latch means includes a pair of sequential latches for receiving address information, a pair of sequential latches for receiving data information, and a latch for transmitting data information.
5. The pipeline memory structure according to Claim 4, wherein each of said memory blocks includes at least one RAM array for storing at least sixteen bit data words plus error detection and correction codes.
6. The pipeline memory structure according to Claim 1, wherein said data input register means and said address register means each include a plurality of memory interface units which are interconnected together in an expanding branched-tree structure having a plurality of levels, the bottom level of each such tree structure having one of said memory interface units connected to at least one of said memory units.
7. The pipeline memory structure according to Claim 6, wherein each of said levels is a pipeline stage of the memory structure.
8. The pipeline memory structure of Claim 6, wherein the number of said levels can be varied.
9. The pipeline memory structure according to Claim 6, wherein said tree structures for said data input register means and said address register means are substantially identical.
10. The pipeline memory structure according to Claim 6, wherein said data output register means includes a branched-tree structure of memory interface units which is substantially the reverse of said data input register means tree structure.
11. The pipeline memory structure according to Claim 10, wherein said memory interface units contained in each of data input, data output and address register means have a substantially identical circuit construction.
12. The pipeline memory structure according to Claim 6, wherein at least some of said memory interface units include means for generating and appending error detection and correction information to the data information being written into said memory units.
13. The pipeline memory structure according to Claim 12, wherein at least some of said memory interface units further include means for performing error detection on data information being read from said memory units.
14. The pipeline memory structure according to Claim 6, wherein at least some of said memory interface units include means for partially decoding memory address information.
15. A pipeline memory structure for the processor of a computer employing a task streaming architecture, comprising: a plurality of memory units each having latched address, data input and data output ports; data input register means for interfacing a data input port of said processor to each of said memory units; data output register means for interfacing each of said memory units to a data output port of said processor; and address register means for interfacing an address port of said processor to each of said memory units.
16. The pipeline structure according to Claim 15, wherein each of said data input, data output and address register means includes at least one memory interface unit connected to each of said memory units.
17. The pipeline structure according to Claim 16, wherein each of said data input, data output and address register means is comprised of a plurality of said memory interface units interconnected together in a branched-tree structure having a plurality of levels.
18. A memory interface unit for a pipeline memory structure which is responsive to a set of control signals, comprising: first latch means for receiving and holding a computer word containing information to be directed to a memory unit in said pipeline memory structure; second latch means for receiving and holding at least one control signal; coding means for generating a Hamming code, and appending said Hamming code to said computer word when said computer word represents data to be written into one of said memory units; means for decoding at least one of said control signals; and detecting means for detecting a bit error in a computer word being read from one of said memory units.
19. The memory interface unit according to Claim 18, wherein said detecting means is operable to correct at least one bit error in a computer word.
20. The memory interface unit according to Claim 18, wherein said memory interface unit includes decoding means for performing a partial decoding of the address to one of said memory units.
21. The memory interface unit according to Claim 18, wherein said memory interface unit includes output control means for driving the transmission of information from said memory interface unit.
22. A method of writing and reading data information in a high speed computer memory having a processor, comprising the steps of: providing a plurality of memory units; distributing data and address information from data and address input ports of said processor to the appropriate one of said memory units through a branched-tree structure of memory interface units; and collecting data information from the appropriate ones of said memory units and directing said data information to a data output port of said processor through a branched-tree structure of memory interface units.
23. The method of Claim 22, wherein said steps of distributing data and collecting data permit access to sixteen bit data words.
24. The method of Claim 23, wherein said steps of distributing data and collecting data permit access to thirty-two bit data words.
25. The method of Claim 22, wherein said step of distributing data permits selective access to each of two halves of a thirty-two bit data word.
PCT/US1988/001267 1987-06-02 1988-04-22 Pipeline memory structure WO1988009995A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1019890700185A KR890702208A (en) 1987-06-02 1989-02-01 Pipeline memory structure
NO89890416A NO890416L (en) 1987-06-02 1989-02-01 The pipeline memory structure.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US5694087A 1987-06-02 1987-06-02
US056,940 1987-06-02

Publications (1)

Publication Number Publication Date
WO1988009995A1 true WO1988009995A1 (en) 1988-12-15

Family

ID=22007502

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1988/001267 WO1988009995A1 (en) 1987-06-02 1988-04-22 Pipeline memory structure

Country Status (7)

Country Link
EP (1) EP0315671A1 (en)
JP (1) JPH02500697A (en)
KR (1) KR890702208A (en)
ES (1) ES2007233A6 (en)
IL (1) IL86196A0 (en)
TR (1) TR23376A (en)
WO (1) WO1988009995A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0393436A2 (en) * 1989-04-21 1990-10-24 Siemens Aktiengesellschaft Static memory with pipe-line registers
GB2255209A (en) * 1989-04-21 1992-10-28 Secr Defence Apparatus for pipelining a storage system
WO1994029870A1 (en) * 1993-06-02 1994-12-22 Microunity Systems Engineering, Inc. A burst mode memory accessing system
EP0788110A2 (en) * 1996-02-02 1997-08-06 Fujitsu Limited Semiconductor memory device with a pipe-line operation
EP1028427A1 (en) * 1999-02-11 2000-08-16 Infineon Technologies North America Corp. Hierarchical prefetch for semiconductor memories
EP1132925A2 (en) * 2000-02-25 2001-09-12 Infineon Technologies North America Corp. Data path calibration and testing mode using a data bus for semiconductor memories

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3983537A (en) * 1973-01-28 1976-09-28 Hawker Siddeley Dynamics Limited Reliability of random access memory systems
EP0011374A1 (en) * 1978-11-17 1980-05-28 Motorola, Inc. Execution unit for data processor using segmented bus structure
EP0042966A1 (en) * 1980-06-30 1982-01-06 International Business Machines Corporation Digital data storage error detecting and correcting system and method
WO1987001858A2 (en) * 1985-09-23 1987-03-26 Ncr Corporation Memory system with page mode operation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Proceedings SCS 85, 1st International Conference on Supercomputing Systems, 16-20 December 1985, St. Petersburg, Florida, IEEE, (US), J.A. Davis et al.: "On optimizing memory hierarchies for supercomputers", pages 561-567 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0393436A2 (en) * 1989-04-21 1990-10-24 Siemens Aktiengesellschaft Static memory with pipe-line registers
EP0393436A3 (en) * 1989-04-21 1992-06-24 Siemens Aktiengesellschaft Static memory with pipe-line registers
GB2255209A (en) * 1989-04-21 1992-10-28 Secr Defence Apparatus for pipelining a storage system
GB2255209B (en) * 1989-04-21 1993-07-28 Secr Defence Apparatus for pipelining a storage system
WO1994029870A1 (en) * 1993-06-02 1994-12-22 Microunity Systems Engineering, Inc. A burst mode memory accessing system
EP0788110A3 (en) * 1996-02-02 1999-02-03 Fujitsu Limited Semiconductor memory device with a pipe-line operation
EP0788110A2 (en) * 1996-02-02 1997-08-06 Fujitsu Limited Semiconductor memory device with a pipe-line operation
US6055615A (en) * 1996-02-02 2000-04-25 Fujitsu Limited Pipeline memory access using DRAM with multiple independent banks
US6163832A (en) * 1996-02-02 2000-12-19 Fujitsu Limited Semiconductor memory device including plural blocks with a pipeline operation for carrying out operations in predetermined order
US6507900B1 (en) 1996-02-02 2003-01-14 Fujitsu Limited Semiconductor memory device including plural blocks with selecting and sensing or reading operations in different blocks carried out in parallel
EP1028427A1 (en) * 1999-02-11 2000-08-16 Infineon Technologies North America Corp. Hierarchical prefetch for semiconductor memories
EP1132925A2 (en) * 2000-02-25 2001-09-12 Infineon Technologies North America Corp. Data path calibration and testing mode using a data bus for semiconductor memories
EP1132925A3 (en) * 2000-02-25 2003-08-13 Infineon Technologies North America Corp. Data path calibration and testing mode using a data bus for semiconductor memories

Also Published As

Publication number Publication date
KR890702208A (en) 1989-12-23
TR23376A (en) 1989-12-28
EP0315671A1 (en) 1989-05-17
JPH02500697A (en) 1990-03-08
ES2007233A6 (en) 1989-06-01
IL86196A0 (en) 1988-11-15

Similar Documents

Publication Publication Date Title
US5142540A (en) Multipart memory apparatus with error detection
US5313624A (en) DRAM multiplexer
US5396641A (en) Reconfigurable memory processor
EP0974894B1 (en) Instruction cache associative cross-bar switch
AU640813B2 (en) A data processing system including a memory controller for direct or interleave memory accessing
US5752260A (en) High-speed, multiple-port, interleaved cache with arbitration of multiple access addresses
US4835729A (en) Single instruction multiple data (SIMD) cellular array processing apparatus with on-board RAM and address generator apparatus
US6279072B1 (en) Reconfigurable memory with selectable error correction storage
EP0341897A2 (en) Content addressable memory array architecture
US4483001A (en) Online realignment of memory faults
US5848258A (en) Memory bank addressing scheme
US6948045B2 (en) Providing a register file memory with local addressing in a SIMD parallel processor
US7386689B2 (en) Method and apparatus for connecting a massively parallel processor array to a memory array in a bit serial manner
JPH06333394A (en) Dual port computer memory device, method for access, computer memory device and memory structure
JPH04267464A (en) Supercomputer system
US6223253B1 (en) Word selection logic to implement an 80 or 96-bit cache SRAM
EP0668561A2 (en) A flexible ECC/parity bit architecture
US4959772A (en) System for monitoring and capturing bus data in a computer
JPH0820967B2 (en) Integrated circuit
WO1994022085A1 (en) Fault tolerant memory system
US4962501A (en) Bus data transmission verification system
US4697233A (en) Partial duplication of pipelined stack with data integrity checking
JPH07120312B2 (en) Buffer memory controller
US5301292A (en) Page mode comparator decode logic for variable size DRAM types and different interleave options
US5406607A (en) Apparatus, systems and methods for addressing electronic memories

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP KR NO

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE FR GB IT LU NL SE

WWE Wipo information: entry into national phase

Ref document number: 1988905056

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1988905056

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1988905056

Country of ref document: EP