WO1994006210A1 - Multichip ic design using tdm - Google Patents

Multichip ic design using tdm

Info

Publication number
WO1994006210A1
WO1994006210A1 PCT/US1992/007299
Authority
WO
WIPO (PCT)
Prior art keywords
module
shift register
chip
chips
multichip
Prior art date
Application number
PCT/US1992/007299
Other languages
French (fr)
Inventor
Prabhakar Goel
Original Assignee
Prabhakar Goel
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Prabhakar Goel filed Critical Prabhakar Goel
Priority to PCT/US1992/007299 priority Critical patent/WO1994006210A1/en
Priority to AU25611/92A priority patent/AU2561192A/en
Publication of WO1994006210A1 publication Critical patent/WO1994006210A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03KPULSE TECHNIQUE
    • H03K19/00Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/1731Optimisation thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/22Means for limiting or controlling the pin/gate ratio
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/34Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]

Definitions

  • This invention pertains to the field of designing circuits having a plurality of integrated circuit (i.c.) chips, said design using techniques of TDM (time division multiplexing).
  • U.S. patent 5,036,473 uses dedicated FPGA's solely to determine which active FPGA's get connected to which others.
  • the reference discloses the use of software to drive and observe signals, but does not disclose the use of shift registers or TDM.
  • U.S. patent 5,109,353 discloses an array of programmable gate elements for emulating electronic circuits and systems. It does not disclose the use of shift registers or TDM.
  • the present invention is a multichip integrated circuit module (4) comprising at least two integrated circuit chips (1).
  • the first chip (1) has at least one output shift register (9).
  • the second chip (1) has at least one input shift register (7).
  • Interconnections (19) couple the output shift register(s) (9) and the input shift register(s) (7).
  • Means (15) are provided for loading data in parallel to the output shift register(s) (9).
  • Means (17) are provided for sequentially shifting data through the output shift register(s) (9) over the interconnections (19) and into the input shift register(s) (7).
  • Figure 1 is a sketch of an embodiment of the present invention in which an integrated circuit chip 1 uses at least one input shift register 7 and one output shift register 9.
  • Figure 2 is a sketch showing how the techniques of the present invention reduce the number of interconnection wires 19 among chips 1.
  • Figure 3 is a sketch of an embodiment of the present invention in which the number of stages 24 in an input shift register 7 and output shift register 9 can be reduced by one.
  • Figure 4 is a sketch of a chip 1 using tri-state output drivers 21.
  • Figure 5 is a sketch of an embodiment of the present invention in which two output shift registers 9 are used in conjunction with a single output driver 21.
  • Figure 6 is a sketch showing a plurality of tri-state output drivers 21, each having a common gating signal 23.
  • Figure 7 is a sketch showing how a single output shift register 9 can be used when the output drivers 21 have a common gating signal 23.
  • Figure 8 is a sketch showing the use of bi-directional pins and a plurality of tri-state output drivers 21 having a common gating signal 23.
  • Figure 9 is a sketch showing how a single bi-directional pin 25 can be used when a plurality of tri-state output drivers 21 have a common gating signal 23.
  • Figure 10 shows a chip 1 having a plurality of bi-directional pins 25, each coupled to an output driver 21 that has a different gating signal.
  • Figure 11 shows a single pin 25 equivalent to the embodiment depicted in Figure 10.
  • Figure 12 shows an embodiment of the present invention in which asynchronous logic 18 is employed.
  • Figure 13 shows an embodiment of the present invention in which a plurality of output shift registers 9 are daisy chained together.
  • Figure 14 shows an embodiment of the present invention in which a plurality of test shift registers 12 are multiplexed together.
  • Figure 15 shows an embodiment of the present invention in which the TDM sequence must be applied twice.
  • Figure 16 shows an embodiment of the present invention in which an interconnect 19 couples two chips 1.
  • Figure 17 shows how the interconnect 19 of Figure 16 can be reconfigured by reprogramming the first chip 1(1).
  • Figure 18 shows a single interconnect 19 coupling four chips 1.
  • Figure 19 shows how the interconnect 19 of Figure 18 can be reprogrammed by reprogramming the source chip 1(1).
  • Figure 20 shows an embodiment of the present invention in which an interconnect wire 19 couples two chips 1.
  • Figure 21 shows how the interconnect 19 of Figure 20 can be reprogrammed by adding a new signal S3 to the source chip 1(1).
  • Figure 22 shows an embodiment of the present invention in which the number of stages 24 in an input shift register 7 can be reduced by one.
  • a major purpose of this invention is to increase the effective number of input/output (I/O) pins 3,5 on integrated circuit chips 1 within a module 4 that comprises a plurality of said chips 1.
  • the invention can be thought of as creating a number of virtual I/O pins 3,5 that is greater than the number of actual pins 3,5.
  • the invention also eases pressures on the system designer when the number of interconnect wires 19 on the module 4 is limited. Further, the invention enhances the reconfigurability of signal flow across chips 1.
  • the module 4 can be a production module, in which the chips 1 are executable and application-ready.
  • module 4 can be a prototype module, in which the chips 1 are experimented with by the designer to create a system design. Changes in such a prototyping environment are typically done by a combination of hardware and software changes.
  • module 4 can contain some production chips 1 and some programmable chips 1.
  • Chips 1 are any chips for which the user has control over the contents, such as FPGA's (field programmable gate arrays), non-field-programmable gate arrays, custom i.c.'s, semi-custom i.c.'s (application specific integrated circuits), and standard cell i.c.'s. FPGA's are normally preferable, because of their flexibility.
  • the present invention makes use of techniques of TDM: each shift register 7,9 has N stages 24.
  • N is any positive integer greater than or equal to 2, and would typically be between 2 and 5.
  • N is the same for all shift registers 7,9 on a chip 1. If N were not the same, a separate shift clock 17 would be needed for each different value of N. In the chip 1 illustrated in Figure 1, the value of N is the same, and therefore there is but one shift clock 17.
  • Each chip 1 can have a plurality of input pins 3 and a different number of output pins 5.
  • the different chips 1 on a module 4 can have different numbers of input pins 3 and output pins 5.
  • the individual stages 24 of the shift registers 7,9 can be, for example, flip-flops, edge-triggered latches, or pairs of polarity hold latches. When polarity hold latches are used, a pair of shift clocks 17 is required for use with the corresponding shift register 7,9.
  • the shift register 9 attached to an output pin 5 is referred to as an output shift register (OSR) 9, and functions as N virtual output pins 5.
  • N internal signals from within the chip 1 can be loaded into the OSR 9 in parallel by means of activating a parallel load clock 15 associated with the OSR 9. These signals are then serially shifted out of the OSR 9 over the corresponding output pin 5 using the shift clock 17 associated with said OSR 9. The signals travel over interconnection wires 19 to other chips 1 that need to receive the signals.
  • the shift register 7 attached to an input pin 3 is referred to as an input shift register (ISR) 7, and functions as N virtual input pins 3. It can receive serially N signals from a board interconnect 19 through the associated input pin 3 by means of the shift clock 17 that is connected to said ISR 7. The received signals can then simultaneously be applied inside the chip 1 using their stored states within the stages of the ISR 7. No special clock is needed to unload the signals from the ISR 7, because once these signals are in the ISR 7, they are visible to the logic within the chip 1.
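The OSR/ISR mechanics described above can be sketched in a few lines of Python. This is an illustrative model only (the class and method names are invented for this sketch, not taken from the patent); it shows that after N pulses of the shift clock, the sink ISR holds the N signals in their original order, since both registers shift in the same direction.

```python
# Hypothetical model of one OSR 9 feeding one ISR 7 over a single wire 19.

class OutputShiftRegister:
    def __init__(self, n):
        self.stages = [0] * n          # N stages -> N virtual output pins

    def parallel_load(self, signals):  # models parallel load clock 15
        self.stages = list(signals)

    def shift_out(self):               # models one pulse of shift clock 17
        bit = self.stages.pop()        # bit leaves via the output pin 5
        self.stages.insert(0, 0)
        return bit

class InputShiftRegister:
    def __init__(self, n):
        self.stages = [0] * n          # N stages -> N virtual input pins

    def shift_in(self, bit):           # models one pulse of shift clock 17
        self.stages.pop()
        self.stages.insert(0, bit)     # bit arrives via the input pin 3

# Transfer N = 3 signals over one interconnect wire in 3 shift pulses.
osr, isr = OutputShiftRegister(3), InputShiftRegister(3)
osr.parallel_load([1, 0, 1])           # S1, S2, S3 captured in parallel
for _ in range(3):
    isr.shift_in(osr.shift_out())      # wire 19 carries one bit per pulse
print(isr.stages)                      # chip logic sees all stages at once
```

No unload clock is modeled: once the bits sit in `isr.stages`, the receiving chip's logic can read them directly, as the text notes.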
  • a single parallel load clock 15 and a single shift clock 17 are used for all the ISR's 7 and OSR's 9 on the board 4.
  • the OSR's 9, ISR's 7, and attendant parallel load and shift clocks 15, 17 are customized within peripheral regions of the gate arrays 1.
  • the OSR's 9 and ISR's 7 are fabricated from logic normally present on the gate arrays 1.
  • Figure 2 illustrates how the invention minimizes the number of interconnect wires 19 among chips 1 and minimizes the number of I/O pins 3,5 within a chip 1.
  • Figure 2 illustrates the interconnections of an OSR 9 within a source chip 1 and three ISR's 7 residing within three target (sink) chips 1.
  • the target chips 1 could be identical or different in a hardware sense.
  • a single interconnect wire 19 couples the chips 1.
  • Three signals S1, S2, and S3 from the source chip 1 are conveyed to the three sink chips 1. If the TDM and shift register technique were not used, three interconnects 19 would be required.
  • the interconnect wire 19 can be thought of as a TDM bus which bundles in the time domain the three signals S1, S2, and S3. Said TDM bus 19 is visible to all three sink chips 1.
  • the TDM process described herein is transparent to the intended logic design. This transparency can be achieved in different ways, depending upon the degree of transparency needed. For example, if all chips 1 on the module 4 are synchronously clocked (which is preferable), the following two-step TDM process can be used, at a safe time after the application of each pulse from the system clock (not illustrated).
  • the safe time is that amount of time needed for all of the signals in the logic to achieve a steady state, i.e., when the intended design has been programmed into the chips 1.
  • Step 1: The parallel load clock 15 (which is preferably a single clock applied to all output shift registers 9 on the board 4) is applied to effect a parallel capture of signals into all of the OSR's 9 on all the chips 1.
  • Step 2: Using the shift clock 17 (preferably a single clock used by all the ISR's 7 and OSR's 9 on all the chips 1), the contents of all of the OSR's 9 are shifted into the target ISR's 7 via the interconnects 19. This shifting process involves N applications of shift clock 17 to all of the chips 1.
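A hedged sketch of this two-step cycle at board level, assuming (as the patent prefers) one parallel load clock and one shift clock shared by every chip. All names here are hypothetical; each wire's OSR is assumed already loaded by the parallel load clock of Step 1.

```python
# Board-level two-step TDM cycle: after Step 1's parallel capture, N shift
# pulses move every OSR's contents over its wire into the target ISR.

N = 3

def tdm_cycle(osrs, isrs, wires):
    """osrs/isrs: dicts mapping a wire id to a list of N stage values."""
    # Step 2: each pulse of the board-wide shift clock advances every
    # register at once; every wire 19 carries one bit per pulse.
    for _ in range(N):
        for w in wires:
            bit = osrs[w].pop()        # bit leaves the source chip's OSR
            osrs[w].insert(0, 0)
            isrs[w].pop()              # bit enters the sink chip's ISR
            isrs[w].insert(0, bit)

wires = ["w1", "w2"]
osrs = {"w1": [1, 1, 0], "w2": [0, 0, 1]}   # captured by the load clock
isrs = {w: [0] * N for w in wires}
tdm_cycle(osrs, isrs, wires)
print(isrs["w1"], isrs["w2"])               # sink logic now sees the signals
```

Because all registers share one shift clock, the whole board completes the transfer in the same N pulses, which is what keeps the process transparent to the synchronously clocked design.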
  • some I/O pins are bi-directional pins 25 (see Figure 8). Such bi-directional pins 25 could be left unmodified. Signals involving such pins 25 are not affected by the above two steps.
  • Inputs 3 that receive direct clock signals (as opposed to data signals) do not require the use of an ISR 7. Using an ISR 7 there would be undesirable, because it would cause the clock to jiggle.
  • output pins 5 that carry clock signals going to other chips 1 do not require the use of OSR's 9. Clock signals cannot be transferred across chips 1 using TDM without significant impact to the intended logic. As such, it is not desirable to use such techniques for I/O pins 3,5 that carry clock signals.
  • Figure 12 shows that if the logic being prototyped is asynchronous, an additional latch 29 is needed for each ISR 7 stage 24 that drives a piece of asynchronous logic 18.
  • a latch 29 is a voltage level or logic level sensitive latch that receives its data from an ISR 7 stage output and is clocked by yet another clock called the P clock 27.
  • P clock 27 is shared among all chips 1 that have asynchronous logic.
  • the function of this added latch 29 is to screen the shifting of the ISR 7 from the asynchronous logic 18.
  • P clock 27 is pulsed once after the completion of the two usual TDM steps, described earlier, that are used to effectuate the transfer of signals across chip 1 boundaries using TDM.
  • some of the stages 24 of the ISR 7 can drive synchronous logic 16, and others of the stages 24 can drive asynchronous logic 18.
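The screening role of latch 29 can be illustrated as follows. This is a simplified model with invented names: while the ISR churns during shifting, the latches hold their old values, and a single P-clock pulse afterward copies the settled ISR state through to the asynchronous logic.

```python
# Model of latch 29: level-sensitive latches between ISR 7 stages and
# asynchronous logic 18, updated only by the P clock 27.

def pclock_pulse(isr_stages, p_latches):
    """One P-clock pulse: copy the settled ISR state into the latches."""
    p_latches[:] = list(isr_stages)

latches = [0, 0, 0]                     # latches 29 feeding async logic 18
isr = [0, 0, 0]
for bit in [1, 0, 1]:                   # TDM shifting: the ISR churns...
    isr.pop()
    isr.insert(0, bit)
    assert latches == [0, 0, 0]         # ...but async logic sees old values
pclock_pulse(isr, latches)              # pulsed once, after the TDM steps
print(latches)                          # async logic sees only the result
```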
  • multiple OSR's 9 can be connected into one daisy-chained composite "test shift register" (TSR) 12 on each chip 1 to facilitate the observation of the captured signals externally by shifting out the TSR 12. All of the observation can be performed at the output of pin 5(M). These observed signals can be used for debugging the prototype hardware, depending upon the availability of extra board 4 logic and pins 3,5. The contents of the TSR 12 can then be reloaded from the output 5(M) of the TSR 12 to the shift register data input 3 on the same chip 1 if it is desired to continue with the operation of the hardware.
  • a preferred way of accomplishing this reloading is to make a connection on the chip 1 itself from output 5(M) to input 3, thereby creating a circular shift register.
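A sketch of the resulting circular shift register, under assumed naming: shifting the TSR through its full length observes every captured bit at output 5(M) while restoring the original contents, so operation of the hardware can continue afterward.

```python
# Circular TSR 12: output pin 5(M) is wired back to input pin 3 on the
# same chip, so M shift pulses both observe and restore the register.

def observe_and_restore(tsr):
    """Shift the full TSR out through pin 5(M) and back in via pin 3."""
    observed = []
    for _ in range(len(tsr)):
        bit = tsr.pop()                 # bit appears on output pin 5(M)
        observed.append(bit)            # externally observed for debugging
        tsr.insert(0, bit)              # feedback wire returns it to input 3
    return observed

tsr = [1, 0, 1, 1, 0]                   # captured OSR contents, daisy-chained
seen = observe_and_restore(tsr)
print(seen)                             # every bit was visible externally
print(tsr)                              # contents restored for continued use
```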
  • the set of TSR's 12 can be multiplexed to be observable at the output 10 of the board 4 as illustrated in Figure 14.
  • Multiplexer select signals 14 control the selection of the individual TSR's 12 outputs by multiplexer 8.
  • the input to all the TSR's 12 is injected via board input 6.
  • the scheme depicted in Figure 14 is useful when there are more chip output pins 5 that the designer wants to observe than there are available board output pins 10. Since the OSR's 9 are usable for debugging purposes, the designer can choose to provide extra, initially unused, stages 24 within some OSR's 9. These stages 24 are then available to be used for observing internal signals that the hardware designer may not have thought about earlier, for example, by changing the programming on the chip 1 to look at these stages 24.
  • Figure 14 shows a module 4 with a number of FPGA's 1, set up for using TDM. To achieve a reprogrammable board 4, one must be able to reprogram both the logic on the chips 1 and the interconnects 19. The on-chip logic is already rather reprogrammable through the use of the FPGA's 1. To reprogram the interconnects 19, an FPGA 1 can be used as a programmable interconnect chip as in U.S.
  • Figure 16 shows three signals, S1, S2, S3, TDM'ed between chips 1(1) and 1(2).
  • the manner in which the three signals S1, S2, S3 connect can be changed, as shown in Figure 17.
  • the connections 19 between the two chips 1 have been rewired.
  • Figure 18 shows four chips 1 with signals S1, S2, S3 connected as shown.
  • signal S3 crossing between chip 1(1) and chip 1(4) is changed to S4.
  • In Figure 20, two interconnected chips 1 are shown.
  • Figure 21 shows how a signal can be added to the interconnect 19 just by reprogramming the individual chip 1(1) and not touching the interconnect 19.
  • chip 1 being able to reprogram a number of signals that are TDM'ed across to other chips 1.
  • Traditional techniques limit the reprogrammability to a single signal.
  • the total number of signals that can be moved across the chip 1 boundaries is predetermined.
  • the total number of signals movable is N multiplied by the number of unique interconnect wires 19 on the board 4. It is apparent that by increasing N, a capacity greater than what is initially needed for the total number of signals can be achieved.
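As a worked instance of this capacity rule (the numbers below are illustrative, not from the patent):

```python
# Capacity rule from the text: total movable signals = N multiplied by
# the number of unique interconnect wires 19 on the board 4.

def signal_capacity(n, interconnect_wires):
    """Total signals movable across chip boundaries via TDM."""
    return n * interconnect_wires

print(signal_capacity(4, 20))   # e.g. N = 4 over 20 wires -> 80 signals
```

Raising N therefore buys capacity beyond what the design initially needs, which is the slack the next bullet exploits for reconfiguring interconnects.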
  • the additional capability is usable for greater flexibility in reconfiguring the interconnects 19. Theoretically, it is possible to accomplish all interconnections between any two chips 1 by using two interconnect wires 19 between them: one to carry signals flowing in one direction, and the second to carry signals flowing in the opposite direction. In this extreme situation of maximum multiplexing, the shift registers 7,9 must have an N greater than or equal to the larger of the number of the input signals and the number of the output signals on any single chip 1. (One signal is associated with each stage 24 of a shift register 7,9.)
  • ISR's 7 and OSR's 9 need not be placed on all qualifying I/O pins 3,5. Instead, if there are insufficient resources on the chip 1, the ISR's and OSR's 7,9 may be deployed only on enough I/O pins 3,5 to allow adequate signal flow through the chips 1. It should be noted that an interconnect 19 must either have ISR's 7 and OSR's 9 on all terminals of that interconnect 19, or else have no ISR's 7 or OSR's 9 at all.
  • the chips 1 are FPGA's
  • Figure 22 shows an optimized version of the scheme in Figure 2.
  • the contents of the OSR 9 can be made observable, while reducing the number of stages 24 in an ISR 7 by one.
  • the state of the last stage 24 of an OSR 9 is used as a substitute for the eliminated stage 24 of an ISR 7.
  • the number of stages 24 needed in an ISR 7 and OSR 9 can be reduced by one, corresponding to the stage 24 the system designer does not need to observe. This reduction is achieved by using the scheme depicted in Figure 3.
  • Figure 3 shows how N equals 2 can provide the capability to multiplex three signals across a single interconnect 19.
  • the final state of the interconnect 19 after application of the TDM sequence is used as one of the multiplexed signals for the sink chip 1(2) .
  • the scheme depicted in Figure 2 does not have the sink chips 1 dependent upon the final state of the interconnect 19, but only on the states of the ISR's 7.
  • the scheme depicted in Figure 1 uses ISR's 7 and OSR's 9 to multiplex signals across chip 1 boundaries.
  • Such a TDM sequence serves to transfer signals once across an interconnect 19.
  • the TDM sequence needs to be repeated.
  • the total number of TDM sequence applications required is equal to the number of distinct board level interconnect wires 19 that are included in the logic path.
  • Figure 15 shows a logic path of combinational logic 22 that spans two board level interconnect wires 19(1) and 19(2). There are no latches other than ISR's 7 and OSR's 9 in the path.
  • the TDM sequence needs to be applied twice to move the signal through the three chips 1.
  • ISR's 7 and OSR's 9 force the use of TDM.
  • without them, the signal from chip 1(1) to chip 1(2) to chip 1(3) would flow without any clocking.
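The rule stated above can be expressed as a trivial calculation (a hypothetical helper, illustrating Figure 15's two-wire path): each application of the TDM sequence carries a signal across exactly one board-level interconnect.

```python
# One TDM sequence moves a signal across one board-level wire 19, so a
# purely combinational path needs one application per wire it spans.

def tdm_applications(path_interconnects):
    """Number of TDM sequence applications to traverse the given path."""
    applications = 0
    for _wire in path_interconnects:   # each wire crossed costs one sequence
        applications += 1
    return applications

print(tdm_applications(["19(1)", "19(2)"]))  # Figure 15's path -> 2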
  • the techniques of the present invention can be extended to bi-directional pins 25 and/or tri-state output drivers 21 (see Figure 4).
  • Such a driver 21 typically has outputs of logical zero, logical one, and high impedance.
  • Buses 19 generally employ tri-state drivers 21 to allow multiple source chips 1 onto a single wire. Such buses 19 are intended by the designer to achieve efficient interconnects between a multitude of source chips 1 and some number of sink chips 1.
  • one source chip output driver 21 is active while others are in a high impedance state, so that one source chip output driver 21 is driving all receivers in the bus 19.
  • the architecture shown in Figure 1 cannot be employed directly in such a case.
  • Figure 4 shows N output pins 5, with a tri-state driver 21 on each pin 5.
  • the OSR 9 scheme of Figure 1 will not work here, because the high impedance state cannot be transmitted via an OSR 9 and an ISR 7. Instead, the scheme shown in Figure 5 needs to be employed. It should be noted that this scheme uses two OSR's 9 instead of one: the first OSR 9(1) to capture the N gating signals and the second OSR 9(2) to capture the N data signals.
  • Output driver 21 sees the corresponding gate-data combination as the OSR's 9 are shifted out.
  • output pin 5 sees the gated output from output driver 21.
  • the generated TDM sequence of signals on output pin 5 combines with similarly generated TDM sequences from other output pins 5 that are connected with this first output pin 5.
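The gate/data pairing of the two-OSR scheme can be sketched as follows. The function name and the use of `"Z"` for the high-impedance state are conventions chosen for this illustration, not terms from the patent.

```python
# Figure 5 sketch: one OSR carries the N gating signals, a second the
# N data signals; the tri-state driver 21 combines each pair as the
# registers shift out, producing data or high impedance ("Z") per slot.

def drive_pin(gate_osr, data_osr):
    """Yield the pin value for each shifted-out gate/data pair."""
    for gate, data in zip(gate_osr, data_osr):
        yield data if gate else "Z"     # driver 21: gated data or hi-Z

gates = [1, 0, 1]                       # captured gating signals (OSR 9(1))
data = [1, 1, 0]                        # captured data signals (OSR 9(2))
print(list(drive_pin(gates, data)))     # the pin's TDM sequence
```

On the bus, each pin's "Z" slots let a different source chip drive that time slot, which is how the multi-source bus behavior survives TDM.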
  • Figure 6 shows N output pins 5 with high impedance drivers 21 that share a common output gating signal 23. Signal 23 either produces a high impedance on the outputs of drivers 21 or else transmits the input signals to the outputs.
  • Figure 7 shows a corresponding single pin architecture that avoids the use of two OSR's 9 but still places output driver 21 between the single OSR 9 and the output pin 5. The use of two OSR's 9 is avoided because of the presence of the common gating signal 23.
  • Figure 8 shows a typical N bi-directional pin 25 configuration with a common output gating signal 23.
  • the same pin 25 is used both to output a signal from the chip 1 and to receive signals from the outside.
  • the scheme of Figure 9 is employed.
  • the architecture of Figure 7 is used to feed the outputs of the N drivers 21, and a single ISR 7 is used for the N input signals as in the Figure 1 embodiment.
  • Figure 10 shows N bi-directional pins 25, each with a different gating output signal.
  • the single-pin 25 architecture equivalent for TDM of the N signals uses the scheme shown in Figure 11.
  • the architecture of Figure 5 is employed for the N output drivers 21, and a single ISR 7 is applied to the N input signals as in the Figure 1 embodiment.
  • Personalization instructions are injected into module 4, e.g., by techniques described in U.S. patent 5,109,353 cited above. More than one logic design can be introduced into the module 4 by different personalization instructions.
  • Software that is intended to provide automatic or interactive partitioning of the intended system design, such as Concept Silicon from InCA cited above, can exploit the knowledge of the existence of ISR's 7 and OSR's 9 to help pack more logic into each of the programmable chips 1 and/or to minimize the number of interconnection wires 19.
  • the software is executed on a computer external to module 4, e.g., a workstation.
  • the software can be standalone (not physically coupled to module 4) software that exploits the hardware architecture of the module 4 as introduced by the invention.
  • the benefit of minimizing the number of chips 1 is obvious.
  • the benefit of minimizing the number of interconnects 19 is significant. If the prototype board 4 is intended to utilize Aptix-type programmable interconnect chips, minimizing the number of interconnects 19 will reduce the number of Aptix components needed to achieve programmable interconnects. Either the software is told the number of virtual I/O's 3,5 rather than the actual number, or else the software is told all about the ISR's 7 and OSR's 9 so that it will take this information into account when it does the partitioning. The programmable interconnect chip does not always need to be changed.
  • ISR's 7 and/or OSR's 9 are placed on the chips 1 to which the programmable interconnect chip is connected, thereby enhancing the programmable interconnect chip.
  • Combining the use of the present invention with the use of one or more programmable interconnect chips can allow the board 4 designer the flexibility of creating additional interconnects 19 among chips 1.
  • Programmable interconnect chips are used to interconnect among two or more I/O pins 3,5. These I/O pins 3,5 could also have ISR's 7 and OSR's 9 on board the chip 1, thus providing a greatly increased number of programmable interconnects.
  • the configuration of the prototype board 4 is changed by a combination of hardware and software.
  • interconnects 19 are reprogrammed, FPGA's and/or programmable interconnect chips 1 are reprogrammed, FPGA's 1 are added or subtracted, and connections are obliterated using lasers.
  • for software changes, software other than the partitioning software is used to, e.g., reprogram the Aptix chip(s) 1.

Abstract

In a multichip integrated circuit module (4), the number of effective input/output pins (3, 5, respectively) is increased by using techniques of TDM (time division multiplexing). A first chip (1) has at least one output shift register (9). A second chip (1) has at least one input shift register (7). Interconnection wires (19) couple the output shift registers (9) and the input shift registers (7). Means (15) are provided for loading data in parallel to the output shift registers (9). Means (17) are provided for sequentially shifting data through the output shift registers (9) over the interconnections (19) and into the input shift registers (7). Embodiments of the invention are described for use in conjunction with bi-directional pins (25), tri-state output drivers (21), and asynchronous logic (18).

Description

MULTICHIP IC DESIGN USING TDM
Field of the Invention
This invention pertains to the field of designing circuits having a plurality of integrated circuit (i.c.) chips, said design using techniques of TDM (time division multiplexing).
Description of Background Art
Dobbelaere et al., "Field Programmable MCM Systems -- Design of an Interconnection Frame", IEEE 1992 Custom Integrated Circuits Conference, pp. 4.6.1-4.6.4, addresses the same problem addressed by the present invention — increasing the number of effective input/output pins in a multichip i.c. module — by different techniques. The reference teaches programming a matrix at the corner of each chip to determine the interconnects among the inputs and outputs. The chips are FPGA's (field programmable gate arrays). The reference does not disclose the use of shift registers or TDM.
"Programmable Interconnect Architecture", Aptix Corporation Technology Backgrounder, Nov. 1991, pp. 1-14, and an Aptix press release dated January 1, 1992, describe use of an areal grid to preserve the ability to change the interconnects among a set of integrated circuit chips in applications where logic is being implemented onto a set of multiple i.c.'s. These references do not disclose techniques of TDM. Shift registers are used, but only to determine which i.c.'s get connected to each other. The present invention uses shift registers to convey signal information from chip to chip.
U.S. patent 5,036,473 uses dedicated FPGA's solely to determine which active FPGA's get connected to which others. The reference discloses the use of software to drive and observe signals, but does not disclose the use of shift registers or TDM.
U.S. patent 5,109,353 discloses an array of programmable gate elements for emulating electronic circuits and systems. It does not disclose the use of shift registers or TDM.
"Handbook of Hardware Modeling", Logic Modeling Systems Incorporated, February 1992, describes the use of software to drive and observe signals in a multichip module. Techniques disclosed in this reference can be used in conjunction with the invention described herein.
"Computer-Aided Prototyping", Quickturn Systems, Inc., 1991, pp. 1-4, discloses partitioning logic for a set of interconnected FPGA's and providing software stimulus to capture and observe the response. There is no disclosure of shift registers or TDM. This technology is further described in "FastForward", LSI Logic, September 1991, and "MARS Product Overview", PiE Design Systems, Inc. Similar technology is described in three press releases by InCA Integrated Circuit Applications: "InCA 'Virtual ASIC Emulation System supports Xilinx 4000 family FPGAs", June 8, 1992; "Virtual ASIC, Automatic ASIC Emulation from InCA", 1991; and "Concept Silicon partitions your design onto multiple FPGAs".
Disclosure of Invention
The present invention is a multichip integrated circuit module (4) comprising at least two integrated circuit chips (1). The first chip (1) has at least one output shift register (9). The second chip (1) has at least one input shift register (7). Interconnections (19) couple the output shift register(s) (9) and the input shift register(s) (7). Means (15) are provided for loading data in parallel to the output shift register(s) (9). Means (17) are provided for sequentially shifting data through the output shift register(s) (9) over the interconnections (19) and into the input shift register(s) (7).
Brief Description of the Drawings
These and other more detailed and specific objects and features of the present invention are more fully disclosed in the following specification, reference being had to the accompanying drawings, in which:
Figure 1 is a sketch of an embodiment of the present invention in which an integrated circuit chip 1 uses at least one input shift register 7 and one output shift register 9.
Figure 2 is a sketch showing how the techniques of the present invention reduce the number of interconnection wires 19 among chips 1.
Figure 3 is a sketch of an embodiment of the present invention in which the number of stages 24 in an input shift register 7 and output shift register 9 can be reduced by one.
Figure 4 is a sketch of a chip 1 using tri-state output drivers 21.
Figure 5 is a sketch of an embodiment of the present invention in which two output shift registers 9 are used in conjunction with a single output driver 21.
Figure 6 is a sketch showing a plurality of tri-state output drivers 21, each having a common gating signal 23.
Figure 7 is a sketch showing how a single output shift register 9 can be used when the output drivers 21 have a common gating signal 23.
Figure 8 is a sketch showing the use of bi-directional pins and a plurality of tri-state output drivers 21 having a common gating signal 23.
Figure 9 is a sketch showing how a single bi-directional pin 25 can be used when a plurality of tri-state output drivers 21 have a common gating signal 23.
Figure 10 shows a chip 1 having a plurality of bi-directional pins 25, each coupled to an output driver 21 that has a different gating signal.
Figure 11 shows a single pin 25 equivalent to the embodiment depicted in Figure 10.
Figure 12 shows an embodiment of the present invention in which asynchronous logic 18 is employed.
Figure 13 shows an embodiment of the present invention in which a plurality of output shift registers 9 are daisy chained together.
Figure 14 shows an embodiment of the present invention in which a plurality of test shift registers 12 are multiplexed together.
Figure 15 shows an embodiment of the present invention in which the TDM sequence must be applied twice.
Figure 16 shows an embodiment of the present invention in which an interconnect 19 couples two chips 1.
Figure 17 shows how the interconnect 19 of Figure 16 can be reconfigured by reprogramming the first chip 1(1).
Figure 18 shows a single interconnect 19 coupling four chips 1.
Figure 19 shows how the interconnect 19 of Figure 18 can be reprogrammed by reprogramming the source chip 1(1) .
Figure 20 shows an embodiment of the present invention in which an interconnect wire 19 couples two chips 1.
Figure 21 shows how the interconnect 19 of Figure 20 can be reprogrammed by adding a new signal S3 to the source chip 1(1) . Figure 22 shows an embodiment of the present invention in which the number of stages 24 in an input shift register 7 can be reduced by one.
Detailed Description of the Preferred Embodiments
A major purpose of this invention is to increase the effective number of input/output (I/O) pins 3,5 on integrated circuit chips 1 within a module 4 that comprises a plurality of said chips 1. The invention can be thought of as creating a number of virtual I/O pins 3,5 that is greater than the number of actual pins 3,5. The invention also eases pressures on the system designer when the number of interconnect wires 19 on the module 4 is limited. Further, the invention enhances the reconfigurability of signal flow across chips 1.
An example where the invention is useful is in the design of a system 4 in which there is a need to fit a given amount of logic, which may be provided in the form of a netlist, into one or more of the chips 1. Normally, said chips 1 come with a fixed number of I/O pins 3,5 that are used for interconnections 19 among the chips 1. The limited number of I/O pins 3,5 can greatly reduce the efficiency of utilizing the chips 1, and in some cases makes the task of fitting the logic into the chips 1 extremely difficult. The module 4 can be a production module, in which the chips 1 are executable and application-ready. Alternatively, module 4 can be a prototype module, in which the chips 1 are experimented with by the designer to create a system design. Changes in such a prototyping environment are typically made through a combination of hardware and software modifications. Alternatively, module 4 can contain some production chips 1 and some programmable chips 1.
Chips 1 are any chips for which the user has control over the contents, such as FPGA's (field programmable gate arrays), non-field-programmable gate arrays, custom i.c.'s, semi-custom i.c.'s (application specific integrated circuits), and standard cell i.c.'s. FPGA's are normally preferable, because of their flexibility. The present invention makes use of techniques of TDM
(time division multiplexing or time domain multiplexing) . The execution of the system embodied in module 4 takes longer because of this, but in many applications this is of no concern. The TDM process is transparent to the operation of the logic embodied in module 4.
Typically the TDM is implemented by shift registers 7,9, as illustrated in Figure 1. Each input pin 3 and each output pin 5 on a chip 1 is assigned a dedicated shift register 7,9, respectively. The number of stages 24 in a shift register 7,9 is referred to as N. N is any positive integer greater than or equal to 2, and is typically between 2 and 5. Preferably, N is the same for all shift registers 7,9 on a chip 1. If N were not the same, a separate shift clock 17 would be needed for each different value of N. In the chip 1 illustrated in Figure 1, the value of N is the same, and therefore there is but one shift clock 17. Each chip 1 can have a plurality of input pins 3 and a different number of output pins 5. The different chips 1 on a module 4 can have different numbers of input pins 3 and output pins 5. The individual stages 24 of the shift registers 7,9 can be, for example, flip-flops, edge triggered latches, and pairs of polarity hold latches. When polarity hold latches are used, a pair of shift clocks 17 is required for use with the corresponding shift register 7,9. The shift register 9 attached to an output pin 5 is referred to as an output shift register (OSR) 9, and functions as N virtual output pins 5. N internal signals from within the chip 1 can be loaded into the OSR 9 in parallel by activating a parallel load clock 15 associated with the OSR 9. These signals are then serially shifted out of the OSR 9 over the corresponding output pin 5 using the shift clock 17 associated with said OSR 9. The signals travel over interconnection wires 19 to other chips 1 that need to receive the signals.
Similarly, the shift register 7 attached to an input pin 3 is referred to as an input shift register (ISR) 7, and functions as N virtual input pins 3. It can receive serially N signals from a board interconnect 19 through the associated input pin 3 by means of the shift clock 17 that is connected to said ISR 7. The received signals can then simultaneously be applied inside the chip 1 using their stored states within the stages of the ISR 7. No special clock is needed to unload the signals from the ISR 7, because once these signals are in the ISR 7, they are visible to the logic within the chip 1.
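The parallel-load and serial-shift behavior of the OSR 9 and ISR 7 described above can be sketched behaviorally. The following is a minimal Python model for illustration only; the class and method names are ours, not taken from the patent:

```python
class OutputShiftRegister:
    """Model of an OSR 9: parallel load of N signals, then serial shift out."""
    def __init__(self, n):
        self.stages = [0] * n  # contents of the N stages 24

    def parallel_load(self, signals):
        # parallel load clock 15: capture N internal chip signals at once
        assert len(signals) == len(self.stages)
        self.stages = list(signals)

    def shift(self):
        # shift clock 17: one bit leaves over output pin 5 onto interconnect 19
        bit = self.stages[-1]
        self.stages = [0] + self.stages[:-1]
        return bit


class InputShiftRegister:
    """Model of an ISR 7: receives serial bits; its stages are directly
    visible to the logic within the chip, so no unload clock is needed."""
    def __init__(self, n):
        self.stages = [0] * n

    def shift_in(self, bit):
        # shift clock 17: accept one bit from the interconnect 19
        self.stages = [bit] + self.stages[:-1]
```

After N shift-clock pulses, the signals loaded into the OSR appear in the ISR stages in their original order, which is what makes the process transparent to the intended logic.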
Preferably, a single parallel load clock 15 and a single shift clock 17 are used for all the ISR's 7 and OSR's 9 on the board 4. Preferably, the OSR's 9, ISR's 7, and attendant parallel load and shift clocks 15, 17 are customized within peripheral regions of the gate arrays 1. Alternatively, the OSR's 9 and ISR's 7 are fabricated from logic normally present on the gate arrays 1. Figure 2 illustrates how the invention minimizes the number of interconnect wires 19 among chips 1 and minimizes the number of I/O pins 3,5 within a chip 1. Figure 2 illustrates the interconnections of an OSR 9 within a source chip 1 and three ISR's 7 residing within three target (sink) chips 1. The target chips 1 could be identical or different in a hardware sense. A single interconnect wire 19 couples the chips 1. Three signals S1, S2, and S3 from the source chip 1 are conveyed to the three sink chips 1. If the TDM and shift register technique were not used, three interconnects 19 would be required. The interconnect wire 19 can be thought of as a TDM bus which bundles in the time domain the three signals S1, S2 and S3. Said TDM bus 19 is visible to all three sink chips 1.
As illustrated in Figure 2, not all of the three signals need to be used in each sink chip 1. (The signals within the sink chips 1 are primed for notational purposes.) The TDM process described herein is transparent to the intended logic design. This transparency can be achieved in different ways, depending upon the degree of transparency needed. For example, if all chips 1 on the module 4 are synchronously clocked (which is preferable), the following two-step TDM process can be used, at a safe time after the application of each pulse from the system clock (not illustrated). The safe time is that amount of time needed for all of the signals in the logic to achieve a steady state, i.e., when the intended design has been programmed into the chips 1.
Step 1. The parallel load clock 15 (which is preferably a single clock applied to all output shift registers 9 on the board 4) is applied to effect a parallel capture of signals into all of the OSR's 9 on all the chips 1.
Step 2. Using the shift clock 17 (preferably a single clock used by all the ISR's 7 and OSR's 9 on all the chips 1), the contents of all of the OSR's 9 are shifted into the target ISR's 7 via the interconnects 19. This shifting process involves N applications of shift clock 17 to all of the chips 1.
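The two-step sequence above can be sketched end-to-end in Python, using the Figure 2 topology as the example: one OSR whose interconnect wire 19 is visible to three sink ISR's. This is a simplified behavioral model; the function and signal names are illustrative only:

```python
def tdm_transfer(osrs, wiring, isrs, signals):
    """Two-step TDM sequence across a module (simplified sketch).

    osrs:    {osr name: list of N stage contents}
    wiring:  {osr name: list of sink ISR names sharing that interconnect 19}
    isrs:    {isr name: list of N stage contents}
    signals: {osr name: N internal chip signals to capture}
    """
    # Step 1: parallel load clock 15 -- capture signals into every OSR at once.
    for name in osrs:
        osrs[name] = list(signals[name])
    n = len(next(iter(osrs.values())))
    # Step 2: N pulses of the single shared shift clock 17.
    for _ in range(n):
        for name, stages in osrs.items():
            bit = stages.pop()           # bit leaves the OSR onto the wire 19
            stages.insert(0, 0)
            for sink in wiring[name]:    # the wire is visible to every sink ISR
                isrs[sink].insert(0, bit)
                isrs[sink].pop()
    return isrs
```

With N = 3, one physical wire carries three logical signals, and every sink ISR ends up holding the full bundle; each sink chip simply ignores the stages it does not need.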
Exceptions to the above scheme exist for various reasons, such as: (1) Certain I/O pins are bi-directional pins 25 (see Figure 8). Such bi-directional pins 25 could be left unmodified. Signals involving such pins 25 are not affected by the above two steps. (2) Inputs 3 that receive direct clock signals (as opposed to data signals) do not require the use of an ISR 7. Placing an ISR 7 on such an input would be undesirable, because it would cause the clock to jiggle. Similarly, output pins 5 that carry clock signals going to other chips 1 do not require the use of OSR's 9. Clock signals cannot be transferred across chips 1 using TDM without significant impact to the intended logic. As such, it is not desirable to use these techniques for I/O pins 3,5 that carry clock signals.
Figure 12 shows that if the logic being prototyped is asynchronous, an additional latch 29 is needed for each ISR 7 stage 24 that drives a piece of asynchronous logic 18. Such a latch 29 is a voltage level or logic level sensitive latch that receives its data from an ISR 7 stage output and is clocked by yet another clock called the P clock 27. P clock 27 is shared among all chips 1 that have asynchronous logic. The function of this added latch 29 is to screen the shifting of the ISR 7 from the asynchronous logic 18. P clock 27 is pulsed once after the completion of the two usual TDM steps, described earlier, that are used to effectuate the transfer of signals across chip 1 boundaries using TDM. As shown in Figure 12, some of the stages 24 of the ISR 7 can drive synchronous logic 16, and others of the stages 24 can drive asynchronous logic 18.
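The screening role of latch 29 can be sketched as follows (behavioral Python; names are ours). During the N shift-clock pulses the value presented to the asynchronous logic 18 never changes; only the single P clock 27 pulse, applied after the TDM steps complete, updates it:

```python
class PLatch:
    """Behavioral sketch of the level-sensitive latch 29 that screens
    asynchronous logic 18 from the intermediate ISR 7 shift states."""
    def __init__(self):
        self.q = 0  # output seen by the asynchronous logic

    def p_clock_pulse(self, isr_stage_value):
        # P clock 27 is pulsed once, only after the TDM load and shift steps
        self.q = isr_stage_value


latch = PLatch()
intermediate = [1, 0, 1]  # ISR stage values while shifting is in progress
seen_during_shift = [latch.q for _ in intermediate]  # latch holds steady
latch.p_clock_pulse(intermediate[-1])  # settled value captured after shifting
```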
As illustrated in Figures 13 and 14, multiple OSR's 9 can be connected into one daisy-chained composite "test shift register" (TSR) 12 on each chip 1 to facilitate the observation of the captured signals externally by shifting out the TSR 12. All of the observation can be performed at the output of pin 5(M). These observed signals can be used for debugging the prototype hardware, depending upon the availability of extra board 4 logic and pins 3,5. The contents of the TSR 12 can then be reloaded from the output 5(M) of the TSR 12 to the shift register data input 3 on the same chip 1 if it is desired to continue with the operation of the hardware. A preferred way of accomplishing this reloading is to make a connection on the chip 1 itself from output 5(M) to input 3, thereby creating a circular shift register. The set of TSR's 12 can be multiplexed to be observable at the output 10 of the board 4, as illustrated in Figure 14. Multiplexer select signals 14 control the selection of the individual TSR 12 outputs by multiplexer 8. The input to all the TSR's 12 is injected via board input 6. The scheme depicted in Figure 14 is useful when there are more chip output pins 5 that the designer wants to observe than there are available board output pins 10. Since the OSR's 9 are usable for debugging purposes, the designer can choose to provide extra, initially unused, stages 24 within some OSR's 9. These stages 24 are then available for observing internal signals that the hardware designer may not have thought of earlier, for example, by changing the programming on the chip 1 to look at these stages 24.
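The circular-shift observation of a TSR 12 can be sketched as follows (an illustrative Python model; names are ours). Because the bit emerging at pin 5(M) is fed back to input 3, one full rotation restores the register contents, so operation can continue afterwards:

```python
def observe_tsr(stage_contents):
    """Shift a daisy-chained TSR 12 out through pin 5(M), recirculating each
    bit back into shift register data input 3 (a circular shift register).

    stage_contents: flat list of all stage 24 values in chain order.
    Returns (observed serial sequence, final register contents).
    """
    tsr = list(stage_contents)
    observed = []
    for _ in range(len(tsr)):
        bit = tsr.pop()        # bit appears externally at output pin 5(M)
        observed.append(bit)
        tsr.insert(0, bit)     # recirculated via the on-chip 5(M)-to-3 wire
    return observed, tsr
```

The observed sequence is simply the chain read out from the far end; after the full rotation the TSR holds exactly what it held before observation.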
It is possible to reconfigure the interconnects 19 among the chips 1 by sending a different set of signals on each interconnect 19 than was previously intended. When combined with the internal reconfigurability of the FPGA 1 itself, a powerful new type of reconfigurability is thereby created. Figure 14 shows a module 4 with a number of FPGA's 1 set up for using TDM. To achieve a reprogrammable board 4, one must be able to reprogram both the logic on the chips 1 and the interconnects 19. The on-chip logic is already readily reprogrammable through the use of the FPGA's 1. To reprogram the interconnects 19, an FPGA 1 can be used as a programmable interconnect chip as in U.S. patent 5,036,473, cited above; or a programmable interconnect chip as described in the Aptix references cited above can be used. Alternatively, several programmable interconnect chips 1 may be used. A limited reprogrammability of interconnects is described in Figures 16-21.
Figure 16 shows three signals, S1, S2, S3, TDM'ed between chips 1(1) and 1(2). By reprogramming the first FPGA 1(1), the manner in which the three signals S1, S2, S3 connect can be changed, as shown in Figure 17. In effect, the connections 19 between the two chips 1 have been rewired. Figure 18 shows four chips 1 with signals S1, S2, S3 connected as shown. By reprogramming the first chip 1(1), as shown in Figure 19, the signal S3 crossing between chip 1(1) and chip 1(4) is changed to S4.
In Figure 20, two interconnected chips 1 are shown. Figure 21 shows how a signal can be added to the interconnect 19 just by reprogramming the individual chip 1(1) and not touching the interconnect 19.
The flexibility described herein results from a chip 1 being able to reprogram a number of signals that are TDM'ed across to other chips 1. Traditional techniques limit the reprogrammability to a single signal.
Once the value N is fixed for a given system 4, the total number of signals that can be moved across the chip 1 boundaries is predetermined. The total number of signals movable is N multiplied by the number of unique interconnect wires 19 on the board 4. It is apparent that by increasing N, a capacity greater than what is initially needed for the total number of signals can be achieved. The additional capability is usable for greater flexibility in reconfiguring the interconnects 19. Theoretically, it is possible to accomplish all interconnections between any two chips 1 by using two interconnect wires 19 between them: one to carry signals flowing in one direction, and the second to carry signals flowing in the opposite direction. In this extreme situation of maximum multiplexing, the shift registers 7,9 must have an N greater than or equal to the larger of the number of input signals and the number of output signals on any single chip 1. (One signal is associated with each stage 24 of a shift register 7,9.)
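The capacity arithmetic above can be made concrete with a trivial sketch (the helper names are ours, introduced for illustration):

```python
def tdm_capacity(n_stages, n_wires):
    # total signals movable across chip boundaries per TDM sequence
    # = N stages multiplied by the number of unique interconnect wires 19
    return n_stages * n_wires


def min_n_for_two_wire_case(num_inputs, num_outputs):
    # extreme case of maximum multiplexing: only two wires between two chips,
    # one per direction; N must be at least the larger of the chip's input
    # and output signal counts (one signal per stage 24)
    return max(num_inputs, num_outputs)
```

For example, N = 4 stages and 16 unique wires on the board move 64 signals per TDM sequence, and a chip with 12 inputs and 9 outputs needs N of at least 12 in the two-wire case.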
The techniques of the present invention can be selectively applied to areas of logic that are considered susceptible to change or require use of TDM to accommodate the number of signals that need to traverse across chip 1 boundaries. As such, ISR's 7 and OSR's 9 need not be placed on all qualifying I/O pins 3,5. Instead, if there are insufficient resources on the chip 1, the ISR's 7 and OSR's 9 may be deployed only on enough I/O pins 3,5 to allow adequate signal flow through the chips 1. It should be noted that an interconnect 19 must have either ISR's 7 and OSR's 9 on all terminals of that interconnect 19, or else must have no ISR's 7 or OSR's 9 at all.
When the chips 1 are FPGA's, it is possible to make engineering changes to prototype hardware 4 by reprogramming the affected FPGA's 1 to (1) change the logic realized by certain FPGA's 1 and/or (2) modify the effective interconnects 19 by capturing different signals into one or more of the OSR's 9 and/or (3) change the signals that are made observable by capturing them into unused stages 24 of OSR's 9.
Figure 22 shows an optimized version of the scheme in Figure 2. Here, the contents of the OSR 9 can be made observable, while reducing the number of stages 24 in an ISR 7 by one. In this embodiment, the state of the last stage 24 of an OSR 9 is used as a substitute for the eliminated stage 24 of an ISR 7. In certain prototyping situations, it may not be necessary to observe the contents of the OSR's 9. In these cases, the number of stages 24 needed in an ISR 7 and OSR 9 can be reduced by one, corresponding to the stage 24 the system designer does not need to observe. This reduction is achieved by using the scheme depicted in Figure 3. Figure 3 shows how N equals 2 can provide the capability to multiplex three signals across a single interconnect 19. In this embodiment, the final state of the interconnect 19 after application of the TDM sequence is used as one of the multiplexed signals for the sink chip 1(2). In contrast, the scheme depicted in Figure 2 does not have the sink chips 1 dependent upon the final state of the interconnect 19, but only on the states of the ISR's 7.
The scheme depicted in Figure 1 uses ISR's 7 and OSR's 9 to multiplex signals across chip 1 boundaries. Such a TDM sequence serves to transfer signals once across an interconnect 19. However, if there is a logic path that is not interrupted by a latch or other sequential logic (other than ISR's 7 and OSR's 9) and spans more than two FPGA chips 1, then the TDM sequence needs to be repeated. The total number of TDM sequence applications required is equal to the number of distinct board level interconnect wires 19 that are included in the logic path. Figure 15 shows a logic path of combinational logic 22 that spans two board level interconnect wires 19(1) and 19(2). There are no latches other than ISR's 7 and OSR's 9 in the path. As such, the TDM sequence needs to be applied twice to move the signal through the three chips 1. Notice that the ISR's 7 and OSR's 9 force the use of TDM. Without ISR's 7 and OSR's 9, the signal from chip 1(1) to chip 1(2) to chip 1(3) would flow without any clocking. The techniques of the present invention can be extended to bi-directional pins 25 and/or three-state output drivers 21 (see Figure 4). Such a driver 21 typically has outputs of logical zero, logical one, and high impedance. Buses 19 generally employ tri-state drivers 21 to allow multiple source chips 1 onto a single wire. Such buses 19 are intended by the designer to achieve efficient interconnects between a multitude of source chips 1 and some number of sink chips 1. Generally, one source chip output driver 21 is active while the others are in a high impedance state, so that one source chip output driver 21 is driving all receivers on the bus 19. The architecture shown in Figure 1 cannot be employed directly in such a case. Figure 4 shows N output pins 5, with a tri-state driver 21 on each pin 5. The OSR 9 scheme of Figure 1 will not work here, because the high impedance state cannot be transmitted via an OSR 9 and an ISR 7.
Instead, the scheme shown in Figure 5 needs to be employed. It should be noted that this scheme uses two OSR's 9 instead of one: the first OSR 9(1) to capture the N gating signals and the second OSR 9(2) to capture the N data signals. Output driver 21 sees the corresponding gate-data combination as the OSR's 9 are shifted out. Thus, output pin 5 sees the gated output from output driver 21. The generated TDM sequence of signals on output pin 5 combines with similarly generated TDM sequences from other output pins 5 that are connected with this first output pin 5. Figure 6 shows N output pins 5 with high impedance drivers 21 that share a common output gating signal 23. Signal 23 either produces a high impedance on the outputs of drivers 21 or else transmits the input signals to the outputs. Figure 7 shows a corresponding single pin architecture that avoids the use of two OSR's 9 but still places output driver 21 between the single OSR 9 and the output pin 5. The use of two OSR's 9 is avoided because of the presence of the common gating signal 23.
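The Figure 5 gate/data pairing, and the way the resulting per-pin TDM sequences combine on a shared bus 19, can be sketched as follows (illustrative Python; "Z" stands for the high impedance state, and the function names are ours):

```python
HIGH_Z = "Z"


def drive_pin(gate_osr, data_osr):
    """Sketch of the Figure 5 scheme: two OSRs 9 shifted out together; the
    tri-state driver 21 combines each gate/data pair as it emerges."""
    pin_sequence = []
    for gate, data in zip(gate_osr, data_osr):
        # gate == 1 enables the driver; otherwise the pin floats (high-Z)
        pin_sequence.append(data if gate else HIGH_Z)
    return pin_sequence


def resolve_bus(*pin_sequences):
    """Bus 19 resolution: in each time slot, at most one driver is active."""
    resolved = []
    for slot in zip(*pin_sequences):
        active = [v for v in slot if v != HIGH_Z]
        assert len(active) <= 1, "bus contention"
        resolved.append(active[0] if active else HIGH_Z)
    return resolved
```

The gating OSR thus time-multiplexes not only data but also the enable/high-impedance decision, which is exactly what the single-OSR scheme of Figure 1 cannot convey.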
Figure 8 shows a typical N bi-directional pin 25 configuration with a common output gating signal 23. In Figure 8, the same pin 25 is used both to output a signal from the chip 1 and to receive signals from the outside. To achieve a single pin 25 architecture for TDM of the N signals, the scheme of Figure 9 is employed. In Figure 9, the architecture of Figure 7 is used to feed the outputs of the N drivers 21, and a single ISR 7 is used for the N input signals as in the Figure 1 embodiment.
Figure 10 shows N bi-directional pins 25, each with a different gating output signal. The single-pin 25 architecture equivalent for TDM of the N signals uses the scheme shown in Figure 11. In Figure 11, the architecture of Figure 5 is employed for the N output drivers 21, and a single ISR 7 is applied to the N input signals as in the Figure 1 embodiment.
Personalization instructions are injected into module 4, e.g., by techniques described in U.S. patent 5,109,353 cited above. More than one logic design can be introduced into the module 4 by different personalization instructions. Software that is intended to provide automatic or interactive partitioning of the intended system design, such as Concept Silicon from InCA cited above, can exploit the knowledge of the existence of
ISR's 7 and OSR's 9 to help pack more logic into each of the programmable chips 1 and/or to minimize the number of interconnection wires 19.
The software is executed on a computer external to module 4, e.g., a workstation. The software can be standalone (not physically coupled to module 4) software that exploits the hardware architecture of the module 4 as introduced by the invention. Alternatively, there can be physical coupling between the software and the module 4, e.g., the computer on which the software is executed can have an electrical connection to the module 4, over which the personalization created by the software is downloaded into the module 4.
The benefit of minimizing the number of chips 1 is obvious. The benefit of minimizing the number of interconnects 19 is significant. If the prototype board 4 is intended to utilize Aptix-type programmable interconnect chips, minimizing the number of interconnects 19 will reduce the number of Aptix components needed to achieve programmable interconnects. Either the software is told the number of virtual I/O's 3,5 rather than the actual number; or else the software is told all about the ISR's 7 and OSR's 9 so that it will take this information into account when it does the partitioning. The programmable interconnect chip does not always need to be changed. Rather, ISR's 7 and/or OSR's 9 are placed on the chips 1 to which the programmable interconnect chip is connected, thereby enhancing the programmable interconnect chip. Combining the use of the present invention with the use of one or more programmable interconnect chips can allow the board 4 designer the flexibility of creating additional interconnects 19 among chips 1. Programmable interconnect chips are used to interconnect among two or more I/O pins 3,5. These I/O pins 3,5 could also have ISR's 7 and OSR's 9 on board the chip 1, thus providing a greatly increased number of programmable interconnects.
The configuration of the prototype board 4 is changed by a combination of hardware and software. For hardware changes, interconnects 19 are reprogrammed, FPGA's and/or programmable interconnect chips 1 are reprogrammed, FPGA's 1 are added or subtracted, and connections are obliterated using lasers. For software changes, software other than the partitioning software is used to, e.g., reprogram the Aptix chip(s) 1.
The above description is included to illustrate the operation of the preferred embodiments and is not meant to limit the scope of the invention. The scope of the invention is to be limited only by the following claims. From the above discussion, many variations will be apparent to one skilled in the art that would yet be encompassed by the spirit and scope of the invention.
What is claimed is:

Claims

1. A multichip i.c. module comprising: at least two integrated circuit chips, a first chip having at least one output shift register and a second chip having at least one input shift register; interconnections coupling the output shift register(s) and the input shift register(s); means for loading signals in parallel to the output shift register(s); and means for sequentially shifting signals through the output shift register(s) over the interconnections and into the input shift register(s).
2. The multichip i.c. module of claim 1 wherein all chips are clocked synchronously.
3. The multichip i.c. module of claim 1 wherein at least one chip contains asynchronous logic, said chip comprising an input shift register that is coupled to said asynchronous logic via a polarity hold latch; wherein: said polarity hold latch is clocked by a P clock.
4. The multichip i.c. module of claim 1 wherein at least some of the chips are gate arrays from the set comprising FPGA's and non-field-programmable gate arrays.
5. The multichip i.c. module of claim 4 wherein the output shift register(s), the input shift register(s), and associated clocking are customized within peripheral regions of the gate arrays.
6. The multichip i.c. module of claim 4 wherein the output shift register(s) and input shift register(s) are fabricated from logic normally present on the gate arrays.
7. The multichip i.c. module of claim 1 wherein the means for loading signals is a parallel load clock.
8. The multichip i.c. module of claim 1 wherein the module is a production module containing executable chips.
9. The multichip i.c. module of claim 1 wherein the multichip module is a prototype module containing at least some experimental chips.
10. The multichip i.c. module of claim 1 further comprising software means to partition logic among the chips, said software means exploiting information concerning the particular architecture of the module.
11. The multichip i.c. module of claim 1 wherein said module contains a programmable interconnect chip.
12. In a multichip i.c. module comprising at least two integrated circuit chips, each chip having a plurality of I/O pins, said module having inter-i.c. connection wires interconnecting said pins, a method for increasing the effective number of I/O pins, said method comprising the steps of: time division multiplexing signals at an output pin of a first chip; and sending said signals over a connection wire to an input pin of a second chip.
13. The method of claim 12 wherein all chips are clocked synchronously.
14. The method of claim 12 wherein said step of time division multiplexing comprises the substeps of: parallel loading signals into an output shift register within a first chip; and accessing signals in parallel from an input shift register within a second chip.
15. The method of claim 14 wherein at least some of the chips are gate arrays from the set comprising FPGA's and non-field-programmable gate arrays.
16. The method of claim 15 wherein the output shift register(s) and input shift register(s) are customized within peripheral regions of said gate arrays.
17. The method of claim 15 wherein the output shift register(s) and input shift register(s) are fabricated from logic normally present on said gate arrays.
18. The method of claim 14 wherein the input and output shift registers comprise stages fabricated from items from the group of items comprising flip-flops, edge triggered latches, and pairs of polarity hold latches.
19. The method of claim 18 wherein one of said stages is reserved for debugging purposes.
20. The method of claim 12 further comprising the additional step of determining a logic design for the module by use of partitioning software that exploits information concerning the particular architecture of the module.
21. The method of claim 12 further comprising the step of observing said signals at said output pin for test purposes .
PCT/US1992/007299 1992-08-28 1992-08-28 Multichip ic design using tdm WO1994006210A1 (en)


Publications (1)

Publication Number Publication Date
WO1994006210A1 1994-03-17



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2164769A (en) * 1984-09-19 1986-03-26 Int Standard Electric Corp Apparatus and method for obtaining reduced pin count packaging
EP0439199A1 (en) * 1986-05-30 1991-07-31 Advanced Micro Devices, Inc. Programmable logic device with means for preloading storage cells therein

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
STANLEY, 'Versatile serial protocol for a microcomputer-peripheral interface', Mini Micro Conference Record, May 1984, New York, US, pages 1-6 *
Patent Abstracts of Japan, vol. 14, no. 238 (P-1050), 21 May 1990 *

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000019353A1 (en) * 1998-09-30 2000-04-06 Koninklijke Philips Electronics N.V. Data carrier
US7098688B2 (en) 1999-09-24 2006-08-29 Mentor Graphics Corporation Regionally time multiplexed emulation system
US6876962B2 (en) 1999-09-24 2005-04-05 Mentor Graphics Corporation Method and apparatus for concurrent emulation of multiple circuit designs on an emulation system
US6947882B1 (en) 1999-09-24 2005-09-20 Mentor Graphics Corporation Regionally time multiplexed emulation system
US6961691B1 (en) 2000-03-30 2005-11-01 Mentor Graphics Corporation Non-synchronized multiplex data transport across synchronous systems
US7379859B2 (en) 2001-04-24 2008-05-27 Mentor Graphics Corporation Emulator with switching network connections
US6658636B2 (en) 2001-07-09 2003-12-02 Eric G. F. Hochapfel Cross function block partitioning and placement of a circuit design onto reconfigurable logic devices
US7130788B2 (en) 2001-10-30 2006-10-31 Mentor Graphics Corporation Emulation components and system including distributed event monitoring, and testing of an IC design under emulation
US7035787B2 (en) 2001-10-30 2006-04-25 Mentor Graphics Corporation Emulation components and system including distributed routing and configuration of emulation resources
US7305633B2 (en) 2001-10-30 2007-12-04 Mentor Graphics Corporation Distributed configuration of integrated circuits in an emulation system
US7143377B1 (en) 2002-03-20 2006-11-28 Mentor Graphics Corporation Functional verification of logic and memory circuits with multiple asynchronous domains
US6817001B1 (en) 2002-03-20 2004-11-09 Kudlugi Muralidhar R Functional verification of logic and memory circuits with multiple asynchronous domains
US7286976B2 (en) 2003-06-10 2007-10-23 Mentor Graphics (Holding) Ltd. Emulation of circuits with in-circuit memory
US7693703B2 (en) 2003-08-01 2010-04-06 Mentor Graphics Corporation Configuration of reconfigurable interconnect portions
US7587649B2 (en) 2003-09-30 2009-09-08 Mentor Graphics Corporation Testing of reconfigurable logic and interconnect sources
US7924845B2 (en) 2003-09-30 2011-04-12 Mentor Graphics Corporation Message-based low latency circuit emulation signal transfer
US7826243B2 (en) 2005-12-29 2010-11-02 Bitmicro Networks, Inc. Multiple chip module and package stacking for storage devices
WO2007076546A3 (en) * 2005-12-29 2008-08-21 Bitmicro Networks Inc Multiple chip module and package stacking method for storage devices
US8093103B2 (en) 2005-12-29 2012-01-10 Bitmicro Networks, Inc. Multiple chip module and package stacking method for storage devices
WO2007076546A2 (en) * 2005-12-29 2007-07-05 Bitmicro Networks, Inc. Multiple chip module and package stacking method for storage devices
GB2452271A (en) * 2007-08-29 2009-03-04 Wolfson Microelectronics Plc Reducing pin count on an integrated circuit
US7683661B2 (en) 2007-08-29 2010-03-23 Wolfson Microelectronics Plc Method to reduce the pin count on an integrated circuit and associated apparatus
US8959307B1 (en) 2007-11-16 2015-02-17 Bitmicro Networks, Inc. Reduced latency memory read transactions in storage devices
US10120586B1 (en) 2007-11-16 2018-11-06 Bitmicro, Llc Memory transaction with reduced latency
US10089425B2 (en) 2008-02-27 2018-10-02 Mentor Graphics Corporation Resource mapping in a hardware emulation environment
US8214192B2 (en) 2008-02-27 2012-07-03 Mentor Graphics Corporation Resource remapping in a hardware emulation environment
US9262567B2 (en) 2008-02-27 2016-02-16 Mentor Graphics Corporation Resource mapping in a hardware emulation environment
CN102349058A (en) * 2009-02-19 2012-02-08 超威半导体公司 Data processing interface device
JP2012518790A (en) * 2009-02-19 2012-08-16 アドバンスト・マイクロ・ディバイシズ・インコーポレイテッド Data processing interface device
WO2010096423A1 (en) * 2009-02-19 2010-08-26 Advanced Micro Devices, Inc. Data processing interface device
US8594966B2 (en) 2009-02-19 2013-11-26 Advanced Micro Devices, Inc. Data processing interface device
US9135190B1 (en) 2009-09-04 2015-09-15 Bitmicro Networks, Inc. Multi-profile memory controller for computing devices
US10149399B1 (en) 2009-09-04 2018-12-04 Bitmicro Llc Solid state drive with improved enclosure assembly
US10133686B2 (en) 2009-09-07 2018-11-20 Bitmicro Llc Multilevel memory bus system
US8788725B2 (en) 2009-09-07 2014-07-22 Bitmicro Networks, Inc. Multilevel memory bus system for solid-state mass storage
US9484103B1 (en) 2009-09-14 2016-11-01 Bitmicro Networks, Inc. Electronic storage device
US9099187B2 (en) 2009-09-14 2015-08-04 Bitmicro Networks, Inc. Reducing erase cycles in an electronic storage device that uses at least one erase-limited memory device
US10082966B1 (en) 2009-09-14 2018-09-25 Bitmicro Llc Electronic storage device
US10180887B1 (en) 2011-10-05 2019-01-15 Bitmicro Llc Adaptive power cycle sequences for data recovery
US9372755B1 (en) 2011-10-05 2016-06-21 Bitmicro Networks, Inc. Adaptive power cycle sequences for data recovery
US9996419B1 (en) 2012-05-18 2018-06-12 Bitmicro Llc Storage system with distributed ECC capability
US9043669B1 (en) 2012-05-18 2015-05-26 Bitmicro Networks, Inc. Distributed ECC engine for storage media
US9423457B2 (en) 2013-03-14 2016-08-23 Bitmicro Networks, Inc. Self-test solution for delay locked loops
US9977077B1 (en) 2013-03-14 2018-05-22 Bitmicro Llc Self-test solution for delay locked loops
US9798688B1 (en) 2013-03-15 2017-10-24 Bitmicro Networks, Inc. Bus arbitration with routing and failover mechanism
US9720603B1 (en) 2013-03-15 2017-08-01 Bitmicro Networks, Inc. IOC to IOC distributed caching architecture
US9875205B1 (en) 2013-03-15 2018-01-23 Bitmicro Networks, Inc. Network of memory systems
US9916213B1 (en) 2013-03-15 2018-03-13 Bitmicro Networks, Inc. Bus arbitration with routing and failover mechanism
US9934160B1 (en) 2013-03-15 2018-04-03 Bitmicro Llc Bit-mapped DMA and IOC transfer with dependency table comprising plurality of index fields in the cache for DMA transfer
US9934045B1 (en) 2013-03-15 2018-04-03 Bitmicro Networks, Inc. Embedded system boot from a storage device
US10489318B1 (en) 2013-03-15 2019-11-26 Bitmicro Networks, Inc. Scatter-gather approach for parallel data transfer in a mass storage system
US10013373B1 (en) 2013-03-15 2018-07-03 Bitmicro Networks, Inc. Multi-level message passing descriptor
US9672178B1 (en) 2013-03-15 2017-06-06 Bitmicro Networks, Inc. Bit-mapped DMA transfer with dependency table configured to monitor status so that a processor is not rendered as a bottleneck in a system
US9858084B2 (en) 2013-03-15 2018-01-02 Bitmicro Networks, Inc. Copying of power-on reset sequencer descriptor from nonvolatile memory to random access memory
US9971524B1 (en) 2013-03-15 2018-05-15 Bitmicro Networks, Inc. Scatter-gather approach for parallel data transfer in a mass storage system
US10423554B1 (en) 2013-03-15 2019-09-24 Bitmicro Networks, Inc Bus arbitration with routing and failover mechanism
US10042799B1 (en) 2013-03-15 2018-08-07 Bitmicro, Llc Bit-mapped DMA transfer with dependency table configured to monitor status so that a processor is not rendered as a bottleneck in a system
US10210084B1 (en) 2013-03-15 2019-02-19 Bitmicro Llc Multi-leveled cache management in a hybrid storage system
US9400617B2 (en) 2013-03-15 2016-07-26 Bitmicro Networks, Inc. Hardware-assisted DMA transfer with dependency table configured to permit-in parallel-data drain from cache without processor intervention when filled or drained
US9430386B2 (en) 2013-03-15 2016-08-30 Bitmicro Networks, Inc. Multi-leveled cache management in a hybrid storage system
US9734067B1 (en) 2013-03-15 2017-08-15 Bitmicro Networks, Inc. Write buffering
US9501436B1 (en) 2013-03-15 2016-11-22 Bitmicro Networks, Inc. Multi-level message passing descriptor
US10120694B2 (en) 2013-03-15 2018-11-06 Bitmicro Networks, Inc. Embedded system boot from a storage device
US9842024B1 (en) 2013-03-15 2017-12-12 Bitmicro Networks, Inc. Flash electronic disk with RAID controller
US9952991B1 (en) 2014-04-17 2018-04-24 Bitmicro Networks, Inc. Systematic method on queuing of descriptors for multiple flash intelligent DMA engine operation
US10078604B1 (en) 2014-04-17 2018-09-18 Bitmicro Networks, Inc. Interrupt coalescing
US10055150B1 (en) 2014-04-17 2018-08-21 Bitmicro Networks, Inc. Writing volatile scattered memory metadata to flash device
US10042792B1 (en) 2014-04-17 2018-08-07 Bitmicro Networks, Inc. Method for transferring and receiving frames across PCI express bus for SSD device
US10025736B1 (en) 2014-04-17 2018-07-17 Bitmicro Networks, Inc. Exchange message protocol message transmission between two devices
US9811461B1 (en) 2014-04-17 2017-11-07 Bitmicro Networks, Inc. Data storage system
US10552050B1 (en) 2017-04-07 2020-02-04 Bitmicro Llc Multi-dimensional computer storage system
CN110825667A (en) * 2019-11-12 2020-02-21 天津飞腾信息技术有限公司 Design method and structure of low-speed IO device controller
CN110825667B (en) * 2019-11-12 2022-03-11 飞腾信息技术有限公司 Design method and structure of low-speed IO device controller

Also Published As

Publication number Publication date
AU2561192A (en) 1994-03-29

Similar Documents

Publication Publication Date Title
WO1994006210A1 (en) Multichip ic design using tdm
JP2614169B2 (en) Programmable array logic and programmable logic
US6034541A (en) In-system programmable interconnect circuit
US9461650B2 (en) User registers implemented with routing circuits in a configurable IC
US5367209A (en) Field programmable gate array for synchronous and asynchronous operation
US6034540A (en) Programmable logic integrated circuit architecture incorporating a lonely register
JP3471088B2 (en) Improved programmable logic cell array architecture
EP1382117B9 (en) A field programmable gate array and microcontroller system-on-a-chip
US7420392B2 (en) Programmable gate array and embedded circuitry initialization and processing
US6526461B1 (en) Interconnect chip for programmable logic devices
JP3325657B2 (en) Integrated circuit
US6747480B1 (en) Programmable logic devices with bidirectional cascades
US6748577B2 (en) System for simplifying the programmable memory to logic interface in FPGA
JPH0256114A (en) Programmable logic device having array block coupled through programmable wiring
GB2312067A (en) Programmable logic with lonely register
WO1990011648A1 (en) Configurable cellular array
WO1993009502A1 (en) Field programmable logic module
US7489162B1 (en) Users registers in a reconfigurable IC
JPH10233676A (en) Method for arraying local mutual connection line inside logic array block and programmable logic circuit
US7827433B1 (en) Time-multiplexed routing for reducing pipelining registers
EP1116141A1 (en) A regionally time multiplexed emulation system
EP1738462B1 (en) Routing architecture with high speed i/o bypass path
US6538469B1 (en) Technique to test an integrated circuit using fewer pins
US7461362B1 (en) Replacing circuit design elements with their equivalents
US6977520B1 (en) Time-multiplexed routing in a programmable logic device architecture

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR CA CH CS DE DK ES FI GB HU JP KP KR LK LU MG MN MW NL NO PL RO RU SD SE US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL SE BF BJ CF CG CI CM GA GN ML MR SN TD TG

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
122 Ep: PCT application non-entry in European phase
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA