WO2004042562A2 - Pipeline accelerator and related system and method - Google Patents
- Publication number
- WO2004042562A2 (PCT/US2003/034558)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- pipeline
- hardwired
- operable
- memory
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/167—Interprocessor communication using a common memory, e.g. mailbox
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/82—Architectures of general purpose stored program computers data or demand driven
Definitions
- a common computing architecture for processing relatively large amounts of data in a relatively short period of time includes multiple interconnected processors that share the processing burden. By sharing the processing burden, these multiple processors can often process the data more quickly than a single processor can for a given clock frequency. For example, each of the processors can process a respective portion of the data or execute a respective portion of a processing algorithm.
- FIG. 1 is a schematic block diagram of a conventional computing machine 10 having a multi-processor architecture.
- the machine 10 includes a master processor 12 and coprocessors 14₁ - 14ₙ, which communicate with each other and the master processor via a bus 16, an input port 18 for receiving raw data from a remote device (not shown in FIG. 1), and an output port 20 for providing processed data to the remote device.
- the machine 10 also includes a memory 22 for the master processor 12, respective memories 24₁ - 24ₙ for the coprocessors 14₁ - 14ₙ, and a memory 26 that the master processor and coprocessors share via the bus 16.
- the computing machine 10 effectively divides the processing of raw data among the master processor 12 and the coprocessors 14.
- the remote source such as a sonar array loads the raw data via the port 18 into a section of the shared memory 26, which acts as a first-in-first-out (FIFO) buffer (not shown) for the raw data.
- the master processor 12 retrieves the raw data from the memory 26 via the bus 16, and then the master processor and the coprocessors 14 process the raw data, transferring data among themselves as necessary via the bus 16.
- the master processor 12 loads the processed data into another FIFO buffer (not shown) defined in the shared memory 26, and the remote source retrieves the processed data from this FIFO via the port 20.
- After retrieving the raw data from the raw-data FIFO (not shown) in the memory 26, the master processor 12 performs a first operation, such as a trigonometric function, on the raw data. This operation yields a first result, which the processor 12 stores in a first-result FIFO (not shown) defined within the memory 26.
- the processor 12 executes a program stored in the memory 22, and performs the above-described actions under the control of the program.
- the processor 12 may also use the memory 22 as working memory to temporarily store data that the processor generates at intermediate intervals of the first operation.
- the coprocessors 14₂ - 14ₙ sequentially perform the third through nth operations on the second through (n-1)th results in a manner similar to that discussed above for the coprocessor 14₁.
- the nth operation, which is performed by the coprocessor 14ₙ, yields the final result, i.e., the processed data.
- the coprocessor 14ₙ loads the processed data into a processed-data FIFO (not shown) defined within the memory 26, and the remote device (not shown in FIG. 1) retrieves the processed data from this FIFO.
- the computing machine 10 is often able to process the raw data faster than a computing machine having a single processor that sequentially performs the different operations.
- the single processor cannot retrieve a new set of the raw data until it performs all n + 1 operations on the previous set of raw data.
- the master processor 12 can retrieve a new set of raw data after performing only the first operation. Consequently, for a given clock frequency, this pipeline technique can increase the speed at which the machine 10 processes the raw data by a factor of approximately n + 1 as compared to a single-processor machine (not shown in FIG. 1).
- the computing machine 10 may process the raw data in parallel by simultaneously performing n + 1 instances of a processing algorithm, such as an FFT, on the raw data. That is, if the algorithm includes n + 1 sequential operations as described above in the previous example, then each of the master processor 12 and the coprocessors 14 sequentially performs all n + 1 operations on respective sets of the raw data. Consequently, for a given clock frequency, this parallel-processing technique, like the above-described pipeline technique, can increase the speed at which the machine 10 processes the raw data by a factor of approximately n + 1 as compared to a single-processor machine (not shown in FIG. 1).
- Although the computing machine 10 can process data more quickly than a single-processor computing machine (not shown in FIG. 1), the data-processing speed of the machine 10 is often significantly less than the frequency of the processor clock. Specifically, the data-processing speed of the computing machine 10 is limited by the time that the master processor 12 and coprocessors 14 require to process data. For brevity, an example of this speed limitation is discussed in conjunction with the master processor 12, although it is understood that this discussion also applies to the coprocessors 14. As discussed above, the master processor 12 executes a program that controls the processor to manipulate data in a desired manner. This program includes a sequence of instructions that the processor 12 executes.
- the processor 12 typically requires multiple clock cycles to execute a single instruction, and often must execute multiple instructions to process a single value of data. For example, suppose that the processor 12 is to multiply a first data value A (not shown) by a second data value B (not shown). During a first clock cycle, the processor 12 retrieves a multiply instruction from the memory 22. During second and third clock cycles, the processor 12 respectively retrieves A and B from the memory 26. During a fourth clock cycle, the processor 12 multiplies A and B, and, during a fifth clock cycle, stores the resulting product in the memory 22 or 26 or provides the resulting product to the remote device (not shown). This is a best-case scenario, because in many cases the processor 12 requires additional clock cycles for overhead tasks such as initializing and closing counters.
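The clock-cycle accounting in the multiply example above can be tallied in a short sketch. The five-cycle figure is the best case described in the text; the breakdown below is illustrative, not a claim about any particular processor:

```python
# Best-case clock-cycle tally for one multiply on a processor like the
# processor 12, following the sequence described above (illustrative).
cycles = {
    "fetch multiply instruction": 1,  # from the program memory
    "retrieve operand A": 1,          # from the shared memory
    "retrieve operand B": 1,          # from the shared memory
    "multiply A * B": 1,
    "store/provide product": 1,       # to memory or to the remote device
}

total_cycles = sum(cycles.values())
# Effective data-processing speed as a fraction of the clock frequency:
# one result every `total_cycles` clocks.
throughput_fraction = 1 / total_cycles

print(total_cycles)         # 5 clock cycles per multiply, best case
print(throughput_fraction)  # 0.2, i.e. results at one fifth of the clock rate
```

Overhead tasks such as initializing and closing counters, mentioned in the text, would only add cycles and lower `throughput_fraction` further.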
- FIG. 2 is a block diagram of a hardwired data pipeline 30 that can typically process data faster than a processor can for a given clock frequency, and often at substantially the same rate at which the pipeline is clocked.
- the pipeline 30 includes operator circuits 32₁ - 32ₙ, each of which performs a respective operation on respective data without executing program instructions. That is, the desired operation is "burned in" to a circuit 32 such that it implements the operation automatically, without the need of program instructions.
- the pipeline 30 can typically perform more operations per second than a processor can for a given clock frequency.
- the pipeline 30 can often solve the following equation faster than a processor can for a given clock frequency: Y(xₖ) = (5xₖ + 3)·2^xₖ, where xₖ represents a sequence of raw data values.
- the pipeline 30 continues processing subsequent raw data values xₖ in this manner until all the raw data values are processed. Consequently, after a delay of two clock cycles following receipt of the raw data value x₁ (this delay is often called the latency of the pipeline 30), the pipeline generates the result (5x₁ + 3)·2^x₁, and thereafter generates one result, e.g., (5x₂ + 3)·2^x₂, (5x₃ + 3)·2^x₃, . . ., (5xₙ + 3)·2^xₙ, each clock cycle.
- the pipeline 30 thus has a data-processing speed equal to the clock speed.
- if the master processor 12 and coprocessors 14 (FIG. 1) have data-processing speeds that are 0.4 times the clock speed, as in the above example, then the pipeline 30 can process data 2.5 times faster than the computing machine 10 (FIG. 1) for a given clock speed.
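The equation and latency discussed above can be modeled with a small, cycle-accurate sketch. The split into three stages (multiply by 5, add 3, multiply by 2^x) is an assumption consistent with the result (5xₖ + 3)·2^xₖ; the pipeline's actual circuit detail is not reproduced here:

```python
# Sketch of a hardwired pipeline computing Y(x) = (5x + 3) * 2**x.
# Stage 1 multiplies by 5, stage 2 adds 3, stage 3 multiplies by 2**x;
# each stage hands its latched value to the next on every clock, so a
# result for x1 emerges two clocks after x1 enters (the latency), and
# one result emerges per clock thereafter.

def run_pipeline(raw_values):
    s1 = s2 = s3 = None   # pipeline latches; each carries (value, x)
    results = []
    # Two extra clocks at the end flush the last values through the stages.
    for x in list(raw_values) + [None, None]:
        if s3 is not None:                      # stage-3 latch holds a result
            results.append(s3)
        # All stages latch "simultaneously": each reads its upstream
        # neighbor's value from before this clock edge.
        s3 = (s2[0] * 2 ** s2[1]) if s2 is not None else None
        s2 = (s1[0] + 3, s1[1]) if s1 is not None else None
        s1 = (5 * x, x) if x is not None else None
    if s3 is not None:
        results.append(s3)
    return results

print(run_pipeline([1, 2, 3]))  # [16, 52, 144]
```

For x = 1, 2, 3 the outputs are (5·1+3)·2¹ = 16, (5·2+3)·2² = 52, and (5·3+3)·2³ = 144, one per clock once the pipeline is full.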
- a designer may choose to implement the pipeline 30 in a programmable logic IC (PLIC), such as a field-programmable gate array (FPGA), because a PLIC allows more design and modification flexibility than does an application specific IC (ASIC).
- the designer merely sets interconnection-configuration registers disposed within the PLIC to predetermined binary states. The combination of all these binary states is often called “firmware.”
- the designer loads this firmware into a nonvolatile memory (not shown in FIG. 2) that is coupled to the PLIC. When one "turns on” the PLIC, it downloads the firmware from the memory into the interconnection-configuration registers.
- the designer merely modifies the firmware and allows the PLIC to download the modified firmware into the interconnection-configuration registers.
- This ability to modify the PLIC by merely modifying the firmware is particularly useful during the prototyping stage and for upgrading the pipeline 30 "in the field".
- the hardwired pipeline 30 may not be the best choice to execute algorithms that entail significant decision making, particularly nested decision making.
- a processor can typically execute a nested-decision-making instruction (e.g., a nested conditional instruction such as "if A, then do B, else if C, do D else do n") approximately as fast as it can execute an operational instruction (e.g., "A + B") of comparable length.
- Although the pipeline 30 may be able to make a relatively simple decision (e.g., "A > B?") efficiently, it typically cannot execute a nested decision (e.g., "if A, then do B, else if C, do D, . . ., else do n") as efficiently as a processor can.
- processors are typically used in applications that require significant decision making, and hardwired pipelines are typically limited to "number crunching" applications that entail little or no decision making.
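The contrast above can be sketched as follows. The functions and values are illustrative: the first mirrors the text's nested conditional as sequential, data-dependent control flow, while the second is the kind of fixed compare-select that a hardwired pipeline can implement as a single multiplexer-like stage:

```python
# A processor handles a nested conditional naturally as a sequence of
# instructions (mirrors "if A, then do B, else if C, do D, else do n"):
def processor_style(a, c, b_val, d_val, n_val):
    if a:
        return b_val
    elif c:
        return d_val
    else:
        return n_val

# A hardwired pipeline is better suited to a simple, fixed decision
# ("A > B?") whose outcome selects between precomputed data paths:
def pipeline_style_select(a, b):
    return a if a > b else b   # fixed dataflow, no nested branching

print(processor_style(False, True, "B", "D", "n"))  # D
print(pipeline_style_select(7, 3))                  # 7
```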
- Computing components, such as processors and their peripherals, typically include industry-standard communication interfaces that facilitate the interconnection of the components to form a processor-based computing machine.
- a standard communication interface typically includes two layers: a physical layer and a services layer.
- the physical layer includes the circuitry and the corresponding circuit interconnections that form the interface and the operating parameters of this circuitry.
- the physical layer includes the pins that connect the component to a bus, the buffers that latch data received from the pins, and the drivers that drive signals onto the pins.
- the operating parameters include the acceptable voltage range of the data signals that the pins receive, the signal timing for writing and reading data, and the supported modes of operation (e.g., burst mode, page mode).
- Conventional physical layers include transistor-transistor logic (TTL) and RAMBUS.
- the services layer includes the protocol by which a computing component transfers data.
- the protocol defines the format of the data and the manner in which the component sends and receives the formatted data.
- Conventional services layers include the file-transfer protocol (FTP) and the transmission control protocol/internet protocol (TCP/IP).
- a pipeline accelerator includes a memory and a hardwired-pipeline circuit coupled to the memory.
- the hardwired-pipeline circuit is operable to receive data, load the data into the memory, retrieve the data from the memory, process the retrieved data, and provide the processed data to an external source.
- the hardwired-pipeline circuit is operable to receive data, process the received data, load the processed data into the memory, retrieve the processed data from the memory, and provide the retrieved processed data to an external source.
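The two data-flow orders described above (buffer then process, and process then buffer) can be sketched as follows. The `process` function and the memory layout are placeholders, not details from the patent:

```python
# Sketch of the two data flows of the hardwired-pipeline circuit and
# its coupled memory. `process` stands in for the hardwired pipelines;
# the "memory" is just a FIFO-like buffer here.

def process(value):            # placeholder for the hardwired operation
    return value * 2

def buffer_then_process(data, memory):
    # receive -> load into memory -> retrieve -> process -> provide
    memory.extend(data)
    retrieved = [memory.pop(0) for _ in range(len(memory))]
    return [process(v) for v in retrieved]

def process_then_buffer(data, memory):
    # receive -> process -> load into memory -> retrieve -> provide
    memory.extend(process(v) for v in data)
    return [memory.pop(0) for _ in range(len(memory))]

mem = []
print(buffer_then_process([1, 2, 3], mem))  # [2, 4, 6]
print(process_then_buffer([1, 2, 3], mem))  # [2, 4, 6]
```

Either order yields the same processed output; the difference is whether the memory buffers raw data on the way in or processed data on the way out.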
- FIG. 2 is a block diagram of a conventional hardwired pipeline.
- FIG. 4 is a block diagram of the pipeline accelerator of FIG. 3 according to an embodiment of the invention.
- FIG. 5 is a block diagram of the hardwired-pipeline circuit and the data memory of FIG. 4 according to an embodiment of the invention.
- FIG. 6 is a block diagram of the memory-write interfaces of the communication shell of FIG. 5 according to an embodiment of the invention.
- FIG. 7 is a block diagram of the memory-read interfaces of the communication shell of FIG. 5 according to an embodiment of the invention.
- FIG. 8 is a block diagram of the pipeline accelerator of FIG. 3 according to another embodiment of the invention.
- FIG. 9 is a block diagram of the hardwired-pipeline circuit and the data memory of FIG. 8 according to an embodiment of the invention.
- the peer-vector computing machine 40 includes a processor memory 46, an interface memory 48, a bus 50, a firmware memory 52, an optional raw-data input port 54, a processed-data output port 58, and an optional router 61.
- the accelerator 44 may be disposed on multiple PLICs, these PLICs and their respective firmware memories may be disposed in multiple pipeline units (FIG. 4).
- the accelerator 44 and pipeline units are discussed further below and in previously cited U.S. Patent App. Serial No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD.
- the accelerator 44 may be disposed on at least one ASIC, and thus may have internal interconnections that are unconfigurable.
- the machine 40 may omit the firmware memory 52.
- the accelerator 44 is shown including multiple pipelines 74, it may include only a single pipeline.
- the accelerator 44 may include one or more processors such as a digital-signal processor (DSP).
- the accelerator 44 may include a data input port and/or a data output port.
- FIG. 4 is a schematic block diagram of the pipeline accelerator 44 of FIG. 3 according to an embodiment of the invention.
- this peer-vector architecture prevents data "bottlenecks" that otherwise might occur if all of the pipeline units 78 communicated through a central location such as a master pipeline unit (not shown) or the host processor 42. Furthermore, it allows one to add or remove peers from the peer-vector machine 40 (FIG. 3) without significant modifications to the machine.
- the pipeline circuit 80 includes a communication interface 82, which transfers data between a peer, such as the host processor 42 (FIG. 3), and the following other components of the pipeline circuit: the hardwired pipelines 74₁ - 74ₙ (FIG. 3) via a communication shell 84, a controller 86, an exception manager 88, and a configuration manager 90.
- the pipeline circuit 80 may also include an industry-standard bus interface 91. Alternatively, the functionality of the interface 91 may be included within the communication interface 82. By designing the components of the pipeline circuit 80 as separate modules, one can often simplify the design of the pipeline circuit.
- the communication interface 82 sends and receives data in a format recognized by the message handler 64 (FIG. 3), and thus typically facilitates the design and modification of the peer-vector machine 40 (FIG. 3). For example, if the data format is an industry standard such as the Rapid I/O format, then one need not design a custom interface between the host processor 42 and the accelerator 44. Furthermore, by allowing the pipeline circuit 80 to communicate with other peers, such as the host processor 42 (FIG. 3), via the pipeline bus 50 instead of via a non-bus interface, one can change the number of pipeline units 78 by merely connecting or disconnecting them (or the circuit cards that hold them) to the pipeline bus instead of redesigning a non-bus interface from scratch each time a pipeline unit is added or removed.
- the hardwired pipelines 74₁ - 74ₙ perform respective operations on data as discussed above in conjunction with FIG. 3 and in previously cited U.S. Patent App. Serial No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD, and the communication shell 84 interfaces the pipelines to the other components of the pipeline circuit 80 and to circuits (such as a data memory 92 discussed below) external to the pipeline circuit.
- the controller 86 synchronizes the hardwired pipelines 74₁ - 74ₙ and monitors and controls the sequence in which they perform the respective data operations in response to communications, i.e., "events," from other peers.
- a peer such as the host processor 42 may send an event to the pipeline unit 78 via the pipeline bus 50 to indicate that the peer has finished sending a block of data to the pipeline unit and to cause the hardwired pipelines 74₁ - 74ₙ to begin processing this data.
- An event that includes data is typically called a message, and an event that does not include data is typically called a "door bell.”
- the pipeline unit 78 may also synchronize the pipelines 74₁ - 74ₙ in response to a synchronization signal.
- the configuration manager 90 sets the soft configuration of the hardwired pipelines 74₁ - 74ₙ, the communication interface 82, the communication shell 84, the controller 86, the exception manager 88, and the interface 91 in response to soft-configuration data from the host processor 42 (FIG. 3). As discussed in previously cited U.S. Patent App. Serial No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD, the hard configuration denotes the actual topology, on the transistor and circuit-block level, of the pipeline circuit 80, and the soft configuration denotes the physical parameters (e.g., data width, table size) of the hard-configured components.
- soft configuration data is similar to the data that can be loaded into a register of a processor (not shown in FIG. 4) to set the operating mode (e.g., burst-memory mode) of the processor.
- the host processor 42 may send soft-configuration data that causes the configuration manager 90 to set the number and respective priority levels of queues in the communication interface 82.
- the exception manager 88 may also send soft-configuration data that causes the configuration manager 90 to, e.g., increase the size of an overflowing buffer in the communication interface 82.
- the pipeline unit 78 of the accelerator 44 includes the data memory 92, an optional communication bus 94, and, if the pipeline circuit is a PLIC, the firmware memory 52 (FIG. 3).
- the data memory 92 buffers data as it flows between another peer, such as the host processor 42 (FIG. 3), and the hardwired pipelines 74₁ - 74ₙ, and is also a working memory for the hardwired pipelines.
- the communication interface 82 interfaces the data memory 92 to the pipeline bus 50 (via the communication bus 94 and the industry-standard interface 91 if present), and the communication shell 84 interfaces the data memory to the hardwired pipelines 74₁ - 74ₙ.
- the industry-standard interface 91 is a conventional bus-interface circuit that reduces the size and complexity of the communication interface 82 by effectively offloading some of the interface circuitry from the communication interface. Therefore, if one wishes to change the parameters of the pipeline bus 50 or router 61 (FIG. 3), then he need only modify the interface 91 and not the communication interface 82. Alternatively, one may dispose the interface 91 in an IC (not shown) that is external to the pipeline circuit 80. Offloading the interface 91 from the pipeline circuit 80 frees up resources on the pipeline circuit for use in, e.g., the hardwired pipelines 74₁ - 74ₙ and the controller 86. Or, as discussed above, the bus interface 91 may be part of the communication interface 82.
- the firmware memory 52 stores the firmware that sets the hard configuration of the pipeline circuit.
- the memory 52 loads the firmware into the pipeline circuit 80 during the configuration of the accelerator 44, and may receive modified firmware from the host processor 42 (FIG. 3) via the communication interface 82 during or after the configuration of the accelerator.
- the loading and receiving of firmware is further discussed in previously cited U.S. Patent App. Serial No. 10/684,057 entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD.
- the pipeline circuit 80, data memory 92, and firmware memory 52 may be disposed on a circuit board or card 98, which may be plugged into a pipeline-bus connector (not shown) much like a daughter card can be plugged into a slot of a mother board in a personal computer (not shown).
- conventional ICs and components such as a power regulator and a power sequencer may also be disposed on the card 98 as is known.
- Further details of the structure and operation of the pipeline unit 78 are discussed below in conjunction with FIG. 5.
- FIG. 5 is a block diagram of the pipeline unit 78 of FIG. 4 according to an embodiment of the invention.
- the pipeline circuit 80 receives a master CLOCK signal, which drives the below-described components of the pipeline circuit either directly or indirectly.
- the pipeline circuit 80 may generate one or more slave clock signals (not shown) from the master CLOCK signal in a conventional manner.
- the pipeline circuit 80 may also receive a synchronization signal SYNC as discussed below.
- the data memory 92 includes an input dual-port-static-random-access memory (DPSRAM) 100, an output DPSRAM 102, and an optional working DPSRAM 104.
- the input DPSRAM 100 includes an input port 106 for receiving data from a peer, such as the host processor 42 (FIG. 3), via the communication interface 82, and includes an output port 108 for providing this data to the hardwired pipelines 74₁ - 74ₙ via the communication shell 84.
- Having two ports, one for data input and one for data output, increases the speed and efficiency of data transfer to/from the DPSRAM 100 because the communication interface 82 can write data to the DPSRAM while the pipelines 74₁ - 74ₙ read data from the DPSRAM.
- using the DPSRAM 100 to buffer data from a peer such as the host processor 42 allows the peer and the pipelines 74₁ - 74ₙ to operate asynchronously relative to one another.
- the peer can send data to the pipelines 74₁ - 74ₙ without "waiting" for the pipelines to complete a current operation.
- the pipelines 74₁ - 74ₙ can retrieve data without "waiting" for the peer to complete a data-sending operation.
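The decoupling that the dual-port input memory provides can be sketched with a simple two-port buffer: the peer-side port writes while the pipeline-side port reads, with no handshake between the two. The class and port names are illustrative; a Python deque stands in for the dual-port SRAM:

```python
from collections import deque

# Sketch of a dual-port buffer decoupling a data producer (the peer side)
# from a data consumer (the pipeline side). Neither side waits on the other.

class DualPortBuffer:
    def __init__(self):
        self._cells = deque()

    def write_port(self, value):      # used by the communication-interface side
        self._cells.append(value)

    def read_port(self):              # used by the pipeline side
        return self._cells.popleft() if self._cells else None

buf = DualPortBuffer()
# Interleaved, asynchronous-style access: the writer keeps sending while
# earlier data is still unread, and the reader drains at its own pace.
buf.write_port("d0")
buf.write_port("d1")          # peer sends more while d0 is still buffered
first = buf.read_port()       # pipeline retrieves without pausing the peer
buf.write_port("d2")
drained = [buf.read_port(), buf.read_port()]
print(first, drained)         # d0 ['d1', 'd2']
```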
- the output DPSRAM 102 includes an input port 110 for receiving data from the hardwired pipelines 74₁ - 74ₙ via the communication shell 84, and includes an output port 112 for providing this data to a peer, such as the host processor 42 (FIG. 3), via the communication interface 82.
- the two data ports 110 (input) and 112 (output) increase the speed and efficiency of data transfer to/from the DPSRAM 102, and using the DPSRAM 102 to buffer data from the pipelines 74₁ - 74ₙ allows the peer and the pipelines to operate asynchronously relative to one another.
- the pipelines 74₁ - 74ₙ can publish data to the peer without "waiting" for the output-data handler 126 to complete a data transfer to the peer or to another peer.
- the output-data handler 126 can transfer data to a peer without "waiting" for the pipelines 74₁ - 74ₙ to complete a data-publishing operation.
- Although the DPSRAMs 100, 102, and 104 are described as being external to the pipeline circuit 80, one or more of these DPSRAMs, or equivalents thereto, may be internal to the pipeline circuit.
- the communication interface 82 includes an industry-standard bus adapter 118, an input-data handler 120, input-data and input-event queues 122 and 124, an output-data handler 126, and output-data and output-event queues 128 and 130.
- Although the queues 122, 124, 128, and 130 are shown as single queues, one or more of these queues may include sub-queues (not shown) that allow segregation by, e.g., priority, of the values stored in the queues or of the respective data that these values represent.
- the industry-standard bus adapter 118 includes the physical layer that allows the transfer of data between the pipeline circuit 80 and the pipeline bus 50 (FIG. 4) via the communication bus 94. Therefore, if one wishes to change the parameters of the bus 94, then he need only modify the adapter 118 and not the entire communication interface 82. Where the industry-standard bus interface 91 is omitted from the pipeline unit 78, the adapter 118 may be modified to allow the transfer of data directly between the pipeline bus 50 and the pipeline circuit 80. In this latter implementation, the modified adapter 118 includes the functionality of the bus interface 91, and one need only modify the adapter 118 if he/she wishes to change the parameters of the bus 50.
- the input-data handler 120 receives data from the industry-standard adapter 118, loads the data into the DPSRAM 100 via the input port 106, and generates and stores a pointer to the data and a corresponding data identifier in the input-data queue 122. If the data is the payload of a message from a peer, such as the host processor 42 (FIG. 3), then the input-data handler 120 extracts the data from the message before loading the data into the DPSRAM 100.
- the input-data handler 120 includes an interface 132, which writes the data to the input port 106 of the DPSRAM 100 and which is further discussed below in conjunction with FIG. 6. Alternatively, the input-data handler 120 can omit the extraction step and load the entire message into the DPSRAM 100.
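The input-data handler's bookkeeping described above (load the payload into the data memory, then queue a pointer and a data identifier for later routing) can be sketched as follows. The message layout, identifier values, and address scheme are illustrative assumptions:

```python
# Sketch of the pointer-and-identifier scheme: the payload is loaded into
# the input data memory and a (pointer, data-identifier) pair is queued so
# a downstream manager can later locate and route the data.

input_memory = {}      # address -> data (stands in for the input DPSRAM)
input_data_queue = []  # entries of (pointer, data_identifier)
next_addr = 0

def handle_input_message(message):
    global next_addr
    payload = message["payload"]          # extract the data from the message
    pointer = next_addr
    input_memory[pointer] = payload       # load into the memory's input port
    next_addr += len(payload)             # simple bump allocator (illustrative)
    input_data_queue.append((pointer, message["data_id"]))

handle_input_message({"data_id": "pipeline-1", "payload": [10, 20, 30]})
pointer, data_id = input_data_queue[0]
print(data_id, input_memory[pointer])   # pipeline-1 [10, 20, 30]
```

The queue entry is small (a pointer plus an identifier), so routing decisions can be made without copying the payload itself.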
- the validation manager 134 may also cause the input-data handler 120 to send to the host processor 42 (FIG. 3) an exception message that identifies the exception (erroneously received data/event) and the peer that caused the exception.
- the output-data handler 126 retrieves processed data from locations of the DPSRAM 102 pointed to by the output-data queue 128, and sends the processed data to one or more peers, such as the host processor 42 (FIG. 3), via the industry-standard bus adapter 118.
- the output-data handler 126 includes an interface 136, which reads the processed data from the DPSRAM 102 via the port 112. The interface 136 is further discussed below in conjunction with FIG. 7.
- the output-data handler 126 also retrieves from the output-event queue 130 events generated by the pipelines 74₁ - 74ₙ, and sends the retrieved events to one or more peers, such as the host processor 42 (FIG. 3), via the industry-standard bus adapter 118.
- the output-data handler 126 includes a subscription manager 138, which includes a list of peers, such as the host processor 42 (FIG. 3), that subscribe to the processed data and to the events; the output-data handler uses this list to send the data/events to the correct peers. If a peer prefers the data/event to be the payload of a message, then the output-data handler 126 retrieves the network or bus-port address of the peer from the subscription manager 138, generates a header that includes the address, and generates the message from the data/event and the header.
- Although the data management described above for the DPSRAMs 100 and 102 involves the use of pointers and data identifiers, one may modify the input- and output-data handlers 120 and 126 to implement other data-management techniques.
- Conventional examples of such data-management techniques include pointers using keys or tokens, input/output control (IOC) blocks, and spooling.
- the communication shell 84 includes a physical layer that interfaces the hardwired pipelines 74₁ - 74ₙ to the output-data queue 128, the controller 86, and the DPSRAMs 100, 102, and 104.
- the shell 84 includes interfaces 140 and 142, and optional interfaces 144 and 146.
- the sequence manager 148 maintains a predetermined internal operating synchronization among the hardwired pipelines 74₁ - 74ₙ.
- For example, to avoid all of the pipelines 74₁ - 74ₙ simultaneously retrieving data from the DPSRAM 100, it may be desired to synchronize the pipelines such that while the first pipeline 74₁ is in a preprocessing state, the second pipeline 74₂ is in a processing state and the third pipeline 74₃ is in a post-processing state. Because a state of one pipeline 74 may require a different number of clock cycles than a concurrently performed state of another pipeline, the pipelines 74₁ - 74ₙ may lose synchronization if allowed to run freely.
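The staggered synchronization described above can be sketched as a rotation of three pipelines through three states, so that no two pipelines occupy the same state (and thus contend for the input memory) at once. The state names follow the text; the rotation scheme itself is an illustrative assumption:

```python
# Sketch of staggered pipeline synchronization: three pipelines rotate
# through (preprocessing, processing, post-processing) so exactly one
# pipeline occupies each state on every step.

STATES = ["preprocessing", "processing", "post-processing"]

def staggered_states(step, n_pipelines=3):
    # Pipeline i is offset by i through the state cycle at a given step.
    return [STATES[(step + i) % len(STATES)] for i in range(n_pipelines)]

for step in range(3):
    states = staggered_states(step)
    # No two pipelines share a state, so no two contend for the same
    # memory port in the same way at the same time.
    assert sorted(states) == sorted(STATES)

print(staggered_states(0))  # ['preprocessing', 'processing', 'post-processing']
```

A sequence manager enforcing this rotation must stall a fast pipeline until its slower neighbors finish their current states, since, as the text notes, freely running pipelines with unequal state durations would drift apart.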
- the sequence manager 148 reads the pointer and the data identifier from the input-data queue 122, determines from the data identifier the pipeline or pipelines 74₁ - 74ₙ for which the data is intended, and passes the pointer to the pipeline or pipelines via the communication shell 84.
- the data-receiving pipeline or pipelines 74₁ - 74ₙ cause the interface 140 to retrieve the data from the pointed-to location of the DPSRAM 100 via the port 108.
- the output-data handler 126 retrieves the pointer and the data identifier from the output-data queue 128, the subscription manager 138 determines from the identifier the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the data, the interface 136 retrieves the data from the pointed-to location of the DPSRAM 102 via the port 112, and the output-data handler sends the data to the industry-standard bus adapter 118. If a destination peer requires the data to be the payload of a message, then the output-data handler 126 generates the message and sends the message to the adapter 118. For example, suppose the data has multiple destination peers and the pipeline bus 50 supports message broadcasting.
- the output-data handler 126 generates a single header that includes the addresses of all the destination peers, combines the header and data into a message, and sends (via the adapter 118 and the industry-standard bus interface 91) a single message to all of the destination peers simultaneously.
- the output- data handler 726 generates a respective header, and thus a respective message, for each destination peer, and sends each of the messages separately.
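The two delivery strategies above (one broadcast message with a combined header versus one message per destination peer) can be sketched as follows. This is a software illustration only; the function and field names are assumptions, not the patent's implementation:

```python
def build_messages(data, destinations, bus_supports_broadcast):
    """Illustrative sketch of the two delivery strategies described above.
    All names here are hypothetical, chosen only for clarity."""
    if bus_supports_broadcast:
        # A single header carrying all destination addresses: one message
        # is sent to all destination peers simultaneously.
        return [{"header": {"destinations": list(destinations)},
                 "payload": data}]
    # Otherwise a respective header, and thus a respective message, is
    # generated for each destination peer and sent separately.
    return [{"header": {"destinations": [d]}, "payload": data}
            for d in destinations]
```

A broadcast-capable bus thus carries one message regardless of the number of peers, while a non-broadcast bus carries one per peer.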
- the industry-standard bus interface 97 receives a signal (which originates from a peer, such as the host processor 42 of FIG. 3) from the pipeline bus 50 (and the router 67 if present), and translates the signal into a header (i.e., a data-less message) that includes the event.
- the industry-standard bus adapter 778 converts the header from the industry-standard bus interface 97 into a format that is compatible with the input-data handler 720.
- the input-data handler 720 extracts from the header the event and a description of the event.
- the description may include, e.g., the address of the pipeline unit 78, the type of event, or an instance identifier that identifies the pipeline(s) 78 1 - 78 n for which the event is intended.
- the validation manager 734 analyzes the event description and confirms that the event is intended for one of the hardwired pipelines 74 1 - 74 n, and the input-data handler 720 stores the event and its description in the input-event queue 724.
- the sequence manager 748 reads the event and its description from the input-event queue 724, and, in response to the event, triggers the operation of one or more of the pipelines 74 1 - 74 n as discussed above.
- the sequence manager 748 may trigger the pipeline 74 2 to begin processing data that the pipeline 74 1 previously stored in the DPSRAM 704.
- To output an event, the sequence manager 748 generates the event and a description of the event, and loads the event and its description into the output-event queue 730; the event description identifies the destination peer(s) for the event if there is more than one possible destination peer. For example, as discussed above, the event may confirm the receipt and implementation of an input event, an input-data or input-event message, or a SYNC pulse.
- the output-data handler 726 retrieves the event and its description from the output-event queue 730, the subscription manager 738 determines from the event description the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the event, and the output-data handler sends the event to the proper destination peer or peers via the industry-standard bus adapter 778 and the industry-standard bus interface 97 as discussed above.
- the industry-standard bus adapter 778 receives the command from the host processor 42 (FIG. 3) via the industry-standard bus interface 97, and provides the command to the input-data handler 720 in a manner similar to that discussed above for a data-less event (i.e., a doorbell).
- the configuration manager 90 implements the command.
- the command may cause the configuration manager 90 to disable one of the pipelines 74 1 - 74 n for debugging purposes.
- the command may allow a peer, such as the host processor 42 (FIG. 3), to read the current configuration of the pipeline circuit 80 from the configuration manager 90 via the output-data handler 726.
- a configuration command to define an exception that is recognized by the exception manager 88.
- a component such as the input-data queue 722, of the pipeline circuit 80 triggers an exception to the exception manager 88.
- the component includes an exception-triggering adapter (not shown) that monitors the component and triggers the exception in response to a predetermined condition or set of conditions.
- the exception-triggering adapter may be a universal circuit that can be designed once and then included as part of each component of the pipeline circuit 80 that generates exceptions.
- the exception manager 88 generates an exception identifier.
- the identifier may indicate that the input-data queue 722 has overflowed.
- the identifier may include its destination peer if there is more than one possible destination peer.
- the output-data handler 726 retrieves the exception identifier from the exception manager 88 and sends the exception identifier to the host processor 42 (FIG. 3) as discussed in previously cited U.S. Patent App. Serial No. 10/684,053 entitled COMPUTING MACHINE HAVING IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD.
- the exception identifier can also include destination information from which the subscription manager 738 determines the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the identifier.
- the output-data handler 726 then sends the identifier to the destination peer or peers via the industry-standard bus adapter 778 and the industry-standard bus interface 97.
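The exception-routing step described above, in which destination information in the exception identifier lets the subscription manager resolve the destination peer or peers, can be sketched in software. All names and the table layout below are assumptions for illustration, not the patent's design:

```python
def route_exception(identifier, subscriptions):
    """Hypothetical sketch: map an exception identifier to its destination
    peer(s) via a subscription table; fall back to the host processor when
    no peer has subscribed to this exception type."""
    return subscriptions.get(identifier["type"], ["host_processor"])

# Example: an input-data-queue overflow subscribed to by two peers.
subs = {"input_queue_overflow": ["host_processor", "monitor_peer"]}
assert route_exception({"type": "input_queue_overflow"}, subs) == \
    ["host_processor", "monitor_peer"]
assert route_exception({"type": "unknown"}, subs) == ["host_processor"]
```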
- the data memory 92 may include other types of memory ICs such as quad-data-rate (QDR) SRAMs.
- FIG. 6 is a block diagram of the interface 742 of FIG. 5 according to an embodiment of the invention.
- the interface 742 writes processed data from the hardwired pipelines 74 1 - 74 n to the DPSRAM 702.
- the structure of the interface 742 reduces or eliminates data "bottlenecks" and, where the pipeline circuit 80 (FIG. 5) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
- the interface 742 includes write channels 750 1 - 750 n, one channel for each hardwired pipeline 74 1 - 74 n (FIG. 5), and includes a controller 752.
- the channel 750 1 is discussed below, it being understood that the operation and structure of the other channels 750 2 - 750 n are similar unless stated otherwise.
- the FIFO 754 1 receives the data from the pipeline 74 1 via a bus 758 1, receives the address of the location to which the data is to be written via a bus 760 1, and provides the data and address to the register 756 1 via busses 762 1 and 764 1, respectively. Furthermore, the FIFO 754 1 receives a WRITE FIFO signal from the pipeline 74 1 on a line 766 1, receives a CLOCK signal via a line 768 1, and provides a FIFO FULL signal to the pipeline 74 1 on a line 770 1.
- the FIFO 754 1 receives a READ FIFO signal from the controller 752 via a line 772 1, and provides a FIFO EMPTY signal to the controller via a line 774 1.
- the pipeline circuit 80 (FIG. 5) is a PLIC
- the busses 758 1, 760 1, 762 1, and 764 1 and the lines 766 1, 768 1, 770 1, 772 1, and 774 1 are preferably formed using local routing resources.
- local routing resources are preferred to global routing resources because the signal-path lengths are generally shorter and the routing is easier to implement.
- the register 756 1 receives the data to be written and the address of the write location from the FIFO 754 1 via the busses 762 1 and 764 1, respectively, and provides the data and address to the port 770 of the DPSRAM 702 (FIG. 5) via an address/data bus 776. Furthermore, the register 756 1 also receives the data and address from the registers 756 2 - 756 n via an address/data bus 778 1 as discussed below. In addition, the register 756 1 receives a SHIFT/LOAD signal from the controller 752 via a line 780. Where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 776 is typically formed using global routing resources, and the busses 778 1 - 778 n-1 and the line 780 are preferably formed using local routing resources.
- the controller 752 provides a WRITE DPSRAM signal to the port 770 of the DPSRAM 702 (FIG. 5) via a line 782.
- the FIFO 754 1 drives the FIFO FULL signal to the logic level corresponding to the current state ("full" or "not full") of the FIFO.
- the FIFO 754 1 drives the FIFO EMPTY signal to the logic level corresponding to the current state ("empty" or "not empty") of the FIFO.
- the controller 752 asserts the READ FIFO signal and drives the SHIFT/LOAD signal to the load logic level, thus loading the first loaded data and address from the FIFO into the register 756 1. If the FIFO 754 1 is empty, the controller 752 does not assert READ FIFO, but does drive SHIFT/LOAD to the load logic level if any of the other FIFOs 754 2 - 754 n are not empty.
- the controller 752 drives the SHIFT/LOAD signal to the shift logic level and asserts the WRITE DPSRAM signal, thus serially shifting the data and addresses from the registers 756 1 - 756 n onto the address/data bus 776 and loading the data into the corresponding locations of the DPSRAM 702. Specifically, during a first shift cycle, the data and address from the register 756 1 are shifted onto the bus 776 such that the data from the FIFO 754 1 is loaded into the addressed location of the DPSRAM 702. Also during the first shift cycle, the data and address from the register 756 2 are shifted into the register 756 1, the data and address from the register 756 3 (not shown) are shifted into the register 756 2, and so on.
- the controller 752 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 756 1 - 756 n. Furthermore, if one of the registers 756 1 - 756 n is empty during a particular shift operation because its corresponding FIFO 754 1 - 754 n was empty when the controller 752 loaded the register, then the controller may bypass the empty register, and thus shorten the shift operation by avoiding shifting null data and a null address onto the bus 776. [133] Referring to FIGS. 5 and 6, according to an embodiment of the invention, the interface 744 is similar to the interface 742, and the interface 732 is also similar to the interface 742 except that the interface 732 includes only one write channel 750.
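The load-then-shift write path described above, including the empty-register bypass, can be modeled in software. This is a behavioral sketch only (the patent describes hardware registers and a shared bus, not Python lists; the function names are assumptions):

```python
def drain_write_channels(fifos, dpsram):
    """Behavioral model of the shift-register write path described above.
    Each FIFO holds (address, data) pairs. One load phase moves the head of
    each non-empty FIFO into its register; the shift phase then drains the
    registers toward the address/data bus, writing each pair into the
    DPSRAM. Registers left empty (because their FIFO was empty) are
    bypassed, so no null data or null address reaches the bus."""
    while any(fifos):
        # Load phase: each non-empty FIFO loads its register.
        registers = [f.pop(0) if f else None for f in fifos]
        # Shift phase: write register contents to memory, skipping empties.
        for addr, data in (r for r in registers if r is not None):
            dpsram[addr] = data

mem = {}
drain_write_channels([[(0, "a"), (1, "b")], [], [(2, "c")]], mem)
assert mem == {0: "a", 1: "b", 2: "c"}
```

Note how the middle channel's empty FIFO never contributes a null write, mirroring the bypass that shortens the shift operation.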
- FIG. 7 is a block diagram of the interface 740 of FIG. 5 according to an embodiment of the invention.
- the interface 740 reads input data from the DPSRAM 700 and transfers this data to the hardwired pipelines 74 1 - 74 n.
- the structure of the interface 740 reduces or eliminates data "bottlenecks" and, where the pipeline circuit 80 (FIG. 5) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
- the interface 740 includes read channels 790 1 - 790 n, one channel for each hardwired pipeline 74 1 - 74 n (FIG. 5), and a controller 792.
- the read channel 790 1 is discussed below, it being understood that the operation and structure of the other read channels 790 2 - 790 n are similar unless stated otherwise.
- the channel 790 1 includes a FIFO 794 1 and an address/identifier (ID) register 796 1.
- the identifier identifies which of the pipelines 74 1 - 74 n made the request to read data from a particular location of the DPSRAM 700, so that the data can be returned to the requesting pipeline.
- the FIFO 794 1 includes two sub-FIFOs (not shown), one for storing the address of the location within the DPSRAM 700 from which the pipeline 74 1 wishes to read the input data, and the other for storing the data read from the DPSRAM 700. Therefore, the FIFO 794 1 reduces or eliminates the bottleneck that may occur if the pipeline 74 1 had to "wait" to provide the read address to the channel 790 1 until the controller 792 finished reading previous data, or if the controller had to wait until the pipeline 74 1 retrieved the read data before the controller could read subsequent data.
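The decoupling provided by the two sub-FIFOs can be illustrated with a small software model (class and method names are assumptions; the hardware uses wires and handshake signals rather than method calls):

```python
from collections import deque

class ReadChannel:
    """Hypothetical sketch of the read channel's two sub-FIFOs: the
    pipeline posts read addresses without waiting for the controller, and
    the controller deposits read data without waiting for the pipeline to
    consume it, so neither side blocks on the other."""
    def __init__(self):
        self.addr_fifo = deque()   # addresses awaiting the controller
        self.data_fifo = deque()   # data awaiting the pipeline

    def post_address(self, addr):
        # Pipeline side: queue a read address (non-blocking).
        self.addr_fifo.append(addr)

    def service(self, dpsram):
        # Controller side: perform one pending read, if any.
        if self.addr_fifo:
            self.data_fifo.append(dpsram[self.addr_fifo.popleft()])

    def take_data(self):
        # Pipeline side: retrieve read data when it is ready.
        return self.data_fifo.popleft() if self.data_fifo else None

ch = ReadChannel()
ch.post_address(0)
ch.post_address(1)          # two reads queued before any is serviced
ch.service({0: "x", 1: "y"})
assert ch.take_data() == "x"
```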
- the FIFO 794 1 receives the read address from the pipeline 74 1 via a bus 798 1 and provides the address and ID to the register 796 1 via a bus 200 1. Since the ID corresponds to the pipeline 74 1 and typically does not change, the FIFO 794 1 may store the ID and concatenate the ID with the address. Alternatively, the pipeline 74 1 may provide the ID to the FIFO 794 1 via the bus 798 1. Furthermore, the FIFO 794 1 receives a READ/WRITE FIFO signal from the pipeline 74 1 via a line 202 1, receives a CLOCK signal via a line 204 1, and provides a FIFO FULL (of read addresses) signal to the pipeline via a line 206 1.
- the FIFO 794 1 receives a WRITE/READ FIFO signal from the controller 792 via a line 208 1, and provides a FIFO EMPTY signal to the controller via a line 270 1. Moreover, the FIFO 794 1 receives the read data and the corresponding ID from the controller 792 via a bus 272, and provides this data to the pipeline 74 1 via a bus 274 1.
- the pipeline circuit 80 (FIG. 5) is a PLIC
- the busses 798 1, 200 1, and 274 1 and the lines 202 1, 204 1, 206 1, 208 1, and 270 1 are preferably formed using local routing resources, and the bus 272 is typically formed using global routing resources.
- the busses 220 1 - 220 n-1 and the line 222 are preferably formed using local routing resources.
- the controller 792 receives the data read from the port 708 of the DPSRAM 700 (FIG. 5) via a bus 224 and generates a READ DPSRAM signal on a line 226, which couples this signal to the port 708.
- the pipeline circuit 80 (FIG. 5) is a PLIC
- the bus 224 and the line 226 are typically formed using global routing resources.
- the FIFO 794 1 drives the FIFO FULL signal to the logic level corresponding to the current state ("full" or "not full") of the FIFO relative to the read addresses. That is, if the FIFO 794 1 is full of addresses to be read, then it drives the logic level of FIFO FULL to one level, and if the FIFO is not full of read addresses, it drives the logic level of FIFO FULL to another level.
- the pipeline 74 1 drives the address of the data to be read onto the bus 798 1, and asserts the READ/WRITE FIFO signal to a write level, thus loading the address into the FIFO.
- the pipeline 74 1 gets the address from the input-data queue 722 via the sequence manager 748. If, however, the FIFO 794 1 is full of read addresses, the pipeline 74 1 waits until the FIFO is not full before loading the read address.
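The FIFO FULL backpressure handshake described above can be sketched as follows. This is a software model under stated assumptions (the hardware signal is a wire, not a return value, and the names are hypothetical):

```python
def load_read_address(fifo, addr, capacity):
    """Sketch of the FIFO FULL handshake: the pipeline may load a read
    address only while the FIFO is not full of read addresses; otherwise
    it must wait and retry."""
    if len(fifo) >= capacity:   # FIFO FULL asserted
        return False            # pipeline waits until the FIFO is not full
    fifo.append(addr)           # READ/WRITE FIFO driven to the write level
    return True

fifo = []
assert load_read_address(fifo, 0x10, capacity=2)
assert load_read_address(fifo, 0x11, capacity=2)
assert not load_read_address(fifo, 0x12, capacity=2)  # full: must wait
```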
- the controller 792 may recognize the ID and drive only the WRITE/READ FIFO signal on the line 208 1 to the write level. This eliminates the need for the controller 792 to send the ID to the FIFOs 794 1 - 794 n.
- the WRITE/READ FIFO signal may be only a read signal, and the FIFO 794 1 (as well as the other FIFOs 794 2 - 794 n) may load the data on the bus 272 when the ID on the bus 272 matches the ID of the FIFO 794 1. This eliminates the need for the controller 792 to generate a write signal.
- the address and ID from the register 796 1 are shifted onto the busses 276 and 278 such that the controller 792 reads data from the location of the DPSRAM 700 specified by the FIFO 794 2.
- the controller 792 drives the WRITE/READ FIFO signal to a write level and drives the received data and the ID onto the bus 272. Because the ID is the ID from the FIFO 794 2, the FIFO 794 2 recognizes the ID and thus loads the data from the bus 272. The remaining FIFOs 794 1 and 794 3 - 794 n do not load the data because the ID on the bus 272 does not correspond to their IDs.
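The ID-matched return path described above, where only the FIFO whose ID matches the ID on the shared bus loads the data, can be modeled with a short sketch (names are illustrative; the hardware performs the comparison in every FIFO in parallel):

```python
def return_read_data(bus_id, bus_data, channel_fifos):
    """Sketch of the ID-matched return path: the controller drives the
    data and the requesting channel's ID onto the shared bus; only the
    FIFO whose ID matches loads the data, and all others ignore it."""
    for fifo_id, fifo in channel_fifos.items():
        if fifo_id == bus_id:
            fifo.append(bus_data)

channels = {1: [], 2: [], 3: []}
return_read_data(2, 0xBEEF, channels)
# Only channel 2's FIFO loads the data; channels 1 and 3 are untouched.
assert channels == {1: [], 2: [0xBEEF], 3: []}
```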
- FIG. 8 is a schematic block diagram of a pipeline unit 230 of FIG. 4 according to another embodiment of the invention.
- the pipeline unit 230 is similar to the pipeline unit 78 of FIG. 4 except that the pipeline unit 230 includes multiple pipeline circuits 80 — here two pipeline circuits 80a and 80b.
- Increasing the number of pipeline circuits 80 typically allows an increase in the number n of hardwired pipelines 74 1 - 74 n, and thus an increase in the functionality of the pipeline unit 230 as compared to the pipeline unit 78.
- the services components, i.e., the communication interface 82, the controller 86, the exception manager 88, the configuration manager 90, and the optional industry-standard bus interface 97, are disposed on the pipeline circuit 80a, and the pipelines 74 1 - 74 n and the communication shell 84 are disposed on the pipeline circuit 80b.
- Because the services components and the pipelines 74 1 - 74 n are disposed on separate pipeline circuits, one can include a higher number n of pipelines and/or more complex pipelines than where the services components and the pipelines are located on the same pipeline circuit.
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005502225A JP2006518058A (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator, related system and method for improved computing architecture |
AU2003287320A AU2003287320B2 (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator and related system and method |
CA002503617A CA2503617A1 (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator for improved computing architecture and related system and method |
EP03781553A EP1573515A2 (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator and related system and method |
Applications Claiming Priority (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US42250302P | 2002-10-31 | 2002-10-31 | |
US60/422,503 | 2002-10-31 | ||
US10/684,102 | 2003-10-09 | ||
US10/683,929 | 2003-10-09 | ||
US10/684,053 | 2003-10-09 | ||
US10/684,057 | 2003-10-09 | ||
US10/683,932 | 2003-10-09 | ||
US10/684,057 US7373432B2 (en) | 2002-10-31 | 2003-10-09 | Programmable circuit and related computing machine and method |
US10/684,102 US7418574B2 (en) | 2002-10-31 | 2003-10-09 | Configuring a portion of a pipeline accelerator to generate pipeline date without a program instruction |
US10/683,932 US7386704B2 (en) | 2002-10-31 | 2003-10-09 | Pipeline accelerator including pipeline circuits in communication via a bus, and related system and method |
US10/684,053 US7987341B2 (en) | 2002-10-31 | 2003-10-09 | Computing machine using software objects for transferring data that includes no destination information |
US10/685,929 US6990562B2 (en) | 2001-04-07 | 2003-10-14 | Memory controller to communicate with memory devices that are associated with differing data/strobe ratios |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2004042562A2 true WO2004042562A2 (en) | 2004-05-21 |
WO2004042562A3 WO2004042562A3 (en) | 2005-08-11 |
Family
ID=34831533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2003/034558 WO2004042562A2 (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator and related system and method |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2004042562A2 (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007087507A2 (en) * | 2006-01-26 | 2007-08-02 | Exegy Incorporated | Firmware socket module for fpga-based pipeline processing |
US7711844B2 (en) | 2002-08-15 | 2010-05-04 | Washington University Of St. Louis | TCP-splitter: reliable packet monitoring methods and apparatus for high speed networks |
US7921046B2 (en) | 2006-06-19 | 2011-04-05 | Exegy Incorporated | High speed processing of financial information using FPGA devices |
US9047243B2 (en) | 2011-12-14 | 2015-06-02 | Ip Reservoir, Llc | Method and apparatus for low latency data distribution |
US9363078B2 (en) | 2007-03-22 | 2016-06-07 | Ip Reservoir, Llc | Method and apparatus for hardware-accelerated encryption/decryption |
US9547680B2 (en) | 2005-03-03 | 2017-01-17 | Washington University | Method and apparatus for performing similarity searching |
US9898312B2 (en) | 2003-05-23 | 2018-02-20 | Ip Reservoir, Llc | Intelligent data storage and processing using FPGA devices |
US9990393B2 (en) | 2012-03-27 | 2018-06-05 | Ip Reservoir, Llc | Intelligent feed switch |
US10037568B2 (en) | 2010-12-09 | 2018-07-31 | Ip Reservoir, Llc | Method and apparatus for managing orders in financial markets |
US10062115B2 (en) | 2008-12-15 | 2018-08-28 | Ip Reservoir, Llc | Method and apparatus for high-speed processing of financial market depth data |
US10102260B2 (en) | 2012-10-23 | 2018-10-16 | Ip Reservoir, Llc | Method and apparatus for accelerated data translation using record layout detection |
US10121196B2 (en) | 2012-03-27 | 2018-11-06 | Ip Reservoir, Llc | Offload processing of data packets containing financial market data |
US10146845B2 (en) | 2012-10-23 | 2018-12-04 | Ip Reservoir, Llc | Method and apparatus for accelerated format translation of data in a delimited data format |
US10158377B2 (en) | 2008-05-15 | 2018-12-18 | Ip Reservoir, Llc | Method and system for accelerated stream processing |
US10191974B2 (en) | 2006-11-13 | 2019-01-29 | Ip Reservoir, Llc | Method and system for high performance integration, processing and searching of structured and unstructured data |
US10572824B2 (en) | 2003-05-23 | 2020-02-25 | Ip Reservoir, Llc | System and method for low latency multi-functional pipeline with correlation logic and selectively activated/deactivated pipelined data processing engines |
US10621192B2 (en) | 2012-10-23 | 2020-04-14 | IP Resevoir, LLC | Method and apparatus for accelerated format translation of data in a delimited data format |
US10650452B2 (en) | 2012-03-27 | 2020-05-12 | Ip Reservoir, Llc | Offload processing of data packets |
US10846624B2 (en) | 2016-12-22 | 2020-11-24 | Ip Reservoir, Llc | Method and apparatus for hardware-accelerated machine learning |
US10902013B2 (en) | 2014-04-23 | 2021-01-26 | Ip Reservoir, Llc | Method and apparatus for accelerated record layout detection |
US10942943B2 (en) | 2015-10-29 | 2021-03-09 | Ip Reservoir, Llc | Dynamic field data translation to support high performance stream data processing |
US11436672B2 (en) | 2012-03-27 | 2022-09-06 | Exegy Incorporated | Intelligent switch for processing financial market data |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8095508B2 (en) | 2000-04-07 | 2012-01-10 | Washington University | Intelligent data storage and processing using FPGA devices |
US7418574B2 (en) | 2002-10-31 | 2008-08-26 | Lockheed Martin Corporation | Configuring a portion of a pipeline accelerator to generate pipeline date without a program instruction |
US7809982B2 (en) | 2004-10-01 | 2010-10-05 | Lockheed Martin Corporation | Reconfigurable computing machine and related systems and methods |
US7702629B2 (en) | 2005-12-02 | 2010-04-20 | Exegy Incorporated | Method and device for high performance regular expression pattern matching |
WO2007121035A2 (en) | 2006-03-23 | 2007-10-25 | Exegy Incorporated | Method and system for high throughput blockwise independent encryption/decryption |
US7840482B2 (en) | 2006-06-19 | 2010-11-23 | Exegy Incorporated | Method and system for high speed options pricing |
US8326819B2 (en) | 2006-11-13 | 2012-12-04 | Exegy Incorporated | Method and system for high performance data metatagging and data indexing using coprocessors |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5892962A (en) * | 1996-11-12 | 1999-04-06 | Lucent Technologies Inc. | FPGA-based processor |
EP1061439A1 (en) * | 1999-06-15 | 2000-12-20 | Hewlett-Packard Company | Memory and instructions in computer architecture containing processor and coprocessor |
-
2003
- 2003-10-31 WO PCT/US2003/034558 patent/WO2004042562A2/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5892962A (en) * | 1996-11-12 | 1999-04-06 | Lucent Technologies Inc. | FPGA-based processor |
EP1061439A1 (en) * | 1999-06-15 | 2000-12-20 | Hewlett-Packard Company | Memory and instructions in computer architecture containing processor and coprocessor |
Non-Patent Citations (1)
Title |
---|
SALCIC Z ET AL: "FLIX environment for generation of custom-configurable machines in FPLDs for embedded applications" MICROPROCESSORS AND MICROSYSTEMS, IPC BUSINESS PRESS LTD. LONDON, GB, vol. 23, no. 8-9, 15 December 1999 (1999-12-15), pages 513-526, XP004254077 ISSN: 0141-9331 * |
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7711844B2 (en) | 2002-08-15 | 2010-05-04 | Washington University Of St. Louis | TCP-splitter: reliable packet monitoring methods and apparatus for high speed networks |
US10929152B2 (en) | 2003-05-23 | 2021-02-23 | Ip Reservoir, Llc | Intelligent data storage and processing using FPGA devices |
US10719334B2 (en) | 2003-05-23 | 2020-07-21 | Ip Reservoir, Llc | Intelligent data storage and processing using FPGA devices |
US10572824B2 (en) | 2003-05-23 | 2020-02-25 | Ip Reservoir, Llc | System and method for low latency multi-functional pipeline with correlation logic and selectively activated/deactivated pipelined data processing engines |
US10346181B2 (en) | 2003-05-23 | 2019-07-09 | Ip Reservoir, Llc | Intelligent data storage and processing using FPGA devices |
US11275594B2 (en) | 2003-05-23 | 2022-03-15 | Ip Reservoir, Llc | Intelligent data storage and processing using FPGA devices |
US9898312B2 (en) | 2003-05-23 | 2018-02-20 | Ip Reservoir, Llc | Intelligent data storage and processing using FPGA devices |
US9547680B2 (en) | 2005-03-03 | 2017-01-17 | Washington University | Method and apparatus for performing similarity searching |
US10957423B2 (en) | 2005-03-03 | 2021-03-23 | Washington University | Method and apparatus for performing similarity searching |
US10580518B2 (en) | 2005-03-03 | 2020-03-03 | Washington University | Method and apparatus for performing similarity searching |
US7954114B2 (en) | 2006-01-26 | 2011-05-31 | Exegy Incorporated | Firmware socket module for FPGA-based pipeline processing |
WO2007087507A3 (en) * | 2006-01-26 | 2007-09-13 | Exegy Inc | Firmware socket module for fpga-based pipeline processing |
JP2009528584A (en) * | 2006-01-26 | 2009-08-06 | エクセジー・インコーポレイテツド | Firmware socket module for FPGA-based pipeline processing |
WO2007087507A2 (en) * | 2006-01-26 | 2007-08-02 | Exegy Incorporated | Firmware socket module for fpga-based pipeline processing |
US10504184B2 (en) | 2006-06-19 | 2019-12-10 | Ip Reservoir, Llc | Fast track routing of streaming data as between multiple compute resources |
US10817945B2 (en) | 2006-06-19 | 2020-10-27 | Ip Reservoir, Llc | System and method for routing of streaming data as between multiple compute resources |
US11182856B2 (en) | 2006-06-19 | 2021-11-23 | Exegy Incorporated | System and method for routing of streaming data as between multiple compute resources |
US9672565B2 (en) | 2006-06-19 | 2017-06-06 | Ip Reservoir, Llc | High speed processing of financial information using FPGA devices |
US10169814B2 (en) | 2006-06-19 | 2019-01-01 | Ip Reservoir, Llc | High speed processing of financial information using FPGA devices |
US7921046B2 (en) | 2006-06-19 | 2011-04-05 | Exegy Incorporated | High speed processing of financial information using FPGA devices |
US10360632B2 (en) | 2006-06-19 | 2019-07-23 | Ip Reservoir, Llc | Fast track routing of streaming data using FPGA devices |
US9916622B2 (en) | 2006-06-19 | 2018-03-13 | Ip Reservoir, Llc | High speed processing of financial information using FPGA devices |
US10467692B2 (en) | 2006-06-19 | 2019-11-05 | Ip Reservoir, Llc | High speed processing of financial information using FPGA devices |
US11449538B2 (en) | 2006-11-13 | 2022-09-20 | Ip Reservoir, Llc | Method and system for high performance integration, processing and searching of structured and unstructured data |
US10191974B2 (en) | 2006-11-13 | 2019-01-29 | Ip Reservoir, Llc | Method and system for high performance integration, processing and searching of structured and unstructured data |
US9363078B2 (en) | 2007-03-22 | 2016-06-07 | Ip Reservoir, Llc | Method and apparatus for hardware-accelerated encryption/decryption |
US10158377B2 (en) | 2008-05-15 | 2018-12-18 | Ip Reservoir, Llc | Method and system for accelerated stream processing |
US10411734B2 (en) | 2008-05-15 | 2019-09-10 | Ip Reservoir, Llc | Method and system for accelerated stream processing |
US11677417B2 (en) | 2008-05-15 | 2023-06-13 | Ip Reservoir, Llc | Method and system for accelerated stream processing |
US10965317B2 (en) | 2008-05-15 | 2021-03-30 | Ip Reservoir, Llc | Method and system for accelerated stream processing |
US11676206B2 (en) | 2008-12-15 | 2023-06-13 | Exegy Incorporated | Method and apparatus for high-speed processing of financial market depth data |
US10929930B2 (en) | 2008-12-15 | 2021-02-23 | Ip Reservoir, Llc | Method and apparatus for high-speed processing of financial market depth data |
US10062115B2 (en) | 2008-12-15 | 2018-08-28 | Ip Reservoir, Llc | Method and apparatus for high-speed processing of financial market depth data |
US10037568B2 (en) | 2010-12-09 | 2018-07-31 | Ip Reservoir, Llc | Method and apparatus for managing orders in financial markets |
US11803912B2 (en) | 2010-12-09 | 2023-10-31 | Exegy Incorporated | Method and apparatus for managing orders in financial markets |
US11397985B2 (en) | 2010-12-09 | 2022-07-26 | Exegy Incorporated | Method and apparatus for managing orders in financial markets |
US9047243B2 (en) | 2011-12-14 | 2015-06-02 | Ip Reservoir, Llc | Method and apparatus for low latency data distribution |
US10121196B2 (en) | 2012-03-27 | 2018-11-06 | Ip Reservoir, Llc | Offload processing of data packets containing financial market data |
US10650452B2 (en) | 2012-03-27 | 2020-05-12 | Ip Reservoir, Llc | Offload processing of data packets |
US9990393B2 (en) | 2012-03-27 | 2018-06-05 | Ip Reservoir, Llc | Intelligent feed switch |
US10872078B2 (en) | 2012-03-27 | 2020-12-22 | Ip Reservoir, Llc | Intelligent feed switch |
US11436672B2 (en) | 2012-03-27 | 2022-09-06 | Exegy Incorporated | Intelligent switch for processing financial market data |
US10963962B2 (en) | 2012-03-27 | 2021-03-30 | Ip Reservoir, Llc | Offload processing of data packets containing financial market data |
US10949442B2 (en) | 2012-10-23 | 2021-03-16 | Ip Reservoir, Llc | Method and apparatus for accelerated format translation of data in a delimited data format |
US10133802B2 (en) | 2012-10-23 | 2018-11-20 | Ip Reservoir, Llc | Method and apparatus for accelerated record layout detection |
US10146845B2 (en) | 2012-10-23 | 2018-12-04 | Ip Reservoir, Llc | Method and apparatus for accelerated format translation of data in a delimited data format |
US10621192B2 (en) | 2012-10-23 | 2020-04-14 | IP Resevoir, LLC | Method and apparatus for accelerated format translation of data in a delimited data format |
US10102260B2 (en) | 2012-10-23 | 2018-10-16 | Ip Reservoir, Llc | Method and apparatus for accelerated data translation using record layout detection |
US11789965B2 (en) | 2012-10-23 | 2023-10-17 | Ip Reservoir, Llc | Method and apparatus for accelerated format translation of data in a delimited data format |
US10902013B2 (en) | 2014-04-23 | 2021-01-26 | Ip Reservoir, Llc | Method and apparatus for accelerated record layout detection |
US11526531B2 (en) | 2015-10-29 | 2022-12-13 | Ip Reservoir, Llc | Dynamic field data translation to support high performance stream data processing |
US10942943B2 (en) | 2015-10-29 | 2021-03-09 | Ip Reservoir, Llc | Dynamic field data translation to support high performance stream data processing |
US11416778B2 (en) | 2016-12-22 | 2022-08-16 | Ip Reservoir, Llc | Method and apparatus for hardware-accelerated machine learning |
US10846624B2 (en) | 2016-12-22 | 2020-11-24 | Ip Reservoir, Llc | Method and apparatus for hardware-accelerated machine learning |
Also Published As
Publication number | Publication date |
---|---|
WO2004042562A3 (en) | 2005-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2003287320B2 (en) | Pipeline accelerator and related system and method | |
US20040136241A1 (en) | Pipeline accelerator for improved computing architecture and related system and method | |
WO2004042562A2 (en) | Pipeline accelerator and related system and method | |
US7487302B2 (en) | Service layer architecture for memory access system and method | |
CN116324741A (en) | Method and apparatus for configurable hardware accelerator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2503617 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005502225 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020057007750 Country of ref document: KR |
|
REEP | Request for entry into the european phase |
Ref document number: 2003781553 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2003781553 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2003287320 Country of ref document: AU |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057007750 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2003781553 Country of ref document: EP |