US20130262734A1 - Modular scalable pci-express implementation - Google Patents

Modular scalable pci-express implementation Download PDF

Info

Publication number
US20130262734A1
US20130262734A1 US13/621,994 US201213621994A US2013262734A1 US 20130262734 A1 US20130262734 A1 US 20130262734A1 US 201213621994 A US201213621994 A US 201213621994A US 2013262734 A1 US2013262734 A1 US 2013262734A1
Authority
US
United States
Prior art keywords
pci express
buffers
port
functional
ports
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/621,994
Inventor
Keng Teck Yap
Azydee Hamid
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US13/621,994 priority Critical patent/US20130262734A1/en
Publication of US20130262734A1 publication Critical patent/US20130262734A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/36 Handling requests for interconnection or transfer for access to common bus or bus system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/40 Bus structure
    • G06F 13/4004 Coupling between buses
    • G06F 13/4027 Coupling between buses using bus bridges
    • G06F 13/405 Coupling between buses using bus bridges where the bridge performs a synchronising function
    • G06F 13/4059 Coupling between buses using bus bridges where the bridge performs a synchronising function where the synchronisation uses buffers, e.g. for speed matching between buses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 2213/0026 PCI express
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

In some embodiments a functional PCI Express port includes first buffers and an idle PCI Express port includes second buffers. One or more of the second buffers are accessed by the functional PCI Express port. Other embodiments are described and claimed.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. Nonprovisional application Ser. No. 12/058,830 filed on Mar. 31, 2008, and issued as U.S. Pat. No. 8,271,715 (incorporated herein by reference in its entirety).
  • TECHNICAL FIELD
  • The inventions generally relate to a modular and scalable PCI-Express implementation.
  • BACKGROUND
  • Chipset designs such as some Intel Input/Output Controller Hub (ICH) designs include an implementation of PCI-Express (PCIe) ports that is not modular, and thus not scalable. As an example, the shared buffer in some Intel ICH designs is only shared with four PCIe ports, and there is not any flexibility to expand the number of PCIe ports beyond four. This makes it extremely difficult to design, for example, a new System On Chip (SOC) with a different number of PCIe ports. For example, if a new SOC design requires five PCIe ports, a major effort is necessary in order to convert the four ICH PCIe ports to five ICH PCIe ports for the new SOC design. Additionally, it is not convenient to reduce the number of PCIe ports to less than four. If a new SOC requires only three PCIe ports and it has to adopt an ICH design that includes four PCIe ports, an additional cost of integrating the fourth port is necessary.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The inventions will be understood more fully from the detailed description given below and from the accompanying drawings of some embodiments of the inventions which, however, should not be taken to limit the inventions to the specific embodiments described, but are for explanation and understanding only.
  • FIG. 1 illustrates a system according to some embodiments of the inventions.
  • FIG. 2 illustrates a system according to some embodiments of the inventions.
  • FIG. 3 illustrates a flow according to some embodiments of the inventions.
  • FIG. 4 illustrates a graph according to some embodiments of the inventions.
  • DETAILED DESCRIPTION
  • Some embodiments of the inventions relate to a modular and scalable PCI-Express implementation.
  • In some embodiments a functional PCI Express port includes first buffers and an idle PCI Express port includes second buffers. One or more of the second buffers are accessed by the functional PCI Express port.
  • In some embodiments, access is provided to one or more buffers of a functional PCI Express port. The functional PCI Express port is provided access to one or more buffers included in an idle PCI Express port.
  • FIG. 1 illustrates a system 100 according to some embodiments. In some embodiments system 100 is an implementation using one PCI-Express (PCIe) port. In some embodiments, system 100 includes one PCIe port. System 100 includes a common data bus 102, a tristate buffer 104, a multiplexer (MUX) 106, a controller 108, and buffers 110 (for example, Virtual Channel (VC) receive buffers and/or transmit buffers).
  • In some embodiments, system 100 is a scalable implementation that significantly increases the performance of a PCI-Express port with addition of minimal logic and/or power. In some embodiments, system 100 is a scalable implementation that significantly increases the performance of a PCI-Express port in a System On Chip (SOC) design with addition of minimal logic and/or power.
  • In typical SOC designs, and depending on system application, not all PCIe ports are utilized. Conventionally, these unused ports are powered down to save power. However, in some embodiments, logic in the unused PCIe ports is used to boost the performance of the running PCIe ports. In some embodiments, this is performed in a modular and/or scalable manner.
  • In some embodiments, Virtual Channel (VC) receive/transmit buffers 110 are used at the transceiver layer of unused PCIe ports. In some embodiments, controller 108 controls a data write enable, data read enable, pointer counter, data selector and/or state machine. Controller 108 may be used as a controller for a buffer pointer, a data write enable, a data read enable, and/or data select, etc.
  • In some embodiments, system 100 operates in two modes of operation, including a normal mode and a shared mode. In the normal mode, all ports are independent and function as a separate entity (that is, the one PCIe port of system 100 is independent and functions as a separate entity from any other PCIe ports). In the shared mode, the virtual buffers 110 in a non-functional PCIe port are used to boost performance of functional PCIe ports.
  • The common data bus 102 is connected to common data buses on all ports (other ports connected to the common data bus 102 are not illustrated in FIG. 1). During the normal mode of operation, common data bus 102 is left idle by tristating all tristate buffers in all PCIe ports (for example, in some embodiments, tristate buffer 104). During the shared buffer mode, the common data bus 102 is driven by data from a functional PCIe port. This is accomplished, for example, by activating the tristate buffer for the functional PCIe port (for example, in some embodiments, tristate buffer 104) while maintaining tristate buffers in other (for example, all other) PCIe ports in the tristate mode.
  • Controller 108 includes in some embodiments a counter and a state machine, for example. The state machine controls, for example, data write enable (and/or data read enable) signals and/or pointer signals. Controller 108 serves as an arbiter to activate buffers 110 from other PCIe ports. Controller 108 also selects, using MUX 106, either data from the common data bus 102 or from the regular data bus (input Data signal) depending on the mode of operation. In some embodiments, when operating in the normal mode, the common data bus 102 is left idle by tristating the tristate buffers in all PCIe ports (for example, including tristate buffer 104) and the input Data is provided to (and/or from) the buffers 110 via the MUX 106. In some embodiments, when operating in the shared mode, the common data bus 102 is driven by data from a functional PCIe port, and data from common bus 102 may be provided via MUX 106 to buffers 110.
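  • The following Python sketch is illustrative only and is not part of the original disclosure: under assumed names (PcieBufferPort, drive_common_bus, etc.), it models the behavior described above, in which a port's MUX selects between its regular input Data and the common data bus, and only a functional port enables its tristate buffer to drive that bus.

```python
# Illustrative behavioral sketch (not RTL) of one port's datapath as described
# above. Class and method names (PcieBufferPort, drive_common_bus, ...) are
# hypothetical and not part of the original disclosure.

class PcieBufferPort:
    """Simplified model of one PCIe port's buffer datapath."""

    def __init__(self, name: str, num_entries: int = 8):
        self.name = name
        self.buffers = []                 # models the VC receive/transmit buffers 110
        self.num_entries = num_entries
        self.shared_mode = False          # normal mode by default
        self.tristate_enabled = False     # tristate buffer left in tristate (high-Z)

    def mux_select(self, regular_data, common_bus_data):
        # Models MUX 106: the controller picks the data source based on the mode.
        return common_bus_data if self.shared_mode else regular_data

    def write(self, regular_data, common_bus_data=None) -> bool:
        # Write one entry from the selected source, if space remains.
        data = self.mux_select(regular_data, common_bus_data)
        if len(self.buffers) < self.num_entries:
            self.buffers.append(data)
            return True
        return False                      # buffers full; the arbiter must move on

    def drive_common_bus(self, data):
        # Only a functional port activates its tristate buffer to drive the bus;
        # None models the high-impedance (tristated) state.
        return data if self.tristate_enabled else None
```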
  • FIG. 2 illustrates a system 200 according to some embodiments. In some embodiments system 200 is an implementation using four PCI-Express (PCIe) ports, including PCIe port A 220, PCIe port B 240, PCIe port C 260, and PCIe port D 280. In some embodiments, some or all of the PCIe ports are similar to or identical to the PCIe port illustrated in FIG. 1. In some embodiments, system 200 includes four PCIe ports as illustrated in FIG. 2, but in other embodiments, any number of PCIe ports may be included in system 200. System 200 further includes a common data bus 202 on all four PCIe ports A 220, B 240, C 260, and D 280.
  • PCIe port A 220 includes a tristate buffer 224, a multiplexer (MUX) 226, a controller 228, and buffers 230 (for example, Virtual Channel (VC) receive buffers and/or transmit buffers). PCIe port B 240 includes a tristate buffer 244, a multiplexer (MUX) 246, a controller 248, and buffers 250 (for example, Virtual Channel (VC) receive buffers and/or transmit buffers). PCIe port C 260 includes a tristate buffer 264, a multiplexer (MUX) 266, a controller 268, and buffers 270 (for example, Virtual Channel (VC) receive buffers and/or transmit buffers). PCIe port D 280 includes a tristate buffer 284, a multiplexer (MUX) 286, a controller 288, and buffers 290 (for example, Virtual Channel (VC) receive buffers and/or transmit buffers).
  • In some embodiments, system 200 is a scalable implementation that significantly increases the performance of a PCI-Express port with addition of minimal logic and/or power. In some embodiments, system 200 is a scalable implementation that significantly increases the performance of a PCI-Express port in a System On Chip (SOC) design with addition of minimal logic and/or power.
  • In typical SOC designs, and depending on system application, not all PCIe ports are utilized. Conventionally, these unused ports are powered down to save power. However, in some embodiments, logic in the unused PCIe ports is used to boost the performance of the running PCIe ports. In some embodiments, this is performed in a modular and/or scalable manner.
  • In some embodiments, Virtual Channel (VC) receive/transmit buffers 230, 250, 270, and/or 290 are used at the transceiver layer of unused PCIe ports. In some embodiments, controllers 228, 248, 268, and/or 288 control a data write enable, data read enable, pointer counter, data selector and/or state machine. Controllers 228, 248, 268, and/or 288 may be used as a controller for a buffer pointer, a data write enable, a data read enable, and/or data select, etc.
  • In some embodiments, system 200 operates in two modes of operation, including a normal mode and a shared mode. In the normal mode, all ports 220, 240, 260, and 280 are independent and function as a separate entity (that is, each PCIe port of system 200 is independent and functions as a separate entity from all other PCIe ports). In the shared mode, the virtual buffers 230, 250, 270, and/or 290 in a non-functional PCIe port are used to boost performance of functional PCIe ports.
  • In some embodiments, and as illustrated in FIG. 2, the common data bus 202 is connected through all ports of system 200. During the normal mode of operation, common data bus 202 is left idle by tristating all tristate buffers 224, 244, 264, and/or 284 in all PCIe ports. During the shared buffer mode, the common data bus 202 is driven by data from a functional one or more of the PCIe ports 220, 240, 260, and/or 280. This is accomplished, for example, by activating the tristate buffer for the functional PCIe port while maintaining tristate buffers in other (for example, all other) PCIe ports in the tristate mode.
  • Controllers 228, 248, 268, and/or 288 each include in some embodiments a counter and a state machine, for example. The state machine controls, for example, data write enable (and/or data read enable) signals and/or pointer signals. Controllers 228, 248, 268, and/or 288 together and/or separately serve as an arbiter to activate buffers 230, 250, 270, and/or 290, respectively, from other PCIe ports. Controllers 228, 248, 268, and 288 also each select, using MUX 226, 246, 266, and 286, respectively, either data from the common data bus 202 or from the regular data bus (input signal Data A, Data B, Data C, and Data D, respectively) depending on the mode of operation. In some embodiments, when operating in the normal mode, the common data bus 202 is left idle by tristating the tristate buffers in all PCIe ports and the input Data is provided to (and/or from) the buffers 230, 250, 270, and/or 290 via the MUX 226, 246, 266, and/or 286, respectively. In some embodiments, when operating in the shared mode, the common data bus 202 is driven by data from a functional PCIe port, and data from common bus 202 may be provided via the respective MUX 226, 246, 266, and 286 to buffers 230, 250, 270, and 290, respectively.
  • In some embodiments, as illustrated in FIG. 2, for example, a four PCIe port implementation is used. It is noted, however, that in some embodiments, scaling to any number of PCIe ports may be implemented.
  • During regular mode operation of system 200 of FIG. 2, data will come from the regular data bus (that is, from Data A, Data B, Data C, and/or Data D, etc.), and the tristate buffers 224, 244, 264, and 284 will be in a tristate mode. Further, the pointers (for example, Pointer A, Pointer B, Pointer C, and Pointer D) and the Data Write Enables, Data Read Enables, and/or Data Selects will behave as usual.
  • During a shared buffer mode, operation is different. For example, in some embodiments, only PCIe port A 220 is being utilized in an SOC design as a functional port, and logic from the non-functional ports (that is, PCIe port B 240, PCIe port C 260, and PCIe port D 280) is utilized during the shared buffer mode. In some embodiments, the logic used in the non-functional ports is logic located at the transaction layer of the non-functional ports.
  • In some embodiments, during a shared buffer mode in which only PCIe port A 220 is a functional port and the other PCIe ports 240, 260, and 280 are non-functional ports, data select signals from the controllers 248, 268, and 288 to the multiplexers 246, 266, and 286, respectively, will choose data from the common data bus 202. The tristate buffer 224 of port A 220 will be configured to select input Data A. All the other tristate buffers 244, 264, and 284 of ports B 240, C 260, and D 280, respectively, will be left in tristate mode. In this manner, input Data A will drive the common data bus 202.
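  • As a hedged illustration of the shared-buffer configuration just described (port A functional; ports B, C, and D idle), the following Python sketch builds the per-port settings. The function and field names are hypothetical and chosen for readability, not taken from the figures.

```python
# Illustrative configuration for the shared-buffer case described above
# (port A functional; ports B, C, and D idle). Function and field names are
# hypothetical and chosen only for readability.

def configure_shared_mode(functional_port: str, all_ports=("A", "B", "C", "D")) -> dict:
    config = {}
    for port in all_ports:
        is_functional = (port == functional_port)
        config[port] = {
            # Only the functional port enables its tristate buffer and so
            # drives the common data bus with its regular input Data.
            "tristate_enabled": is_functional,
            # The functional port's MUX keeps its regular input; idle ports'
            # MUX data selects pick the common data bus instead.
            "data_select": "regular" if is_functional else "common_bus",
            # Idle ports keep only their buffer logic clocked (clock gating).
            "buffer_logic_clocked": True,
        }
    return config

if __name__ == "__main__":
    for port, settings in configure_shared_mode("A").items():
        print(port, settings)
```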
  • FIG. 3 illustrates a state machine 300 according to some embodiments. In some embodiments, state machine 300 is a state machine used to choose active logic between PCI Express (PCIe) ports during a shared buffer mode. In some embodiments, the state machine 300 operation may be implemented, for example, in controller 108 illustrated in FIG. 1, and/or in one or more of controllers 228, 248, 268, and/or 288 illustrated in FIG. 2, for example, during a shared buffer mode in which only PCIe port A 220 is a functional port and the other PCIe ports 240, 260, and 280 are non-functional ports. State machine 300 includes a state A 302, a state B 304, a state C 306, and a state D 308. Control signals such as, for example, pointers, data write enables, data read enables, etc. will behave based on state machine 300 according to some embodiments. Data is first filled in receive buffers 230 of port A 220. Once receive buffers 230 are full, receive buffers 250 of port B 240 are filled. Once receive buffers 250 are full, receive buffers 270 of port C 260 are filled. Once receive buffers 270 are full, receive buffers 290 of port D 280 are filled. Once all four buffers 230, 250, 270, and 290 are filled, the pointer is looped back to port A 220.
  • In some embodiments as illustrated, for example, in FIG. 3, four PCIe ports are used (for example, ports A, B, C, and D of FIG. 2). However, in some embodiments, any number of ports may be implemented. In some embodiments as illustrated in FIG. 2 and FIG. 3, for example, eight buffer entries per port are used (that is, count=8, for example). However, in some embodiments, any number of buffer entries per port may be implemented.
  • As illustrated in FIG. 3, during reset, port A is configured as the only functional port and flow is at state A 302, at which only pointer A (to the buffers in port A) is active. Once the buffers at port A are full, as long as the count remains at 8 and a B-inactive signal is present, flow remains at state A 302. Once a B-active signal is asserted to indicate that the shared buffer logic inside port B will be utilized (and the count is at 8), flow moves to state B 304, at which only pointer B (to the buffers in port B) is active. Once the buffers at port B are full (count=8), if a C-active signal has not been asserted (C-inactive), flow returns to state A 302. However, once the buffers at port B are full (count=8), if the C-active signal is asserted to indicate that the shared buffer logic inside port C will be utilized, flow moves to state C 306, at which only pointer C (to the buffers in port C) is active. Once the buffers at port C are full (count=8), if a D-active signal has not been asserted (D-inactive), flow returns to state A 302. However, once the buffers at port C are full (count=8), if the D-active signal is asserted to indicate that the shared buffer logic inside port D will be utilized, flow moves to state D 308, at which only pointer D (to the buffers in port D) is active. Once the count reaches 8 for the buffers in port D (that is, the buffers at port D are full), flow returns to state A 302.
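  • The following Python sketch is a simplified, illustrative model of state machine 300 as described above (the state names, signal dictionary, and function names are assumptions for the sketch): the pointer stays on a port until its eight entries are filled, advances to the next port only if that port's active signal is asserted, and otherwise loops back to state A.

```python
# Illustrative model of state machine 300 (names and data structures are
# assumptions for this sketch, not taken from the patent figures).

ENTRIES_PER_PORT = 8
PORT_ORDER = ["A", "B", "C", "D"]

def next_state(current: str, count: int, active: dict) -> str:
    """Return the next state given the current buffer count and the per-port
    shared-buffer 'active' signals (e.g. {'B': True} models B-active asserted)."""
    if count < ENTRIES_PER_PORT:
        return current                      # keep filling the current port's buffers
    idx = PORT_ORDER.index(current)
    nxt = idx + 1
    if nxt < len(PORT_ORDER) and active.get(PORT_ORDER[nxt], False):
        return PORT_ORDER[nxt]              # e.g. count=8 and B-active asserted: A -> B
    return "A"                              # otherwise (or after port D) loop back to A

# Example: ports B and C lend their buffers, port D does not.
active_signals = {"B": True, "C": True, "D": False}
state = "A"
for _ in range(3):
    state = next_state(state, count=ENTRIES_PER_PORT, active=active_signals)
    print(state)                            # prints B, then C, then A
```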
  • According to some embodiments, in an implementation with four PCIe ports and eight buffers per port, the size of the receive buffers for a functional port A (with the three other ports B, C, and D being non-functional) can be increased from eight entries to thirty-two entries (8 buffers per port×4 ports=32 entries).
  • Although embodiments in FIGS. 1-3 have been described and illustrated mostly in reference to data write operations, according to some embodiments, the invention is implemented in reference to data read operations, and in some embodiments data read and/or write implementations may be performed.
  • FIG. 4 illustrates a graph 400 according to some embodiments. Graph 400 compares the number of entries vs. bandwidth in Gb/s of a chipset with a PCIe x16 configuration 402 and a PCIe x8 configuration 404, in which the chip has, for example, two ports and supports 1×16 and 2×8 configurations according to some embodiments. Graph 400 illustrates a relationship between the bandwidth in Gb/s and a write request queue size, for example. In the design of each port, the chip uses 18 entries to hit the maximum x8 bandwidth, which is approximately 3.05 Gb/s, as illustrated by point 414 of graph 404. When both ports are in operation, a total throughput of 2×3.05 Gb/s=6.1 Gb/s is possible. When only one port is in operation in the 1×16 configuration, the bandwidth for 18 queue entries is approximately 3.5 Gb/s, as illustrated by point 412 of graph 402. However, according to some embodiments, if the write queue size is doubled to 36 entries, the bandwidth may be increased to approximately 6.1 Gb/s, as illustrated by point 422 of graph 402. This is a gain of more than 74% compared to the original implementation. The values on the graph have been approximated using an Intel chipset, but it is noted that the values are approximate and may have a margin of error (for example, plus or minus 5%).
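  • For clarity, the roughly 74% figure follows directly from the approximate graph values quoted above, as the short illustrative computation below shows (the values themselves are approximations per the description).

```python
# Illustrative check of the gain quoted above, using the approximate values
# read from graph 400 (both figures are approximations per the description).
x16_baseline_gbps = 3.5   # one x16 port, 18 write-queue entries (point 412)
x16_shared_gbps = 6.1     # one x16 port, 36 entries via shared buffers (point 422)

gain = (x16_shared_gbps - x16_baseline_gbps) / x16_baseline_gbps
print(f"bandwidth gain = {gain:.1%}")   # prints roughly 74.3%
```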
  • In some embodiments, only the logic used in the non-functional ports (for example, ports B, C, and D) needs to be clocked rather than the entire port. This can be accomplished, according to some embodiments, by using a clock gating arrangement.
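  • The following Python fragment is an illustrative sketch (with hypothetical block names) of such a clock gating arrangement, in which only the shared-buffer datapath of an idle port remains clocked while the rest of the port is gated off.

```python
# Illustrative clock-gating sketch (block names are hypothetical): when a port
# is idle but lends its buffers, only the buffer datapath keeps its clock.

def clock_enables(port_is_functional: bool, port_shares_buffers: bool) -> dict:
    return {
        # Full port logic is clocked only when the port is functional.
        "link_and_phy_logic": port_is_functional,
        # VC buffers and their controller stay clocked if the port is functional
        # or if its buffers are being borrowed by a functional port.
        "vc_buffers": port_is_functional or port_shares_buffers,
        "buffer_controller": port_is_functional or port_shares_buffers,
    }

# Example: an idle port whose buffers are shared with a functional port.
print(clock_enables(port_is_functional=False, port_shares_buffers=True))
```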
  • In some embodiments, the PCIe port performance is increased in a manner that is fully modular and scalable. The number of ports may be scaled according to some embodiments depending on the requirements of the chip and/or the SOC. In some embodiments, a significant performance gain is possible by using logic that is already present in the SOC. Thus, very minimal additional logic is necessary, since most of the logic is already present in the original silicon. In some embodiments, little additional power is necessary, since only minimal additional logic need be powered. As discussed above, a simple clock gating scheme may be employed to power up only necessary logic in an unused port (or ports). Therefore, higher performance is provided with very few implications to cost, die size, and/or power.
  • In some embodiments, PCIe logic which is left idle on a chip may be used. In some embodiments, idle PCIe Virtual Channel (VC) buffers may be used to boost performance of PCIe ports that are running.
  • Although some embodiments have been described herein as being implemented in a particular manner, according to some embodiments these particular implementations may not be required. For example, although some implementations have been described herein as applying to data write operations, some embodiments apply to data read operations, and some embodiments apply to data read and/or write operations.
  • Although some embodiments have been described in reference to particular implementations, other implementations are possible according to some embodiments. Additionally, the arrangement and/or order of circuit elements or other features illustrated in the drawings and/or described herein need not be arranged in the particular way illustrated and described. Many other arrangements are possible according to some embodiments.
  • In each system shown in a figure, the elements in some cases may each have a same reference number or a different reference number to suggest that the elements represented could be different and/or similar. However, an element may be flexible enough to have different implementations and work with some or all of the systems shown or described herein. The various elements shown in the figures may be the same or different. Which one is referred to as a first element and which is called a second element is arbitrary.
  • In the description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, “connected” may be used to indicate that two or more elements are in direct physical or electrical contact with each other. “Coupled” may mean that two or more elements are in direct physical or electrical contact. However, “coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
  • An algorithm is here, and generally, considered to be a self-consistent sequence of acts or operations leading to a desired result. These include physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be understood, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
  • Some embodiments may be implemented in one or a combination of hardware, firmware, and software. Some embodiments may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by a computing platform to perform the operations described herein. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., computer). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, the interfaces that transmit and/or receive signals, etc.), and others.
  • An embodiment is an implementation or example of the inventions. Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions. The various appearances “an embodiment,” “one embodiment,” or some embodiments are not necessarily all referring to the same embodiments.
  • Not all components, features, structures, characteristics, etc. described and illustrated herein need be included in a particular embodiment or embodiments. If the specification states a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, for example, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.
  • Although flow diagrams and/or state diagrams may have been used herein to describe embodiments, the inventions are not limited to those diagrams or to corresponding descriptions herein. For example, flow need not move through each illustrated box or state or in exactly the same order as illustrated and described herein.
  • The inventions are not restricted to the particular details listed herein. Indeed, those skilled in the art having the benefit of this disclosure will appreciate that many other variations from the foregoing description and drawings may be made within the scope of the present inventions. Accordingly, it is the following claims including any amendments thereto that define the scope of the inventions.

Claims (24)

What is claimed is:
1. An apparatus comprising:
a functional PCI Express port including first buffers; and
an idle PCI Express port including second buffers;
wherein one or more of the second buffers are accessed by the functional PCI Express port.
2. The apparatus of claim 1, wherein the one or more of the second buffers are used by the functional PCI Express port after all of the first buffers have been filled.
3. The apparatus of claim 1, further comprising a common data bus coupled to the functional PCI Express port and coupled to the idle PCI Express port.
4. The apparatus of claim 1, the functional PCI Express port including a controller to provide data between the functional PCI Express port and the second buffers.
5. The apparatus of claim 3, the functional PCI Express port including a controller to provide data between the first buffers and the common data bus.
6. The apparatus of claim 1, the idle PCI Express port including a controller to provide data between the second buffers and the functional PCI Express port.
7. The apparatus of claim 3, the idle PCI Express port including a controller to provide input data between the second buffers and the functional PCI Express port via the common data bus.
8. The apparatus of claim 1, further comprising a clock gating device to limit power provided to the idle PCI Express port.
9. The apparatus of claim 8, wherein the clock gating device is to provide power to the second buffers.
10. The apparatus of claim 1, further comprising one or more additional idle PCI Express ports each including buffers, wherein one or more of the buffers included in at least one of the one or more additional idle PCI Express ports are accessed by the functional PCI Express port.
11. The apparatus of claim 10, wherein one or more of the buffers of each of the one or more additional idle PCI Express ports are accessed by the functional PCI Express port.
12. The apparatus of claim 1, wherein the apparatus is scalable such that any number of PCI Express ports may be included.
13. The apparatus of claim 12, wherein each of the PCI Express ports may be a functional PCI Express port or an idle PCI Express port.
14. The apparatus of claim 1, wherein each of PCI Express ports may be a functional PCI Express port or an idle PCI Express port.
15. A method comprising:
providing access to one or more buffers of a functional PCI Express port; and
providing to the functional PCI Express port access to one or more buffers included in an idle PCI Express port.
16. The method of claim 15, wherein the one or more of the buffers included in the idle PCI Express port are used by the functional PCI Express port after all of the buffers of the functional PCI Express port have been used.
17. The method of claim 15, further comprising providing data between the functional PCI Express port and the buffers of the idle PCI Express port.
18. The method of claim 16, further comprising limiting power provided to the idle PCI Express port.
19. The method of claim 18, wherein the limiting includes providing power to the buffers of the idle PCI Express port.
20. The method of claim 15, further comprising accessing by the functional PCI Express port one or more buffers included in each of one or more additional idle PCI Express ports.
21. The method of claim 20, further comprising accessing by the functional PCI Express port one or more buffers included in all of a plurality of the one or more additional idle PCI Express ports.
22. The method of claim 15, further comprising accessing by the functional PCI Express port one or more buffers included in all of a scalable number of additional idle PCI Express ports.
23. The method of claim 15, wherein each of the PCI Express ports may be a functional PCI Express port or an idle PCI Express port.
24. The method of claim 23, wherein each of the PCI Express ports may be a functional PCI Express port or an idle PCI Express port.
US13/621,994 2008-03-31 2012-09-18 Modular scalable pci-express implementation Abandoned US20130262734A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/621,994 US20130262734A1 (en) 2008-03-31 2012-09-18 Modular scalable pci-express implementation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/058,830 US8271715B2 (en) 2008-03-31 2008-03-31 Modular scalable PCI-Express implementation
US13/621,994 US20130262734A1 (en) 2008-03-31 2012-09-18 Modular scalable pci-express implementation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/058,830 Continuation US8271715B2 (en) 2008-03-31 2008-03-31 Modular scalable PCI-Express implementation

Publications (1)

Publication Number Publication Date
US20130262734A1 true US20130262734A1 (en) 2013-10-03

Family

ID=41118849

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/058,830 Active 2028-11-26 US8271715B2 (en) 2008-03-31 2008-03-31 Modular scalable PCI-Express implementation
US13/621,994 Abandoned US20130262734A1 (en) 2008-03-31 2012-09-18 Modular scalable pci-express implementation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/058,830 Active 2028-11-26 US8271715B2 (en) 2008-03-31 2008-03-31 Modular scalable PCI-Express implementation

Country Status (1)

Country Link
US (2) US8271715B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9501442B2 (en) 2014-04-30 2016-11-22 Freescale Semiconductor, Inc. Configurable peripheral componenent interconnect express (PCIe) controller

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8271715B2 (en) * 2008-03-31 2012-09-18 Intel Corporation Modular scalable PCI-Express implementation
JP5966265B2 (en) 2011-07-15 2016-08-10 株式会社リコー Data transfer apparatus and image forming system
US9928198B2 (en) 2013-11-22 2018-03-27 Oracle International Corporation Adapter card with a computer module form factor
WO2015130312A1 (en) 2014-02-28 2015-09-03 Hewlett-Packard Development Company, L. P. Computing system control
WO2016122480A1 (en) 2015-01-28 2016-08-04 Hewlett-Packard Development Company, L.P. Bidirectional lane routing

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5748612A (en) * 1995-08-10 1998-05-05 Mcdata Corporation Method and apparatus for implementing virtual circuits in a fibre channel system
US6046817A (en) * 1997-05-12 2000-04-04 Lexmark International, Inc. Method and apparatus for dynamic buffering of input/output ports used for receiving and transmitting print data at a printer
US6108692A (en) * 1995-07-12 2000-08-22 3Com Corporation Method and apparatus for internetworking buffer management
US6421756B1 (en) * 1999-05-06 2002-07-16 International Business Machines Corporation Buffer assignment for bridges
US6798741B2 (en) * 2001-12-05 2004-09-28 Riverstone Networks, Inc. Method and system for rate shaping in packet-based computer networks
US6822966B2 (en) * 1999-03-01 2004-11-23 Enterasys Networks, Inc. Allocating buffers for data transmission in a network communication device
US20060227143A1 (en) * 2005-04-11 2006-10-12 Yutaka Maita Image processing apparatus and image forming apparatus
US7245586B2 (en) * 2002-08-30 2007-07-17 Lucent Technologies Inc. Buffer management based on buffer sharing across ports and per-port minimum buffer guarantee
US7249206B2 (en) * 2002-03-12 2007-07-24 International Business Machines Corporation Dynamic memory allocation between inbound and outbound buffers in a protocol handler
US7293127B2 (en) * 2004-01-15 2007-11-06 Ati Technologies, Inc. Method and device for transmitting data using a PCI express port
US7424566B2 (en) * 2005-11-16 2008-09-09 Sun Microsystems, Inc. Method, system, and apparatus for dynamic buffer space allocation
US20080244118A1 (en) * 2007-03-28 2008-10-02 Jos Manuel Accapadi Method and apparatus for sharing buffers
US20080288798A1 (en) * 2007-05-14 2008-11-20 Barnes Cooper Power management of low power link states
US20090154456A1 (en) * 2007-12-18 2009-06-18 Plx Technology, Inc. Dynamic buffer pool in pciexpress switches
US7587575B2 (en) * 2006-10-17 2009-09-08 International Business Machines Corporation Communicating with a memory registration enabled adapter using cached address translations
US7630389B1 (en) * 2005-12-14 2009-12-08 Nvidia Corporation Multi-thread FIFO memory generator
US8077610B1 (en) * 2006-02-22 2011-12-13 Marvell Israel (M.I.S.L) Ltd. Memory architecture for high speed network devices
US8271715B2 (en) * 2008-03-31 2012-09-18 Intel Corporation Modular scalable PCI-Express implementation
US8312188B1 (en) * 2009-12-24 2012-11-13 Marvell International Ltd. Systems and methods for dynamic buffer allocation
US20130242985A1 (en) * 2012-03-14 2013-09-19 International Business Machines Corporation Multicast bandwidth multiplication for a unified distributed switch

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6108692A (en) * 1995-07-12 2000-08-22 3Com Corporation Method and apparatus for internetworking buffer management
US5748612A (en) * 1995-08-10 1998-05-05 Mcdata Corporation Method and apparatus for implementing virtual circuits in a fibre channel system
US6046817A (en) * 1997-05-12 2000-04-04 Lexmark International, Inc. Method and apparatus for dynamic buffering of input/output ports used for receiving and transmitting print data at a printer
US6822966B2 (en) * 1999-03-01 2004-11-23 Enterasys Networks, Inc. Allocating buffers for data transmission in a network communication device
US6421756B1 (en) * 1999-05-06 2002-07-16 International Business Machines Corporation Buffer assignment for bridges
US6798741B2 (en) * 2001-12-05 2004-09-28 Riverstone Networks, Inc. Method and system for rate shaping in packet-based computer networks
US7249206B2 (en) * 2002-03-12 2007-07-24 International Business Machines Corporation Dynamic memory allocation between inbound and outbound buffers in a protocol handler
US7245586B2 (en) * 2002-08-30 2007-07-17 Lucent Technologies Inc. Buffer management based on buffer sharing across ports and per-port minimum buffer guarantee
US7293127B2 (en) * 2004-01-15 2007-11-06 Ati Technologies, Inc. Method and device for transmitting data using a PCI express port
US20060227143A1 (en) * 2005-04-11 2006-10-12 Yutaka Maita Image processing apparatus and image forming apparatus
US7424566B2 (en) * 2005-11-16 2008-09-09 Sun Microsystems, Inc. Method, system, and apparatus for dynamic buffer space allocation
US7630389B1 (en) * 2005-12-14 2009-12-08 Nvidia Corporation Multi-thread FIFO memory generator
US8077610B1 (en) * 2006-02-22 2011-12-13 Marvell Israel (M.I.S.L) Ltd. Memory architecture for high speed network devices
US7587575B2 (en) * 2006-10-17 2009-09-08 International Business Machines Corporation Communicating with a memory registration enabled adapter using cached address translations
US20080244118A1 (en) * 2007-03-28 2008-10-02 Jos Manuel Accapadi Method and apparatus for sharing buffers
US20080288798A1 (en) * 2007-05-14 2008-11-20 Barnes Cooper Power management of low power link states
US20090154456A1 (en) * 2007-12-18 2009-06-18 Plx Technology, Inc. Dynamic buffer pool in pciexpress switches
US8271715B2 (en) * 2008-03-31 2012-09-18 Intel Corporation Modular scalable PCI-Express implementation
US8312188B1 (en) * 2009-12-24 2012-11-13 Marvell International Ltd. Systems and methods for dynamic buffer allocation
US20130242985A1 (en) * 2012-03-14 2013-09-19 International Business Machines Corporation Multicast bandwidth multiplication for a unified distributed switch

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9501442B2 (en) 2014-04-30 2016-11-22 Freescale Semiconductor, Inc. Configurable peripheral componenent interconnect express (PCIe) controller

Also Published As

Publication number Publication date
US20090248944A1 (en) 2009-10-01
US8271715B2 (en) 2012-09-18

Similar Documents

Publication Publication Date Title
US20130262734A1 (en) Modular scalable pci-express implementation
US7165125B2 (en) Buffer sharing in host controller
US7096296B2 (en) Supercharge message exchanger
JP4195655B2 (en) Bus interconnection system
US5721839A (en) Apparatus and method for synchronously providing a fullness indication of a dual ported buffer situated between two asynchronous buses
US5771359A (en) Bridge having a data buffer for each bus master
US20110231685A1 (en) High speed input/output system and power saving control method thereof
US6304936B1 (en) One-to-many bus bridge using independently and simultaneously selectable logical FIFOS
US20130194881A1 (en) Area-efficient multi-modal signaling interface
US10108567B2 (en) Memory channel selection control
US20110161543A1 (en) Memory Management
US20050033875A1 (en) System and method for selectively affecting data flow to or from a memory device
US10409512B2 (en) Method of operating storage controller and method of operating data storage device having the storage controller
US5535333A (en) Adapter for interleaving second data with first data already transferred between first device and second device without having to arbitrate for ownership of communications channel
US6877060B2 (en) Dynamic delayed transaction buffer configuration based on bus frequency
US6842831B2 (en) Low latency buffer control system and method
US10459847B1 (en) Non-volatile memory device application programming interface
US20080247501A1 (en) Fifo register unit and method thereof
US20160188258A1 (en) Memory interface signal reduction
US20040006664A1 (en) System and method for efficient chip select expansion
US7114019B2 (en) System and method for data transmission
US20210124705A1 (en) Nand interface device to boost operation speed of a solid-state drive
US10346328B2 (en) Method and apparatus for indicating interrupts
GB2368152A (en) A DMA data buffer using parallel FIFO memories
US7984212B2 (en) System and method for utilizing first-in-first-out (FIFO) resources for handling differences in data rates between peripherals via a merge module that merges FIFO channels

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION