US20130179613A1 - Network on chip (NoC) with QoS features - Google Patents

Network on chip (NoC) with QoS features

Info

Publication number
US20130179613A1
US20130179613A1
Authority
US
United States
Prior art keywords
priority
signal
packet
urgency
band
Prior art date
Legal status
Abandoned
Application number
US13/680,965
Inventor
Philippe Boucard
Philippe Martin
Jean-Jacques Lecler
Current Assignee
Qualcomm Technologies Inc
Original Assignee
Arteris SAS
Application filed by Arteris SAS
Priority to US13/680,965
Assigned to ARTERIS S.A. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LECLER, JEAN-JACQUES, MARTIN, PHILIPPE, BOUCARD, PHILIPPE
Publication of US20130179613A1
Assigned to QUALCOMM TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARTERIS, SAS
Status: Abandoned

Classifications

    • G06F13/362: Handling requests for interconnection or transfer for access to a common bus or bus system with centralised access control
    • H04L47/10: Flow control; Congestion control
    • H04L47/2441: Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/2458: Modification of priorities while in transit
    • H04L49/10: Packet switching elements characterised by the switching fabric construction
    • H04L49/109: Packet switching elements integrated on microchip, e.g. switch-on-chip
    • H04L49/112: Switch control, e.g. arbitration
    • H04L49/205: Support for services; Quality of Service based
    • H04L45/302: Route determination based on requested QoS

Definitions

  • At the target NIU interface, the value of the pressure field can define the value on the "IF_Pressure" field, the value of the in-band urgency field in the packet header can define the value on the in-band "IF_Urgency" field of the transaction, and the value of the "IF_Hurry" field can be determined by the urgency value in the "Hurry" request packet.
  • The NIU at the target can optionally remap the global values of pressure and urgency to the values required by the Target IP Core at the target NIU interface.
  • Examples of additional configuration decisions are the selection of the transaction-level interface options, the selection of the encoding of the priority information, the selection of the re-mapping of the priority information, and the decision on whether to transfer the request priority information to the response.
  • The selection of these parameters can be made through a Graphical User Interface (GUI), allowing for quick and efficient configuration of the packet formats and associated links.
  • FIG. 8 is a flow diagram of an exemplary process 800 for implementing QoS features in a multi-level arbitration structure in an on-chip interconnect. Process 800 can begin by receiving in-band priority information (802) and receiving out-of-band priority information (804) at a transaction-level interface of an on-chip interconnect. Process 800 then processes transaction requests subject to back pressure, where the processing is performed with a priority based on a value of the in-band signal and a value of the out-of-band signal, and where the priority is no less than that indicated by the value on the out-of-band signal (806).

Abstract

Quality-of-Service (QoS) is an important system-level requirement in the design and implementation of on-chip networks. QoS requirements can be implemented in an on-chip-interconnect by providing for at least two signals indicating priority at a transaction-level interface where one signal transfers information in-band with the transaction and the other signal transfers information out-of-band with the transaction. The signals can be processed by the on-chip-interconnect to deliver the required QoS. In addition, the disclosed embodiments can be extended to a Network-on-Chip (NoC).

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation application of, and claims priority to, pending U.S. patent application Ser. No. 12/835,623, filed on Jul. 13, 2010, entitled "NETWORK ON CHIP (NOC) WITH QOS FEATURES", which claims the benefit of a pending foreign priority application filed in France on Jun. 3, 2010, entitled "Network on Chip (NOC) with QOS Features," and assigned French Patent Application No. FR1054339, the content of each of these applications being incorporated by reference herein in its entirety.
  • TECHNICAL FIELD
  • This subject matter is related generally to semiconductor Intellectual Property (IP) interconnect technology.
  • BACKGROUND
  • Many semiconductor designs, both in Integrated Circuit (IC) and in Field Programmable Gate Array (FPGA) applications, are constructed in a modular fashion by combining a set of IP cores, such as Central Processing Units (CPUs), Digital Signal Processors (DSPs), video and networking processing blocks, memory controllers and others, with an interconnect system. The interconnect system implements the system-level communications of the particular design. The IP cores are typically designed using a standard IP interface protocol, either public or proprietary. These IP interface protocols are referred to as transaction protocols. Example transaction protocols are Open Core Protocol (OCP) from OCP-IP, and Advanced Extensible Interface (AXI™) and Advanced High-performance Bus (AHB™) from Arm Inc. As semiconductor designs have evolved from relatively small, simple designs with a few IP cores into large, complex designs which may contain hundreds of IP cores, the IP core interconnect technology has also evolved.
  • The first generation of IP core interconnect technology consisted of a hierarchical set of busses and crossbars. The interconnect itself consists mostly of a set of wires connecting the IP cores together, and one or more arbiters that arbitrate access to the communication system. A hierarchical approach is used to separate high-speed, high-performance communications from lower-speed, lower-performance subsystems. This approach is appropriate for simple designs. A common topology used for these interconnects is either a bus or a crossbar. The trade-off between these topologies is straightforward. The bus topology has fewer physical wires, which saves area and hence cost, but it is limited in bandwidth. The wire-intensive crossbar approach provides a higher aggregate communication bandwidth.
  • The above approach has a severe limitation in that the re-use of the IP cores is limited. The interfaces of all the IP cores connecting to the same interconnect are required to be the same. This can result in the re-design of the interface of an IP core or the design of bridge logic when a particular IP core needs to be used in another system.
  • This first generation of interconnect also implements a limited amount of system-level functions. This first generation of IP core interconnect technology can be described as a coupled solution. Since the IP interfaces are logically and physically not independent from each other, they are coupled such that modifying one interface requires modifying all the interfaces.
  • The second generation of IP interconnect is a partially decoupled implementation of the above described bus and crossbar topologies. In these solutions, the internal communication protocol of the communications system, or transport protocol, is decoupled from the IP interface protocol, or transaction protocol. These solutions are more flexible with regard to IP reuse, as the semiconductor system integrator can connect IP cores with different interfaces to the same communication system through some means of configurability.
  • The third generation of IP core interconnect technology is the NoC, which implements not only decoupling between transaction and transport layers, but also a clean decoupling between transport and physical layers. The key innovation enabling this solution is the packetization of the transaction layer information. The command and data information that is to be transported is encapsulated in a packet, and the transport of the packet over the physical medium is independent of the physical layer. The packet format consists of a header and a payload. The payload contains the data and data-related qualifiers such as byte-enable signals. The header contains routing information, the system-level address, and additional control information such as security-related indicators. In this architecture, the NoC is constructed by connecting a set of IP elements, such as network interface units, switches, synchronizers and width converters, together through physical links. The elements are selected and connected in such a manner as to meet the protocol, performance, Quality-of-Service (QoS), area and timing requirements of the System-on-Chip (SoC).
  • One challenge in the design of on-chip-interconnect and NoC technology is the requirement to meet the QoS requirements of the IC. Each data-flow from initiator to target has its own bandwidth and latency requirement to be met by the on-chip-interconnect. Certain traffic, such as traffic from a CPU to a DRAM Controller, has a low latency constraint. Other traffic, such as the traffic originating from a video-processing block, has a bandwidth constraint that is driven by the "real-time" nature of the data stream. Other traffic may exhibit a burst-like behavior; that is, the traffic intermittently requires high-bandwidth access to a target resource.
  • SUMMARY
  • The disclosed implementations include systems and methods for controlling the arbitration of transfers in a NoC. Using these systems and methods, additional interface signals are enabled at IP Core interfaces of the NoC, and logic structures are implemented within the NoC, which take advantage of the information on the interface signals to control the priority of processing of request and response packets within the NoC.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an example NoC.
  • FIG. 2 is an example physical link.
  • FIG. 3 is an example sequence of a packet transport.
  • FIG. 4 is an example packet.
  • FIG. 5A is an example switch element.
  • FIG. 5B is an example packet including a header with an “urgency” field.
  • FIG. 6 is an example physical link with out-of-band signaling.
  • FIG. 7 is an example multi-level arbitration structure in an on-chip interconnect.
  • FIG. 8 is a flow diagram of an example process for implementing QoS features in a multi-level arbitration structure in an on-chip interconnect.
  • FIG. 9 illustrates an example of routing within an on-chip-interconnect.
  • DETAILED DESCRIPTION
  • Example NoC
  • FIG. 1 is a block diagram of an example NoC 100. In some implementations, NoC 100 can be constructed out of a set of IP elements 102 which communicate with each other through a packet-based transport-protocol. Examples of IP elements 102 include but are not limited to: switches 102 a, clock converters 102 b, bandwidth regulators 102 c, sync First-In-First-Out (FIFO) 102 d, width converters 102 e, Endian converters 102 f, rate adaptors 102 g, power isolators 102 h and other IP elements.
  • In some implementations, at the edges of NoC 100, Network Interface Units (NIUs) 104 implement a conversion between transaction protocol and transport protocol (ingress) and vice versa (egress). Some examples of NIUs for transaction protocols include but are not limited to: OCP NIU 104 a, AXI™ NIU 104 b, AHB™ NIU 104 c, memory scheduler 104 d and a proprietary NIU 104 e. The NIUs 104 couple to various IP cores 110. Some examples of IP cores are DSP 110 a, CPU 110 b, Direct Memory Access 110 c, OCP subsystem 110 d, DRAM 110 e, SRAM 110 f and other types of IP cores.
  • In NoC 100, the transport protocol is packet-based. The commands of the transaction layer can include load and store instructions of one or more words of data that are converted into packets for transmission over physical links. Physical links form connections between the IP elements. An implementation of a transport protocol used by NoC 100 is described in reference to FIG. 2.
  • Example Physical Link
  • FIG. 2 is a block diagram of an example physical link 200 connecting a transmitter 202 (TX) and a receiver 204 (RX) in NoC 100 of FIG. 1. A transport protocol socket can be used to transfer a packet from transmitter 202 to receiver 204 over physical link 200. The socket can contain flow control signals (Vld, Rdy), framing signals (Head, Tail) and information signals (Data). The socket can be a synchronous interface working on rising edges of a clock signal (Clk). An active low reset signal (RStN) can also be included in the physical link 200. The logical meaning of the different signals in this particular implementation is described next.
  • Vld: Indicates that transmitter 202 presents valid information (Head, Tail and Data) in a current clock cycle. When Vld is negated, transmitter 202 drives an X value on Head, Tail and Data and receiver 204 discards these signals. Once transmitter 202 asserts Vld, the signals Head, Tail, Data and Vld remain constant until Rdy is asserted by receiver 204. In this particular implementation, the width of Vld can be 1. Other widths can also be used.
  • Rdy: Indicates that receiver 204 is ready to accept Data in a current clock cycle. Rdy can depend (in combination) on Vld, Head, Tail and Data, or can only depend on the internal state of receiver 204. In this particular implementation, the width of Rdy can be 1. Other widths can also be used.
  • Head: Indicates a first clock cycle of a packet. In this particular implementation, the width of Head is 1. Other widths can also be used.
  • Tail: Indicates a last clock cycle of a packet. In this particular implementation, the width of Tail is 1. Other widths can also be used.
  • Data: Effective information transferred from transmitter 202 to receiver 204. Data contains a header and a payload. A data word transfer can occur when the condition Vld AND Rdy is true. The width of Data can be configurable.
  • Example Packet Transport Sequence
  • FIG. 3 is an example sequence of packet transport over the link of FIG. 2. In some implementations, a packet starts when Vld and Head are asserted, and completes when Vld and Tail are asserted. A single cycle packet can have both Head and Tail asserted. Inside a packet, Head is negated when Vld is asserted, and outside a packet, Head is asserted simultaneously with Vld. Packet content is carried on the Data signals. In this particular implementation, there are two packet formats: packets with payload (e.g., write requests, read responses), and packets without payload (e.g., all other packet types).
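  • The handshake and framing rules above can be summarized in a small model. The following Python sketch is illustrative only and not from the patent (the class, field and function names are invented for this example); it reassembles packets from a cycle trace, transferring a word only when Vld AND Rdy are both asserted, opening a packet on Head and closing it on Tail.
```python
from dataclasses import dataclass

@dataclass
class LinkCycle:
    vld: bool   # transmitter presents valid Head/Tail/Data this cycle
    rdy: bool   # receiver can accept Data this cycle
    head: bool  # first cycle of a packet
    tail: bool  # last cycle of a packet
    data: int   # header or payload word

def collect_packets(cycles):
    """Reassemble packets from a cycle trace; a word transfers only when Vld AND Rdy."""
    packets, current, in_packet = [], [], False
    for c in cycles:
        if not (c.vld and c.rdy):
            continue                      # back-pressured or idle cycle: nothing moves
        if not in_packet:
            assert c.head, "outside a packet, Vld implies Head"
            in_packet = True
        current.append(c.data)
        if c.tail:                        # Tail closes the packet (Head and Tail may coincide)
            packets.append(current)
            current, in_packet = [], False
    return packets

# A header cycle, a back-pressured cycle (word held), then the final payload cycle.
trace = [LinkCycle(True, True, True, False, 0xA0),
         LinkCycle(True, False, False, True, 0xB1),   # Rdy low: word is held, not transferred
         LinkCycle(True, True, False, True, 0xB1)]
print(collect_packets(trace))   # [[160, 177]]
```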
  • Example Packet
  • FIG. 4 is an example packet for use with NoC 100 of FIG. 1. More particularly, FIG. 4 illustrates an example packet 400 including a header 402 and a payload 404. The example packet 400 can be defined by four bytes (with byte-enables) of payload width and one cycle header penalty. In some implementations of the packet 400, some fields may be optional. In some implementations, header 402 includes a header field containing a RouteID, an Address field (Addr) and several Control fields. The Control fields in the header 402 can carry additional end-to-end or transport protocol information. The use of this Control field for implementing QoS features is the subject of this invention and described in later sections. The meaning of the other fields in header 402 is explained next.
  • Addr: This header field indicates the start address of a transaction, expressed in bytes, in the target address space.
  • RouteId: This header field uniquely identifies an "initiator-mapping, target-mapping" pair. The pair can be unique information used by routing tables to steer a packet inside NoC 100.
  • The fields in the payload of the packet can be a Byte-Enable (BE) field and a Data field (Byte). The meaning of these fields is explained next.
  • BE: Indicates one Byte Enable bit per payload byte.
  • Byte: This field contains the payload part of the packet. The width of this field is configurable, and in some implementations, contains at least 8 bits of data. The width of a Byte can be extended to contain additional information such as protection or security information.
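  • As a rough illustration of the packet format described above, the following sketch models a packet with RouteId, Addr and Control header fields and a byte-enabled payload. The class and field names, and the example values, are assumptions made for illustration; they are not the patent's bit-level encoding.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Header:
    route_id: int          # identifies the initiator-mapping/target-mapping pair used for routing
    addr: int              # start address of the transaction, in bytes, in the target address space
    control: int = 0       # additional end-to-end or transport-protocol information (e.g. QoS bits)

@dataclass
class Packet:
    header: Header
    byte_enable: List[int] = field(default_factory=list)  # one BE bit per payload byte
    payload: List[int] = field(default_factory=list)      # payload bytes

# Four payload bytes with all byte-enables set, matching the 4-byte example packet.
pkt = Packet(Header(route_id=3, addr=0x1000),
             byte_enable=[1, 1, 1, 1],
             payload=[0xDE, 0xAD, 0xBE, 0xEF])
print(pkt.header.addr, len(pkt.payload))   # 4096 4
```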
  • Switch Arbitration, Urgency and Pressure
  • FIG. 5A shows an example of a switch element 500. In this particular example, three inputs 501 a-501 c are sending packets to a single output 504. When packets arrive at the switch element 500 and are addressed to the single output 504 of the switch 500, an arbitration point 502 is introduced. There are many known arbitration mechanisms that can be used for arbitrating at arbitration point 502, including but not limited to: round-robin, first-in-first-out (FIFO), least-recently-used (LRU), fixed priority and any other mechanism that can be used to arbitrate. These static arbitration schemes operate on the input (or port) on which a particular packet is arriving, and on the prior arbitration history. In many designs, however, it is desirable to use a dynamic priority mechanism.
  • Before describing a dynamic arbitration scheme, the concept of "urgency" will be explained. "Urgency" is an indicator of the relative priority of a packet that is used by arbitration points in a NoC to implement a multi-level arbitration scheme, where the selection of one output transfer from several available input transfers is made based on a static arbitration between transfers with equal urgency values. In some embodiments, the highest value of "urgency" can be assigned the highest priority. In an alternative embodiment, different values of urgency can be allocated different priorities.
  • The above described scheme implements a dynamic arbitration scheme. Transfers with the highest priority, for example, as determined by their urgency value, are considered first and arbitrated using a fixed arbitration scheme. If no highest priority transfers are present, transfers with the next level of priority can be considered for transfer, and so forth. In one embodiment, the same static arbitration scheme can be used for arbitration within each level of priority. In other embodiments, different levels of priority can have different static arbitration schemes.
  • In some embodiments, an "urgency" value is encoded in-band with the transfer. For example, in a transaction-level based interconnect such as an OCP crossbar, the urgency value can be transmitted with the command information. As another example, in a NoC, the urgency can be transmitted in the header of the packet. In one embodiment, the encoding can be binary, such that n bits (e.g., 3 bits) allow for m = 2^n levels of urgency (e.g., m = 8 levels using 3 bits).
  • In another embodiment, the urgency information is encoded using a bar-graph encoding. In one embodiment of bar-graph encoding, the value of the urgency is indicated by the highest bit-position in a multi-bit field with the value of "1," while lower bit-positions also have the value of "1." In this example, a 3-bit field can indicate 4 levels of urgency with the four values "000," "001," "011," "111." This or other implementations of bar-graph encoding are particularly efficient in combining urgency information from multiple sources into a common result because a simple OR function can be used to give the desired result. Since the "urgency" value is encoded in the header of a packet, or is transmitted with the command or data information, the value is considered an in-band signal. FIG. 5B illustrates one implementation of a packet 506 where the "urgency" is a 4-bit field 510 transferred as part of the header 508. Other than the inclusion of the field 510 in the header 508, the packet 506 is the same as packet 400 shown in FIG. 4.
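  • A minimal sketch of the bar-graph (thermometer) encoding follows, assuming the 3-bit field with four urgency levels used in the example above. The helper names are this editor's, not the patent's; the point illustrated is that decoding reduces to counting the set bits and that combining urgencies from multiple sources is a plain bit-wise OR.
```python
def encode_bar_graph(urgency: int, width: int = 3) -> int:
    """Bar-graph (thermometer) encoding: urgency u sets the u lowest bits.
    With width=3, the four levels 0..3 map to 0b000, 0b001, 0b011, 0b111."""
    assert 0 <= urgency <= width
    return (1 << urgency) - 1

def decode_bar_graph(code: int) -> int:
    """The urgency is the number of '1' bits (highest set bit position + 1)."""
    return bin(code).count("1")

def combine(codes):
    """Combining urgencies from several sources is a simple bit-wise OR."""
    result = 0
    for c in codes:
        result |= c
    return result

print([bin(encode_bar_graph(u)) for u in range(4)])   # ['0b0', '0b1', '0b11', '0b111']
# OR-combining two sources yields the maximum urgency of the two.
print(decode_bar_graph(combine([encode_bar_graph(1), encode_bar_graph(3)])))   # 3
```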
  • In an on-chip-interconnect, where one particular path from initiator to target can include multiple arbitration points in series, if the urgency is encoded in the header or sent out in-band with a command, an arbiter further down the path cannot take into account the priority of a packet that has not yet reached the input of the arbiter. This can cause delay in the processing of a higher priority packet. To address this potential delay, one embodiment of the invention introduces the concept of “pressure.” Pressure is an out-of-band signal that can include one or more bits for carrying urgency information of the urgency field out-of-band through the NoC. In one embodiment, the “pressure” signal can be a bundle of one or more physical wires that are part of the physical links that interconnect different IP elements on a NoC. The “pressure” signal can transport information in an out-of-band manner as described below in reference to FIG. 6.
  • Example Physical Link With Pressure
  • FIG. 6 is a block diagram of an example physical link 600 extending the physical link 200 of FIG. 2 to incorporate an out-of-band pressure (priority) signal 604. When a packet in the network-on-chip is being transmitted through the NoC and encounters back-pressure, that is, the Rdy signal is de-asserted, effectively blocking the packet from progressing through the network, the out-of-band pressure signal 604 on the physical link 600 takes on the value of the urgency that is encoded in the header of that particular packet. Back pressure normally occurs at a point where arbitration occurs, since the basic function of arbitration logic is to select one of a plurality of packets to proceed while holding back the unselected packets using back-pressure.
  • This above described pressure mechanism allows the urgency value of a packet to propagate ahead of the packet, since the pressure signal 604 is out-of-band. In this particular embodiment, the out-of-band pressure values are routed forward, starting from the point of origin, through all paths that are back-pressured. With the implementation of the pressure mechanism, an arbiter at an arbitration point can consider two values when deciding which input packet to select: the in-band urgency value of the current packet and the out-of-band pressure value on the pressure signal 604 on the link 600 associated with the input port. In one embodiment, the priority value can be determined from the maximum of the urgency value and the pressure value.
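  • The arbitration rule just described, where the effective priority of an input is the maximum of its in-band urgency and its out-of-band pressure, can be sketched as follows. This is an illustrative model only; the names are invented, and the round-robin tie-break within a priority level is one of the static schemes the text allows rather than a prescribed choice.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputPort:
    name: str
    urgency: Optional[int]   # in-band urgency of the packet at the head of this input, None if empty
    pressure: int            # out-of-band pressure value observed on this input link

def arbitrate(ports, last_winner_index: int = -1):
    """Pick the next input: effective priority is max(urgency, pressure); ties are broken
    by a static round-robin order among inputs of equal priority."""
    best, best_prio = None, -1
    n = len(ports)
    for offset in range(1, n + 1):                    # round-robin order after the last winner
        i = (last_winner_index + offset) % n
        p = ports[i]
        if p.urgency is None:
            continue                                  # no packet waiting on this input
        prio = max(p.urgency, p.pressure)
        if prio > best_prio:
            best, best_prio = i, prio
    return best

ports = [InputPort("in0", urgency=1, pressure=0),
         InputPort("in1", urgency=0, pressure=3),     # out-of-band pressure raises this input
         InputPort("in2", urgency=1, pressure=0)]
print(arbitrate(ports))   # 1
```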
  • FIG. 7 illustrates one example of a multi-level arbitration structure 700 in an on-chip interconnect. Two-input arbitration points 701 and 702 are selecting between three input links 710, 711 and 712. In this implementation, arbitration point 702 has a fixed arbitration, always giving link 712 the highest priority for equal levels of in-band urgency. Assume that a packet has started to transfer on link 711 and is now presented on link 713 to arbitration point 702. As long as new requests are presented on link 712, the packet on link 713 will be back-pressured and not selected by the arbitration scheme in arbitration point 702. If a high-urgency packet now arrives on link 710, this packet cannot be arbitrated for by 701, since the packet from link 711 is already selected and "waiting" on link 713. Using the pressure mechanism, the urgency value of the packet on link 710 will be transferred onto the out-of-band pressure signal on link 713. If this pressure value is higher than the urgency value of the packets on link 712, arbitration point 702 will arbitrate for the packets on link 713 by first processing the packet that originated from link 711 and then processing the high-urgency packet that originated on link 710.
  • Interface Signals
  • At the transaction-level interface of the NoC (e.g., an interface with an IP Core), three signals can be defined. In one embodiment of the invention, the signals can be called "IF_Urgency", "IF_Pressure" and "IF_Hurry", where the prefix "IF" in the signal names refers to the term "InterFace." Each one of these signals can be configured to be one or more bits, and a priority level can be encoded onto those bits using, for example, binary encoding. Not all signals need to be present or used at the transaction-level interface. In one embodiment, the selection of which signals to implement at the interface can be configurable by a user through a Graphical User Interface (GUI).
  • In the initiator NIU, where the transaction level information is packetized, the information on the IF_Urgency signal is used to define the urgency field in the header of the packet. In one embodiment, the urgency field is determined from the IF_Urgency signal through a one-to-one mapping, but other mappings can also be used. Such other mappings are useful to translate the values of “IF_Urgency” of the local transaction-level interface to global system-wide values. The selection of values of “IF_Urgency” can be defined by the IP Core design and these values may not be consistent with the system-requirements when multiple independently designed IP Cores are being interconnected through a NoC.
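  • A minimal sketch of such a local-to-global mapping at the initiator NIU is shown below. The table values are hypothetical and the function name is invented; the one-to-one mapping mentioned in the text is kept as the fallback.
```python
# Map local IF_Urgency values from one IP core's interface onto system-wide urgency levels.
# A one-to-one mapping is the default; a table like this lets independently designed cores
# with inconsistent local scales be reconciled at the initiator NIU.
LOCAL_TO_GLOBAL_URGENCY = {0: 0, 1: 0, 2: 2, 3: 3}   # example table, not from the patent

def header_urgency(if_urgency: int) -> int:
    """Urgency field written into the packet header by the initiator NIU."""
    return LOCAL_TO_GLOBAL_URGENCY.get(if_urgency, if_urgency)  # fall back to one-to-one

print(header_urgency(1), header_urgency(3))   # 0 3
```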
  • In a NoC architecture, the request and response paths can be configured independently. In one embodiment, the urgency of a request packet is transferred to the response packet to create an end-to-end priority. In one embodiment, this transfer of the urgency value can be implemented in the Target NIU. Optionally, the NoC can be configured to only implement the priority on the request path, but not on the response path. In yet another embodiment, the decision whether to transfer the value of the urgency of the request packet to the response packet can be made dependent on the type of transfer. For example, “READ” and “NON-POSTED WRITE” requests transfer the request urgency to the response packet, while “POSTED WRITE” commands do not.
  • A NON-POSTED WRITE requires an acknowledgement of completion of the write. A POSTED WRITE is a write that is passed through the NoC and for which an acknowledgement is not required.
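  • The per-transaction-type decision can be sketched as a small helper. The type strings and the zero default are assumptions for illustration; whether the default response urgency is zero or some other configured value is not fixed by the text.
```python
def response_urgency(request_type: str, request_urgency: int, default: int = 0) -> int:
    """Decide, at the target NIU, whether the request's urgency carries over to the response.
    READ and NON-POSTED WRITE expect a response or acknowledgement, so they propagate urgency;
    POSTED WRITE needs no acknowledgement, so a default response urgency is used."""
    if request_type in ("READ", "NON_POSTED_WRITE"):
        return request_urgency
    return default

print(response_urgency("READ", 3))            # 3
print(response_urgency("POSTED_WRITE", 3))    # 0
```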
  • The transaction-level interface field "IF_Pressure" can be connected to the out-of-band pressure field that was described above. It is possible that the logic in an IP Core is already aware of impending high priority requests before the requests that have been issued are accepted by the NoC. In this particular embodiment, the IF_Pressure field at the interface can be used to indicate such an occurrence, and the value of "IF_Pressure" can be transferred onto the above described out-of-band pressure field.
  • Similar to the potential need of remapping the values of IF_Urgency to the urgency field, re-mapping of values on the IF_Pressure field to the value on the pressure wires can be supported.
  • The third field "IF_Hurry" addresses the following circumstance. An IP Core may have issued a set of requests with a particular value of urgency and, at some point in time, the IP Core may decide that these requests should be processed with a higher urgency. For example, an IP Core could have issued a set of "READ" requests where the response data is stored in a local FIFO, and the FIFO may be in danger of under-flowing; that is, the read data is not returning quickly enough. In this case, the IP Core can assert a higher priority value on the "IF_Hurry" field. When this event occurs, the NIU can issue a special "Hurry" request packet to all the targets towards which the NIU has outstanding transactions. This packet can have a value of urgency in the header defined by the value on "IF_Hurry" and can be routed through the NoC like the other request packets. An "Urgency" packet can "push" all the transactions in front of it on any path from the initiator to the target on which there are outstanding requests.
  • Since the value of “IF_Hurry” is encoded in the urgency field of the header, this embodiment allows for similar configuration options regarding the transfer of the urgency field from request to response packet as previously described. The value of the urgency field of the “Urgency” packet can optionally either be transferred to the response packet associated with the “Urgency” packet or not.
  • Similar to the potential need of remapping the values of IF_Urgency to the urgency field, the optional re-mapping of values on the IF_Hurry field to the value on the urgency field in the header of the "Urgency" packet can be supported.
  • Urgency Packet Generation
  • In one embodiment, an initiator NIU can contain a table. The table has an entry for every route that a packet can take through the interconnect. For each route represented in the table, a value is stored representing the priority level of the most recent transaction request sent on that route. For each route represented in the table, a count is stored. The count is incremented whenever a transaction request is sent on that route, and the count is decremented whenever a response is received from a transaction sent on that route. When "IF_Hurry" is asserted at the initiator interface, an "Urgency" packet is sent only on the routes for which the "IF_Hurry" priority level of the generated transaction is greater than the priority level of the most recent transaction previously sent along the route. The table is then updated with the priority level of the urgency packet. If the packet is blocked by flow control, then the pressure mechanism described above is applied.
  • In another embodiment the table of transaction counts and priorities has entries that correspond to address ranges instead of routes.
  • In one embodiment, the priority level of a transaction is determined within the NoC by a bandwidth limiter. A bandwidth limiter can be associated with an initiator. The bandwidth limiter can observe the size of each request packet issued to the interconnect by the initiator. The size can be added to a counter. The counter can accumulate the count within a sliding window of time. When the count in the bandwidth limiter exceeds a threshold, the bandwidth limiter can block transactions until the count falls below the threshold.
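  • Purely as an illustration, a bandwidth limiter of this kind could be modeled in Python as below; the window length, threshold, and class name are hypothetical parameters, not values from the specification.

      from collections import deque

      class BandwidthLimiter:
          """Accumulates issued request sizes over a sliding window of cycles and
          blocks further transactions while the accumulated count exceeds a threshold."""

          def __init__(self, window_cycles: int, threshold_bytes: int):
              self.window = deque([0] * window_cycles, maxlen=window_cycles)
              self.threshold = threshold_bytes

          def tick(self, request_bytes: int = 0) -> bool:
              """Advance one cycle, record the size of any request issued in that
              cycle, and return True if transactions should be blocked."""
              self.window.append(request_bytes)
              return sum(self.window) > self.threshold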
  • In another embodiment, the priority level of a transaction is assigned within the NoC by a bandwidth regulator. The bandwidth regulator measures the bandwidth on the response network. When the bandwidth exceeds a threshold, the bandwidth regulator can reduce the priority of transaction requests until the response bandwidth falls below the threshold. More than one threshold can be applied, such that a higher threshold causes a lower priority. When the observed bandwidth crosses below a threshold, the priority of transactions can be increased and an "Urgency" packet can be generated.
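  • A minimal sketch of that threshold scheme follows, assuming numeric priorities where a larger value means higher priority; the function name and threshold handling are illustrative only.

      def regulated_priority(measured_response_bw: float, thresholds: list, max_priority: int) -> int:
          """Each response-bandwidth threshold that is exceeded lowers the request
          priority by one level, so higher thresholds correspond to lower priorities."""
          priority = max_priority
          for threshold in sorted(thresholds):
              if measured_response_bw > threshold:
                  priority -= 1
          return max(priority, 0)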
  • In one embodiment, the time span covered by the windows in the above two embodiments can be configured. This has the benefit of providing bandwidth control for initiators with different long-term or short-term bandwidth requirements. Controlling maximum bandwidth limits on one initiator ensures the best interconnect availability to other initiators, which can improve their response latency.
  • In another embodiment, the priority level of a transaction is determined by the fullness of a write FIFO and a read FIFO associated with an initiator. More than one threshold can be applied to each FIFO. The priority level of transactions can be determined by the fullness of the FIFOs relative to thresholds. When the fullness of a write FIFO associated with the initiator exceeds a particular threshold, an “Urgency” packet can be generated. When the fullness of a read FIFO associated with the initiator falls below a particular threshold, an “Urgency” packet can be generated.
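  • For illustration, a Python sketch of this FIFO-fullness rule follows; the threshold lists and function names are hypothetical, and the mapping of fullness to a numeric priority is an assumption.

      def write_fifo_priority(fullness: float, thresholds: list) -> int:
          """The fuller the write FIFO (risk of overflow), the more thresholds are
          exceeded and the higher the resulting priority."""
          return sum(1 for t in sorted(thresholds) if fullness > t)

      def read_fifo_priority(fullness: float, thresholds: list) -> int:
          """The emptier the read FIFO (risk of underflow because read data is not
          returning fast enough), the higher the resulting priority."""
          return sum(1 for t in sorted(thresholds) if fullness < t)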
  • Example Routing Within On-Chip-Interconnect
  • FIG. 9 illustrates an example routing 900 within an on-chip-interconnect. Two routes 901 and 902 are shown. One route, between initiator interface 911 and target interface 912, is congested at switch 913. A table 921 stores one entry for each route that exists between the initiator interface 911 and the target interfaces in the system. The table 921 entries can include the priority of the most recently transmitted transaction and the number of transactions that have been transmitted without a response having been received. Routing priority for initiated transactions can be determined as a function of the "IF_Hurry" signal. The "IF_Hurry" signal can be a side-band signal to the transaction request path. The priority of urgency packets can be a function of the "IF_Hurry" signal and bandwidth regulator 931. When congestion at switch 913 causes transaction responses to be delayed, threatening an overflow or underflow condition in the initiator IP, the initiator IP asserts the "IF_Hurry" signal. If the "IF_Hurry" priority exceeds the priority in the table 921 entry for route 901, and route 901 has one or more transaction requests pending for which a response has not been received, then an "urgency" packet can be generated and transmitted on route 901. If bandwidth regulator 931 indicates that route 901 requires more bandwidth, then an "urgency" packet can be generated and transmitted on route 901, and the priority of the most recently transmitted packet in the entry of table 921 corresponding to route 901 can be set to the urgency level of the transmitted "urgency" packet.
  • Target NIU Interface
  • As described above, in this embodiment mechanisms have been implemented to transfer and use priority information as presented on up to three fields at an initiator interface. This information can optionally be presented to the target at the target NIU interface. The value of the pressure field can define the value on the "IF_Pressure" field, the value of the in-band urgency field in the header can define the value on the in-band "IF_Urgency" field of the transaction, and the value of the "IF_Hurry" field can be determined by the urgency value in the "Hurry" request packet.
  • Similar to the potential need of remapping the values of IF_Urgency, IF_Hurry and IF_Pressure in the initiator NIU from the “local” interface values to global values, the NIU at the target can optionally remap the global values of pressure and urgency to the values required by the Target IP Core at the target NIU Interface.
  • Since an on-chip-interconnect or a NoC for a complex SoC, such as a cell-phone application processor or video processing chip, can be quite complex, many configuration decisions need to be managed. With regard to the embodiments presently discussed, examples of the additional configuration decisions are the selection of the transaction-level interface options, the selection of the encoding of the priority information, the selection of the re-mapping of the priority information, and the decision on whether to transfer the request priority information to the response. In one implementation, the selection of the parameters can be made through a Graphical User Interface (GUI), allowing for a quick and efficient configuration of the packet formats and associated links.
  • FIG. 8 is a flow diagram of an exemplary process 800 for implementing QoS features in a multi-level arbitration structure in an on-chip interconnect. In some implementations, process 800 can begin by receiving in-band priority information (802) and receiving out-of-band priority information (804) at a transaction-level interface of an on-chip interconnect. Process 800 then processes transaction requests subject to back pressure, where the processing is performed with a priority based on a value of the in-band signal and a value of the out-of-band signal, and where the priority is no less than the priority indicated by the value of the out-of-band signal (806).
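  • Expressed as a minimal Python sketch under the assumption of numeric priority encodings, the priority rule of process 800 guarantees that the effective priority never drops below the out-of-band value; the function name is illustrative.

      def effective_priority(in_band: int, out_of_band: int) -> int:
          """Combine in-band and out-of-band priority so the result is never less
          than the value indicated by the out-of-band signal."""
          return max(in_band, out_of_band)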
  • While this specification contains many specifics, these should not be construed as limitations on the scope of what is claimed or of what can be claimed, but rather as descriptions of features specific to particular implementations. Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation.
  • Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
  • Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single product or packaged into multiple products.
  • Thus, particular implementations have been described. Other implementations are within the scope of the following claims.

Claims (8)

What is claimed is:
1. A system for transmitting priority information in an on-chip-interconnect, the system comprising:
a plurality of signal carriers for carrying an in-band priority signal, an out-of-band priority signal and a third priority signal;
an arbitration point comprising a first input and a second input; and
wherein the arbitration point selects the first input over the second input when a priority indicated by the in-band priority signal or a priority indicated by the out-of-band priority signal is greater than a priority indicated by the third priority signal.
2. The system of claim 1, wherein the priorities of at least two of the in-band priority signal, the out-of-band priority signal and the third priority signal are bar-graph encoded.
3. A method of transmitting priority information at a transport layer interface of an on-chip-interconnect comprising:
receiving an in-band signal;
receiving an out-of-band signal; and
processing a downstream signal that is assigned a priority that is the greater of a priority indicated by the in-band signal and a priority indicated by the out-of-band signal.
4. The method of claim 3, wherein processing the downstream signal comprises:
arbitrating the downstream signal based at least in part on the assigned priority that is the greater of the priority indicated by the in-band signal and the priority indicated by the out-of-band signal.
5. An on-chip-interconnect comprising:
a first network of transport IP elements for transmitting request packets; and
a second network of transport IP elements for transmitting response packets, wherein the first network and the second network are logically independent.
6. The on-chip-interconnect of claim 5 wherein packets of the first network implement an urgency value and packets of the second network do not.
7. An initiator network interface unit comprising:
a memory unit that stores a value of a priority level of a most recent packet sent on a route; and
a logic unit configured to:
identify a signal that indicates a desired priority level; and
generate an urgency packet and send the urgency packet on the route when the value of the priority level stored in the memory unit is less than a value of the desired priority level indicated by the signal.
8. The initiator network interface unit of claim 7 comprising:
a memory unit that stores a value of a priority level of a most recent packet sent on an alternate route,
wherein the logic unit, separately from generating and sending the urgency packet on the route, generates an urgency packet for the alternate route and sends the urgency packet on the alternate route when the value of the priority level of the most recent packet sent on the alternate route is less than the value of the desired priority level indicated by the signal.
US13/680,965 2010-06-03 2012-11-19 Network on chip (noc) with qos features Abandoned US20130179613A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/680,965 US20130179613A1 (en) 2010-06-03 2012-11-19 Network on chip (noc) with qos features

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
FR1054339A FR2961048B1 (en) 2010-06-03 2010-06-03 CHIP NETWORK WITH QUALITY-OF-SERVICE CHARACTERISTICS
FR1054339 2010-06-03
US12/835,623 US8316171B2 (en) 2010-06-03 2010-07-13 Network on chip (NoC) with QoS features
US13/680,965 US20130179613A1 (en) 2010-06-03 2012-11-19 Network on chip (noc) with qos features

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/835,623 Continuation US8316171B2 (en) 2010-06-03 2010-07-13 Network on chip (NoC) with QoS features

Publications (1)

Publication Number Publication Date
US20130179613A1 true US20130179613A1 (en) 2013-07-11

Family

ID=43500113

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/835,623 Expired - Fee Related US8316171B2 (en) 2010-06-03 2010-07-13 Network on chip (NoC) with QoS features
US13/680,965 Abandoned US20130179613A1 (en) 2010-06-03 2012-11-19 Network on chip (noc) with qos features

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/835,623 Expired - Fee Related US8316171B2 (en) 2010-06-03 2010-07-13 Network on chip (NoC) with QoS features

Country Status (5)

Country Link
US (2) US8316171B2 (en)
CN (1) CN103039044A (en)
FR (1) FR2961048B1 (en)
GB (1) GB2493682A (en)
WO (1) WO2011151241A1 (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150117261A1 (en) * 2013-10-24 2015-04-30 Netspeed Systems Using multiple traffic profiles to design a network on chip
US9444702B1 (en) 2015-02-06 2016-09-13 Netspeed Systems System and method for visualization of NoC performance based on simulation output
US9568970B1 (en) * 2015-02-12 2017-02-14 Netspeed Systems, Inc. Hardware and software enabled implementation of power profile management instructions in system on chip
US9590813B1 (en) 2013-08-07 2017-03-07 Netspeed Systems Supporting multicast in NoC interconnect
US9742630B2 (en) 2014-09-22 2017-08-22 Netspeed Systems Configurable router for a network on chip (NoC)
JP2017527219A (en) * 2014-09-08 2017-09-14 クゥアルコム・テクノロジーズ・インコーポレイテッド Tunneling within a network-on-chip topology
US9769077B2 (en) 2014-02-20 2017-09-19 Netspeed Systems QoS in a system with end-to-end flow control and QoS aware buffer allocation
US9825809B2 (en) 2015-05-29 2017-11-21 Netspeed Systems Dynamically configuring store-and-forward channels and cut-through channels in a network-on-chip
US9825887B2 (en) 2015-02-03 2017-11-21 Netspeed Systems Automatic buffer sizing for optimal network-on-chip design
US9864728B2 (en) 2015-05-29 2018-01-09 Netspeed Systems, Inc. Automatic generation of physically aware aggregation/distribution networks
US9928204B2 (en) 2015-02-12 2018-03-27 Netspeed Systems, Inc. Transaction expansion for NoC simulation and NoC design
US20180159786A1 (en) * 2016-12-02 2018-06-07 Netspeed Systems, Inc. Interface virtualization and fast path for network on chip
US10050843B2 (en) 2015-02-18 2018-08-14 Netspeed Systems Generation of network-on-chip layout based on user specified topological constraints
US10063496B2 (en) 2017-01-10 2018-08-28 Netspeed Systems Inc. Buffer sizing of a NoC through machine learning
US10074053B2 (en) 2014-10-01 2018-09-11 Netspeed Systems Clock gating for system-on-chip elements
US10084692B2 (en) 2013-12-30 2018-09-25 Netspeed Systems, Inc. Streaming bridge design with host interfaces and network on chip (NoC) layers
US10084725B2 (en) 2017-01-11 2018-09-25 Netspeed Systems, Inc. Extracting features from a NoC for machine learning construction
US10218580B2 (en) 2015-06-18 2019-02-26 Netspeed Systems Generating physically aware network-on-chip design from a physical system-on-chip specification
US10298485B2 (en) 2017-02-06 2019-05-21 Netspeed Systems, Inc. Systems and methods for NoC construction
US10313269B2 (en) 2016-12-26 2019-06-04 Netspeed Systems, Inc. System and method for network on chip construction through machine learning
US10348563B2 (en) 2015-02-18 2019-07-09 Netspeed Systems, Inc. System-on-chip (SoC) optimization through transformation and generation of a network-on-chip (NoC) topology
US10355996B2 (en) 2012-10-09 2019-07-16 Netspeed Systems Heterogeneous channel capacities in an interconnect
US10419300B2 (en) 2017-02-01 2019-09-17 Netspeed Systems, Inc. Cost management against requirements for the generation of a NoC
US10452124B2 (en) 2016-09-12 2019-10-22 Netspeed Systems, Inc. Systems and methods for facilitating low power on a network-on-chip
US10496770B2 (en) 2013-07-25 2019-12-03 Netspeed Systems System level simulation in Network on Chip architecture
US10547514B2 (en) 2018-02-22 2020-01-28 Netspeed Systems, Inc. Automatic crossbar generation and router connections for network-on-chip (NOC) topology generation
US10802882B2 (en) 2018-12-13 2020-10-13 International Business Machines Corporation Accelerating memory access in a network using thread progress based arbitration
US10896476B2 (en) 2018-02-22 2021-01-19 Netspeed Systems, Inc. Repository of integration description of hardware intellectual property for NoC construction and SoC integration
US10983910B2 (en) 2018-02-22 2021-04-20 Netspeed Systems, Inc. Bandwidth weighting mechanism based network-on-chip (NoC) configuration
US11023377B2 (en) 2018-02-23 2021-06-01 Netspeed Systems, Inc. Application mapping on hardened network-on-chip (NoC) of field-programmable gate array (FPGA)
US11144457B2 (en) 2018-02-22 2021-10-12 Netspeed Systems, Inc. Enhanced page locality in network-on-chip (NoC) architectures
US11176302B2 (en) 2018-02-23 2021-11-16 Netspeed Systems, Inc. System on chip (SoC) builder

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2961048B1 (en) * 2010-06-03 2013-04-26 Arteris Inc CHIP NETWORK WITH QUALITY-OF-SERVICE CHARACTERISTICS
US8861386B2 (en) 2011-01-18 2014-10-14 Apple Inc. Write traffic shaper circuits
US8649286B2 (en) * 2011-01-18 2014-02-11 Apple Inc. Quality of service (QoS)-related fabric control
US8744602B2 (en) 2011-01-18 2014-06-03 Apple Inc. Fabric limiter circuits
US8493863B2 (en) 2011-01-18 2013-07-23 Apple Inc. Hierarchical fabric control circuits
KR101855399B1 (en) * 2011-03-24 2018-05-09 삼성전자주식회사 System on chip improving data traffic and operating method thereof
US8539132B2 (en) * 2011-05-16 2013-09-17 Qualcomm Innovation Center, Inc. Method and system for dynamically managing a bus of a portable computing device
US8514889B2 (en) 2011-08-26 2013-08-20 Sonics, Inc. Use of common data format to facilitate link width conversion in a router with flexible link widths
CN104285415B (en) * 2012-05-10 2018-02-27 马维尔国际贸易有限公司 Blended data stream handle
US20140233582A1 (en) * 2012-08-29 2014-08-21 Marvell World Trade Ltd. Semaphore soft and hard hybrid architecture
IN2015MN00441A (en) * 2012-09-25 2015-09-11 Qualcomm Technologies Inc
US9225665B2 (en) * 2012-09-25 2015-12-29 Qualcomm Technologies, Inc. Network on a chip socket protocol
US9471538B2 (en) * 2012-09-25 2016-10-18 Qualcomm Technologies, Inc. Network on a chip socket protocol
US8935578B2 (en) * 2012-09-29 2015-01-13 Intel Corporation Method and apparatus for optimizing power and latency on a link
KR102014118B1 (en) * 2012-10-19 2019-08-26 삼성전자주식회사 Method and Apparatus for Channel Management of Sub-Channel Scheme in Network Backbone System based Advanced Extensible Interface
US9053058B2 (en) 2012-12-20 2015-06-09 Apple Inc. QoS inband upgrade
US9372818B2 (en) 2013-03-15 2016-06-21 Atmel Corporation Proactive quality of service in multi-matrix system bus
US9571402B2 (en) * 2013-05-03 2017-02-14 Netspeed Systems Congestion control and QoS in NoC by regulating the injection traffic
US20150019776A1 (en) * 2013-07-14 2015-01-15 Qualcomm Technologies, Inc. Selective change of pending transaction urgency
US8959266B1 (en) * 2013-08-02 2015-02-17 Intel Corporation Dynamic priority control based on latency tolerance
US9471524B2 (en) 2013-12-09 2016-10-18 Atmel Corporation System bus transaction queue reallocation
US9473359B2 (en) * 2014-06-06 2016-10-18 Netspeed Systems Transactional traffic specification for network-on-chip design
CN105740178B (en) * 2014-12-09 2018-11-16 扬智科技股份有限公司 Chip network system with and forming method thereof
CN104965942B (en) * 2015-06-08 2018-01-02 浪潮集团有限公司 A kind of network service quality IP kernel based on FPGA
KR102497804B1 (en) 2016-04-01 2023-02-10 한국전자통신연구원 On-chip network device capable of networking in dual swithching network modes and operation method thereof
US10666578B2 (en) * 2016-09-06 2020-05-26 Taiwan Semiconductor Manufacturing Company Limited Network-on-chip system and a method of generating the same
US10223312B2 (en) * 2016-10-18 2019-03-05 Analog Devices, Inc. Quality of service ordinal modification
US11025678B2 (en) 2018-01-25 2021-06-01 Seagate Technology Llc AXI interconnect module communication network platform
US10673745B2 (en) * 2018-02-01 2020-06-02 Xilinx, Inc. End-to-end quality-of-service in a network-on-chip
US11431648B2 (en) 2018-06-11 2022-08-30 Intel Corporation Technologies for providing adaptive utilization of different interconnects for workloads
CN112559399A (en) * 2020-11-27 2021-03-26 山东云海国创云计算装备产业创新中心有限公司 DDR controller with multiple AXI interfaces and control method thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050254423A1 (en) * 2004-05-12 2005-11-17 Nokia Corporation Rate shaper algorithm
US20050281196A1 (en) * 2004-06-21 2005-12-22 Tornetta Anthony G Rule based routing in a switch
US20080069094A1 (en) * 2006-09-19 2008-03-20 Samsung Electronics Co., Ltd. Urgent packet latency control of network on chip (NOC) apparatus and method of the same
US20100061321A1 (en) * 2008-07-21 2010-03-11 Commissariat A L' Energie Atomique Method of scheduling packets
US20100195495A1 (en) * 2009-02-05 2010-08-05 Silver Spring Networks System and method of monitoring packets in flight for optimizing packet traffic in a network
US8316171B2 (en) * 2010-06-03 2012-11-20 Arteris S.A. Network on chip (NoC) with QoS features

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6584529B1 (en) * 2000-09-08 2003-06-24 Koninklijke Philips Electronics N.V. Intermediate buffer control for improving throughput of split transaction interconnect
US7188198B2 (en) * 2003-09-11 2007-03-06 International Business Machines Corporation Method for implementing dynamic virtual lane buffer reconfiguration
US7631132B1 (en) * 2004-12-27 2009-12-08 Unisys Corporation Method and apparatus for prioritized transaction queuing
CN101341474B (en) * 2005-12-22 2012-02-08 Arm有限公司 Arbitration method reordering transactions to ensure quality of service specified by each transaction
US7936669B2 (en) * 2008-06-04 2011-05-03 Entropic Communications, Inc. Systems and methods for flow control and quality of service

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10355996B2 (en) 2012-10-09 2019-07-16 Netspeed Systems Heterogeneous channel capacities in an interconnect
US10496770B2 (en) 2013-07-25 2019-12-03 Netspeed Systems System level simulation in Network on Chip architecture
US9590813B1 (en) 2013-08-07 2017-03-07 Netspeed Systems Supporting multicast in NoC interconnect
US20150117261A1 (en) * 2013-10-24 2015-04-30 Netspeed Systems Using multiple traffic profiles to design a network on chip
WO2015061126A1 (en) * 2013-10-24 2015-04-30 Netspeed System Using multiple traffic profiles to design a network on chip
US9294354B2 (en) * 2013-10-24 2016-03-22 Netspeed Systems Using multiple traffic profiles to design a network on chip
US10084692B2 (en) 2013-12-30 2018-09-25 Netspeed Systems, Inc. Streaming bridge design with host interfaces and network on chip (NoC) layers
US9769077B2 (en) 2014-02-20 2017-09-19 Netspeed Systems QoS in a system with end-to-end flow control and QoS aware buffer allocation
US10110499B2 (en) 2014-02-20 2018-10-23 Netspeed Systems QoS in a system with end-to-end flow control and QoS aware buffer allocation
JP2017527219A (en) * 2014-09-08 2017-09-14 クゥアルコム・テクノロジーズ・インコーポレイテッド Tunneling within a network-on-chip topology
US9742630B2 (en) 2014-09-22 2017-08-22 Netspeed Systems Configurable router for a network on chip (NoC)
US10074053B2 (en) 2014-10-01 2018-09-11 Netspeed Systems Clock gating for system-on-chip elements
US9825887B2 (en) 2015-02-03 2017-11-21 Netspeed Systems Automatic buffer sizing for optimal network-on-chip design
US9860197B2 (en) 2015-02-03 2018-01-02 Netspeed Systems, Inc. Automatic buffer sizing for optimal network-on-chip design
US9444702B1 (en) 2015-02-06 2016-09-13 Netspeed Systems System and method for visualization of NoC performance based on simulation output
US9568970B1 (en) * 2015-02-12 2017-02-14 Netspeed Systems, Inc. Hardware and software enabled implementation of power profile management instructions in system on chip
US9928204B2 (en) 2015-02-12 2018-03-27 Netspeed Systems, Inc. Transaction expansion for NoC simulation and NoC design
US9829962B2 (en) * 2015-02-12 2017-11-28 Netspeed Systems, Inc. Hardware and software enabled implementation of power profile management instructions in system on chip
US20170097672A1 (en) * 2015-02-12 2017-04-06 Netspeed Systems, Inc. Hardware and software enabled implementation of power profile management instructions in system on chip
US10218581B2 (en) 2015-02-18 2019-02-26 Netspeed Systems Generation of network-on-chip layout based on user specified topological constraints
US10050843B2 (en) 2015-02-18 2018-08-14 Netspeed Systems Generation of network-on-chip layout based on user specified topological constraints
US10348563B2 (en) 2015-02-18 2019-07-09 Netspeed Systems, Inc. System-on-chip (SoC) optimization through transformation and generation of a network-on-chip (NoC) topology
US9864728B2 (en) 2015-05-29 2018-01-09 Netspeed Systems, Inc. Automatic generation of physically aware aggregation/distribution networks
US9825809B2 (en) 2015-05-29 2017-11-21 Netspeed Systems Dynamically configuring store-and-forward channels and cut-through channels in a network-on-chip
US10218580B2 (en) 2015-06-18 2019-02-26 Netspeed Systems Generating physically aware network-on-chip design from a physical system-on-chip specification
US10613616B2 (en) 2016-09-12 2020-04-07 Netspeed Systems, Inc. Systems and methods for facilitating low power on a network-on-chip
US10564704B2 (en) 2016-09-12 2020-02-18 Netspeed Systems, Inc. Systems and methods for facilitating low power on a network-on-chip
US10564703B2 (en) 2016-09-12 2020-02-18 Netspeed Systems, Inc. Systems and methods for facilitating low power on a network-on-chip
US10452124B2 (en) 2016-09-12 2019-10-22 Netspeed Systems, Inc. Systems and methods for facilitating low power on a network-on-chip
US10749811B2 (en) 2016-12-02 2020-08-18 Netspeed Systems, Inc. Interface virtualization and fast path for Network on Chip
US10735335B2 (en) * 2016-12-02 2020-08-04 Netspeed Systems, Inc. Interface virtualization and fast path for network on chip
US20180159786A1 (en) * 2016-12-02 2018-06-07 Netspeed Systems, Inc. Interface virtualization and fast path for network on chip
US20180191626A1 (en) * 2016-12-02 2018-07-05 Netspeed Systems, Inc. Interface virtualization and fast path for network on chip
US10313269B2 (en) 2016-12-26 2019-06-04 Netspeed Systems, Inc. System and method for network on chip construction through machine learning
US10523599B2 (en) 2017-01-10 2019-12-31 Netspeed Systems, Inc. Buffer sizing of a NoC through machine learning
US10063496B2 (en) 2017-01-10 2018-08-28 Netspeed Systems Inc. Buffer sizing of a NoC through machine learning
US10084725B2 (en) 2017-01-11 2018-09-25 Netspeed Systems, Inc. Extracting features from a NoC for machine learning construction
US10469338B2 (en) 2017-02-01 2019-11-05 Netspeed Systems, Inc. Cost management against requirements for the generation of a NoC
US10469337B2 (en) 2017-02-01 2019-11-05 Netspeed Systems, Inc. Cost management against requirements for the generation of a NoC
US10419300B2 (en) 2017-02-01 2019-09-17 Netspeed Systems, Inc. Cost management against requirements for the generation of a NoC
US10298485B2 (en) 2017-02-06 2019-05-21 Netspeed Systems, Inc. Systems and methods for NoC construction
US10547514B2 (en) 2018-02-22 2020-01-28 Netspeed Systems, Inc. Automatic crossbar generation and router connections for network-on-chip (NOC) topology generation
US10896476B2 (en) 2018-02-22 2021-01-19 Netspeed Systems, Inc. Repository of integration description of hardware intellectual property for NoC construction and SoC integration
US10983910B2 (en) 2018-02-22 2021-04-20 Netspeed Systems, Inc. Bandwidth weighting mechanism based network-on-chip (NoC) configuration
US11144457B2 (en) 2018-02-22 2021-10-12 Netspeed Systems, Inc. Enhanced page locality in network-on-chip (NoC) architectures
US11023377B2 (en) 2018-02-23 2021-06-01 Netspeed Systems, Inc. Application mapping on hardened network-on-chip (NoC) of field-programmable gate array (FPGA)
US11176302B2 (en) 2018-02-23 2021-11-16 Netspeed Systems, Inc. System on chip (SoC) builder
US10802882B2 (en) 2018-12-13 2020-10-13 International Business Machines Corporation Accelerating memory access in a network using thread progress based arbitration

Also Published As

Publication number Publication date
CN103039044A (en) 2013-04-10
FR2961048A1 (en) 2011-12-09
US8316171B2 (en) 2012-11-20
US20110302345A1 (en) 2011-12-08
GB2493682A (en) 2013-02-13
FR2961048B1 (en) 2013-04-26
WO2011151241A1 (en) 2011-12-08

Similar Documents

Publication Publication Date Title
US8316171B2 (en) Network on chip (NoC) with QoS features
US10848442B2 (en) Heterogeneous packet-based transport
EP1775897B1 (en) Interleaving in a NoC (Network on Chip) employing the AXI protocol
US8085801B2 (en) Resource arbitration
US11695708B2 (en) Deterministic real time multi protocol heterogeneous packet based transport
US5546543A (en) Method for assigning priority to receive and transmit requests in response to occupancy of receive and transmit buffers when transmission and reception are in progress
US7039058B2 (en) Switched interconnection network with increased bandwidth and port count
US8867559B2 (en) Managing starvation and congestion in a two-dimensional network having flow control
US7643477B2 (en) Buffering data packets according to multiple flow control schemes
US20050271054A1 (en) Asynchronous switch based on butterfly fat-tree for network on chip application
US8949501B1 (en) Method and apparatus for a configurable packet routing, buffering and scheduling scheme to optimize throughput with deadlock prevention in SRIO-to-PCIe bridges
Birrittella et al. Enabling scalable high-performance systems with the Intel Omni-Path architecture
JP4255833B2 (en) Tagging and arbitration mechanisms at the input / output nodes of computer systems
JP2004242334A (en) System, method and logic for multicasting in high-speed exchange environment
JP4391819B2 (en) I / O node of computer system
US8730983B1 (en) Method and apparatus for a configurable packet routing, buffering and scheduling scheme to optimize throughput with deadlock prevention in SRIO-to-PCIe bridges
US20230388251A1 (en) Tightly-Coupled, Loosely Connected Heterogeneous Packet Based Transport
US7218638B2 (en) Switch operation scheduling mechanism with concurrent connection and queue scheduling
US8174969B1 (en) Congestion management for a packet switch
US9590924B1 (en) Network device scheduler and methods thereof
US9154569B1 (en) Method and system for buffer management
US9143346B1 (en) Method and apparatus for a configurable packet routing, buffering and scheduling scheme to optimize throughput with deadlock prevention in SRIO-to-PCIe bridges
Chen PCI express-based ethernet switch

Legal Events

Date Code Title Description
AS Assignment

Owner name: ARTERIS S.A., FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOUCARD, PHILIPPE;MARTIN, PHILIPPE;LECLER, JEAN-JACQUES;SIGNING DATES FROM 20101119 TO 20101201;REEL/FRAME:030048/0867

AS Assignment

Owner name: QUALCOMM TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ARTERIS, SAS;REEL/FRAME:031437/0901

Effective date: 20131011

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION