US20110134752A1 - Multilink traffic shaping - Google Patents

Multilink traffic shaping

Info

Publication number
US20110134752A1
US20110134752A1 (Application US13/029,181)
Authority
US
United States
Prior art keywords
traffic
link
egress
control unit
sequenced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/029,181
Inventor
Uros Prestor
Raghu Subramanian
Stephen W. Turner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Juniper Networks Inc
Original Assignee
Juniper Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Juniper Networks Inc filed Critical Juniper Networks Inc
Priority to US13/029,181
Publication of US20110134752A1
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 45/00: Routing or path finding of packets in data switching networks
            • H04L 45/24: Multipath
          • H04L 47/00: Traffic control in data switching networks
            • H04L 47/10: Flow control; Congestion control
              • H04L 47/12: Avoiding congestion; Recovering from congestion
                • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
              • H04L 47/22: Traffic shaping
              • H04L 47/34: Ensuring sequence integrity, e.g. using sequence numbers

Definitions

  • Implementations consistent with the principles of the invention relate generally to communication networks and, more particularly, to shaping traffic across multiple links without requiring backpressure from an egress interface.
  • Network devices, such as routers, may be configured to distribute incoming traffic received at an ingress interface, or port, across multiple output links associated with an egress interface. Incoming traffic may be distributed across multiple links in order to achieve an output bandwidth that is greater than the bandwidth associated with any one of the multiple links. Multilink networking protocols may be used to facilitate distributing incoming traffic across multiple output links and to facilitate reassembling multilink traffic received at a destination device.
  • Existing multilink implementations may associate the control of output links with a board, such as an egress interface board, in a manner that does not allow reconfiguration of output links. As a result, network devices may be configured to operate with particular multilink implementations.
  • For example, an egress interface board in a network device may be hardwired with four T1 links that can be used for multilink transmissions. If one T1 link becomes disabled, the egress interface board may have to operate at a reduced throughput using three T1 links. Reconfiguration of links may be discouraged because a controller on the egress interface board may be aware of only those links physically associated with that board.
  • Existing implementations may not provide flexibility in selecting links for use in multilink transmission implementations. The lack of flexibility in reconfiguring and/or selecting links for use in multilink implementations may prevent communication networks from operating efficiently.
  • In accordance with an implementation, a network device adapted to facilitate multilink communications using multilink traffic is provided.
  • The network device may include a control unit adapted to apply a quality-of-service (QoS) policy to incoming traffic, where the QoS policy associates a first priority with a first portion of the incoming traffic and a second priority with a second portion of the incoming traffic.
  • The control unit may be adapted to fragment the second portion of the incoming traffic to produce a group of fragments, and sequence the second portion of the incoming traffic with the first portion of the incoming traffic to produce sequenced traffic.
  • The control unit may be adapted to make a first portion of the sequenced traffic available as a first portion of the multilink traffic, and make a second portion of the sequenced traffic available as a second portion of the multilink traffic.
  • In accordance with another implementation, a method for performing multilink communications may be provided.
  • The method may include applying a quality-of-service (QoS) policy to incoming traffic, where the QoS policy operates to identify a first portion and a second portion of the incoming traffic.
  • The method may include fragmenting the first portion of the incoming traffic into a group of fragments.
  • The method may include sequencing the group of fragments and the second portion of the incoming traffic into a sequenced flow, where the sequencing causes the second portion to be interleaved among the group of fragments so that the sequenced flow can be made available to a first link and a second link as multilink traffic, where the first link carries a first portion of the multilink traffic and the second link carries a second portion of the multilink traffic.
  • In accordance with yet another implementation, a system to provide multilink communications may include means for receiving incoming traffic from a network.
  • The system may include means for applying a quality-of-service (QoS) policy to the incoming traffic, where applying the QoS policy takes into account a group of egress links associated as a bundle.
  • The system may include means for fragmenting and sequencing a portion of the incoming traffic to produce a sequenced flow that includes a group of fragments.
  • The system may include means for shaping the sequenced flow in association with the QoS policy to discourage overrunning at least one of a group of egress queues associated with the group of egress links.
  • FIG. 1 illustrates an exemplary system adapted to implement multilink communications consistent with the principles of the invention;
  • FIG. 2 illustrates a functional block diagram that may be used to implement multilink communication techniques in a network device, such as customer device 104, consistent with the principles of the invention;
  • FIG. 3 illustrates an exemplary configuration of a network device that may be configured to perform multilink communication via card based components supported in a chassis consistent with the principles of the invention; and
  • FIG. 4 illustrates an exemplary method for implementing multilink communication techniques consistent with the principles of the invention.
  • Implementations consistent with the principles of the invention may centralize the control and shaping of multilink traffic. Centralized control may be performed without requiring backpressure from egress interfaces used to make multilink data available to a destination device. Implementations may facilitate bundling substantially any number and/or type of physical links into a multilink path.
  • Incoming traffic may be categorized according to priorities by applying one or more quality of service (QoS) policies thereto. Portions of the prioritized incoming traffic may be fragmented by dividing data units into smaller pieces. Other portions of the prioritized incoming traffic may remain intact and may be referred to as non-fragmented traffic.
  • The application of centralized multilink processing and/or control may ensure that QoS policies and/or traffic shaping are performed in a manner that prevents driving the physical links that make up a bundle beyond the respective bandwidth capacities of those physical links. As a result, dropped multilink fragments may be reduced and/or eliminated. In addition, if a physical link becomes disabled, the centralized multilink processing and/or control may adaptively provide a portion of the multilink traffic to another physical link so that an aggregate multilink bandwidth associated with a bundle is not adversely impacted.
  • FIG. 1 illustrates an exemplary system adapted to implement multilink communications consistent with the principles of the invention.
  • The system of FIG. 1 may include private network 102, customer network device 104 (hereinafter customer device 104), physical links 106A-D, provider network device 108 (hereinafter provider device 108), and public network 110.
  • Private network 102 may include any network capable of transporting a data unit. For example, private network 102 may be a local area network (LAN), metropolitan area network (MAN), and/or a wide area network (WAN), such as a LAN associated with a corporation, university campus, hospital, and/or a government facility.
  • "Data unit," as used herein, refers to any unit of data that is capable of being transported across a network. Data units may include packet data and/or non-packet data. As such, a data unit is not limited to any particular type of network architecture and/or network protocol.
  • Customer device 104 may include any device capable of receiving a data unit and/or making the data unit available to an interface, or port, such as an egress interface. Customer device 104 may include a data transfer device, such as, for example, a router, switch, server, and/or firewall, and may be implemented in a standalone configuration and/or a distributed configuration.
  • Customer device 104 may receive data units at an ingress interface and may make the data units available to an egress interface having a number of physical links 106A-D associated therewith. For example, customer device 104 may operate on incoming traffic received via a single link. The received traffic may be operated on to, for example, fragment the traffic. Fragmenting may refer to dividing a large data unit into smaller pieces, referred to as fragments.
  • Customer device 104 may make the fragments available to a number of physical links 106A-D in conjunction with multilink networking protocols. For example, customer device 104 may employ multilink protocols, such as the multilink point-to-point protocol (PPP) (RFC 1990), the multi-class extension to multi-link PPP (RFC 2686), the frame relay fragmentation implementation agreement (FRF 12), and/or the multilink frame-relay user-to-network interface (UNI)/network-to-network interface (NNI) implementation agreement (FRF 16).
  • Physical links 106A-D may include any device, technique, and/or structure capable of conveying a data unit from a source location to a destination location. Links 106A-D may include optical fibers, conductors, and/or free-space links, such as optical and/or radio frequency (RF) links. Physical links 106A-D may be associated into a virtual group that is herein referred to as bundle 106. Physical links 106A-D may be bi-directional and may carry traffic from customer device 104 to provider device 108 and/or from provider device 108 to customer device 104. While shown as direct links between customer device 104 and provider device 108, links 106A-D may be virtual links carried, for example, over a network.
  • Bundle 106 may be used to aggregate the bandwidth of a number of physical links into a single bundle of bandwidth. For example, if each physical link 106A-D is capable of carrying 1 Mbit/sec of traffic, bundle 106 may provide 4 Mbit/sec to a destination device, such as provider device 108. The traffic rate provided by bundle 106 may be on the order of the sum of the bandwidths of the physical links 106A-D making up bundle 106. Bundle 106 may be formed from substantially any number and/or type of physical links. Customer device 104 may employ multilink protocols to fragment incoming traffic and may sequence the fragmented traffic across the physical links 106A-D making up bundle 106.
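  • As a rough illustration of the bundle arithmetic described above, the following Python sketch (all names hypothetical, not part of the patent) sums per-link bandwidths into an aggregate bundle capacity:

        # Hypothetical sketch: aggregate bandwidth of a bundle of physical links.
        from dataclasses import dataclass

        @dataclass
        class PhysicalLink:
            name: str
            bandwidth_bps: int  # per-link capacity, e.g., 1 Mbit/sec

        def bundle_capacity(links):
            """A bundle's rate is on the order of the sum of its member links."""
            return sum(link.bandwidth_bps for link in links)

        links = [PhysicalLink("106" + c, 1_000_000) for c in "ABCD"]
        print(bundle_capacity(links))  # 4000000, i.e., 4 x 1 Mbit/sec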
  • Provider device 108 may include any device capable of receiving multilink traffic. Provider device 108 may include a data transfer device, such as a router, switch, gateway, server, and/or firewall. Provider device 108 may receive multilink traffic via substantially any number and/or type of physical links 106A-D and may reassemble the multilink traffic into a format adapted for transmission on a single link. Provider device 108 may be associated with a service provider and may operate to make customer data available to public network 110. Provider device 108 may also make multilink traffic available to customer device 104 via physical links 106A-D.
  • Public network 110 may include any network capable of carrying a data unit from a source to a destination. Public network 110 may employ one or more network protocols and may transport data via hardwired links and/or wireless links. For example, public network 110 may include a WAN, such as the Internet, a switched network, such as the public switched telephone network (PSTN), or the like.
  • FIG. 2 illustrates a functional block diagram that may be used to implement multilink communication techniques in a network device, such as customer device 104, consistent with the principles of the invention.
  • The functional block diagram of FIG. 2 may include an interconnect 210, a network interface 220, a control unit 230, a memory 240, a fragmenting engine 250, a sequencer 260, and a shaper 270.
  • One component may incorporate functionality associated with another one or more of the components. For example, control unit 230 may be configured to include the functionality of fragmenting engine 250, sequencer 260, and/or shaper 270.
  • Implementations of customer device 104 may be deployed in, for example, a board based configuration where the boards are retained in slots associated with a chassis. Furthermore, the components of FIG. 2 may be implemented in hardware and/or software consistent with the principles of the invention.
  • Interconnect 210 may include one or more communication paths that permit communication among the components of customer device 104.
  • Network interface 220 may include any device capable of receiving a data unit from a network and/or making a data unit available to a network. Network interface 220 may include an ingress port to receive data units from a network and/or an egress port to make data units available to a network. The egress port may operate to make fragmented data units available to a number of physical links 106A-D, possibly operating as a bundle 106.
  • Control unit 230 may include any type of processor or microprocessor, and may interpret and execute instructions. Control unit 230 may be implemented in a standalone configuration and/or in a distributed configuration, such as in a parallel processing implementation. Control unit 230 may operate to provide centralized control to the components of customer device 104 to facilitate efficient communication with a destination device via bundle 106. Control unit 230 may be implemented as an application specific integrated circuit (ASIC) configured to control operation of customer device 104. Control unit 230 may use QoS policies to determine which data units should be fragmented and/or which data units should be sent on a particular physical link 106A-D.
  • Memory 240 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by control unit 230. Memory 240 may also be used to store temporary variables or other intermediate information during execution of instructions by control unit 230. Memory 240 may be used for storing information, such as QoS policies, fragmentation lists, parameters used by shaper 270, and/or parameters associated with physical links 106A-D.
  • Memory 240 may include one or more queues that may be used to facilitate multilink communications. For example, memory 240 may include one or more ingress queues for use on traffic received from private network 102, such as a strict-high priority, a high priority, and/or a low priority queue for use with incoming traffic.
  • Memory 240 may also include one or more egress queues. For example, memory 240 may operate a first queue adapted to hold fragmented traffic and a second queue adapted to hold non-fragmented traffic prior to transmission to provider device 108 via bundle 106. Memory 240 may also operate queues associated with respective ones of physical links 106A-D.
  • Memory 240 may operate in cooperation with data storage devices, such as a magnetic disk or optical disk and its corresponding drive, and/or some other type of magnetic or optical recording medium and its corresponding drive, for storing information and/or instructions.
  • Fragmenting engine 250 may include any device and/or technique capable of fragmenting an incoming data unit into a number of pieces for transmission via bundle 106. For example, fragmenting engine 250 may receive incoming traffic that is to be fragmented and may split an incoming data unit into a number of fragments. Fragmenting engine 250 may operate with a fragmenting list, or index, to maintain information about fragments. Fragmenting engine 250 may operate in conjunction with one or more QoS policies when fragmenting data units to ensure that traffic is handled according to predetermined criteria with respect to priorities. Fragmenting engine 250 may receive data from an input queue and may make the fragmented data available to sequencer 260 and/or one or more output queues.
  • Sequencer 260 may include any device and/or technique for sequencing fragmented traffic and/or non-fragmented traffic. For example, sequencer 260 may operate on delay sensitive traffic, such as voice and/or video data, that has not been fragmented, as well as on lower priority traffic, such as text data, that has been fragmented. Sequencer 260 may interleave data units, such as packets, of delay sensitive traffic with fragments of lower priority data units to efficiently use individual link bandwidths and/or bundle bandwidth.
  • Sequencer 260 may operate in conjunction with one or more QoS policies to perform load balancing across physical links 106A-D. For example, sequencer 260 and/or a QoS policy may perform link based, bundle based, hash based, and/or byte-wise load balancing across physical links 106A-D. Sequencer 260 may encapsulate data to facilitate error detection and/or transmission of sequenced traffic to a destination device, such as provider device 108.
  • Sequencer 260 may receive non-fragmented traffic, such as voice, directly from a device implementing QoS policies on incoming traffic, and/or may receive fragmented traffic from fragmenting engine 250. Sequencer 260 may assign consecutive sequence numbers to fragments associated with fragmented traffic. Sequencer 260 may interleave non-fragmented data units with fragments to facilitate timely delivery of non-fragmented data units. Sequencer 260 may also encapsulate fragments with multilink headers. Sequencer 260 may choose a physical link 106A-D for each fragment and/or non-fragmented data unit in a manner that maintains link utilizations and/or bundle utilizations at a determined rate.
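  • The sequencing behavior described above can be pictured with a short Python sketch. This is a minimal illustration under stated assumptions, not the patent's implementation; the class and method names are invented:

        # Hypothetical sketch: tag fragments with consecutive sequence numbers and
        # interleave non-fragmented (delay-sensitive) data units among them.
        import itertools

        class Sequencer:
            def __init__(self):
                self._seq = itertools.count()  # consecutive sequence numbers

            def sequence(self, fragments, non_fragmented):
                flow = []
                frag_iter = iter(fragments)
                for unit in non_fragmented:
                    flow.append(("intact", unit))  # e.g., a voice packet, sent whole
                    frag = next(frag_iter, None)
                    if frag is not None:
                        # a multilink header would carry this sequence number
                        flow.append((next(self._seq), frag))
                flow.extend((next(self._seq), f) for f in frag_iter)
                return flow

        flow = Sequencer().sequence(["f0", "f1", "f2"], ["voice0"])
        # [('intact', 'voice0'), (0, 'f0'), (1, 'f1'), (2, 'f2')]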
  • Sequencer 260 may be configured to compress fragments and/or non-fragmented data units to reduce bandwidth demands. Compression may reduce the bandwidth required to send a particular stream of multilink traffic as compared to the size of the stream if compression were not employed. Sequencer 260 may employ tokens and/or other counting devices to inform control unit 230 that compression has freed up additional bandwidth that may be used for subsequent traffic. For example, a bandwidth reduction value may be associated with an amount of bandwidth freed up through the use of compression techniques, and tokens may be used to represent the bandwidth reduction value. The tokens may be sent from sequencer 260 to control unit 230 for use in managing incoming traffic.
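  • The token scheme might be sketched as follows; the byte-per-token granularity is an assumption made purely for illustration:

        # Hypothetical sketch: convert bandwidth freed by compression into tokens
        # that sequencer 260 could report to control unit 230.
        def compression_tokens(original_len: int, compressed_len: int,
                               bytes_per_token: int = 64) -> int:
            saved = max(0, original_len - compressed_len)  # bandwidth reduction value
            return saved // bytes_per_token

        print(compression_tokens(1500, 900))  # 600 bytes saved -> 9 tokens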
  • Sequencer 260 may operate in conjunction with one or more output queues that may be used to make non-fragmented data units and/or fragments available to bundle 106. For example, sequencer 260 may operate with a first queue for holding non-fragmented data units and a second queue for holding data units that have been fragmented.
  • Shaper 270 may include any device and/or technique for shaping a traffic flow. Shaper 270 may operate in conjunction with sequencer 260 and/or control unit 230 to shape fragmented and/or non-fragmented data units that are made available to bundle 106. Shaper 270 may be configured to shape outgoing traffic in a manner that does not exceed a bundle bandwidth. Shaper 270 may be adapted via operator and/or system inputs to facilitate adjustment of an output bandwidth in order to minimize and/or eliminate dropped data units and/or fragments.
  • Settings associated with shaper 270 may be determined via network measurements, modeling, and/or rules of thumb, such as estimates. For example, test data may be sent from customer device 104 to provider device 108 via bundle 106. Provider device 108 may determine that certain fragments of the test data were dropped (i.e., were not received at provider device 108) and may make dropped fragment determinations available to customer device 104 and/or an operator associated therewith. Shaper 270 may then be adjusted based on the dropped fragment measurements to eliminate dropped fragments. In certain situations, it may be desirable to set shaper 270 so that less than one hundred percent of the bundle bandwidth is used in order to prevent dropped fragments. Shaper 270 may monitor queues, such as egress queues, and may shape traffic based on the monitoring. Shaper 270 may take into account both the size of incoming traffic and processing overhead, such as multilink headers and/or bits associated with bit stuffing, when shaping traffic.
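  • One plausible realization of such a shaper is a token bucket rated at a configurable fraction of the bundle bandwidth, charging each data unit for header overhead as well as payload. The sketch below is an assumption-laden illustration, not the patented design; the utilization and overhead parameters are invented:

        # Hypothetical sketch: token-bucket shaper held below 100% of bundle bandwidth.
        import time

        class Shaper:
            def __init__(self, bundle_bps: float, utilization: float = 0.95,
                         overhead_bytes: int = 8):
                self.rate = bundle_bps * utilization / 8.0  # bytes/sec, deliberately < 100%
                self.overhead = overhead_bytes              # e.g., multilink header bytes
                self.tokens = 0.0
                self.last = time.monotonic()

            def try_send(self, payload_len: int) -> bool:
                now = time.monotonic()
                self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
                self.last = now
                cost = payload_len + self.overhead          # count processing overhead too
                if self.tokens >= cost:
                    self.tokens -= cost
                    return True
                return False                                # hold the unit rather than drop it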
  • Customer device 104 may implement the functions described below in response to control unit 230 executing software instructions contained in a computer-readable medium, such as memory 240. A computer-readable medium may be defined as one or more memory devices and/or carrier waves. Hardwired circuitry may be used in place of, or in combination with, software instructions to implement features consistent with the principles of the invention. Thus, implementations consistent with the principles of the invention are not limited to any specific combination of hardware circuitry and software.
  • FIG. 3 illustrates an exemplary configuration of a network device that may be configured to perform multilink communication via card based components supported in a chassis consistent with the principles of the invention. Card based implementations of customer device 104 may facilitate scaling via the addition or removal of cards from customer device 104.
  • Customer device 104 (FIG. 3) may include an ingress card 302, a multilink card 304, a first egress card 306, and a second egress card 308.
  • Ingress card 302 may include any device and/or component capable of receiving data units from a network. Ingress card 302 may include functionality associated with network interface 220 to provide incoming data units to multilink card 304. Incoming data units may be associated with, for example, strict-high priority traffic, such as voice and/or video data, high priority traffic, such as time sensitive data, and/or low priority traffic, such as text data. Ingress card 302 may be configured to decrypt and/or decompress incoming traffic as needed.
  • Multilink card 304 may include any device and/or component capable of facilitating multilink communication in a network. Multilink card 304 may include one or more control units 230 adapted to perform operations, such as performing per bundle QoS, multilink fragmentation and/or reassembly, estimating traffic in an egress queue associated with an egress link, and/or scheduling of fragmented and/or non-fragmented traffic using a per bundle scheduling map.
  • Centralizing control on multilink card 304 may allow ingress card 302, first egress card 306, and/or second egress card 308 to be less sophisticated and/or costly than if control unit functionality were distributed onto ingress card 302 and/or egress cards 306, 308. QoS policies, fragmentation, and/or sequencing may be performed in a centralized location via multilink card 304 as opposed to performing these functions locally on other cards, such as egress cards 306, 308.
  • Multilink card 304 may perform QoS shaping by, for example, associating a first portion of multilink traffic with a first link and a second portion of multilink traffic with a second link using a per bundle scheduling map. QoS policies and/or traffic shaping may take into account the number of egress links, the type of egress links, and/or the throughput capabilities associated with egress queues and/or links, individually and/or as a bundle.
  • Egress links may have one or more queues associated therewith, each of which may have a corresponding transmit rate, buffer size, and/or priority. For example, a QoS policy may operate to associate a first priority with a first portion of multilink traffic and a second priority with a second portion of multilink traffic. The first portion of the multilink traffic may be associated with a first egress link in accordance with a first bandwidth capability of the first egress link, and the second portion may be associated with a second egress link in accordance with a second bandwidth capability associated with the second link. The QoS policy may implement per link, per byte, and/or per bundle load balancing via the association of priorities to portions of the incoming traffic.
  • QoS policies and/or traffic shaping may use a transmit rate to dynamically adjust the operation of a device to match an available bundle capacity. If the available bundle capacity is insufficient, rates, such as absolute rates, may be scaled down until adequate bundle capacity is available. Implementations may use a scale-down ratio to facilitate bandwidth adjustments. The scale-down ratio may be computed as the available bundle bandwidth divided by the sum of all absolute rates.
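  • Using the definition above, the scale-down computation reduces to a few lines of Python; the rates and capacity below are invented for illustration:

        # Hypothetical sketch: scale absolute rates down to fit available bundle capacity.
        def scale_rates(absolute_rates, available_bundle_bw):
            total = sum(absolute_rates)
            if total <= available_bundle_bw:
                return list(absolute_rates)        # enough capacity; no scaling needed
            ratio = available_bundle_bw / total    # scale-down ratio
            return [r * ratio for r in absolute_rates]

        # 3 Mbit/sec of demand over a 2 Mbit/sec bundle scales each rate by 2/3.
        print(scale_rates([1_000_000, 2_000_000], 2_000_000))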
  • Multilink card 304 may apply QoS as a weighted round robin to egress queues associated with a bundle. Multilink card 304 may apply QoS at the bundle level to avoid overrunning queues associated with physical links 106A-D. For example, a queue associated with strict-high priority traffic may be emptied before traffic in queues associated with lower priority traffic is emptied. QoS may also cause traffic associated with a particular flow to be routed to a single physical link 106A, B, C, or D.
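  • A bundle-level scheduler of that kind might look like the following sketch, where the queue names and service order are illustrative assumptions rather than the patent's design:

        # Hypothetical sketch: strict-high queue drains first; lower priorities after.
        from collections import deque

        def dequeue_next(queues):
            if queues["strict-high"]:
                return queues["strict-high"].popleft()
            for name in ("high", "low"):           # stand-in for a weighted round robin
                if queues[name]:
                    return queues[name].popleft()
            return None

        queues = {"strict-high": deque(["voice0"]),
                  "high": deque(),
                  "low": deque(["data-fragment0"])}
        print(dequeue_next(queues))  # 'voice0' leaves before any lower priority traffic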
  • A centralized controller, such as multilink card 304, may allow first and second egress cards 306, 308 to be managed in concert to ensure efficient use of bundle bandwidth. For example, multilink card 304 may perform byte-wise load balancing for physical links 106A-D. Byte-wise load balancing may determine which link has the least amount of traffic queued, and that link may be selected to receive a next byte.
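  • Byte-wise selection, as described, amounts to a minimum over per-link queue depths; a minimal sketch with invented queue depths:

        # Hypothetical sketch: byte-wise balancing picks the link with fewest queued bytes.
        def least_queued_link(queued_bytes):
            return min(queued_bytes, key=queued_bytes.get)

        queued = {"106A": 1200, "106B": 300, "106C": 900, "106D": 700}
        print(least_queued_link(queued))  # '106B'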
  • Multilink card 304 may also perform hash based load balancing by computing a hash value that may be based, for example, on a source address, a destination address, and/or one or more multi-protocol label switching (MPLS) labels. The resulting hash value may be used to select an active physical link 106A-D within bundle 106.
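  • Hash based selection could be sketched as below; the flow-key fields follow the description, while the particular hash function and link list are assumptions made for illustration:

        # Hypothetical sketch: hash a flow key onto the active links of bundle 106 so
        # that a given flow stays pinned to a single physical link.
        import zlib

        def pick_link_by_hash(src, dst, mpls_labels, active_links):
            key = f"{src}|{dst}|{mpls_labels}".encode()
            return active_links[zlib.crc32(key) % len(active_links)]

        print(pick_link_by_hash("10.0.0.1", "10.0.0.2", (16001,),
                                ["106A", "106C", "106D"]))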
  • Implementations employing load balancing may operate in a manner that prevents egress queues, such as those associated with physical links 106A-D, from being overrun.
  • Multilink card 304 may adaptively reconfigure first egress card 306 and/or second egress card 308 based on bandwidth needs and/or equipment availability. For example, assume that physical link 106B becomes disabled. Multilink card 304 may reallocate traffic from an egress queue associated with physical link 106B, on first egress card 306, to a new link 106E, on second egress card 308. New link 106E may carry traffic previously carried by physical link 106B so that bundle bandwidth is not adversely impacted.
  • In some implementations, multilink card 304 may not receive feedback, such as backpressure, from first egress card 306 and/or second egress card 308 regarding the status of egress queues associated with physical links 106A-D. Therefore, multilink card 304 may use estimates and/or known values for egress queue throughputs associated with physical links 106A-D so that egress queues are not overrun. Multilink card 304 may apply estimates at a bundle level if desired.
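  • Without backpressure, the multilink card can only track what it has sent and assume a drain rate for each egress queue. A minimal sketch of such an estimator follows, with invented parameters and names:

        # Hypothetical sketch: estimate egress queue occupancy with no feedback, using
        # a known/estimated drain rate and buffer depth.
        import time

        class EgressQueueEstimate:
            def __init__(self, drain_bps: float, depth_bytes: int):
                self.drain = drain_bps / 8.0   # bytes/sec the link is assumed to empty
                self.depth = depth_bytes       # assumed egress buffer size
                self.level = 0.0
                self.last = time.monotonic()

            def would_overrun(self, length: int) -> bool:
                now = time.monotonic()
                self.level = max(0.0, self.level - (now - self.last) * self.drain)
                self.last = now
                return self.level + length > self.depth

            def record_send(self, length: int) -> None:
                self.level += length           # bytes handed to the egress queue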
  • First egress card 306 and second egress card 308 may include any device and/or component capable of making a data unit available to another device and/or network. Egress cards 306, 308 may include functionality associated with network interface 220 and may receive fragmented data units and non-fragmented data units from multilink card 304. Egress cards 306, 308 may include at least one egress queue associated with each physical link 106A-D used in bundle 106. Egress queues may be configured to allow higher priority data units to be inserted ahead of lower priority data units and/or data unit fragments. Egress queues may be collectively associated as a bundled queue.
  • FIG. 4 illustrates an exemplary method for implementing multilink communication techniques consistent with the principles of the invention.
  • Incoming traffic may include voice traffic and data traffic. Voice traffic may include delay-sensitive traffic that cannot tolerate transmission delays in going from customer device 104 to provider device 108, while data traffic may be able to tolerate such delays. Customer device 104 may be configured to operate on voice traffic in a manner that prevents it from incurring undesirable delays when being conveyed from customer device 104 to provider device 108.
  • The incoming traffic may be prioritized. For example, voice traffic may be given a first priority, such as a strict-high priority, and data traffic may be given a priority lower than the first priority. Strict-high priority may identify traffic that cannot tolerate delays, while the lower priority associated with data traffic may indicate that data traffic can tolerate delays.
  • One or more QoS policies may be applied to the incoming traffic to assign priorities to the incoming traffic (act 402). For example, a QoS policy may dictate that voice traffic should be transferred from customer device 104 to provider device 108 in a manner that prevents the voice traffic from being delayed. The applied QoS policy may further dictate that data traffic can be handled in a way that facilitates transfer from customer device 104 to provider device 108 using techniques that can lead to some delay, such as by breaking the data traffic into smaller pieces before transferring it from customer device 104 to provider device 108.
  • A QoS policy may take into account aspects of customer device 104 and/or aspects of provider device 108 when applying policies to queued traffic. For example, the QoS policy may take into account the bandwidth of egress links coupling customer device 104 to provider device 108. Egress links coupling customer device 104 and provider device 108 may be accounted for individually or as a logical group, such as a bundle. For example, the QoS policy may take into account a first throughput rate associated with a first egress link and a second throughput rate associated with a second egress link.
  • A QoS policy that is applied to the voice traffic and/or data traffic in conjunction with egress links that are treated as a bundle may be referred to as a per-bundle QoS policy. A per-bundle QoS policy may refer to an aggregate throughput for the egress links included in the bundle. Treating egress links as a bundle may facilitate more efficient operation of customer device 104, since substantially all egress links associated with customer device 104 may be run at their maximum bandwidths to achieve an aggregate throughput associated with the bundle.
  • Incoming traffic may be queued according to the one or more applied QoS policies (act 404). For example, voice traffic and data traffic may be associated with an input queue where the voice and data traffic are arranged according to the assigned priorities. The QoS policy may cause voice traffic to be arranged in the queue so as to exit the queue before some or all of the data traffic.
  • Queuing of traffic and the application of QoS policies to incoming traffic may operate to designate incoming traffic into two groups. For example, voice traffic may be designated as traffic that should not be fragmented (e.g., traffic that should be transmitted intact), while data traffic may be designated as traffic that can be fragmented (e.g., traffic that can be divided into smaller units).
  • The data traffic may be fragmented (act 406). For example, fragmenting engine 250 may divide data traffic into fragments before making the fragments available to another device, such as sequencer 260. Fragments may facilitate load balancing by providing small pieces of data that can be spread across egress links in a manner that provides for more controlled bandwidth management than can be accomplished via entire data units, such as would be the case if data traffic were not fragmented.
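  • Fragmentation itself is simple to sketch; the maximum fragment size below is an arbitrary illustrative value, not one specified by the patent:

        # Hypothetical sketch: divide a data unit into fragments of bounded size.
        def fragment(data_unit: bytes, max_fragment: int = 128):
            return [data_unit[i:i + max_fragment]
                    for i in range(0, len(data_unit), max_fragment)]

        print([len(f) for f in fragment(b"x" * 300)])  # [128, 128, 44]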
  • The fragmented traffic and the non-fragmented traffic may be sequenced (act 408). For example, sequence numbers may be associated with each fragment via sequencer 260. Sequencer 260 may sequence fragmented traffic and/or non-fragmented traffic by interleaving the two; for example, sequencer 260 may interleave the non-fragmented voice traffic with fragments formed from the data traffic.
  • The sequencing operation may allow interleaved traffic to be spread across multiple egress links, such as physical links 106A-D, in a manner that facilitates efficient use of the individual bandwidths associated with the egress links as well as a bandwidth associated with a bundle formed by the virtual grouping of the egress links. For example, non-fragmented voice traffic may be interleaved with fragments of data traffic so as to ensure that voice traffic is not delayed when being transferred from customer device 104 to provider device 108. Sequencer 260 may assign sequence numbers to the traffic, and the sequence numbers may facilitate reassembly of the traffic.
  • The sequenced traffic may be shaped according to centralized QoS policies implemented by control unit 230 (act 410). For example, interleaved traffic may be shaped so as not to exceed individual link bandwidths and/or a bundle bandwidth. An egress link may have an egress queue associated therewith, and interleaved traffic may be shaped so as not to exceed a throughput capability of the egress queue. As a result, an egress queue may be able to operate without providing feedback, or backpressure, to control unit 230.
  • Shaped traffic may be provided to an egress device, such as first egress card 306, in conjunction with making multilink traffic available to a destination, such as provider device 108 (act 412).
  • The egress device may include a number of egress queues that may be associated with a number of egress links, such as physical links 106A-D. The egress device may associate interleaved traffic, received from sequencer 260, with one or more egress queues based on criteria from the QoS policies. For example, QoS policies associated with voice traffic may dictate that other types of lower priority traffic, such as data traffic fragments, be sent after the voice traffic so as not to delay the voice traffic.
  • Shaped traffic may be made available to egress queues in a manner that facilitates maintaining QoS priorities for traffic residing in the queues. For example, voice traffic may be provided to an egress queue in a manner that makes it leave the queue prior to a fragment of data traffic, and fragments may be placed in the egress queue in a manner that causes them to exit the queue after voice traffic. In certain implementations, an egress queue may not be capable of providing feedback, or backpressure, to control unit 230 regarding a status of the egress queue. Such implementations may include, for example, implementations where control unit 230 is on a first card and the egress queue is on a second card, where both cards operate in a device, such as customer device 104.
  • Control unit 230 may employ techniques for performing load balancing across egress links without requiring feedback information from egress queues and/or egress links, such as the status of a current fill rate associated with an egress queue. Control unit 230 may apply load balancing techniques at the link level and/or at a bundle level. Load balancing techniques may include balancing traffic according to increments of traffic, such as by packet, byte, or fragment; increments used for balancing traffic can be substantially any size. For example, byte-wise load balancing may be employed to shape traffic associated with an egress queue. Shaped traffic may be made available to a destination, such as provider device 108, as multilink traffic via a number of egress links, such as physical links 106A-D.

Abstract

A method for performing multilink communications may include applying a quality-of-service (QoS) policy to incoming traffic, where the QoS policy operates to identify a first portion and a second portion of the incoming traffic. The method may include fragmenting the first portion of the incoming traffic into a group of fragments. The method may include sequencing the group of fragments and the second portion of the incoming traffic into a sequenced flow, where the sequencing causes the second portion to be interleaved among the group of fragments so that the sequenced flow can be made available to a first link and a second link as multilink traffic, where the first link carries a first portion of the multilink traffic and the second link carries a second portion of the multilink traffic.

Description

    FIELD OF THE INVENTION
  • Implementations consistent with the principles of the invention relate generally to communication networks and, more particularly, to shaping traffic across multiple links without requiring backpressure from an egress interface.
  • BACKGROUND OF THE INVENTION
  • Network devices, such as routers, may be configured to distribute incoming traffic received at an ingress interface, or port, across multiple output links associated with an egress interface. Incoming traffic may be distributed across multiple links in order to achieve an output bandwidth that is greater than the bandwidth associated with any one of the multiple links. Multilink networking protocols may be used to facilitate distributing incoming traffic across multiple output links and to facilitate reassembling multilink traffic received at a destination device.
  • Existing multilink implementations may associate the control of output links with a board, such as an egress interface board, in a manner that does not allow reconfiguration of output links. As a result, network devices may be configured to operate with particular multilink implementations. For example, an egress interface board in a network device may be hardwired with four T1 links that can be used for multilink transmissions. In this example, if one T1 link becomes disabled, the egress interface board may have to operate at a reduced throughput using three T1 links. Reconfiguration of links may be discouraged because a controller on the egress interface board may be aware of only those links physically associated with that board.
  • Existing implementations may not provide flexibility in selecting links for use in multilink transmission implementations. The lack of flexibility in reconfiguring and/or selecting links for use in multilink implementations may prevent communication networks from operating efficiently.
  • SUMMARY OF THE INVENTION
  • In accordance with an implementation, a network device adapted to facilitate multilink communications using multilink traffic is provided. The network device may include a control unit adapted to apply a quality-of-service (QoS) policy to incoming traffic, where the QoS policy associates a first priority with a first portion of incoming traffic and a second priority with a second portion of the incoming traffic. The control unit may be adapted to fragment the second portion of the incoming traffic to produce a group of fragments, and sequence the second portion of the incoming traffic with the first portion of the incoming traffic to produce sequenced traffic. The control unit may be adapted to make a first portion of the sequenced traffic available as a first portion of the multilink traffic, and make a second portion of the sequenced traffic available as a second portion of the multilink traffic.
  • In accordance with another implementation, a method for performing multilink communications may be provided. The method may include applying a quality-of-service (QoS) policy to incoming traffic, where the QoS policy operates to identify a first portion and a second portion of the incoming traffic. The method may include fragmenting the first portion of the incoming traffic into a group of fragments. The method may include sequencing the group of fragments and the second portion of the incoming traffic into a sequenced flow, where the sequencing causes the second portion to be interleaved among the group of fragments so that the sequenced flow can be made available to a first link and a second link as multilink traffic, where the first link carries a first portion of the multilink traffic and the second link carries a second portion of the multilink traffic.
  • In accordance with yet another implementation, a system to provide multilink communications is provided. The system may include means for receiving incoming traffic from a network. The system may include means for applying a quality-of-service (QoS) policy to the incoming traffic, where applying the QoS policy takes into account a group of egress links associated as a bundle. The system may include means for fragmenting and sequencing a portion of the incoming traffic to produce a sequenced flow that includes a group of fragments. The system may include means for shaping the sequenced flow in association with the QoS policy to discourage overrunning at least one of a group of egress queues associated with the group of egress links.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings,
  • FIG. 1 illustrates an exemplary system adapted to implement multilink communications consistent with the principles of the invention;
  • FIG. 2 illustrates a functional block diagram that may be used to implement multilink communication techniques in a network device, such as customer device 104, consistent with the principles of the invention;
  • FIG. 3 illustrates an exemplary configuration of a network device that may be configured to perform multilink communication via card based components supported in a chassis consistent with the principles of the invention; and
  • FIG. 4 illustrates an exemplary method for implementing multilink communication techniques consistent with the principles of the invention.
  • DETAILED DESCRIPTION
  • The following detailed description of implementations consistent with the principles of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and their equivalents.
  • Implementations consistent with the principles of the invention may centralize the control and shaping of multilink traffic. Centralized control may be performed without requiring backpressure from egress interfaces used to make multilink data available to a destination device. Implementations may facilitate bundling substantially any number and/or type of physical links into a multilink path. Incoming traffic may be categorized according to priorities by applying one or more quality of service (QoS) policies thereto. Portions of the prioritized incoming traffic may be fragmented by dividing data units into smaller pieces. Other portions of the prioritized incoming traffic may remain intact and may be referred to as non-fragmented traffic.
  • The application of centralized multilink processing and/or control may ensure that QoS policies and/or traffic shaping are performed in a manner that prevents driving physical links, that make up a bundle, beyond the respective bandwidth capacities of those physical links. As a result, dropped multilink fragments may be reduced and/or eliminated. In addition, if a physical link becomes disabled, the centralized multilink processing and/or control may adaptively provide a portion of the multilink traffic to another physical link so that an aggregate multilink bandwidth associated with a bundle is not adversely impacted.
  • Exemplary System
  • FIG. 1 illustrates an exemplary system adapted to implement multilink communications consistent with the principles of the invention. The system of FIG. 1 may include private network 102, customer network device 104 (hereinafter customer device 104), physical links 106A-D, provider network device 108 (hereinafter provider device 108), and public network 110.
  • Private network 102 may include any network capable of transporting a data unit. For example, private network 102 may be a local area network (LAN), metropolitan area network (MAN) and/or a wide area network (WAN), such as a LAN associated with a corporation, university campus, hospital, and/or a government facility. “Data unit,” as used herein, refers to any unit of data that is capable of being transported across a network. Data units may include packet data and/or non-packet data. As such, a data unit is not limited to any particular type of network architecture and/or network protocol.
  • Customer device 104 may include any device capable of receiving a data unit and/or making the data unit available to an interface, or port, such as an egress interface. Customer device 104 may include a data transfer device, such as, for example, a router, switch, server, and/or firewall, and may be implemented in a standalone configuration and/or a distributed configuration. Customer device 104 may receive data units at an ingress interface and may make the data units available to an egress interface having a number of physical links 106A-D associated therewith. For example, customer device 104 may operate on incoming traffic received via a single link. The received traffic may be operated on to, for example, fragment the traffic. Fragmenting may refer to dividing a large data unit into smaller pieces, referred to as fragments. Customer device 104 may make the fragments available to a number of physical links 106A-D in conjunction with multilink networking protocols. For example, customer device 104 may employ multilink protocols, such as the multilink point-to-point protocol (PPP) (RFC 1990), the multi-class extension to multi-link PPP (RFC 2686), the frame relay fragmentation implementation agreement (FRF 12), and/or the multilink frame-relay user-to-network interface (UNI)/network-to-network interface (NNI) implementation agreement (FRF 16).
  • Physical links 106A-D may include any device, technique and/or structure capable of conveying a data unit from a source location to a destination location. Links 106A-D may include optical fibers, conductors, and/or free-space links such as optical and/or radio frequency (RF) links. Physical links 106A-D may be associated into a virtual group that is herein referred to as bundle 106. Physical links 106A-D may be bi-directional and may carry traffic from customer device 104 to provider device 108 and/or may carry traffic from provider device 108 to customer device 104. While shown as direct links between customer device 104 and provider device 108, links 106A-D may be virtual links carried, for example, over a network.
  • Bundle 106 may be used to aggregate data associated with a number of physical links into a single bundle of bandwidth. For example, if each physical link 106A-D is capable of carrying 1 Mbit/sec of traffic, bundle 106 may provide 4 Mbit/sec to a destination device, such as provider device 108. The traffic rate provided by bundle 106 may be on the order of the sum of the bandwidths of the physical links 106A-D making up bundle 106. Bundle 106 may be formed from substantially any number and/or type of physical links. Customer device 104 may employ multilink protocols to fragment incoming traffic. Customer device 104 may sequence fragmented traffic across physical links 106A-D making up bundle 106.
  • Provider device 108 may include any device capable of receiving multilink traffic. Provider device 108 may include a data transfer device, such as a router, switch, gateway, server and/or firewall. Provider device 108 may receive multilink traffic via substantially any number and/or type of physical links 106A-D. Provider device 108 may reassemble multilink traffic received via a number of physical links 106A-D into a format adapted for transmission on a single link. Provider device 108 may be associated with a service provider and may operate to make customer data available to a public network 110. Provider device 108 may also make multilink traffic available to customer device 104 via physical links 106A-D.
  • Public network 110 may include any network capable of carrying a data unit from a source to a destination. Public network 110 may employ one or more network protocols and may transport data via hardwired links and/or wireless links. For example, public network 110 may include a WAN, such as the Internet, a switched network, such as the public switched telephone network (PSTN), or the like.
  • Exemplary Functional Diagram
  • FIG. 2 illustrates a functional block diagram that may be used to implement multilink communication techniques in a network device, such as customer device 104, consistent with the principles of the invention. The functional block diagram of FIG. 2 may include an interconnect 210, a network interface 220, a control unit 230, a memory 240, a fragmenting engine 250, a sequencer 260, and a shaper 270.
  • The functional block diagram of FIG. 2 illustrates discrete components performing operations described below. It may be possible for one of the components to incorporate functionality associated with another one or more of the components. For example, control unit 230 may be configured to include the functionality of fragmenting engine 250, sequencer 260 and/or shaper 270. Implementations of customer device 104 may be deployed in, for example, a board based configuration where the boards are retained in slots associated with a chassis. Furthermore, the components of FIG. 2 may be implemented in hardware and/or software consistent with the principles of the invention.
  • Interconnect 210 may include one or more communication paths that permit communication among the components of customer device 104.
  • Network interface 220 may include any device capable of receiving a data unit from a network and/or making a data unit available to a network. Network interface 220 may include an ingress port to receive data units from a network and/or an egress port to make data units available to a network. The egress port may operate to make fragmented data units available to a number of physical links 106A-D, possibly operating as a bundle 106.
  • Control unit 230 may include any type of processor or microprocessor, and may interpret and execute instructions. Control unit 230 may be implemented in a standalone configuration and/or in a distributed configuration, such as in a parallel processing implementation. Control unit 230 may operate to provide centralized control to the components of customer device 104 to facilitate efficient communication with a destination device via bundle 106. Control unit 230 may be implemented as an application specific integrated circuit (ASIC) configured to control operation of customer device 104. Control unit 230 may use QoS policies to determine which data units should be fragmented and/or which data units should be sent on a particular physical link 106A-D.
  • Memory 240 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by control unit 230. Memory 240 may also be used to store temporary variables or other intermediate information during execution of instructions by control unit 230. Memory 240 may be used for storing information, such as QoS policies, fragmentation lists, parameters used by shaper 270, and/or parameters associated with physical links 106A-D.
  • Memory 240 may include one or more queues that may be used to facilitate multilink communications. For example, memory 240 may include one or more ingress queues for use on traffic received from private network 102. For example, memory 240 may include a strict high priority, a high priority, and/or a low priority queue for use with incoming traffic. Memory 240 may also include one or more egress queues. For example, memory 240 may operate a first queue adapted to hold fragmented traffic and a second queue adapted to hold non-fragmented traffic prior to transmission to provider device 108 via bundle 106. Memory 240 may also operate queues associated with respective ones of physical links 106A-D. Memory 240 may operate in cooperation with data storage devices, such as a magnetic disk or optical disk and its corresponding drive and/or some other type of magnetic or optical recording medium and its corresponding drive for storing information and/or instructions.
  • Fragmenting engine 250 may include any device and/or technique capable of fragmenting an incoming data unit into a number of pieces for transmission via bundle 106. For example, fragmenting engine 250 may receive incoming traffic that is to be fragmented. Fragmenting engine 250 may split an incoming data unit into a number of fragments. Fragmenting engine 250 may operate with a fragmenting list, or index, to maintain information about fragments. Fragmenting engine 250 may operate in conjunction with one or more QoS policies when fragmenting data units to ensure that traffic is handled according to predetermined criteria with respect to priorities. Fragmenting engine 250 may receive data from an input queue and may make the fragmented data available to sequencer 260 and/or one or more output queues.
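  • A minimal sketch of the fragmentation step described above is shown below; the fixed fragment size and the shape of the fragment records are assumptions made for illustration, as the specification does not prescribe them.

```python
def fragment(data_unit: bytes, max_frag: int = 512):
    """Split an incoming data unit into fragments and record each
    fragment's position, as a fragmenting list/index might.

    max_frag (512 bytes) is an assumed fragment size.
    """
    fragments = []
    for offset in range(0, len(data_unit), max_frag):
        fragments.append({
            "index": offset // max_frag,                  # position in the fragment list
            "payload": data_unit[offset:offset + max_frag],
            "last": offset + max_frag >= len(data_unit),  # final-fragment flag
        })
    return fragments
```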
  • Sequencer 260 may include any device and/or technique for sequencing fragmented traffic and/or non-fragmented traffic. For example, sequencer 260 may operate on delay sensitive traffic, such as voice and/or video data, that has not been fragmented as well as operating on lower priority traffic, such as text data, that has been fragmented. Sequencer 260 may interleave data units, such as packets, of delay sensitive traffic with fragments of lower priority data units to efficiently use individual link bandwidths and/or bundle bandwidth. Sequencer 260 may operate in conjunction with one or more QoS policies to perform load balancing across physical links 106A-D. For example, sequencer 260 and/or a QoS policy may perform link based, bundle based, hash based, and/or byte-wise load balancing across physical links 106A-D. Sequencer 260 may encapsulate data to facilitate error detection and/or transmission of sequenced traffic to a destination device, such as provider device 108.
  • Sequencer 260 may receive non-fragmented traffic, such as voice, directly from a device implementing QoS policies on incoming traffic and/or sequencer 260 may receive fragmented traffic from fragmenting engine 250. Sequencer 260 may assign consecutive sequence numbers to fragments associated with fragmented traffic. Sequencer 260 may interleave non-fragmented data units with fragments to facilitate timely delivery of non-fragmented data units. Sequencer 260 may also encapsulate fragments with multilink headers. Sequencer 260 may choose a physical link 106A-D for each fragment and/or non-fragmented data unit in a manner that maintains link utilizations and/or bundle utilizations at a determined rate.
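  • A non-normative sketch of the sequencing operation follows. It assigns consecutive sequence numbers, wraps each unit in a multilink-style header (the begin/end flags loosely follow multilink PPP conventions and are an assumption here), and alternates delay-sensitive units with fragments produced by the sketch above; the simple alternating policy is illustrative, since the actual interleaving is driven by QoS policies.

```python
import itertools

_seq = itertools.count()  # consecutive sequence numbers for the bundle

def encapsulate(payload: bytes, begin: bool, end: bool) -> dict:
    """Wrap a payload in a multilink-style header (layout assumed)."""
    return {"seq": next(_seq), "begin": begin, "end": end, "payload": payload}

def sequence(non_fragmented, fragments):
    """Interleave whole delay-sensitive units among fragments so that
    delay-sensitive traffic is not held behind fragmented traffic."""
    out, voice, frags = [], list(non_fragmented), list(fragments)
    while voice or frags:
        if voice:  # delay-sensitive unit goes out first, intact
            out.append(encapsulate(voice.pop(0), begin=True, end=True))
        if frags:  # then one fragment of lower priority traffic
            frag = frags.pop(0)
            out.append(encapsulate(frag["payload"],
                                   begin=(frag["index"] == 0),
                                   end=frag["last"]))
    return out
```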
  • Sequencer 260 may be configured to compress fragments and/or non-fragmented data units to reduce bandwidth demands. Compression may reduce the bandwidth required to send a particular stream of multilink traffic as compared to the size of the stream if compression were not employed. Sequencer 260 may employ tokens and/or other counting devices to inform control unit 230 that compression has freed up additional bandwidth that may be used for subsequent traffic. For example, a bandwidth reduction value may be associated with an amount of bandwidth freed up through the use of compression techniques, and tokens may be used to represent the bandwidth reduction value. The tokens may be sent from sequencer 260 to control unit 230 for use in managing incoming traffic.
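  • The token accounting described above might be sketched as follows; zlib stands in for whatever compressor the device actually employs, and the function name is hypothetical.

```python
import zlib

def compress_and_count(payload: bytes):
    """Compress a fragment and compute its bandwidth reduction value.

    The bytes freed by compression are returned as tokens that the
    sequencer could report to the control unit, which may then admit
    an equivalent amount of additional incoming traffic.
    """
    compressed = zlib.compress(payload)
    tokens = max(0, len(payload) - len(compressed))  # bandwidth reduction value
    return compressed, tokens
```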
  • Sequencer 260 may operate in conjunction with one or more output queues that may be used to make non-fragmented data units and/or fragments available to bundle 106. For example, sequencer 260 may operate with a first queue for holding non-fragmented data units and a second queue for holding data units that have been fragmented.
  • Shaper 270 may include any device and/or technique for shaping a traffic flow. Shaper 270 may operate in conjunction with sequencer 260 and/or control unit 230 to shape fragmented and/or non-fragmented data units that are made available to bundle 106. Shaper 270 may be configured to shape outgoing traffic in a manner that does not exceed a bundle bandwidth. Shaper 270 may be adapted via an operator input and/or system inputs to facilitate adjustment of an output bandwidth in order to minimize and/or eliminate dropped data units and/or fragments.
  • Settings associated with shaper 270 may be determined via network measurements, modeling, and/or rules of thumb, such as estimates. For example, test data may be sent from customer device 104 to provider device 108 via bundle 106. Provider device 108 may determine that certain fragments of test data were dropped (i.e., were not received at provider device 108). Provider device 108 may make dropped-fragment determinations available to customer device 104 and/or an operator associated therewith. Shaper 270 may then be adjusted based on the dropped-fragment measurements to eliminate subsequent drops. In certain situations, it may be desirable to set shaper 270 so that less than one hundred percent of the bundle bandwidth is used in order to prevent dropped fragments. Shaper 270 may monitor queues, such as egress queues, and may shape traffic based on the monitoring. Shaper 270 may take into account both the size of incoming traffic and processing overhead, such as multilink headers and/or bits associated with bit stuffing, when shaping traffic.
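  • One conventional way to realize such shaping is a token bucket, sketched below; the specification does not mandate this mechanism, and the 95% headroom factor and the per-unit header overhead are assumptions chosen for illustration.

```python
import time

class BundleShaper:
    """Token-bucket sketch of a shaper that stays below the bundle
    bandwidth and charges per-unit multilink header overhead."""

    def __init__(self, bundle_bps: float, headroom: float = 0.95,
                 overhead_bytes: int = 8):
        self.rate = bundle_bps * headroom / 8.0  # drain rate in bytes/s
        self.overhead = overhead_bytes           # assumed header cost per unit
        self.tokens = 0.0
        self.last = time.monotonic()

    def try_send(self, size: int) -> bool:
        now = time.monotonic()
        # Refill, capping the burst at one second's worth of tokens.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        cost = size + self.overhead              # payload plus processing overhead
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                             # hold the unit; do not drop it
```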
  • Customer device 104 may implement the functions described below in response to control unit 230 executing software instructions contained in a computer-readable medium, such as memory 240. A computer-readable medium may be defined as one or more memory devices and/or carrier waves. In alternative embodiments, hardwired circuitry may be used in place of or in combination with software instructions to implement features consistent with the principles of the invention. Thus, implementations consistent with the principles of the invention are not limited to any specific combination of hardware circuitry and software.
  • Exemplary Implementation
  • FIG. 3 illustrates an exemplary configuration of a network device that may be configured to perform multilink communication via card based components supported in a chassis consistent with the principles of the invention. Card based implementations of customer device 104 may facilitate scaling via the addition or removal of cards from customer device 104. For example, physical interface cards (PICs) may be adapted for use in customer device 104 to provide multilink capabilities. Customer device 104 (FIG. 3) may include an ingress card 302, a multilink card 304, a first egress card 306, and a second egress card 308.
  • Ingress card 302 may include any device and/or component capable of receiving data units from a network. Ingress card 302 may include functionality associated with network interface 220 to provide incoming data units to multilink card 304. Incoming data units may be associated with, for example, strict-high priority traffic, such as voice and/or video data, high priority traffic, such as time sensitive data, and/or low priority traffic, such as text data. Ingress card 302 may be configured to decrypt and/or decompress incoming traffic as needed.
  • Multilink card 304 may include any device and/or component capable of facilitating multilink communication in a network. Multilink card 304 may include one or more control units 230 adapted to perform operations, such as performing per bundle QoS, multilink fragmentation and/or reassembly, estimating traffic in an egress queue associated with an egress link, and/or scheduling of fragmented and/or non-fragmented traffic using a per bundle scheduling map.
  • The use of multilink card 304 may allow ingress card 302, first egress card 306, and/or second egress card 308 to be less sophisticated and/or less costly than if control unit functionality were distributed onto ingress card 302 and/or egress cards 306, 308. For example, QoS policies, fragmentation, and/or sequencing may be performed in a centralized location via multilink card 304, as opposed to performing these functions locally on other cards, such as egress cards 306, 308. Multilink card 304 may perform QoS shaping by, for example, associating a first portion of multilink traffic with a first link and a second portion of multilink traffic with a second link using a per bundle scheduling map.
  • QoS policies and/or traffic shaping may take into account the number of egress links, the type of egress links, and/or the throughput capabilities associated with egress queues and/or links individually and/or as a bundle. Egress links may have one or more queues associated therewith that may have a corresponding transmit rate, buffer size, and/or a priority associated therewith. For example, a QoS policy may operate to associate a first priority with a first portion of multilink traffic and a second priority with a second portion of multilink traffic. The first portion of multilink traffic may be associated with a first egress link in accordance with a first bandwidth capability of the first egress link and the second portion may be associated with a second egress link in accordance with a second bandwidth capability associated with the second link. The QoS policy may implement per link, per byte and/or per bundle load balancing via the association of priorities to portions of the incoming traffic.
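  • As one hypothetical illustration of associating traffic portions with links according to their bandwidth capabilities, the sketch below apportions a traffic volume across egress links in proportion to each link's rate; the function name, link identifiers, and rates are assumptions.

```python
def split_by_capacity(traffic_bytes: int, link_bw: dict) -> dict:
    """Apportion traffic across egress links in proportion to each
    link's bandwidth capability (floor division leaves a small
    remainder unassigned; a real scheduler would account for it).

    link_bw example: {"106A": 1_544_000, "106B": 1_544_000} in bits/s.
    """
    bundle_bw = sum(link_bw.values())  # bundle bandwidth as a logical group
    return {link: traffic_bytes * bw // bundle_bw
            for link, bw in link_bw.items()}
```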
  • QoS policies and/or traffic shaping may use a transmit rate to dynamically adjust the operation of a device to match an available bundle capacity. If the available bundle capacity is insufficient, rates, such as absolute rates, may be scaled down until adequate bundle capacity is available. Implementations may use a scale-down ratio to facilitate bandwidth adjustments. The scale-down ratio may be computed as the available bundle bandwidth divided by the sum of all absolute rates.
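  • The stated ratio translates directly into the following sketch; the names are hypothetical.

```python
def scale_down(absolute_rates, available_bundle_bw):
    """Scale absolute rates when the bundle cannot carry their sum.

    The scale-down ratio is the available bundle bandwidth divided by
    the sum of all absolute rates, applied only when oversubscribed.
    """
    total = sum(absolute_rates)
    if total <= available_bundle_bw:
        return list(absolute_rates)         # adequate capacity; no scaling
    ratio = available_bundle_bw / total     # the scale-down ratio
    return [rate * ratio for rate in absolute_rates]
```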
  • Multilink card 304 may apply QoS as a weighted round robin to egress queues associated with a bundle. Multilink card 304 may apply QoS at the bundle level to avoid overrunning queues associated with physical links 106A-D. For example, a queue associated with strict-high priority traffic may be emptied before queues associated with lower priority traffic are emptied. QoS may also cause traffic associated with a particular flow to be routed to a single physical link 106A, B, C, or D.
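  • A minimal sketch of such a scheduling pass appears below; the strict-high class drains completely before the weighted round robin visits the remaining classes. The class names and weights are illustrative assumptions.

```python
from collections import deque

def schedule_round(queues: dict, weights: dict) -> list:
    """One scheduling pass: drain strict-high first, then visit the
    lower classes in weighted round-robin fashion.

    queues:  {"strict-high": deque, "high": deque, "low": deque}
    weights: dequeues per round for non-strict classes, e.g. {"high": 3, "low": 1}
    """
    out = []
    while queues["strict-high"]:            # strict-high empties first
        out.append(queues["strict-high"].popleft())
    for cls in ("high", "low"):             # then weighted round robin
        for _ in range(weights.get(cls, 1)):
            if queues[cls]:
                out.append(queues[cls].popleft())
    return out

# Example: strict-high drains fully, then high gets 3 visits to low's 1.
q = {"strict-high": deque(["v1"]), "high": deque(["d1", "d2"]), "low": deque(["d3"])}
print(schedule_round(q, {"high": 3, "low": 1}))  # ['v1', 'd1', 'd2', 'd3']
```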
  • A centralized controller, such as multilink card 304, may allow first and second egress cards 306, 308 to be managed in concert to ensure efficient use of bundle bandwidth. For example, multilink card 304 may perform byte-wise load balancing for physical links 106A-D. Byte-wise load balancing may determine which link has the least amount of traffic queued, and that link may be selected to receive a next byte. Multilink card 304 may also perform hash based load balancing by computing a hash value that may be based, for example, on a source address, a destination address, and/or one or more multi-protocol label switching (MPLS) labels. The resulting hash value may be used to select an active physical link 106A-D within bundle 106. Implementations employing load balancing may operate in a manner that prevents egress queues, such as those associated with physical links 106A-D, from being overrun.
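  • The two balancing techniques described above might be sketched as follows; CRC32 stands in for an unspecified hash function, and the queued-byte counts are the control unit's own estimates rather than feedback from the egress cards.

```python
import zlib

def pick_link_bytewise(queued_bytes: dict) -> str:
    """Byte-wise balancing: select the link with the least traffic queued.

    queued_bytes example: {"106A": 1200, "106B": 800, "106C": 950, "106D": 700}
    """
    return min(queued_bytes, key=queued_bytes.get)

def pick_link_hashed(links: list, src: str, dst: str, mpls_labels=()) -> str:
    """Hash-based balancing: a flow keyed by source address, destination
    address, and MPLS labels always maps to the same active link."""
    key = f"{src}|{dst}|{mpls_labels}".encode()
    return links[zlib.crc32(key) % len(links)]
```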
  • Multilink card 304 may adaptively reconfigure first egress card 306 and/or second egress card 308 based on bandwidth needs and/or equipment availability. For example, assume that physical link 106B becomes disabled. Multilink card 304 may reallocate traffic from an egress queue associated with physical link 106B, on first egress card 306, to a new link 106E, on second egress card 308. New link 106E may carry traffic previously carried by physical link 106B so that bundle bandwidth is not adversely impacted.
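  • Such a failover might amount to little more than remapping a queue, as in the sketch below; the function and argument names are hypothetical.

```python
from collections import deque

def fail_over(per_link: dict, failed: str, replacement: str) -> None:
    """Remap queued traffic from a disabled link (e.g., "106B") onto a
    newly allocated link (e.g., "106E") so bundle bandwidth is preserved."""
    per_link.setdefault(replacement, deque())
    per_link[replacement].extend(per_link.pop(failed, deque()))
```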
  • In one implementation, multilink card 304 may not receive feedback, such as backpressure, from first egress card 306 and/or second egress card 308 regarding the status of egress queues associated with physical links 106A-D. Therefore, multilink card 304 may use estimates and/or known values for egress queue throughputs associated with physical links 106A-D so that egress queues are not overrun. Multilink card 304 may apply estimates at a bundle level if desired.
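  • Operating without backpressure implies that the multilink card must model egress-queue occupancy itself. A sketch of one such estimator, which drains a modeled queue at the link's known (or estimated) rate, appears below; all names and the fixed queue depth are assumptions.

```python
import time

class QueueEstimator:
    """Estimate an egress queue's occupancy without feedback, using a
    known or estimated drain rate for the associated physical link."""

    def __init__(self, drain_bps: float, depth_bytes: int):
        self.drain = drain_bps / 8.0  # bytes/s the egress link drains
        self.depth = depth_bytes      # assumed queue capacity
        self.fill = 0.0
        self.last = time.monotonic()

    def would_overrun(self, size: int) -> bool:
        now = time.monotonic()
        # Model the queue draining at the known link rate since last check.
        self.fill = max(0.0, self.fill - (now - self.last) * self.drain)
        self.last = now
        return self.fill + size > self.depth

    def record_send(self, size: int) -> None:
        self.fill += size             # traffic handed to the egress card blindly
```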
  • First egress card 306 and second egress card 308 may include any device and/or component capable of making a data unit available to another device and/or network. Egress cards 306, 308 may include functionality associated with network interface 220 and may receive fragmented data units and non-fragmented data units from multilink card 304. Egress cards 306, 308 may include at least one egress queue associated with each physical link 106A-D used in bundle 106. Egress queues may be configured to allow higher priority data units to be inserted ahead of lower priority data units and/or data unit fragments. Egress queues may be collectively associated as a bundled queue.
  • FIG. 4 illustrates an exemplary method for implementing multilink communication techniques consistent with the principles of the invention. By way of example, incoming traffic may include voice traffic and data traffic. Voice traffic may include delay-sensitive traffic that cannot tolerate transmission delays in going from customer device 104 to provider device 108. In contrast, data traffic may be able to tolerate transmission delays in going from customer device 104 to provider device 108. Customer device 104 may be configured to operate on voice traffic in a manner that prevents it from incurring undesirable delays when being conveyed from customer device 104 to provider device 108.
  • When incoming traffic is received at customer device 104, the incoming traffic may be prioritized. For example, voice traffic may be given a first priority, such as a strict-high priority, and data traffic may be given a priority lower than the first priority. Strict-high priority may identify traffic that cannot tolerate delays, while the lower priority associated with data traffic may indicate that data traffic can tolerate delays.
  • One or more QoS policies may be applied to the incoming traffic to assign priorities to the incoming traffic (act 402). For example, a QoS policy may dictate that voice traffic should be transferred from customer device 104 to provider device 108 in a manner that prevents the voice traffic from being delayed. The applied QoS policy may further dictate that data traffic can be handled in a way that facilitates transfer from customer device 104 to provider device 108 using techniques that can lead to some delay, such as by breaking data traffic into smaller pieces before transferring the data traffic from customer device 104 to provider device 108.
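  • By way of illustration, such a policy might be reduced to a classification rule like the sketch below; the traffic-type field and the class names are assumptions made for this example.

```python
def classify(data_unit: dict) -> str:
    """Assign a priority class per an example QoS policy; the "type"
    field and the class names are hypothetical."""
    kind = data_unit.get("type")
    if kind in ("voice", "video"):
        return "strict-high"   # delay-sensitive; transmit intact, undelayed
    if kind == "time-sensitive":
        return "high"
    return "low"               # tolerates delay; eligible for fragmentation
```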
  • A QoS policy may take into account aspects of customer device 104 and/or aspects of provider device 108 when applying policies to queued traffic. For example, the QoS policy may take into account the bandwidth of egress links coupling customer device 104 to provider device 108. Egress links coupling customer device 104 and provider device 108 may be accounted for individually or as a logical group, such as a bundle. For example, the QoS policy may take into account a first throughput rate associated with a first egress link and a second throughput rate associated with a second egress link. A QoS policy that is applied to the voice traffic and/or data traffic in conjunction with egress links that are treated as a bundle may be referred to as a per-bundle QoS policy. A per-bundle QoS policy may be based on an aggregate throughput for the egress links included in the bundle. Treating egress links as a bundle may facilitate more efficient operation of customer device 104, since substantially all egress links associated with customer device 104 may be run at their maximum bandwidths to achieve an aggregate throughput associated with the bundle.
  • Incoming traffic may be queued according to the one or more applied QoS policies (act 404). For example, voice traffic and data traffic may be associated with an input queue where the voice and data traffic are arranged in the queue according to the assigned priorities. The QoS policy may cause voice traffic to be arranged in the queue so as to exit the queue before some or all of the data traffic.
  • Queuing of traffic and the application of QoS policies to incoming traffic may operate to designate incoming traffic into two groups. For example, voice traffic may be designated as traffic that should not be fragmented (e.g., traffic that should be transmitted intact), while data traffic may be designated as traffic that can be fragmented (e.g., traffic that can be divided into smaller units).
  • Traffic that can be fragmented may be fragmented into smaller units, referred to as fragments (act 406). For example, fragmenting engine 250 may divide data traffic into fragments before making them available to another device, such as sequencer 260. Fragments may facilitate load balancing by providing small pieces of data that can be spread across egress links in a manner that provides for more controlled bandwidth management than can be accomplished via entire data units, such as would be the case if data traffic were not fragmented.
  • The fragmented traffic and the non-fragmented traffic may be sequenced (act 408). For example, sequence numbers may be associated with each fragment via sequencer 260. Sequencer 260 may sequence fragmented traffic and/or non-fragmented traffic by interleaving the fragmented traffic and non-fragmented traffic. For example, sequencer 260 may interleave the non-fragmented voice traffic with fragments formed from data traffic. The sequencing operation may allow interleaved traffic to be spread across multiple egress links, such as physical links 106A-D, in a manner that facilitates efficient use of the individual bandwidths associated with the egress links as well as a bandwidth associated with a bundle formed by the virtual grouping of the egress links. Also, non-fragmented voice traffic may be interleaved with fragments of data traffic so as to ensure that voice traffic is not delayed when being transferred from customer device 104 to provider device 108.
  • To facilitate the tracking of fragments in the fragmented traffic, sequencer 260 may assign sequence numbers to the traffic. The sequence numbers may facilitate the reassembly of the traffic. The sequenced traffic may be shaped according to centralized QoS policies implemented by control unit 230 (act 410). For example, interleaved traffic may be shaped so as not to exceed individual link bandwidths and/or a bundle bandwidth. As a further example, an egress link may have an egress queue associated therewith, and interleaved traffic may be shaped so as not to exceed a throughput capability of the egress queue. As a result, an egress queue may be able to operate without providing feedback, or backpressure, to control unit 230.
  • Shaped traffic may be provided to an egress device, such as first egress card 306, in conjunction with making multilink traffic available to a destination, such as provider device 108 (act 412). The egress device may include a number of egress queues that may be associated with a number of egress links, such as physical links 106A-D. The egress device may associate interleaved traffic, received from sequencer 260, with one or more egress queues based on criteria from the QoS policies. For example, QoS policies associated with voice traffic may dictate that other types of lower priority traffic, such as data traffic fragments, be sent after the voice traffic so as not to delay voice traffic.
  • Shaped traffic may be made available to egress queues in a manner that facilitates maintaining QoS priorities for traffic residing in the queues. For example, voice traffic may be provided to an egress queue in a manner that makes it leave the queue prior to a fragment of data traffic. In addition, fragments may be placed in the egress queue in a manner that causes them to exit the queue after voice traffic. In certain implementations, an egress queue may not be capable of providing feedback, or backpressure, to control unit 230 regarding a status of the egress queue. Implementations where an egress queue may not be capable of providing feedback may include, for example, implementations where control unit 230 is on a first card and the egress queue is on a second card, where both cards may operate in a device, such as customer device 104.
  • Control unit 230 may employ techniques for performing load balancing across egress links without requiring feedback information from egress queues and/or egress links, such as the current fill level of an egress queue. Control unit 230 may apply load balancing techniques at the link level and/or at a bundle level. Load balancing techniques may include balancing traffic according to increments of traffic, such as by packet, byte, fragment, etc. Increments used for balancing traffic can be substantially any size. In one implementation, byte-wise load balancing may be employed to shape traffic associated with an egress queue. Shaped traffic may be made available to a destination, such as provider device 108, as multilink traffic via a number of egress links, such as physical links 106A-D.
  • CONCLUSION
  • The foregoing description of exemplary embodiments of the invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, while a series of acts has been described with respect to FIG. 4, the order of the acts may be varied in other implementations consistent with the invention. Moreover, non-dependent acts may be implemented in parallel.
  • No element, act, instruction, or signal flow used in the description of the application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise.
  • The scope of the invention is defined by the claims and their equivalents.

Claims (26)

1. A network device comprising:
a control unit to:
associate a first priority with a first portion of incoming traffic and associate a second priority, which is different than the first priority, with a second portion of the incoming traffic;
fragment one of the first portion or the second portion of the incoming traffic, based on which of the first portion or the second portion has a higher priority, to produce a plurality of fragments; and
sequentially interleave the fragmented one of the first portion or the second portion of the incoming traffic with a non-fragmented portion of the incoming traffic to produce sequenced traffic.
2-38. (canceled)
39. The network device of claim 1, where when the fragmented one of the first portion or the second portion, of the incoming traffic, is sequentially interleaved with the non-fragmented portion, of the incoming traffic, the control unit further is to:
associate a sequence number with each of the plurality of fragments, and
sequentially interleave the fragmented one of the first portion or the second portion of the incoming traffic based on the sequence numbers.
40. The network device of claim 1, where the control unit is further to:
shape the sequenced traffic to produce shaped traffic.
41. The network device of claim 1, further comprising:
at least one egress interface to output the sequenced traffic from the network device.
42. The network device of claim 41, where the at least one egress interface includes two or more egress interfaces, and the control unit is further to:
divide the sequenced traffic among the two or more egress interfaces.
43. The network device of claim 1, further comprising:
an egress interface, including an egress queue, where the egress interface is to receive the first portion and second portion of the sequenced traffic and provide no information to the control unit regarding a status of the egress queue.
44. The network device of claim 1, where the first priority and the second priority are selected from a group of priorities according to a Quality of Service (QoS) policy.
45. The network device of claim 44, where the group of priorities includes at least the first priority and the second priority, where the first priority is higher than the second priority.
46. The network device of claim 45, where members of the group of priorities are used in conjunction with the QoS policy to determine whether the first portion of the incoming traffic or the second portion of the incoming traffic should be fragmented.
47. The network device of claim 1, where the control unit further is to:
perform load balancing of a first link that is associated with the first portion of the multilink traffic and load balancing of a second link that is associated with the second portion of the multilink traffic.
48. The network device of claim 47, where the load balancing takes into account a transmit rate, a buffer size, or a link priority associated with the first link or the second link.
49. The network device of claim 1, where a first link is associated with the first portion of multilink traffic and a second link is associated with the second portion of multilink traffic and where the control unit further is to:
implement byte-wise load balancing of the first link and the second link.
50. The network device of claim 1, where the control unit further is to:
generate, for the plurality of fragments, a bandwidth reduction value based on a difference between a first length of the plurality of fragments and a second, compressed length of the plurality of fragments, where the bandwidth reduction value is to be used by the control unit to handle subsequent incoming traffic in a manner that uses an additional amount of bandwidth that is substantially equal to the bandwidth reduction value.
51. The network device of claim 40, further comprising:
a first egress queue associated with the first portion of the incoming traffic, where the first egress queue is to receive shaped traffic in a manner that discourages overrunning the first egress queue; and
a second egress queue associated with the second portion of the incoming traffic, where the second egress queue is to receive shaped traffic in a manner that discourages overrunning the second egress queue.
52. The network device of claim 51, where a first link is associated with the first egress queue and a second link is associated with the second egress queue, and where the first link and the second link are logically associated as a bundle having a bundle bandwidth that includes a first bandwidth associated with the first link and a second bandwidth associated with the second link, and where shaping the sequenced traffic takes the bundle bandwidth into account.
53. A method comprising:
associating, using a control unit, a first priority with a first portion of incoming traffic and associating a second priority, which is different than the first priority, with a second portion of the incoming traffic;
fragmenting, using the control unit, one of the first portion or the second portion of the incoming traffic, based on which of the first portion or the second portion has a higher priority, to produce a plurality of fragments; and
sequentially interleaving, using the control unit, the fragmented one of the first portion or the second portion of the incoming traffic with a non-fragmented portion of the incoming traffic to produce sequenced traffic that is to be made available to a first link and a second link, as multilink traffic, where the first link carries a first portion of the multilink traffic and the second link carries a second portion of the multilink traffic.
54. The method of claim 53, where when the fragmented one of the first portion or the second portion, of the incoming traffic, is sequentially interleaved with the non-fragmented portion, of the incoming traffic, the method further comprises:
associating, using the control unit, a sequence number with each of the plurality of fragments, and
sequentially interleaving, using the control unit, the fragmented one of the first portion or the second portion of the incoming traffic based on the sequence numbers.
55. The method of claim 53, further comprising:
shaping, using the control unit, the sequenced traffic based on a bundle bandwidth that is formed by a logical association of a first bandwidth associated with the first link and a second bandwidth associated with the second link.
56. The method of claim 55, where shaping the sequenced traffic further comprises:
shaping, using the control unit, the sequenced traffic so that the sequenced traffic uses less than one hundred percent of the bundle bandwidth.
57. The method of claim 53, further comprising:
outputting, using the control unit, the sequenced traffic from at least one egress interface associated with the network device.
58. The method of claim 57, where the at least one egress interface includes two or more egress interfaces, and outputting the sequenced traffic further comprises:
dividing the sequenced traffic among the at least one egress interface.
59. The method of claim 53, further comprising:
shaping the sequenced traffic to discourage overrunning a first queue associated with the first link or a second queue associated with the second link.
60. The method of claim 53, further comprising:
shaping, using the control unit, the sequenced traffic based on an operator input or a system input, where the operator input or the system input is to configure a shaping rate to discourage dropping one or more of the plurality of fragments.
61. The method of claim 53, further comprising:
shaping, using the control unit, the sequenced flow, where the shaping takes into account a traffic rate and overhead associated with shaping traffic.
62. A system, comprising:
a processor; and
a memory that stores one or more instructions that when executed by the processor, cause the processor to:
receive incoming traffic from a network;
associate a first priority with a first portion of incoming traffic and associate a second priority, which is different than the first priority, with a second portion of the incoming traffic;
fragment the portion of the incoming traffic that has a lower priority level and sequentially interleave the fragmented portion of the incoming traffic with another, non-fragmented, portion of the incoming traffic, having a higher priority level, to produce a sequenced flow; and
shape the sequenced flow to discourage overrunning at least one of a plurality of egress queues associated with sequenced traffic.
US13/029,181 2005-11-16 2011-02-17 Multilink traffic shaping Abandoned US20110134752A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/029,181 US20110134752A1 (en) 2005-11-16 2011-02-17 Multilink traffic shaping

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/274,509 US7911953B1 (en) 2005-11-16 2005-11-16 Multilink traffic shaping
US13/029,181 US20110134752A1 (en) 2005-11-16 2011-02-17 Multilink traffic shaping

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/274,509 Continuation US7911953B1 (en) 2005-11-16 2005-11-16 Multilink traffic shaping

Publications (1)

Publication Number Publication Date
US20110134752A1 true US20110134752A1 (en) 2011-06-09

Family

ID=43741799

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/274,509 Active 2030-01-18 US7911953B1 (en) 2005-11-16 2005-11-16 Multilink traffic shaping
US13/029,181 Abandoned US20110134752A1 (en) 2005-11-16 2011-02-17 Multilink traffic shaping

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/274,509 Active 2030-01-18 US7911953B1 (en) 2005-11-16 2005-11-16 Multilink traffic shaping

Country Status (1)

Country Link
US (2) US7911953B1 (en)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8693308B2 (en) 2006-02-10 2014-04-08 Aviat U.S., Inc. System and method for resilient wireless packet communications
US7756029B2 (en) * 2007-05-24 2010-07-13 Harris Stratex Networks Operating Corporation Dynamic load balancing for layer-2 link aggregation
US8264953B2 (en) 2007-09-06 2012-09-11 Harris Stratex Networks, Inc. Resilient data communications with physical layer link aggregation, extended failure detection and load balancing
US8959245B2 (en) * 2008-11-25 2015-02-17 Broadcom Corporation Multiple pathway session setup to support QoS services
US9491098B1 (en) * 2013-11-18 2016-11-08 Amazon Technologies, Inc. Transparent network multipath utilization through encapsulation
US9509616B1 (en) 2014-11-24 2016-11-29 Amazon Technologies, Inc. Congestion sensitive path-balancing
US10038741B1 (en) 2014-11-24 2018-07-31 Amazon Technologies, Inc. Selective enabling of sequencing for encapsulated network traffic


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5392280A (en) * 1994-04-07 1995-02-21 Mitsubishi Electric Research Laboratories, Inc. Data transmission system and scheduling protocol for connection-oriented packet or cell switching networks
US20050204252A1 (en) * 1999-03-10 2005-09-15 Matsushita Electric Industrial Co., Ltd. Reception apparatus
US6502139B1 (en) * 1999-06-01 2002-12-31 Technion Research And Development Foundation Ltd. System for optimizing video on demand transmission by partitioning video program into multiple segments, decreasing transmission rate for successive segments and repeatedly, simultaneously transmission
US7613110B1 (en) * 2000-05-17 2009-11-03 Cisco Technology, Inc. Combining multilink and IP per-destination load balancing over a multilink bundle
US6778495B1 (en) * 2000-05-17 2004-08-17 Cisco Technology, Inc. Combining multilink and IP per-destination load balancing over a multilink bundle
US6738351B1 (en) * 2000-05-24 2004-05-18 Lucent Technologies Inc. Method and apparatus for congestion control for packet-based networks using voice compression
US7315900B1 (en) * 2001-06-20 2008-01-01 Juniper Networks, Inc. Multi-link routing
US7317730B1 (en) * 2001-10-13 2008-01-08 Greenfield Networks, Inc. Queueing architecture and load balancing for parallel packet processing in communication networks
US20040032875A1 (en) * 2002-08-19 2004-02-19 Bly Keith Michael Bandwidth allocation systems and methods
US20040208120A1 (en) * 2003-01-21 2004-10-21 Kishan Shenoi Multiple transmission bandwidth streams with defferentiated quality of service
US20080155146A1 (en) * 2003-08-15 2008-06-26 Carl Christensen Broadcast Router With Multiple Expansion Capabilities
US20060062224A1 (en) * 2004-09-22 2006-03-23 Amir Levy Link fragment interleaving with fragmentation preceding queuing
US20060165172A1 (en) * 2005-01-21 2006-07-27 Samsung Electronics Co., Ltd. Method for transmitting data without jitter in synchronous Ethernet
US20060239303A1 (en) * 2005-04-26 2006-10-26 Samsung Electronics Co., Ltd Method of performing periodical synchronization for ensuring start of super frame in residential Ethernet system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110312267A1 (en) * 2009-02-02 2011-12-22 Ajou University Industry-Academic Cooperation Foundation Apparatus and method for relaying multiple links in a communication system
US9071994B2 (en) * 2009-02-02 2015-06-30 Ajou University Industry-Academic Cooperation Foundation Apparatus and method for relaying multiple links in a communication system
US20150003466A1 (en) * 2013-06-28 2015-01-01 Broadcom Corporation Enhanced Link Aggregation in a Communications System
US9203770B2 (en) * 2013-06-28 2015-12-01 Broadcom Corporation Enhanced link aggregation in a communications system
US10367723B2 (en) * 2015-03-28 2019-07-30 Huawei Technologies, Co., Ltd. Packet sending method and apparatus based on multi-link aggregation
CN105207947A (en) * 2015-08-28 2015-12-30 网宿科技股份有限公司 Progressive flow scheduling method and system capable of filtering vibration
EP3968578A1 (en) * 2020-09-11 2022-03-16 Deutsche Telekom AG Multipath-capable communication device
US20220086094A1 (en) * 2020-09-11 2022-03-17 Deutsche Telekom Ag Multipath-capable communication device

Also Published As

Publication number Publication date
US7911953B1 (en) 2011-03-22


Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION