WO2010063063A1 - Method and apparatus for network traffic control - Google Patents

Method and apparatus for network traffic control

Info

Publication number
WO2010063063A1
Authority
WO
WIPO (PCT)
Prior art keywords
delay
network
edge
queuing
time
Prior art date
Application number
PCT/AU2009/001546
Other languages
French (fr)
Inventor
Zvi Rosberg
Original Assignee
Commonwealth Scientific And Industrial Research Organisation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2008906199A external-priority patent/AU2008906199A0/en
Application filed by Commonwealth Scientific And Industrial Research Organisation filed Critical Commonwealth Scientific And Industrial Research Organisation
Publication of WO2010063063A1 publication Critical patent/WO2010063063A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/02 - Details
    • H04L 12/14 - Charging, metering or billing arrangements for data wireline or wireless communications
    • H04L 12/1485 - Tariff-related aspects
    • H04L 12/1489 - Tariff-related aspects dependent on congestion
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/25 - Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H04L 69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/28 - Timers or timing mechanisms used in protocols

Definitions

  • a ZQ packet providing data indicative of the source local clock t_s arrives at the destination node d after a one-way propagation delay.
  • the virtual clock of s maintained at d therefore lags the clock of s by one forward propagation delay.
  • regularly updating OFS(d, s) can assist in minimizing offsets due to individual clock drift.
  • Forward queuing time probing and estimation can be established between edge nodes, defined by a source node s and a destination node d.
  • Each source s of a source-destination pair (s, d) with an active flow can maintain a forward queuing delay estimator, Q.
  • the source s can sample values of Q by regularly sending probe packets to the destination d.
  • Probe packets sent to the destination can be used for multiple purposes, one of which is delay estimation.
  • the probe packets are sent as best effort packets, i.e. they are not marked as ZQ packets, and therefore may experience queuing time.
  • each probe packet is time stamped by the source s with its local clock t_s.
  • the destination time stamps the probe with its virtual clock v_s of the relevant source node s.
  • the destination then can respond by sending the corresponding probe packet back to source s .
  • when a source node receives a packet corresponding to a previously sent probe packet n, the source records q_n as the nth sample of Q. This can be expressed by the following equation, where v_s and t_s are the time stamps carried in the fields of the nth probe packet: q_n = v_s - t_s.
  • q_n is the forward queuing time of probe n.
  • the value of Q can be initialized to a very large number and updated as each probe packet arrives. For example, as the source node receives a returned probe packet n, Q can be updated by one of a plurality of estimation procedures based on the samples {q_i : 1 ≤ i ≤ n}. Two example embodiments of estimation procedures are disclosed.
  • a first example embodiment for estimating forward queuing time, involving calculation of the arithmetic average of the last k samples, can be expressed as Q ← (q_{n-k+1} + ... + q_n)/k.
  • a second example embodiment for estimating forward queuing time, involving calculation of a damped average with factor 0 < β < 1, can be expressed as Q ← (1 - β)·Q + β·q_n.
  • Each source node s of a source-destination pair (s, d) can maintain a forward packet loss estimator, L_s.
  • the destination d also maintains a current forward packet loss estimator on behalf of the source s, denoted L̂_s.
  • the last sequence number received from a respective source node s is denoted by n_last.
  • initially, L̂_s and L_s can be set to one.
  • when the destination node d receives a probe packet with sequence number n, it records the next sample (for example the jth sample) of the number of losses observed so far, denoted by l_j.
  • the destination node d can then also update n_last, i.e. n_last ← n.
  • L̂_s can be estimated by one of a plurality of estimation procedures based on the samples {l_j : 1 ≤ j ≤ i}.
  • Two embodiments for estimating forward packet loss are disclosed by way of example only.
  • a first example embodiment for estimating forward packet loss, involving calculation of the arithmetic average of the last k samples, can be expressed as L̂ ← (l_{i-k+1} + ... + l_i)/k.
  • this calculation of the forward packet loss estimate can be implemented on the fly by adding the newest sample to, and removing the oldest sample from, a running sum.
  • a second example embodiment for estimating forward packet loss, involving calculation of a damped average with factor 0 < β < 1, can be expressed as L̂ ← (1 - β)·L̂ + β·l_j.
  • after every update of L̂ at destination node d, the destination writes L̂ into the probe packet and sends the probe back to source s.
  • upon probe reception at node s, the source updates L by setting L̂ into L, i.e. L ← L̂.
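  • By way of illustration only, the probing and estimation procedures above can be sketched in code. The following Python sketch is not the patented implementation; the class and method names, the damping factor value, and the loss-count rule l_j = n - n_last - 1 are assumptions introduced here for clarity.

```python
# Illustrative sketch of the probe-based estimators described above.
# Names, parameter values and the loss-count rule are assumptions.

class SourceEstimators:
    """Kept at source s for a (source, destination) pair."""
    def __init__(self, beta=0.1):
        self.beta = beta            # assumed damping factor, 0 < beta < 1
        self.Q = float("inf")       # forward queuing delay estimate, initialised very large
        self.L = 1.0                # forward packet loss estimate, initialised to one per the text

    def on_probe_return(self, t_s, v_s, l_hat):
        """Called when the probe stamped with t_s returns carrying v_s and the destination's loss estimate."""
        q_n = v_s - t_s                                  # forward queuing time sample
        if self.Q == float("inf"):
            self.Q = q_n                                 # first sample replaces the initial value
        else:
            self.Q = (1 - self.beta) * self.Q + self.beta * q_n   # damped average
        self.L = l_hat                                   # adopt the destination's loss estimate


class DestinationLossEstimator:
    """Kept at destination d on behalf of source s."""
    def __init__(self, beta=0.1):
        self.beta = beta
        self.l_hat = 1.0
        self.n_last = 0             # last probe sequence number seen

    def on_probe(self, n):
        """Record the losses implied by the gap in probe sequence numbers (an assumption)."""
        l_j = max(n - self.n_last - 1, 0)
        self.n_last = n
        self.l_hat = (1 - self.beta) * self.l_hat + self.beta * l_j
        return self.l_hat           # written back into the probe sent to the source
```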
  • the invention may be embodied using devices conforming to other network standards and for other applications, including, for example other WLAN standards and other wireless standards.
  • Applications that can be accommodated include IEEE 802.11 wireless LANs and links, wired and wireless Ethernet.
  • the term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium.
  • the term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
  • the term "wired" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a solid medium.
  • the associated devices are coupled by electrically conductive wires.
  • processor may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory.
  • a "computer” or a “computing device” or a “computing machine” or a “computing platform” may include one or more processors.
  • the methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein.
  • Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included.
  • a typical processing system that includes one or more processors.
  • the processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM.
  • a computer-readable carrier medium may form, or be included in a computer program product.
  • the one or more processors may operate as a standalone device or may be connected, e.g., networked to other processor(s). In a networked deployment, the one or more processors may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer or distributed network environment.
  • the one or more processors may form a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that are for execution on one or more processors.
  • embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium.
  • the computer-readable carrier medium carries computer readable code including a set of instructions that when executed on one or more processors cause a processor or processors to implement a method.
  • aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
  • the present invention may take the form of carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium.
  • the software may further be transmitted or received over a network via a network interface device.
  • while the carrier medium is shown in an example embodiment to be a single medium, the term "carrier medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “carrier medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that cause the one or more processors to perform any one or more of the methodologies of the present invention.
  • a carrier medium may take many forms, including but not limited to, non- volatile media, volatile media, and transmission media.
  • any one of the terms comprises, comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others.
  • the term comprising, when used in the claims should not be interpreted as being limitative to the means or elements or steps listed thereafter.
  • the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B.
  • Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
  • Coupled when used in the claims, should not be interpreted as being limitative to direct connections only.
  • the terms “coupled” and “connected”, along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other.
  • the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means.
  • Coupled may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.

Abstract

A method and apparatus for controlling active network traffic flows between a first edge device and a second edge device, wherein said first device and said second device are coupled via a communication network, the method including the steps of: performing optimisation of an objective function utilising forward queuing time associated with each active flow; and determining a respective flow rate for each said active flow.

Description

METHOD AND APPARATUS FOR NETWORK TRAFFIC CONTROL

FIELD OF THE INVENTION
[0001] The present invention relates to communication networks and in particular to communication network control.
[0002] The invention has been developed primarily for use as a method and apparatus for communication network control based on forward queuing time and forward packet loss probing and will be described hereinafter with reference to this application. However, it will be appreciated that the invention is not limited to this particular field of use.
BACKGROUND OF THE INVENTION
[0003] Any discussion of the prior art throughout the specification should in no way be considered as an admission that such prior art is widely known or forms part of the common general knowledge in the field.
[0004] Rate control of flows in communication networks has been considered in the context of the transmission control protocol (TCP). Previous analytical approaches have advocated extended proportional fair rates, whereby rate control is modelled as an optimization problem, subject to link capacity constraints, using a fluid model. Two special cases of extended proportional fairness are proportional fairness and max-min fairness.
[0005] Present control schemes for best-effort application flows typically aim at achieving fair rates subject to the link capacity constraints, with an aim of avoiding congestion. Moreover, these control schemes are typically either based on round-trip time (RTT) feedback information or on some other feedback information produced by and delivered to the flow sources from one or more core routers along the forward path. RTT-based rate controls can result in unjustified low rates due to congestion on the backward path, even though the forward path is not congested at all. Alternative rate controls based on standard feedback information from the core routers can also result in substantially less than optimal network capacity utilization, due in part to the limited amount of information that can be extracted from standard feedback data produced in the prior art implementations. Controls that use non-standard feedback information from core routers further require non-standard enhancements to be made to core routers, along with non-standard signalling protocols. Such dependency on non-standard enhancements of core routers can make these control schemes less commercially attractive.
[0006] United States Patent Application 11/608,834, by way of example, discloses a method and apparatus for communication network flow control. The approach taken in this disclosure involves a communication network comprising user devices, edge routers, core routers, and access and core links. An example is given for a method of computing and allocating fair transmission rates to user data flows from a plurality of quality of service levels. These fair rates satisfy the minimum transmission rates, the end-to-end delays and the data loss rates required by each flow, and can avoid network congestion. The method comprises an edge router process and a flow control policing unit for each edge router and a core router process for each edge and core router. All processes are executed in a distributed and asynchronous manner, are stable and converge to the desired fair rates.
[0007] Although best-effort flows are typically not associated with stringent delay and packet loss requirements, an RTT larger than 250 milliseconds or a packet loss rate larger than 1% to 3% are generally not acceptable in current network deployments. Current control schemes have no means to guarantee such delay and packet loss requirements for best effort flows except by over-provisioning.
[0008] FIG. 1 shows a typical network infrastructure 100 associated with a wide area network (WAN). The wide area network 110 includes a plurality of core routers 120 for facilitating the transport and switching (routing) of packets across the network. An edge router 130 typically belongs to an end user (enterprise) organization and is located at the edge of a subscriber's network. The edge router provides an interface between the local or metropolitan network (for example 140) and the wide area network 110, such that devices 150 connected to a local or metropolitan network can transmit data onto, and receive data from, the wide area network.
[0009] The preferred embodiments can be implemented by edge network devices and do not rely on the reconfiguration of the core network infrastructure. Such an edge device is typically owned and controlled by enterprises or dedicated application developers that require a substantially guaranteed end-to-end QoS across their systems.
[0010] FIGS. 2A and 2B show two example embodiments of an apparatus used in the implementation of a method for traffic control in a communications network.
[0011] FIG. 2A shows a modified edge router 230 in which the processing element 210 is included in the edge router.
[0012] FIG. 2B shows a network device 235 coupled to an edge router 130. This network device 235 includes a processing element 210 and communication line adaptors 220 and 225, such that network packets can be received from, and transmitted to, the wide area network 110 to pass through the device and can be processed according to a method disclosed herein.
[0013] It will be appreciated that processing of network traffic can occur in a standalone blade or rack-mounted device, or within a network edge router.
[0014] By way of example, a method and apparatus for rate, delay and packet loss control of best effort application flows can be based on forward queuing time and forward packet loss probing and estimation to provide an edge-to-edge quality of service (QoS) solution for communication networks. This control method and apparatus is implemented at the network edge, in either an edge router or a separate network device that externally attaches to an edge router. The control method utilizes only the information gathered in response to probe packets requesting provision for rate, delay and loss control of best effort flows.
[0015] Present control schemes for best-effort application flows are typically based on round-trip time (RTT) feedback information, or on some other feedback information, produced by and delivered to the flow sources from each core router along the forward path. As noted previously, RTT-based rate controls can result in unjustified low rates due to congestion on the backward path even though the forward path is not congested at all.
[0016] Alternative rate control methods that are based on feedback information typically depend on either the limited standard information provided by state of the art core routers or non-standard enhancements made to the core routers.
[0017] It will be appreciated that previous analytical alternatives provide two distributed control algorithms and have advocated extended proportional fairness rates, whereby rate control is modelled as an optimization problem, subject to link capacity constraints, using a fluid model. Two special cases of extended proportional fairness are proportional fairness and max-min fairness (see A. Charny, "An algorithm for rate allocation in a packet-switching network with feedback", M.A. thesis, MIT, Cambridge, MA, 1994; and J. Mo and J. Walrand, "Fair end-to-end window-based congestion control", IEEE/ACM TON, vol. 8, no. 5, pp. 556-567, Oct. 2000).
[0018] One previous rate-based algorithm which achieves proportional fair rates is based on a gradient search for an optimal solution of a primal problem (called the primal algorithm). Another previous rate-based algorithm which achieves proportional fair rates is based on a gradient search for an optimal solution of a dual problem (called the dual algorithm). Yet another previous rate-based algorithm which achieves proportional fair rates combines the primal and the dual algorithms (see D. X. Wei, C. Jin and S. H. Low, "FAST TCP: Motivation, architecture, algorithms, performance", IEEE/ACM TON, vol. 14, no. 6, pp. 1246-1259, Dec. 2006; and Z. Rosberg, "Control Plane for End-to-End QoS Guarantee: A Theory and Its Application", Proc. IWQoS'08, pp. 269-278, June 2008).
[0019] Other previous end-to-end window-based congestion controls (in contrast to the rate-based algorithms above) use packet round-trip-time (RTT) information to achieve either extended fair rates, proportional fair rates or max-min fair rates.
[0020] Window-based controls using RTT information typically assume that, at any time t, the rate of each nth flow, x_n(t), its associated window size, W_n(t), and its associated RTT, RTT_n(t), are related by the following equation:

x_n(t) = W_n(t) / RTT_n(t)

[0021] This relation can be described as a deterministic version of "Little's Theorem" for ergodic queuing systems, and can be referred to as the delay-window assumption. It will be appreciated that this relation can be further relaxed.
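As a quick numerical illustration of the delay-window assumption (the window size and RTT below are hypothetical values, not taken from the specification):

```python
# Delay-window assumption: x_n(t) = W_n(t) / RTT_n(t)
W_n = 64 * 1024 * 8     # window size of 64 KB, expressed in bits
RTT_n = 0.100           # round-trip time of 100 ms, in seconds
x_n = W_n / RTT_n       # implied sending rate in bits per second
print(f"{x_n / 1e6:.2f} Mbit/s")   # about 5.24 Mbit/s
```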
[0022] A control method combining the explicit congestion notification (ECN) marking scheme with adaptive virtual queues has previously been developed, and the stability of the primal and dual control algorithms under arbitrary time lags has also been studied. General sufficient conditions have been established for global stability in terms of the increase/decrease parameters of a congestion control algorithm and the price functions used at network links.
[0023] Although window-based controls use packet delay information, such as RTT, these delays are incorporated into the control method only through the delay-window assumption, which is an average law. More explicit incorporations of link delays have been previously proposed, for example by representing each link delay as a function of its total load, so that the actual link delay trajectories are not left out of the framework. Alternatively, actual link delay trajectories have been included in the framework by using differential equations to specify dynamics for which globally asymptotically stable primal, dual and combined primal-dual controls were proposed. A combined rate and end-to-end delay control aimed at QoS guaranteed flows, which is based on RTT feedback information and requires new components to be included in core routers, has also been proposed.
[0024] Typically, the control schemes for best-effort flows try to achieve fair rates subject only to the link capacity constraints. It has been identified that these control schemes are typically based on round-trip time (RTT) feedback information rather than on forward time (FWT) feedback information. RTT-based rate controls are biased and may not achieve the desired effect since they also depend on the backward time (BWT). For example, the rate of a flow from source to destination can be unjustifiably reduced due to a congested backward path although the forward path is not congested and has no packet loss. Although best-effort flows have no stringent delay and packet loss requirements, it is not acceptable to provide them a service with an RTT larger than 250 milliseconds and a packet loss rate larger than 1% to 3% in the core network.
OBJECT OF THE INVENTION
[0025] It is an object of the present invention to overcome or ameliorate at least one of the disadvantages of the prior art, or to provide a useful alternative.
[0026] It is an object of the invention in its preferred form to provide a method for network traffic control.
SUMMARY OF THE INVENTION
[0027] In accordance with a first aspect of the present invention, there is provided a method for network traffic control between a first edge device and a second edge device, wherein said first device and said second device are coupled via a communication network, said method comprising the steps of: (a) performing optimisation of an objective function utilising the forward queuing time associated with each active flow; and (b) determining a respective flow rate for each said active flow.
[0028] Preferably, the optimisation applies a respective fixed delay price to each respective said active flow. Preferably, the optimisation maximizes Σ_n U_n(x_n)/α_n subject to link capacity constraints, wherein U_n(x_n) is a rate utility function and α_n is a positive delay price.
[0029] Preferably, the determining of a respective flow rate for each said active flow includes the step of solving a discrete time formula of the form:

x_n(t + dt) = x_n(t) + dt × ( U_n'(x_n(t)) - α_n(t)·Q_n(t) ),

[0030] wherein x_n(t) is the flow rate at time t, U_n(x_n) is a rate utility function, α_n is a positive delay price, Q_n(t) is a forward queuing time, and dt is a discrete time tuning parameter defining the update frequency. The forward delay estimation can be used to control the rate of best effort application network traffic flows. Preferably, the method also includes the step of setting an upper bound ᾱ_n for each said delay price. Preferably, the upper bound applied to each said delay price is inversely proportional to the respective forward queuing delay. The method can also comprise the step of adapting each said delay price based on a calculated forward queuing delay and/or a calculated forward packet loss. Preferably, each delay price is adapted based on a discrete time formula of the form:

α_n(t + dt) = α_n(t) + dt/α_n(t), if an excess delay or excess loss event occurs and α_n(t) < ᾱ_n; and α_n(t + dt) = α_n(t) otherwise,

[0031] wherein α_n(t) is a positive delay price at time t, ᾱ_n is an upper bound for each delay price, and dt is a discrete time tuning parameter for defining an update frequency.
[0032] Preferably, the method further comprises the step of substantially synchronising a local clock signal of said first device with a local clock signal of said second edge device. Preferably, the synchronising of a local clock signal includes a step of sending to said second edge device a zero queuing packet comprising a measure indicative of said first device's local time clock, such that receipt of said zero queuing packet at said second edge device is delayed substantially by one forward propagation delay, and wherein said local clock synchronisation incorporates a one forward propagation delay offset.
[0033] Preferably, the method also comprises the step of calculating a measure indicative of forward queuing delay between said first device and said second device. Preferably, the method also comprises the step of continuously calculating an arithmetic or damped average of at least two instances of said measure indicative of forward queuing delay. Preferably, the method also comprises the step of calculating a measure indicative of forward packet loss between said first device and said second device. Preferably, the method also comprises the step of calculating an arithmetic or damped average of at least two instances of said measure indicative of forward packet loss. Preferably, the method also provides rate, delay and packet loss control of best effort application flows based on forward queuing time probing and estimation and on forward packet loss probing and estimation, wherein said method provides an edge-to-edge quality of service solution across said communication network.
[0034] In accordance with a further aspect of the present invention, there is provided a method for controlling active network traffic flows between a first and a second edge device coupled to a communication network, said method comprising a distributed control algorithm selected from the set including: an algorithm based on a gradient search for an optimal solution of a primal problem; and an algorithm based on a gradient search for an optimal solution of a dual problem. Preferably, the method uses forward time (FWT) feedback information and/or forward packet loss such that said method is suitable for best-effort flow control.
[0035] In accordance with a further aspect of the present invention, there is provided a first edge device for controlling active network traffic flows between said first edge device and a second edge device, wherein said first device is coupled to said second device via a communication network, said first edge device comprising: at least one network interface for allowing said first device to communicate with said second edge device via said communication network; and a processor element coupled to said network interface for performing a method for controlling active network traffic flows between said first edge device and said second edge device, wherein said method comprises the steps of: (a) performing optimisation of an objective function utilising forward queuing time associated with each active flow; and (b) determining a respective flow rate for each said active flow.
[0036] Preferably, the device is an edge router of said communication network, or the device is coupled to an edge router of said communication network. The communication network can include a wired network and/or a wireless network.
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] A preferred embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
FIG. 1 shows a typical network infrastructure for a wide area network (WAN);
FIG. 2A shows an example modified edge router in which a processing element is included for implementing a method according to the present invention; FIG. 2B shows an example network device coupled to an edge router, where the network device includes a processing element and communication line adaptors for implementing a method according to the present invention; and
FIG. 3 shows an example flowchart for implementing a method according to the present invention.
PREFERRED EMBODIMENT OF THE INVENTION
[0038] Unlike earlier schemes, the method of the preferred embodiment uses forward time (FWT), rather than RTT, and does not require additional feedback information from the routers. The preferred embodiment can be implemented in an apparatus placed only at the edge network nodes. Furthermore, this control method can be based on FWT and forward packet loss and is suitable for best-effort flow control.
[0039] It will be appreciated that the present invention can be utilised by network equipment vendors, enterprise networked systems, and applications developers. As such, a deployment scenario could include the enterprise market, the applications market (such as video/audio content delivery and real-time game systems) and vendors.
[0040] Referring to Fig. 2, the preferred embodiments, by way of example only, can include a processing element 210 which further comprises one or more from the set including network processors (NP), application-specific integrated circuits (ASIC), and field programmable gate arrays (FPGA). This processing element is further coupled to communication line adaptors 220 and 225.
[0041] The preferred embodiment can be deployed amongst a wired and/or wireless broadband architecture, and can be used to substantially ensure an appropriate responsiveness of critical control traffic, voice and video traffic over wide area communication networks.
[0042] It will be appreciated that an embodiment of the present invention discloses a method to estimate forward delay (or forward time, FWT) that is used to control the rates of best effort application network traffic flows. This method substantially removes the bias associated with backward path congestion. This method does not require feedback from core routers, facilitating a solution that is suitable for deployment in existing communication networks.
[0043] It will be appreciated that an RTT larger than 250 milliseconds and a packet loss rate larger than 1% to 3% are generally not acceptable in current network deployments, even for best effort flows. In an embodiment, flow rate control can substantially guarantee the meeting of such delay and packet loss requirements, without requiring any changes in the core routers or requiring over-provisioning. This can facilitate a network that is more economical and adaptable to traffic changes.
[0044] In an embodiment, as part of an edge-to-edge quality of service (QoS) solution for communication networks, a method and apparatus for rate, delay and packet loss control of best effort flows based on forward queuing time and forward packet loss estimation is disclosed. This control method and apparatus is implemented at the edge network device (such as an edge router or a standalone device), and is facilitated by forward path probing.
[0045] FIG. 3 shows, by way of example only, a flow chart of the steps for controlling active network traffic flows between a first edge device and a second edge device coupled to a communication network, wherein the first edge device performs the method comprising the steps of:
(a) performing optimisation of an objective function 360 utilising forward queuing time associated with each active flow; and (b) determining a respective flow rate 370 for each said active flow.
[0046] In another embodiment, there is provided a series of steps for controlling active network traffic flows between a first edge device and a second edge device coupled to a communication network, wherein the first edge device performs the steps of:
(a) substantially synchronising a local clock signal 310 of a first edge device with a local clock signal of a second edge device;
(b) calculating a measure indicative of forward queuing delay 320 between said first edge device and said second edge device;
(c) calculating a measure indicative of forward packet loss 330 between said first edge device and said second edge device; (d) setting an upper bound for each said delay price 340;
(e) adapting each said delay price 350 based on a calculated forward queuing delay and a calculated forward packet loss;
(f) performing optimisation of an objective function 360 utilising forward queuing time associated with each active flow; and (g) determining a respective flow rate 370 for each said active flow.
[0047] These example methods will be described further in relation to two embodiments associated with:
• rate control with fixed delay prices; and
• combined rate, delay and loss control.
System Model
[0048] A network (e.g. 100 of Fig. 1) can be described as comprising a set of N best effort flows and L links with link capacities (bandwidths) of c = (c_1, ..., c_L)^T. There are possibly higher priority flows in the network with reserved bandwidth. The higher priority flows use at most the bandwidth reserved to them.
[0049] A system model for a network can be developed as follows. Let x(t) = (x_1(t), ..., x_N(t)) be the flow rates at time t; R(n) be the set of links comprising the route of flow n; and T(l) be the set of flows traversing through link l. For every time t, let y_l(t) be the total rate of flows traversing link l at time t. The total rate of flows traversing link l at time t can be expressed by:

y_l(t) = Σ_{n ∈ T(l)} x_n(t)
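By way of illustration only, the aggregate link rate y_l(t) can be computed from the flow rates x_n(t) and the routes R(n) as in the following Python sketch; the flow names, link names and rate values are hypothetical:

```python
# Aggregate rate on each link: y_l(t) = sum of x_n(t) over flows n whose route R(n) uses link l.
rates = {"f1": 2.0e6, "f2": 1.5e6, "f3": 0.5e6}                    # x_n(t) in bit/s (example values)
routes = {"f1": ["l1", "l2"], "f2": ["l2"], "f3": ["l1", "l3"]}    # R(n) as lists of links

y = {}
for flow, links in routes.items():
    for link in links:
        y[link] = y.get(link, 0.0) + rates[flow]

print(y)   # {'l1': 2500000.0, 'l2': 3500000.0, 'l3': 500000.0}
```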
[0050] Each flow is associated with a rate utility function U_n(x_n) and a route delay penalty function L_n(t). It can be considered that: U_n = U_n(x_n) is a differentiable, strictly increasing and strictly concave function; and L_n(t) = x_n Q_n(t), where Q_n(t) is the forward (from source to destination) queuing time of flow n at time t. The forward queuing time Q_n(t) can be expressed mathematically by the following equation:

Q_n(t) = Σ_{l ∈ R(n)} q_l(t)                                    (1)
[0051] In this equation, q_l(t) is the time to clear the packet backlog of flows residing in the buffer of link l at time t. It will be appreciated that queuing delays and buffer occupancies are related by b_l(t) = q_l(t)·c_l. Thus, delay and buffer occupancy controls can be considered equivalent.
[0052] Rate utilities and delay penalties can be combined into a single objective function. By defining positive delay prices α = (α_1, ..., α_N) and specifying a rate control with adapted delay prices α, an objective function can be expressed mathematically as follows:

J_α(x) = Σ_{n=1}^{N} [ U_n(x_n) - α_n Q_n x_n ]                 (2)
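For illustration, equation (2) can be evaluated for a given rate vector as in the sketch below; the logarithmic utility U_n(x_n) = log x_n is only one admissible choice of a differentiable, strictly increasing and strictly concave utility, and the numerical values are hypothetical:

```python
import math

def objective(x, alpha, Q, utility=math.log):
    """Equation (2): J_alpha(x) = sum over n of [ U_n(x_n) - alpha_n * Q_n * x_n ]."""
    return sum(utility(x_n) - a_n * q_n * x_n
               for x_n, a_n, q_n in zip(x, alpha, Q))

# Two flows with hypothetical rates, delay prices and forward queuing times (seconds).
print(objective(x=[5.0, 10.0], alpha=[0.5, 0.2], Q=[0.02, 0.05]))
```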
Rate Control with Fixed Delay Prices
[0053] Rate control can be specified with fixed delay prices by applying a continuous-time gradient projection algorithm for optimizing J_α(x):

dx_n(t)/dt = U_n'(x_n(t)) - α_n(t)·Q_n(t),                      (3)

where α_n(t) = α_n for all t, and Q_n(t) is the forward queuing time given by equation (1).
[0054] A discrete time version of equation (3) can be expressed mathematically as

x_n(t + dt) = x_n(t) + dt × ( U_n'(x_n(t)) - α_n(t)·Q_n(t) ),   (4)

[0055] where dt is a discrete time tuning parameter defining the update frequency.
[0056] Implementing a rate control method at an edge router only, Q(t) is estimated at the flow source edge by applying "forward queuing time probing and estimation" as described below.
[0057] An advantage of using Q_n(t) over other control methods based on route penalties delivered by the core routers is that it does not require core router modification (or enhancement). It will be further appreciated that the present control method (or algorithm) maximizes the following utility function subject to the link capacity constraints:

Σ_n U_n(x_n)/α_n
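A minimal sketch of the discrete-time rate update of equation (4), assuming the logarithmic utility U_n(x) = log x (so U_n'(x) = 1/x); the parameter values, the positivity floor and the helper name are assumptions for illustration, and Q_n would in practice come from the forward queuing time probing described below:

```python
def rate_update(x_n, Q_n, alpha_n, dt=0.05, x_min=1e-6):
    """One step of equation (4): x_n(t+dt) = x_n(t) + dt * (U_n'(x_n(t)) - alpha_n(t) * Q_n(t)).

    Assumes U_n(x) = log(x), so U_n'(x) = 1/x; x_min keeps the rate positive.
    """
    x_next = x_n + dt * (1.0 / x_n - alpha_n * Q_n)
    return max(x_next, x_min)

# With a constant forward queuing time estimate, the rate settles near 1 / (alpha_n * Q_n).
x = 1.0
for _ in range(20000):
    x = rate_update(x, Q_n=0.02, alpha_n=5.0)
print(x)   # approximately 10.0
```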
Combined Rate, Delay and Loss Control
[0058] Best effort flows do not have stringent delay and loss requirements; however, they often prefer a maximum round trip delay (for example 250 milliseconds) and a packet loss of less than a few percent, but not at the expense of a significantly reduced data rate. To this end, the present method can set an upper bound ᾱ_n on the delay price of each flow n.
[0059] An interpretation of ᾱ_n can be expressed as one over the minimum number of bits transmitted per one forward queuing delay. For example, in a TCP flow environment, 1/ᾱ_n translates into the congestion window size (in bits). [0060] It will be appreciated that forward queuing delays decrease with the delay prices α. Hence, forward queuing delays can be made arbitrarily small by a proper adaptation of α. Since packet loss is typically due to link buffer overflows, buffer sizes can also be made arbitrarily small by virtue of queuing delays and buffer occupancies being related by b_l(t) = q_l(t) c_l (as discussed earlier). [0061] The rate control expressed in equation (3) can be amended by adapting α_n. This amendment is based on two estimators, the forward queuing delay Q_n(t) and the forward packet loss L_n(t). [0062] For every flow n, Q̄_n is defined to represent the associated upper bound on the end-to-end forward queuing time and L̄_n is defined to represent the associated upper bound on the end-to-end forward packet loss. α(t) is defined to represent a delay price vector used at time t. [0063] An excess delay event can be expressed at an updated delay estimation time t by {Q_n(t) > Q̄_n}. An excess loss event can be expressed at an updated loss estimation time t by
{L_n(t) > L̄_n}.
[0064] In an embodiment, the delay price vector used at time t, α(t), can be adapted as follows:
dα_n(t)/dt = 1,    if an excess delay or excess loss event occurs and α_n(t) < ᾱ_n
dα_n(t)/dt = 0,    otherwise    (5)
[0065] Equation (6) below shows a discrete time form of equation (5).
α_n(t + dt) = α_n(t) + dt,    if an excess delay or excess loss event occurs and α_n(t) < ᾱ_n
α_n(t + dt) = α_n(t),         otherwise    (6)
[0066] In equation (6), dt is a discrete time tuning parameter that defines an update frequency. It will be appreciated that equations (3) and (6) express a mathematical relationship applicable to a method of discrete rate, delay and loss control.
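By way of a hedged sketch only, an edge device could interleave the rate update of equation (4) with a delay price adaptation in the spirit of equation (6); the additive price step, the event tests and the parameter names below are illustrative assumptions rather than a form mandated by the embodiments.

def adapt_prices(alpha, Q_est, L_est, Q_bound, L_bound, alpha_bound, dt=0.01):
    # Raise alpha_n while an excess delay or excess loss event is observed and
    # alpha_n(t) is below its upper bound; leave it unchanged otherwise.
    new_alpha = {}
    for n in alpha:
        excess = (Q_est[n] > Q_bound[n]) or (L_est[n] > L_bound[n])
        if excess and alpha[n] < alpha_bound[n]:
            new_alpha[n] = min(alpha[n] + dt, alpha_bound[n])
        else:
            new_alpha[n] = alpha[n]
    return new_alpha

Each control interval the edge node would first refresh Q_est and L_est from the probing described in the following sections, call adapt_prices, and then apply the rate update with the adapted prices.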
Virtual Clock Synchronization and Forward Queuing Delay-Loss Probing
[0067] An apparatus and method are taught for estimating the forward queuing time Q(t) and packet loss L(t) at time t.
[0068] Forward queuing delay estimation apparatus can be defined for networks having a zero queuing (ZQ) mechanism that forwards packets from a source to a destination without any queuing time. For example, a zero queuing (ZQ) mechanism can include the low latency queuing (LLQ) of Cisco IOS. Packets employing a ZQ mechanism are marked accordingly and receive top priority scheduling in each router along their traffic path. To guarantee zero queuing time, associated routers are typically configured to police (drop) all data packets marked as ZQ that exceed a predefined bandwidth allocated to ZQ packets. This queuing/scheduling scheme results in the total forward delay from source to destination of the un-policed ZQ packets equaling the forward propagation delay.
Virtual Clock Synchronization Between Edge Nodes
[0069] Substantial synchronization between edge nodes, defined by a source node s and a destination node d, can be achieved using a virtual clock at d. A virtual clock, v_s, of a source s can be maintained at each destination d of a source-destination pair (s, d) associated with an active flow comprising an exchange of data packets.
[0070] A source s can regularly send ZQ packets to the destination d, to provide data indicative of the source local clock, t_s. Upon receiving the source local clock update t_s, the destination d calculates (or updates) a time offset from s. The destination node d can calculate (or update) this time offset as OFS(d, s) = t_d - t_s, where t_d is representative of the clock at the destination node upon reception.
[0071] The virtual clock of s can be specified at a time instant t_d of node d, using the following equation: v_s = t_d - OFS(d, s).
[0072] It will be appreciated that a ZQ packet, providing data indicative of the source local clock t_s, arrives at a destination node d after a one-way propagation delay. Thus, the virtual clock of s at d lags by a one-way propagation delay. Regularly updating OFS(d, s) can assist in minimizing an offset due to individual clock drifts.
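A minimal Python sketch of the destination-side bookkeeping described in paragraphs [0069]-[0072] is given below; the class and method names are illustrative, and time.time() merely stands in for whatever local clock the edge node actually uses.

import time

class VirtualClock:
    # Virtual clock v_s of a source s, maintained at the destination d.
    def __init__(self):
        self.ofs = None                          # OFS(d, s) = t_d - t_s

    def on_zq_packet(self, t_s, t_d=None):
        # Update the offset when a ZQ packet carrying the source clock t_s arrives;
        # the offset (and hence v_s) lags by one forward propagation delay.
        t_d = time.time() if t_d is None else t_d
        self.ofs = t_d - t_s

    def read(self, t_d=None):
        # v_s = t_d - OFS(d, s), read at destination time t_d.
        t_d = time.time() if t_d is None else t_d
        return t_d - self.ofs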
Forward Queuing Time Probing and Estimation
[0073] Forward queuing time probing and estimation can be established between edge nodes, defined by a source node s and a destination node d. Each source s of a source-destination pair, (s, d), with an active flow can maintain a forward queuing delay estimator, Q. [0074] The source s can sample values of Q by regularly sending probe packets to the destination d. Probe packets sent to the destination can be used for multiple purposes, one of which is delay estimation. The probe packets are sent as best effort packets, i.e. they are not marked as ZQ packets, and therefore may experience queuing time. Each probe can be labeled with a sequential number, n = 1, 2, ... Before transmission to a destination node d, each probe packet is time stamped by the source s with its local clock t_s. Upon receiving a probe packet at destination d, the destination time stamps the probe with its virtual clock v_s of the relevant source node s. The destination can then respond by sending the corresponding probe packet back to source s. [0075] When a source node receives a packet corresponding to a previously sent probe packet n, the source records q_n as the nth sample of Q. This can be expressed mathematically by the following equation, where v_s and t_s are time stamps carried in the fields of the nth probe packet:

q_n = v_s - t_s
[0076] It will be appreciated that q_n is the forward queuing time of probe n. The value of Q can be initialized to a very large number and updated as each probe packet arrives. For example, as the source node receives a returned probe packet n, Q can be updated by one of a plurality of estimation procedures based on the samples (q_i; 1 ≤ i ≤ n). Two example embodiments of estimation procedures are disclosed.
[0077] A first example embodiment for estimating forward queuing time, involving calculation of the arithmetic average of the last k samples, can be expressed mathematically in the following equation.
Q = (1/k) Σ_{i=n-k+1}^{n} q_i

[0078] By keeping Q and the oldest sample q_{n-k} in a local data structure, upon every arrival of q_n, the average forward queuing time can be updated on-the-fly using the following equation.
Q ← Q + (q_n - q_{n-k}) / k
[0079] A second example embodiment for estimating forward queuing time, involving calculation of a damped average with factor 0 < β < 1 , can be expressed mathematically in the following equation.
Q ← (1 - β) Q + β q_n
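The two estimation procedures above can be combined into a single source-side estimator, sketched below for illustration; the window size k, the damping factor β and the class name are assumptions made for the example.

from collections import deque

class ForwardDelayEstimator:
    # Source-side estimator Q of the forward queuing time, fed by returned probes.
    def __init__(self, k=16, beta=0.1):
        self.k = k
        self.beta = beta
        self.window = deque(maxlen=k)    # last k samples q_i
        self.q_avg = float("inf")        # arithmetic average of the last k samples
        self.q_damped = float("inf")     # damped average

    def on_probe_return(self, v_s, t_s):
        q_n = v_s - t_s                  # forward queuing time sample of probe n
        self.window.append(q_n)
        self.q_avg = sum(self.window) / len(self.window)
        if self.q_damped == float("inf"):
            self.q_damped = q_n          # first sample replaces the large initial value
        else:
            self.q_damped = (1.0 - self.beta) * self.q_damped + self.beta * q_n
        return q_n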
Forward Packet Loss Probing and Estimation
[0080] Each source node s of a source-destination pair, (s, d), with an active traffic flow, can maintain a forward packet loss estimator, L. [0081] A source node s can sample values of L by regularly sending probe packets to the destination d. Probe packets sent to the destination can be used for multiple purposes, one of which is packet loss estimation. Each probe packet can be labeled with a sequential number, n = 1, 2, ..., by containing a representation of the sequence number in a data field of the packet. [0082] The destination d also maintains a current forward packet loss estimator on behalf of the source s, denoted L̂. The last sequence number received from a respective source node s is denoted by n_last. Initially, L̂ and L can be set to one. When the destination node d receives a probe packet with sequence number n, it records the next sample (for example the ith sample) of the number of losses observed so far, denoted l_i. This can be represented mathematically by the following equation:

l_i = n - n_last - 1    (7)
[0083] The destination node d can then also update n_last as expressed below:

n_last ← n

[0084] L̂ can be estimated by one of a plurality of estimation procedures based on the samples {l_j; 1 ≤ j ≤ i}. Two embodiments for estimating forward packet loss are disclosed by way of example only.
[0085] A first example embodiment for estimating forward packet loss, involving calculation of the arithmetic average of the last k samples, can be expressed mathematically in the following equation.
L̂ = (1/k) Σ_{j=i-k+1}^{i} l_j
[0086] By retaining the values of L̂ and the oldest sample l_{i-k} in a local data structure, the average can be calculated on-the-fly upon the receipt of each probe packet and evaluation of the associated l_i specified in (7). This on-the-fly calculation of a forward packet loss estimate can be implemented by setting:

L̂ ← L̂ + (l_i - l_{i-k}) / k
[0087] A second example embodiment for estimating forward packet loss, involving calculation of the damped average with factor 0 < β < 1, can be expressed mathematically in the following equation.

L̂ ← (1 - β) L̂ + β l_i
[0088] After every update of L̂ at destination node d, the destination writes L̂ into the probe packet and sends the probe back to source s. Upon probe reception at node s, the source updates L by setting L̂ into L, i.e., L ← L̂.
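For illustration only, the destination-side part of the loss estimation can be sketched as follows; the initial values and the use of the damped average (rather than the sliding window) are choices made for the example.

class ForwardLossEstimator:
    # Destination-side estimator L_hat of forward packet loss for one source s.
    def __init__(self, beta=0.1):
        self.beta = beta
        self.n_last = 0          # last probe sequence number received from s
        self.l_hat = 1.0         # current estimate, written back into each probe

    def on_probe(self, n):
        l_i = n - self.n_last - 1        # losses since the previous probe, equation (7)
        self.n_last = n
        self.l_hat = (1.0 - self.beta) * self.l_hat + self.beta * l_i
        return self.l_hat                # the source then sets L <- L_hat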
Interpretation
[0089] The invention may be embodied using devices conforming to other network standards and for other applications, including, for example, other WLAN standards and other wireless standards. Applications that can be accommodated include IEEE 802.11 wireless LANs and links, and wired and wireless Ethernet.
[0090] In the context of this document, the term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non- solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. In the context of this document, the term "wired" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a solid medium. The term does not imply that the associated devices are coupled by electrically conductive wires. [0091] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing", "computing", "calculating", "determining" or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.
[0092] In a similar manner, the term "processor" may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A "computer" or a "computing device" or a "computing machine" or a "computing platform" may include one or more processors.
[0093] The methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein. Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken are included. Thus, one example is a typical processing system that includes one or more processors. The processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM. [0094] Furthermore, a computer-readable carrier medium may form, or be included in a computer program product.
[0095] In alternative embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked to other processor(s), in a networked deployment, the one or more processors may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
[0096] Note that while some diagram(s) only show(s) a single processor and a single memory that carries the computer-readable code, those in the art will understand that many of the components described above are included, but not explicitly shown or described in order not to obscure the inventive aspect. For example, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. [0097] Thus, one embodiment of each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that are for execution on one or more processors. Thus, as will be appreciated by those skilled in the art, embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium. The computer-readable carrier medium carries computer readable code including a set of instructions that when executed on one or more processors cause a processor or processors to implement a method. Accordingly, aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium. [0098] The software may further be transmitted or received over a network via a network interface device. While the carrier medium is shown in an example embodiment to be a single medium, the term "carrier medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "carrier medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that cause the one or more processors to perform any one or more of the methodologies of the present invention. A carrier medium may take many forms, including but not limited to, non- volatile media, volatile media, and transmission media. [0099] It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system.
[00100] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments. [00101] Similarly it should be appreciated that in the above description of example embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention. [00102] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination. [00103] Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
[00104] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. [00105] As used herein, unless otherwise specified the use of the ordinal adjectives "first",
"second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. [00106] In the claims below and the description herein, any one of the terms comprises, comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
[00107] Similarly, it is to be noticed that the term coupled, when used in the claims, should not be interpreted as being limitative to direct connections only. The terms "coupled" and "connected", along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. "Coupled" may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
[00108] Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention. [00109] Although the invention has been described with reference to specific examples, it will be appreciated by those skilled in the art that the invention may be embodied in many other forms.

Claims

THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:
1. A method for controlling active network traffic flows between a first edge device and a second edge device, wherein said first device and said second device are coupled via a communication network, said method comprising the steps of: (a) performing optimisation of an objective function utilising forward queuing time associated with each active flow; and (b) determining a respective flow rate for each said active flow.
2. A method according to claim 1, wherein said optimisation applies a respective fixed delay price to each respective said active flow.
3. A method according to any one of the preceding claims, wherein said optimisation maximizes Σ_n U_n(x_n)/α_n subject to link capacity constraints, wherein U_n(x_n) is a rate utility function, and α_n is a positive delay price.
4. A method according to any one of the preceding claims, wherein said determining a respective flow rate for each said active flow includes the step of solving a discrete time formula of the form: x_n(t + dt) = x_n(t) + dt × (U_n'(x_n(t)) - α_n(t) Q_n(t)), wherein x_n(t) is the flow rate at time t, U_n(x_n) is a rate utility function, α_n is a positive delay price, Q_n(t) is a forward queuing time, and dt is a discrete time tuning parameter defining the update frequency.
5. A method according to claim 4, wherein forward delay estimation is used to control the rate of best effort application network traffic flow.
6. A method according to claim 4 or claim 5, wherein said method comprises the step of setting an upper bound ᾱ_n for each said delay price.
7. A method according to claim 6, wherein said upper bound applied to each said delay price is inversely proportional to respective forward queuing delays.
8. A method according to any one of claims 4 to 7, wherein said method further comprises the step of adapting each said delay price based on a calculated forward queuing delay and/or calculated forward packet loss.
9. A method according to claim 8, wherein each said delay price is adapted based on a discrete time formula of the form:

α_n(t + dt) = α_n(t) + dt,    if an excess delay or excess loss event occurs and α_n(t) < ᾱ_n
α_n(t + dt) = α_n(t),         otherwise

wherein α_n(t) is a positive delay price at time t, ᾱ_n is an upper bound for each delay price, and dt is a discrete time tuning parameter for defining an update frequency.
10. A method according to any one of the preceding claims wherein said method further comprises the step of substantially synchronising a local clock signal of said first device with a local clock signal of said second edge device.
11. A method according to claim 10, wherein said synchronising a local clock signal includes a step of sending to said second edge device a zero queuing packet comprising a measure indicative of said first device local time clock, such that receipt of said zero queuing packet at said second edge device is delayed substantially by a one forward propagation delay, and wherein said local clock synchronisation incorporates a one forward propagation delay offset.
12. A method according to any one of the preceding claims wherein said method comprises the step of calculating a measure indicative of forward queuing delay between said first device and said second device.
13. A method according to claim 12, wherein said method comprises the step of continuously calculating an arithmetic or damped average of at least two instances of said measure indicative of forward queuing delay.
14. A method according to any one of the preceding claims wherein said method comprises the step of calculating a measure indicative of forward packet loss between said first device and said second device.
15. A method according to claim 14, wherein said method comprises the step of calculating an arithmetic or damped average of at least two instances of said measure indicative of forward packet loss.
16. A method according to any one of the preceding claims, wherein said method provides rate, delay and packet loss control of best effort application flows based on forward queuing time probing and estimation and on forward packet loss probing and estimation, wherein said method provides an edge-to-edge quality of service solution across said communication network.
17. A method for controlling active network traffic flows between a first and second edge device coupled to a communication network, said method comprising a distributed control algorithm selected from the set including: an algorithm based on a gradient search for an optimal solution of a primal problem; and an algorithm based on a gradient search for an optimal solution of a dual problem.
18. A method according to claim 17, wherein said method uses forward time (FWT) feedback information and/or forward packet loss such that said method is suitable for best-effort flow control.
19. A method for controlling active network traffic flows between a first and second edge device coupled to a communication network, substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.
20. A device for controlling active network traffic flows, said device comprising a processing element coupled to at least one network interface, said processor element being adapted for performing a method according to any one of the preceding claims.
21. A first edge device for controlling active network traffic flows between said first edge device and a second edge device, wherein said first device is coupled to said second device via a communication network, said first edge device comprising: at least one network interface for allowing said first device to communicate with said second edge device via said communication network; a processor element coupled to said network interface for performing a method for controlling active network traffic flows between said first edge device and said second edge device, wherein said method comprises the steps of:
(a) performing optimisation of an objective function utilising forward queuing time associated with each active flow;
(b) determining a respective flow rate for each said active flow.
22. A device according to claim 20 or claim 21, wherein said device is an edge router of said communication network.
23. A device according to claim 20 or claim 21, wherein said device is coupled to an edge router of said communication network.
24. A device according to any one of claims 20 to 23 wherein said communication network includes a wired network and/or a wireless network.
25. A device according to any one of claims 20 to 24, wherein said processing element comprises one or more from the set including Network Processors (NP), Application- Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA).
26. A device for controlling active network traffic flows, substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.
27. A computer-readable carrier medium carrying a set of instructions that when executed by one or more processor elements cause the one or more processor elements to carry out a method according to any one of claims 1 to 19.
PCT/AU2009/001546 2008-12-01 2009-11-26 Method and apparatus for network traffic control WO2010063063A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2008906199A AU2008906199A0 (en) 2008-12-01 Method and apparatus for network traffic control
AU2008906199 2008-12-01

Publications (1)

Publication Number Publication Date
WO2010063063A1 true WO2010063063A1 (en) 2010-06-10

Family

ID=42232809

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2009/001546 WO2010063063A1 (en) 2008-12-01 2009-11-26 Method and apparatus for network traffic control

Country Status (1)

Country Link
WO (1) WO2010063063A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020182649A1 (en) 2019-03-08 2020-09-17 Syngenta Crop Protection Ag Pesticidally active azole-amide compounds
WO2020193341A1 (en) 2019-03-22 2020-10-01 Syngenta Crop Protection Ag N-[1-(5-bromo-2-pyrimidin-2-yl-1,2,4-triazol-3-yl)ethyl]-2-cyclopropyl-6-(trifluoromethyl)pyridine-4-carboxamide derivatives and related compounds as insecticides

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998047166A2 (en) * 1997-04-15 1998-10-22 Flash Networks Ltd. Data communication protocol

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PAPACHRISTODOULOU, A.: "Global stability analysis of a TCP/AQM protocol for arbitrary networks with delay", DECISION AND CONTROL, 2004, CDC, 43RD IEEE CONFERENCE, vol. 1, 17 December 2004 (2004-12-17), pages 1029 - 1034 *
WEI, DX. ET AL.: "FAST TCP: Motivation, Architecture, Algorithms, Performance", IEEE/ACM TRANSACTIONS ON NETWORKING, vol. 14, no. 6, December 2006 (2006-12-01), pages 1246 - 1259 *

Similar Documents

Publication Publication Date Title
Rozhnova et al. An effective hop-by-hop interest shaping mechanism for ccn communications
CN111316605B (en) Layer 3 fair rate congestion control notification
Zhang et al. JetMax: scalable max–min congestion control for high-speed heterogeneous networks
Zhang et al. Delayed stability and performance of distributed congestion control
Wallace et al. Concurrent multipath transfer using SCTP: Modelling and congestion window management
Xu et al. Hybrid congestion control for high-speed networks
Teymoori et al. Congestion control in the recursive internetworking architecture (RINA)
Rai et al. A distributed algorithm for throughput optimal routing in overlay networks
D’Aronco et al. Improved utility-based congestion control for delay-constrained communication
Rosberg et al. A network rate management protocol with TCP congestion control and fairness for all
Barbera et al. Queue stability analysis and performance evaluation of a TCP-compliant window management mechanism
Zhang et al. Delay-independent stability and performance of distributed congestion control
US7027401B1 (en) Devices with window-time-space flow control (WTFC)
Poojary et al. Analysis of multiple flows using different high speed TCP protocols on a general network
WO2010063063A1 (en) Method and apparatus for network traffic control
Alwahab et al. Ecn-marking with codel and its compatibility with different tcp congestion control algorithms
Xue et al. Fall: A fair and low latency queuing scheme for data center networks
Yuan et al. A generalized fast tcp scheme
Devkota Performance of Quantized Congestion Notification in TXP Incast in Data Centers
Fesehaye Finishing Flows Faster with A Quick congestion Control Protocol (QCP)
Zhang et al. Adaptive fast TCP
Tunc et al. Fixed-point analysis of a network of routers with persistent TCP/UDP flows and class-based weighted fair queuing
Kotian et al. Study on Different Mechanism for Congestion Control in Real Time Traffic for MANETS
Manikandan et al. Active queue management based congestion control protocol for wireless networks
Iyer et al. Time-optimal network queue control: The case of a single congested node

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09829878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09829878

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2012125909

Country of ref document: RU

Kind code of ref document: A