WO1998026510A2 - Method for managing flow bandwidth utilization at network, transport and application layers in store and forward network - Google Patents


Info

Publication number
WO1998026510A2
Authority
WO
WIPO (PCT)
Prior art keywords
flow
resources
rate
policy
traffic
Prior art date
Application number
PCT/US1997/022550
Other languages
French (fr)
Other versions
WO1998026510A3 (en)
Inventor
Robert L. Packer
Original Assignee
Packeteer, Inc.
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed
Application filed by Packeteer, Inc. filed Critical Packeteer, Inc.
Priority to AU55967/98A
Publication of WO1998026510A2
Publication of WO1998026510A3


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/02 Standardisation; Integration
    • H04L41/0213 Standardised network management protocols, e.g. simple network management protocol [SNMP]
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0893 Assignment of logical groups to network elements
    • H04L41/0894 Policy-based network configuration management
    • H04L41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/19 Flow control; Congestion control at layers above the network layer
    • H04L47/193 Flow control; Congestion control at layers above the network layer, at the transport layer, e.g. TCP related
    • H04L47/22 Traffic shaping
    • H04L47/225 Determination of shaping rate, e.g. using a moving window
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/26 Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • H04L47/263 Rate modification at the source after receiving feedback
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/163 In-band adaptation of TCP data exchange; In-band control procedures

Definitions

  • A co-pending application by Robert L. Packer, entitled "Method for Explicit Data Rate Control in a Packet Communication Environment Without Data Rate Supervision," relates to a technique for automatically scheduling TCP packets for transmission.
  • This invention relates to digital packet telecommunications, and particularly to management of network bandwidth based on information ascertainable from multiple layers of OSI network model. It is particularly useful in conjunction with data flow rate detection and control of a digitally-switched packet telecommunications environment normally not subject to data flow rate control.
  • The TCP/IP protocol suite, which implements the world-wide data communication network environment called the Internet and is also used in private networks (Intranets), intentionally omits an explicit supervisory function over the rate of data transport over the various media which comprise the network. While there are certain perceived advantages, this characteristic has the consequence of juxtaposing very high-speed packet flows and very low-speed packet flows in potential conflict for network resources, which results in inefficiencies. Certain pathological loading conditions can result in instability, overloading and data transfer stoppage. Therefore, it is desirable to provide some mechanism to optimize the efficiency of data transfer while minimizing the risk of data loss. Early indication of the rate of data flow which can or must be supported is very useful. In fact, data flow rate capacity information is a key factor for use in resource allocation decisions.
  • IP Internet Protocol
  • TCP Transmission Control Protocol
  • TCP is the transport layer protocol. At the network level, IP provides a "datagram" delivery service. By contrast, TCP builds a transport level service over the datagram service to provide guaranteed, sequential delivery of a byte stream between two IP hosts.
  • TCP flow control mechanisms operate exclusively at the end stations to limit the rate at which TCP endpoints emit data. However, TCP lacks explicit data rate control. In fact, there is heretofore no concept of coordination of data rates among multiple flows.
  • the basic TCP flow control mechanism is a sliding window, superimposed on a range of bytes beyond the last explicitly-acknowledged byte. Its sliding operation limits the amount of unacknowledged transmissible data that a TCP endpoint can emit.
  • Another flow control mechanism is a congestion window, which is a refinement of the sliding window scheme, which employs conservative expansion to fully utilize all of the allowable window.
  • a component of this mechanism is sometimes referred to as "slow start”.
  • the sliding window flow control mechanism works in conjunction with the Retransmit Timeout Mechanism (RTO), which is a timeout to prompt a retransmission of unacknowledged data.
  • RTO Retransmit Timeout Mechanism
  • the timeout length is based on a running average of the Round Trip Time (RTT) for acknowledgment receipt, i.e. if an acknowledgment is not received within (typically) the smoothed RTT + 4*mean deviation, then packet loss is inferred and the data pending acknowledgment is retransmitted.
  • RTT Round Trip Time
  • Data rate flow control mechanisms which are operative end-to-end without explicit data rate control draw a strong inference of congestion from packet loss (inferred, typically, by RTO). TCP end systems, for example, will 'back off', i.e., inhibit transmission in increasing multiples of the base RTT average as a reaction to consecutive packet loss.
  • TCP end systems are clients or servers running the TCP transport protocol, typically as part of their operating system.
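  • The retransmit timeout estimate described above can be sketched in C as follows. This is a minimal illustration, not code from the patent; the smoothing gains of 1/8 and 1/4 are conventional TCP values assumed here, and the sample values are hypothetical.
      #include <stdio.h>

      typedef struct {
          double srtt;     /* smoothed round trip time (seconds) */
          double meandev;  /* smoothed mean deviation of the RTT */
      } rtt_estimator;

      /* Fold one new RTT sample into the running averages and return
       * the retransmit timeout: smoothed RTT + 4 * mean deviation.   */
      static double rto_update(rtt_estimator *e, double sample)
      {
          double err = sample - e->srtt;
          e->srtt    += err / 8.0;                           /* assumed gain */
          e->meandev += ((err < 0 ? -err : err) - e->meandev) / 4.0;
          return e->srtt + 4.0 * e->meandev;
      }

      int main(void)
      {
          rtt_estimator e = { 0.5, 0.1 };        /* hypothetical initial state */
          double samples[] = { 0.48, 0.55, 0.60, 1.20 };
          for (int i = 0; i < 4; i++)
              printf("RTO after %.2fs sample: %.3fs\n",
                     samples[i], rto_update(&e, samples[i]));
          return 0;
      }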
  • bandwidth management is often used to refer to link level bandwidth management, e.g. multiple line support for Point to Point Protocol (PPP).
  • Link level bandwidth management is essentially the process of keeping track of all traffic and deciding whether an additional dial line or ISDN channel should be opened or an extraneous one closed.
  • the field of this invention is concerned with network level bandwidth management, i.e. policies to assign available bandwidth from a single logical link to network flows.
  • Routers support various queuing options. These options are generally intended to promote fairness and to provide a rough ability to partition and prioritize separate classes of traffic. Configuring these queuing options with any precision or without side effects is in fact very difficult, and in some cases, not possible. Seemingly simple things, such as the length of the queue, have a profound effect on traffic characteristics. Discarding packets as a feedback mechanism to TCP end systems may cause large, uneven delays perceptible to interactive users.
  • Bandwidth management is heretofore not known to employ information contained in the packets corresponding to higher OSI protocol layers, even though such information may be extremely useful in making bandwidth allocation and management decisions.
  • According to the invention, a method is provided for classifying packet network flows for use in determining a policy, or rule of assignment of a service level, and enforcing that policy by direct rate control.
  • The method comprises applying individual instances of traffic objects, i.e., packet network flows, to a classification model based on selectable information obtained from a plurality of layers of a multi-layered communication protocol, then mapping the flow to the defined traffic classes, which are arbitrarily assignable by an offline manager who creates the classification. It is useful to note that the classification need not be a complete enumeration of the possible traffic.
  • bandwidth may be divided into arbitrary units, partitions, facilitating isolation and allocation.
  • a partition is allocated for a class or set of traffic classes, carving the bandwidth of the associated link into multiple, independent pieces.
  • Available bandwidth may be allocated among flows according to a policy, which may include any combination of guaranteed information rate and excess information rate, the latter allocated according to a priority.
  • bandwidth resource needs of multiple heterogeneous requesting flows are reconciled with available bandwidth resources in accordance with policy of each flow based upon the flow's class. Flows requiring reserved service with guaranteed information rates, excess information rates or unreserved service are reconciled with the available bandwidth resources continuously and automatically.
  • An advantage of network management techniques according to the present invention is that network managers need only define traffic classes which are of interest.
  • traffic classes may include information such as a URI for web traffic.
  • a yet further advantage of the present invention is that service levels may be defined in terms of explicit rates and may be scaled to a remote client or server's network access rate. Different service levels may be specified for high speed and low speed users.
  • a yet further advantage of the present invention is that service levels may be defined in terms of a guaranteed minimum service level.
  • Fig. 1A depicts a representative client-server relationship in accordance with a particular embodiment of the invention.
  • Fig. 1B depicts a functional perspective of the representative client-server relationship in accordance with a particular embodiment of the invention.
  • Fig. 1C depicts a representative internetworking environment in accordance with a particular embodiment of the invention.
  • Fig. 1D depicts a relationship diagram of the layers of the TCP/IP protocol suite.
  • Fig. 1E depicts a two-dimensional representation of timing relationships in the exchange of packets between hosts using the TCP protocol.
  • Figs. 2A-2B depict representative divisions of bandwidth according to a particular embodiment of the invention.
  • Figs. 2C-2E are flow charts depicting process steps according to a particular embodiment of the invention.
  • Fig. 3 is a block diagram of a particular embodiment according to the invention.
  • Fig. 4A is a block diagram of a data structure according to a particular embodiment of the invention.
  • Fig. 4B is a block diagram of a data structure according to a particular embodiment of the invention.
  • Figs. 5A-5H are flow charts depicting process steps according to a particular embodiment of the invention.
  • the present invention provides techniques to manage network bandwidth such as on a network access link between a local area network and a wide area network.
  • Systems according to the present invention enable network managers to: define traffic classes; create policies which define service levels for traffic classes; and isolate bandwidth resources associated with certain traffic classes. Inbound as well as outbound traffic may be managed.
  • Table 1 provides a definitional list of terminology used herein.
  • TABLE 1: LIST OF DEFINITIONAL TERMS
  • ADMISSIONS CONTROL: A policy invoked whenever a system according to the invention detects that a guaranteed information rate cannot be maintained. An admissions control policy is analogous to a busy signal in the telephone world.
  • CLASS SEARCH ORDER: A search method based upon traversal of an N-ary tree data structure containing classes.
  • CIR: A rate of data flow allocated to reserved service traffic for rate-based bandwidth allocation for a committed bandwidth.
  • GIR: Guaranteed information rate.
  • EXCEPTION: A class of traffic provided by the user which supersedes an automatically determined classification order.
  • EIR: A rate of data flow allocated to reserved service traffic for rate-based bandwidth allocation for uncommitted bandwidth resources.
  • FLOW: A single instance of a traffic class. For example, all packets in a TCP connection belong to the same flow, as do all packets in a UDP session.
  • GUARANTEED INFORMATION RATE: A rate of data flow allocated to reserved service traffic for rate-based bandwidth allocation for a committed bandwidth. Also called a committed information rate (CIR).
  • HARD ISOLATION: Hard isolation results from the creation of an entirely separate logical channel for a designated set of classes.
  • INSIDE: On the system side of an access link. Outside clients and servers are on the other side of the access link.
  • ISOLATION: Isolation is the degree to which bandwidth resources are allocable to traffic classes.
  • PARTITION: A partition is an arbitrary unit of network resources.
  • POLICY: A rule for the assignment of a service level to a flow.
  • POLICY INHERITANCE: A method for assigning policies to flows for which no policy exists in a hierarchical arrangement of policies. For example, if a flow is determined to be comprised of FTP packets for Host A, and no corresponding policy exists, a policy associated with a parent node, such as an FTP policy, may be located and used. See also POLICY SEARCH ORDER.
  • SCALING: An adjustment of a requested data rate for a particular flow based upon the policy associated with the flow and information about the flow's potential rate.
  • RESERVED SERVICE: A service level intended for traffic which "bursts", or sends chunks of data. Reserved service is defined in terms of a scaled rate.
  • SCALED RATE: Assignment of a data rate based upon detected speed.
  • SERVICE LEVEL: A service paradigm having a combination of characteristics defined by a network manager to handle a particular class of traffic. Service levels may be designated as either reserved or unreserved.
  • TARGET RATE: A combination of a guaranteed rate and an excess rate. Target rate is a policy-based paradigm. Excess rate is allocated by systems according to the invention from bandwidth that is not consumed by reserved service. Policies demand excess rate at a given priority, and systems according to the invention satisfy this demand by priority level.
  • TRAFFIC CLASS: Traffic classes have properties, or class attributes, such as directionality, which is the property of traffic to be flowing inbound or outbound.
  • UNRESERVED SERVICE: A service level defined in terms of priority in which no reservation of bandwidth is made.
  • The method for flow bandwidth management in a packet-oriented telecommunications network environment of the present invention is implemented in the C programming language and is operational on a computer system such as shown in Fig. 1A.
  • This invention may be implemented in a client-server environment, but a client- server environment is not essential.
  • This figure shows a conventional client-server computer system which includes a server 20 and numerous clients, one of which is shown as client 25.
  • server receives queries from (typically remote) clients, does substantially all the processing necessary to formulate responses to the queries, and provides these responses to the clients.
  • server 20 may itself act in the capacity of a client when it accesses remote databases located at another node acting as a database server.
  • server 20 includes one or more processors 30 which communicate with a number of peripheral devices via a bus subsystem 32.
  • peripheral devices typically include a storage subsystem 35, comprised of a memory subsystem 35a and a file storage subsystem 35b holding computer programs (e.g., code or instructions) and data, a set of user interface input and output devices 37, and an interface to outside networks, which may employ Ethernet, Token Ring, ATM, IEEE 802.3, ITU X.25, Serial Link Internet Protocol (SLIP) or the public switched telephone network.
  • This interface is shown schematically as a "Network Interface" block 40. It is coupled to corresponding interface devices in client computers via a network connection 45.
  • Client 25 has the same general configuration, although typically with less storage and processing capability.
  • the server computer is generally a high-end workstation or mainframe, such as a SUN SPARC server.
  • Bus subsystem 32 is shown schematically as a single bus, but a typical system has a number of buses such as a local bus and one or more expansion buses (e.g., ADB, SCSI, ISA, EISA, MCA, NuBus, or PCI), as well as serial and parallel ports.
  • Network connections are usually established through a device such as a network adapter on one of these expansion buses or a modem on a serial port.
  • the client computer may be a desktop system or a portable system.
  • Client 25 likewise includes interface devices 37' (or devices 37 in a standalone system).
  • client queries are entered via a keyboard, communicated to client processor 30', and thence to modem or network interface 40' over bus subsystem 32'.
  • the query is then communicated to server 20 via network connection 45.
  • results of the query are communicated from the server to the client via network connection 45 for output on one of devices 37' (say a display or a printer), or may be stored on storage subsystem 35'.
  • Fig. 1B is a functional diagram of a computer system such as that of Fig. 1A.
  • Fig. 1B depicts a server 20, and a representative client 25 of a plurality of clients which may interact with the server 20 via the Internet 45 or any other communications method. Blocks to the right of the server are indicative of the processing steps and functions which occur in the server's program and data storage indicated by blocks 35a and 35b in Fig. 1A.
  • a TCP/IP "stack" 44 works in conjunction with Operating System 42 to communicate with processes over a network or serial connection attaching Server 20 to Internet 45.
  • Web server software 46 executes concurrently and cooperatively with other processes in server 20 to make data objects 50 and 51 available to requesting clients.
  • A Common Gateway Interface (CGI) script 55 enables information from user clients to be acted upon by web server 46, or other processes within server 20. Responses to client queries may be returned to the clients in the form of Hypertext Markup Language (HTML) document outputs, which are then communicated via Internet 45.
  • CGI Common Gateway Interface
  • Client 25 in Fig. 1B possesses software implementing functional processes operatively disposed in its program and data storage as indicated by block 35a' in Fig. 1A.
  • TCP/IP stack 44' works in conjunction with Operating System 42' to communicate with processes over a network or serial connection attaching Client 25 to Internet 45.
  • Software implementing the function of a web browser 46' executes concurrently and cooperatively with other processes in client 25 to make requests of server 20 for data objects 50 and 51.
  • the user of the client may interact via the web browser 46' to make such queries of the server 20 via Internet 45 and to view responses from the server 20 via Internet 45 on the web browser 46' .
  • Fig. 1C is illustrative of the internetworking of a plurality of clients such as client 25 of Figs. 1A and 1B and a plurality of servers such as server 20 of Figs. 1A and 1B as described herein above.
  • Network 70 is an example of a Token Ring network. Network 70 links host 71, such as an IBM RS6000 RISC workstation, which may be running the AIX operating system, to host 72, which is a personal computer, which may be running Windows 95, IBM OS/2 or a DOS operating system, and host 73, which may be an IBM AS/400 computer, which may be running the OS/400 operating system.
  • Network 70 is internetworked to network 60 via a system gateway which is depicted here as router 75, but which may also be a gateway having a firewall or a network bridge.
  • Network 60 is an example of an Ethernet network that interconnects host 61, which is a SPARC workstation, which may be running SUNOS operating system with host 62, which may be a Digital Equipment VAX6000 computer which may be running the VMS operating system.
  • Router 75 is a network access point (NAP) of network 70 and network 60. Router 75 employs a Token Ring adapter and an Ethernet adapter, which enables it to interface with the two heterogeneous networks. Router 75 is also aware of the internetwork protocols, such as ICMP, ARP and RIP, which are described herein below.
  • Fig. 1D is illustrative of the constituents of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite.
  • the base layer of the TCP/IP protocol suite is the physical layer 80, which defines the mechanical, electrical, functional and procedural standards for the physical transmission of data over communications media, such as, for example, the network connection 45 of Fig. 1A.
  • the physical layer may comprise electrical, mechanical or functional standards such as whether a network is packet switching or frame-switching; or whether a network is based on a Carrier Sense Multiple Access/Collision Detection (CSMA/CD) or a frame relay paradigm.
  • CSMA/CD Carrier Sense Multiple Access/Collision Detection
  • Overlying the physical layer is the data link layer 82.
  • the data link layer provides the function and protocols to transfer data between network resources and to detect errors that may occur at the physical layer.
  • Operating modes at the datalink layer comprise such standardized network topologies as IEEE 802.3 Ethernet, IEEE 802.5 Token Ring, ITU X.25, or serial (SLIP) protocols.
  • Network layer protocols 84 overlay the datalink layer and provide the means for establishing connections between networks.
  • The standards of network layer protocols provide operational control procedures for internetworking communications and routing information through multiple heterogeneous networks. Examples of network layer protocols are the Internet Protocol (IP) and the Internet Control Message Protocol (ICMP).
  • the Address Resolution Protocol (ARP) is used to correlate an Internet address and a Media Access Address (MAC) for a particular host.
  • the Routing Information Protocol (RIP) is a dynamic routing protocol for passing routing information between hosts on networks.
  • the Internet Control Message Protocol (ICMP) is an internal protocol for passing control messages between hosts on various networks. ICMP messages provide feedback about events in the network environment or can help determine if a path exists to a particular host in the network environment. The latter is called a "Ping".
  • the Internet Protocol (IP) provides the basic mechanism for routing packets of information in the Internet. IP is a non-reliable communication protocol. It provides a "best efforts" delivery service and does not commit network resources to a particular transaction, nor does it perform retransmissions or give acknowledgments.
  • The transport layer protocols 86 provide end-to-end transport services across multiple heterogeneous networks.
  • the User Datagram Protocol (UDP) provides a connectionless, datagram oriented service which provides a non-reliable delivery mechanism for streams of information.
  • the Transmission Control Protocol (TCP) provides a reliable session-based service for delivery of sequenced packets of information across the Internet. TCP provides a connection oriented reliable mechanism for information delivery.
  • the session, or application layer 88 provides a list of network applications and utilities, a few of which are illustrated here.
  • File Transfer Protocol FTP
  • Telnet is a standard TCP/IP protocol for remote terminal connection.
  • Telnet client acts as a terminal emulator and establishes a connection using TCP as the transport mechanism with a Telnet server.
  • the Simple Network Management Protocol (SNMP) is a standard for managing TCP/IP networks. SNMP tasks, called “agents”, monitor network status parameters and transmit these status parameters to SNMP tasks called “managers. " Managers track the status of associated networks.
  • HTTP Hypertext Transfer Protocol
  • URI uniform resource indicators
  • The Hypertext Transfer Protocol (HTTP) operates over the Transmission Control Protocol and is the mechanism which underlies the function of the World Wide Web.
  • the HTTP provides a method for users to obtain data objects from various hosts acting as servers on the Internet.
  • User requests for data objects are made by means of an HTTP request, such as a GET request.
  • A GET request is comprised of 1) an HTTP protocol version, such as "HTTP/1.0"; followed by 2) the full path of the data object; followed by 3) the name of the data object.
  • In the GET request described here, a request is being made for the data object with a path name of "/pub/" and a name of "MyData.html".
  • Processing of a GET request entails the establishing of a TCP/IP connection with the server named in the GET request and receipt from the server of the data object specified. After receiving and interpreting a request message, a server responds in the form of an HTTP RESPONSE message.
  • An HTTP RESPONSE message begins with a Status line comprising a protocol version followed by a numeric Status Code and an associated textual Reason Phrase, with these elements separated by space characters.
  • The status line always begins with a protocol version and status code, e.g., "HTTP/1.0".
  • the status code element is a three digit integer result code of the attempt to understand and satisfy a prior request message.
  • the reason phrase is intended to give a short textual description of the status code.
  • the first digit of the status code defines the class of response. There are five categories for the first digit. 1XX is an information response. It is not currently used.
  • 2XX is a successful response, indicating that the action was successfully received, understood and accepted.
  • 3XX is a redirection response, indicating that further action must be taken in order to complete the request.
  • 4XX is a client error response. This indicates a bad syntax in the request.
  • 5XX is a server error. This indicates that the server failed to fulfill an apparently valid request.
  • Fig. 1E depicts a two-dimensional timing diagram illustrative of one particular method for determining a data link rate and latency period from an exchange of packets between TCP endpoints. According to this method, the initial data packets are examined as they establish a connection. Parameters are developed from which round trip time (RTT) and maximum data rate can be determined.
  • RTT round trip time
  • SS, the serialization speed or data flow rate capacity of a link, is given by the relation SS = m/T1 (3), where m is the byte length of the first data packet and T1 is the arrival time of the first data packet less the time of arrival of the ACK packet (T_data1 - T_ACK).
  • Fig. 1E shows a two-dimensional timing diagram of the connection protocol of a remote HTTP request between a local TCP endpoint (server) and a remote TCP endpoint (client).
  • the remote endpoint issues a request for connection in the form of a SYN packet in step A.
  • the SYN packet takes a finite but unknown transit time to arrive at the local TCP endpoint.
  • The local TCP endpoint responds by sending its own SYN packet in step B. This SYN packet is of a known byte length and is issued at a known time, which becomes the reference time, "tbase".
  • The remote TCP endpoint issues a standard ACK packet, whose length is likewise known, in step C, and then also issues the first data packet, data1, in step D.
  • Time T1 is computed immediately at the time of arrival of data1 by examining the difference between the arrival time of the received ACK packet and that of the data1 packet.
  • the value of m is extracted by examining the entire first packet to determine its length or by examining the packet length field, if any, in the header information of the packet.
  • a first estimate of serialization speed SS is given by Equation (3).
  • Serialization speed can be estimated immediately upon interchange of the first packets and used to make important strategic decisions about the nature and speed of the connection about to be established.
  • An alternative method for determining a data link rate and latency period from an exchange of packets between TCP endpoints develops parameters from which round trip time (RTT) and maximum data rate can be determined. Values are obtained for n, the size (i.e., data length) of the SYN packet sent in response, plus the size (i.e., data length) of the ACK packet.
  • Serialization time is determined according to Equation (4) by dividing the value n by the transit time T0, or
  • SS(max possible) = n/T0 (4)
  • where n is the number of bytes in the SYN packet plus the number of bytes in the ACK packet, and T0 is the arrival time of the ACK packet less the tbase value.
  • The round trip time minus the serialization time is the round trip latency.
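  • The relations above can be illustrated with the following C sketch, which is not drawn from the patent source; the timestamp and field names are assumptions, and the latency figure reflects one consistent reading of the text, serializing the n handshake bytes at the rate of Equation (3).
      #include <stdio.h>

      typedef struct {
          double tbase;    /* time the local SYN packet was sent (step B)       */
          double t_ack;    /* arrival time of the remote ACK (step C)           */
          double t_data1;  /* arrival time of the first data packet (step D)    */
          int    n;        /* bytes in the local SYN packet plus the ACK packet */
          int    m;        /* bytes in the first data packet                    */
      } handshake_times;

      /* Equation (3): SS = m / T1, with T1 = t_data1 - t_ack. */
      static double serialization_speed(const handshake_times *h)
      {
          return h->m / (h->t_data1 - h->t_ack);
      }

      /* Equation (4): SS(max possible) = n / T0, with T0 = t_ack - tbase. */
      static double serialization_speed_max(const handshake_times *h)
      {
          return h->n / (h->t_ack - h->tbase);
      }

      /* Round trip latency = round trip time minus serialization time. */
      static double round_trip_latency(const handshake_times *h)
      {
          double rtt = h->t_ack - h->tbase;
          return rtt - h->n / serialization_speed(h);
      }

      int main(void)
      {
          handshake_times h = { 0.0, 0.050, 0.062, 84, 536 };  /* made-up numbers */
          printf("SS        = %.0f bytes/s\n", serialization_speed(&h));
          printf("SS (max)  = %.0f bytes/s\n", serialization_speed_max(&h));
          printf("latency   = %.3f s\n", round_trip_latency(&h));
          return 0;
      }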
  • a traffic class is broadly defined as traffic between one or more clients and one or more servers.
  • a single instance of a traffic class is called a flow.
  • Traffic classes have the property, or class attribute, of being directional, i.e. all traffic flowing inbound will belong to different traffic classes and be managed separately from traffic flowing outbound.
  • the directional property enables asymmetric classification and control of traffic, i.e., inbound and outbound flows belong to different classes which may be managed independent of one another.
  • Traffic classes may be defined at any level of the TCP/IP protocol. For example, at the IP level, traffic may be defined as only those flows between a set of inside and outside IP addresses or domain names. An example of such a low level traffic class definition would be all traffic between my network and other corporate offices throughout the Internet. At the application level, traffic classes may be defined for specific URIs within a web server. Traffic classes may be defined having "Web aware" class attributes.
  • a traffic class could be created such as all URIs matching "*.html” for all servers, or all URIs matching "*.gif" for server X, or for access to server Y with URI "/sales/*" from client Z, wherein '*' is a wildcard character, i.e., a character which matches all other character combinations. Traffic class attributes left unspecified will simply match any value for that attribute.
  • a traffic class that accesses data objects within a certain directory path of a web server is specified by a URI of the directory path to be managed, e.g. "/sales/*".
  • the present invention provides a method for classifying traffic according to a definable set of classification attributes selectable by the manager, including selecting a subset of traffic of interest to be classified.
  • the invention provides the ability to classify and search traffic based upon multiple orthogonal classification attributes.
  • Traffic class membership may be hierarchical.
  • a flow may be classified by a series of steps through a traffic class tree, with the last step (i.e., at the leaves on the classification tree) mapping the flow to a policy.
  • the policy is a rule of assignment for flows. For example, the first step in classification may be to classify a flow as web traffic, the next may further classify this flow as belonging to server X, and the final classification may be a policy for URI "*.avi".
  • a classification tree is a data structure representing the hierarchical aspect of traffic class relationships.
  • Each node of the classification tree represents a class, and has a traffic specification, i.e., a set of attributes or characteristics describing the traffic, and a mask associated with it.
  • Leaf nodes of the classification tree contain policies.
  • the classification process checks at each level if the flow being classified matches the attributes of a given traffic class. If it does, processing continues down to the links associated with that node in the tree. If it does not, the class at the level that matches determines the policy for the flow being classified. If no policy specific match is found, the flow is assigned the default policy.
  • the classification tree is an N-ary tree with its nodes ordered by specificity. For example, in classifying a particular flow in a classification tree ordered first by organizational departments, the attributes of the flow are compared with the traffic specification in each successive department node and if no match is found, then processing proceeds to the next subsequent department node. If no match is found, then the final compare is a default "match all" category. If, however, a match is found, then classification moves to the children of this department node.
  • the child nodes may be ordered by an orthogonal paradigm such as, for example, "service type. " Matching proceeds according to the order of specificity in the child nodes.
  • Processing proceeds in this manner, traversing downward and from left to right in the classification tree, searching the plurality of orthogonal paradigms. Key to implementing this hierarchy is that the nodes are arranged in decreasing order of specificity. This permits the search to find the most specific class for the traffic before more general classes.
  • Table 2 depicts components from which traffic classes may be built. Note that the orientation of the server (inside or outside) is specified. As noted above, any traffic class component may be left unspecified, i.e. set to match any value.
  • Inside IP Address/Domain Name; Outside IP Address/Domain Name
  • TCP or UDP Service, e.g. WWW, FTP, RealAudio, etc.
  • URI for Web Service, e.g. "*.html", "*.gif", "/sales/*", etc.
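  • For illustration, a classification tree node built from the Table 2 components might be declared in C as sketched below. The field names are assumptions, wildcard patterns such as "*.html" are reduced to exact string matching for brevity, and an unspecified (NULL) attribute matches any value, as described above.
      #include <stddef.h>
      #include <string.h>

      typedef struct policy { int service_level; } policy;

      typedef struct tclass {
          const char    *inside_addr;   /* inside IP address or domain name  */
          const char    *outside_addr;  /* outside IP address or domain name */
          const char    *service;       /* TCP or UDP service, e.g. "WWW"    */
          const char    *uri;           /* URI for web service, e.g. "*.gif" */
          struct tclass *children;      /* more specific classes             */
          struct tclass *next_sibling;  /* ordered by decreasing specificity */
          policy        *pol;           /* non-NULL on policy (leaf) nodes   */
      } tclass;

      /* A flow's attributes, drawn from several protocol layers. */
      typedef struct flow_spec {
          const char *inside_addr, *outside_addr, *service, *uri;
      } flow_spec;

      /* An unspecified class attribute matches any flow value. */
      static int attr_match(const char *class_attr, const char *flow_attr)
      {
          return class_attr == NULL ||
                 (flow_attr != NULL && strcmp(class_attr, flow_attr) == 0);
      }

      /* A flow matches a traffic class when every specified attribute matches. */
      static int tclass_match(const tclass *c, const flow_spec *f)
      {
          return attr_match(c->inside_addr,  f->inside_addr)  &&
                 attr_match(c->outside_addr, f->outside_addr) &&
                 attr_match(c->service,      f->service)      &&
                 attr_match(c->uri,          f->uri);
      }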
  • Figs. 2A and 2B depict representative allocations of bandwidth made by a hypothetical network manager as an example.
  • the network manager has decided to divide her network resources first by allocating bandwidth between Departments A and B.
  • Fig. 2A shows the resulting classification tree, in which Department A bandwidth resources 202 and Department B bandwidth resources 204 each have their own nodes representing a specific traffic class for that department.
  • Each traffic class may have a policy attribute associated with it.
  • the Department A resources node 202 has the policy attribute Inside Host Subnet A associated with it.
  • the network manager has chosen to divide the bandwidth resources of Department A among two applications. She allocates an FTP traffic class 206 and a World Wide Web server traffic class 208. Each of these nodes may have a separate policy attribute associated with them.
  • The FTP node 206 has an attribute Outside port 20 associated with it.
  • the network manager has chosen to divide network bandwidth resources of Department B into an FTP server traffic class 210 and a World Wide Web server traffic class 212. Each may have their own respective policies.
  • Fig. 2B shows a second example, wherein the network manager has chosen to first divide network bandwidth resource between web traffic and TCP traffic. She creates three traffic nodes, a web traffic node 220, a TCP traffic node 224 and a default node 225. Next, she divides the web traffic among two organizational departments by creating a Department A node 226, and a Department B node 228. Each may have its own associated policy. Similarly, she divides TCP network bandwidth into separate traffic classes by creating a Department A node 230 and a Department B node 232. Each represents a separate traffic class which may have its own policy.
  • In Fig. 2A the default category is depicted by a default node 205; in Fig. 2B the default category is depicted by a default node 225.
  • Network managers create policies in order to assign service levels to traffic classes which are of particular interest.
  • Service levels may be either reserved or unreserved. Reserved service is useful in instances where traffic 'bursts', i.e., sends chunks of data, or in interactive applications, which require a minimum bandwidth in order to function properly.
  • Reserved service is defined in terms of scaled rate, unreserved service is defined in terms of priority. Allocation of bandwidth resources by scaled rate is accomplished in a process known as "speed scaling" wherein the speed of the underlying network link upon which the flow is carried is detected and available bandwidth is allocated based upon this detected speed. Allocation of bandwidth resources for unreserved service is priority based. The total bandwidth resources for unreserved service demanded by all flows is satisfied according to a prioritized scheme, in which highest priorities are satisfied first, followed by successively lower priority applications. Unreserved service allocations are based upon the available bandwidth after reserved service demands have been satisfied.
  • Traffic may be a combination of reserved and unreserved services.
  • web traffic may have a service level defined at a web server for "*.html" to designate these types of files as traffic allocated unreserved service, as these are typically small files.
  • "*.gif" files may be designated as a separate traffic class receiving reserved service.
  • TCP connections and UDP sessions may be defined for reserved service.
  • Guaranteed information rate represents a commitment of bandwidth resources to maintain a specified rate.
  • Excess information rate is an allocation of bandwidth resources on an "as available" basis. Guaranteed information rate prevents evident lack-of-progress situations, such as stuck progress indicators, from occurring. Excess rate is allocated from available bandwidth, i.e. that which has not been consumed by providing guaranteed rate service. As flows demand excess information rate at different priority levels, their demands are satisfied in order of priority level.
  • When a guaranteed information rate cannot be provided to a requesting flow due to insufficient bandwidth resources, an admissions policy is invoked.
  • the admissions policy may refuse a connection, display an error message or a combination of both.
  • Admissions policies prevent service 'brown outs' which would otherwise occur if guaranteed information rate demand exceeds available bandwidth resources. By contrast, excess information rate requirements are not guaranteed. If a flow's request for excess information rate is not able to be satisfied from the available bandwidth resources, it is simply not met. In the previous example, if insufficient bandwidth existed in order to accommodate the ten voice over IP sessions, an admissions policy would be invoked.
  • Excess information rate (EIR) is redistributed each time a new flow is created, an existing flow is terminated, or a substantial change in demand for unreserved priority level occurs.
  • Guaranteed information rate (GIR) is allocated only at the time of creation of a flow and freed when the flow terminates.
  • reserved TCP service levels may be enforced by TCP Rate Control for both inbound and outbound traffic.
  • TCP Rate Control is a technique well known to persons of ordinary skill in the art which scales reserved flows as excess rate is available without incurring retransmission.
  • Bandwidth resources may be dedicated to certain traffic classes. There are two degrees of isolation. Hard isolation creates an entirely separate logical channel for designated traffic classes. For example, department A and department B could both be hard isolated, setting up two logical .75 Mbps channels on a T1 access link, one for each department. With hard isolation, unused bandwidth in either channel would not be shared with the other department's traffic. Hard isolation involves allocating arbitrary subsets of access link bandwidth for any traffic class or family of traffic classes. It completely protects the traffic belonging to the respective class or classes, but this protection comes at the cost of forsaken opportunities to share unused bandwidth.
  • Soft isolation allows sharing of unused bandwidth, but limits the amount of guaranteed bandwidth that can be allocated to any particular traffic class at any one time.
  • Fig. 2C depicts a flowchart 203 showing the process steps in building a classification tree, such as classification tree 201 of Fig. 2A, by adding traffic class nodes, such as traffic class node 206, in order of specificity of classes.
  • In a step 240, a new tclass node is allocated for the traffic class which is to be added to the classification tree.
  • In a step 242, a traffic specification is inserted for the new traffic class node allocated in step 240.
  • In a step 244, the new tclass node is inserted into the classification tree.
  • Finally, bandwidth resources allocated based upon the traffic class hierarchy are redistributed to reflect the new hierarchy.
  • Fig. 2D depicts a flowchart 205 showing the component steps of traffic specification creation step 242 of Fig. 2C.
  • the processing steps of flowchart 205 insert a traffic specification, or tspec, into the traffic class allocated in step 240.
  • In a step 250, a current tspec is initialized to the first tspec in the tclass.
  • In a decisional step 252, a determination is made whether the new tspec is less specific than the current tspec, initialized in step 250. If this is not the case, then in a step 258, the new tspec is inserted after the current tspec, and processing returns. Otherwise, in a decisional step 254, a determination is made whether this is the final tspec in the traffic class. If this is so, then in step 258, the new tspec is inserted after the last (current) tspec. Otherwise, in a step 256, the next tspec in the traffic class is examined by repeating processing steps 252, 254, 256 and 258.
  • Fig. 2E depicts a flowchart 207 showing the component steps of classification tree insertion step 244 of Fig. 2C.
  • the processing steps of flowchart 207 insert the traffic class, or tclass, allocated in step 240 into the classification tree.
  • In a step 260, a current sibling is initialized to the first sibling in the parent tclass.
  • In a decisional step 262, a determination is made whether the new tclass is less specific than the current sibling, initialized in step 260. If this is not the case, then in a step 268, the new tclass is inserted after the current sibling, and processing returns. Otherwise, in a decisional step 264, a determination is made whether this is the final sibling in the parent tclass. If this is so, then in step 268, the new tclass is inserted after the last (current) tclass. Otherwise, in a step 266, the next sibling in the parent traffic class is examined by repeating processing steps 262, 264, 266 and 268.
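  • The ordered-insertion steps of flowcharts 205 and 207 can be sketched in C as below; the node type and the specificity comparison are illustrative assumptions, and an empty-list case is added defensively even though the flowcharts assume at least one existing element.
      #include <stddef.h>

      typedef struct onode {
          int           specificity;    /* larger value = more specific */
          struct onode *next;
      } onode;

      /* Returns nonzero if a is less specific than b. */
      static int less_specific(const onode *a, const onode *b)
      {
          return a->specificity < b->specificity;
      }

      /* Insert n into the list at *head, as in Figs. 2D and 2E: advance while
       * the new element is less specific than the current one, then insert
       * after the current element (or after the last element at the end). */
      static void insert_by_specificity(onode **head, onode *n)
      {
          onode *cur = *head;
          if (cur == NULL) {            /* empty list: n becomes the head */
              n->next = NULL;
              *head   = n;
              return;
          }
          for (;;) {
              if (!less_specific(n, cur) || cur->next == NULL) {
                  n->next   = cur->next;
                  cur->next = n;        /* insert after the current node  */
                  return;
              }
              cur = cur->next;          /* examine the next element       */
          }
      }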
  • Fig. 3 depicts a functional block diagram 301 of component processes in accordance with a particular embodiment of the present invention.
  • Processing of a new flow begins with a TCP autobaud component 302, which implements the automatic detection of flow speeds as depicted in Fig. IE.
  • TCP autobaud component 302 determines values for selectable information such as flow data rate of new flow 300.
  • a classifier 304 determines the class for the new flow using the information determined by TCP autobaud 302. Having classified the traffic, the classifier next determines an appropriate policy for this particular flow from a classification tree (not shown). Based upon the policy, the classifier then determines a service level for the new flow.
  • a bandwidth manager 306 uses the policy determined by classifier 304 in order to allocate bandwidth according to the service level prescribed by the policy.
  • Fig. 4A depicts data structures in a particular embodiment according to the invention.
  • the bandwidth management aspect of the present invention is performed between inside clients and servers and the access link. Outside clients and servers are on the other side of the access link.
  • Traffic is directional. Inbound (flowing from an outside host to an inside host) traffic is managed entirely separately from outbound traffic.
  • a connection hash table 402 facilitates indexing into a plurality of TCP connections, each TCP connection having an associated flow, such as flows 300, 299 and 298 of Fig. 3.
  • a plurality of TCP control blocks 404 are used to track connection- related information for each TCP connection. Host-related information is tracked using a plurality of host information control blocks 406, indexed by a host hash table 408. Information about particular hosts corresponding to TCP connections is tracked using the TCP control block 404.
  • Each TCP connection represented by TCP control block 404 has associated with it a gear 410 representing an outbound flow and a gear 411 representing an inbound flow.
  • Each gear is an internal data object associated with a particular TCP connection or UDP session. Gears are protocol independent and contain rate-based information. Since network resources are managed separately for inbound and outbound traffic in this particular embodiment, there is a bandwidth pool 412 for outbound traffic and a bandwidth pool 413 for inbound traffic.
  • Bandwidth pools represent collections of available network resources managed by bandwidth manager 306. Bandwidth pools are arbitrarily divided into a partition 414 in the outbound path, and a partition 415 in the inbound path. Each partition represents an arbitrarily-sized portion of available network resource.
  • Gears become associated with particular partitions during the beginning of flow management for the new flow in step 518 of Fig. 5A.
  • A policy 416 in the outbound path and a policy 417 in the inbound path become associated with partition 414 in the outbound path and partition 415 in the inbound path during the beginning of management of a new flow, as shown in Fig. 5A.
  • These policies are associated with particular flows according to the traffic class that classifier 304 assigns to the particular flow.
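  • The relationships described for Fig. 4A might be sketched in C roughly as follows; the field names, sizes and hash table dimensions are assumptions rather than the patent's own definitions.
      #include <stdint.h>

      struct partition;
      struct policy;
      struct host_info;

      typedef struct gear {              /* per-flow, per-direction rate state    */
          struct partition *part;        /* partition the flow is associated with */
          struct policy    *pol;         /* policy assigned by the classifier     */
          uint32_t gir_bps;              /* guaranteed information rate           */
          uint32_t eir_bps;              /* excess information rate demanded      */
          uint32_t target_bps;           /* current target rate                   */
      } gear;

      typedef struct tcp_cb {            /* connection-related information        */
          uint32_t inside_addr, outside_addr;
          uint16_t inside_port, outside_port;
          struct host_info *inside_host, *outside_host;
          gear outbound;                 /* gear for the outbound flow            */
          gear inbound;                  /* gear for the inbound flow             */
          struct tcp_cb *hash_next;      /* chaining in the connection hash table */
      } tcp_cb;

      typedef struct partition {         /* arbitrarily sized slice of a pool     */
          uint32_t limit_bps;
          uint32_t gir_allocated_bps;
          uint32_t eir_demand_bps;
      } partition;

      typedef struct bw_pool {           /* one pool per direction (in / out)     */
          uint32_t   link_bps;
          partition *partitions;
          int        n_partitions;
      } bw_pool;

      #define CONN_HASH_SIZE 1024
      static tcp_cb *conn_hash[CONN_HASH_SIZE];   /* connection hash table */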
  • Fig. 5A depicts flowchart 501 of the steps taken in initiating a new flow in accordance with a particular embodiment of the invention.
  • TCP autobaud component 302 detects a TCP data rate of new flow 300.
  • classifier 304 classifies the new flow to determine a policy. Classification is described in greater detail in Fig. 5F herein below.
  • In a step 506, based upon the policy determined in step 504, a guaranteed information rate (GIR) and an excess information rate (EIR) are determined and speed-scaled to the incoming TCP data rate detected in step 502. Speed scaling is described in greater detail in Figs. 5G and 5H herein below.
  • GIR guaranteed information rate
  • EIR excess information rate
  • In a step 508, bandwidth manager 306 allocates guaranteed information rate resources and excess information rate resources to accommodate an initial target rate for the new flow.
  • Next, bandwidth manager 306 determines if a partition limit would be exceeded by the allocation determined in step 508. If this is the case, then in a step 512, an admissions policy is invoked. Otherwise, in a step 514, the excess information rate resources demanded by new flow 300 are added to the total excess information rate resources demanded for this bandwidth pool.
  • Excess information rate resources are then redistributed with respect to all flows within the partition, if demand has changed substantially since the last time it was redistributed. Fig. 5E depicts a flowchart 509 showing the steps in reapplying demand satisfaction parameters to all traffic flows in the bandwidth pool; a new target rate and total excess information rate resources are calculated. Finally, in a step 518, the new flow is associated with a partition.
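  • A condensed C sketch of this flow-initiation sequence is given below. The helper functions stand in for the autobaud, classifier and bandwidth-manager components of Fig. 3; their names, bodies and the field layout are assumptions made for illustration.
      #include <stdbool.h>
      #include <stdint.h>

      typedef struct { uint32_t limit_bps, gir_allocated_bps, eir_demand_bps; } partition;
      typedef struct { uint32_t gir_bps, eir_bps; partition *part; } gear;

      /* Stand-ins for the components of Fig. 3 (illustrative stubs). */
      static uint32_t autobaud_detect_rate(void) { return 128000; }
      static void classify_and_scale(uint32_t rate, gear *g)
      { g->gir_bps = rate / 4; g->eir_bps = rate / 2; }
      static void invoke_admissions_policy(void) { /* refuse or warn */ }
      static void redistribute_eir(partition *p) { (void)p; /* Figs. 5D and 5E */ }

      static bool flow_init(gear *g, partition *p)
      {
          uint32_t rate = autobaud_detect_rate();   /* step 502: detect data rate  */
          classify_and_scale(rate, g);              /* steps 504, 506: classify and
                                                       speed-scale GIR and EIR     */

          /* Would a partition limit be exceeded by the allocation of step 508? */
          if (p->gir_allocated_bps + g->gir_bps > p->limit_bps) {
              invoke_admissions_policy();           /* step 512                    */
              return false;
          }
          p->gir_allocated_bps += g->gir_bps;       /* step 508: allocate resources */
          p->eir_demand_bps    += g->eir_bps;       /* step 514: add to pool demand */
          redistribute_eir(p);                      /* redistribute EIR if demand
                                                       changed substantially       */
          g->part = p;                              /* step 518: join the partition */
          return true;
      }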
  • Fig. 5B depicts a flowchart 503 showing processing of a packet belonging to an existing flow such as a forward flow packet 299.
  • A scheduler 308, using policy information determined at flow initiation, filters TCP acknowledgments for scheduling in a step 522. Output times are scheduled on millisecond boundaries, based upon the target data rate computed in step 508 of Fig. 5A.
  • The remaining packets are delivered immediately to TCP, which performs just-in-time delivery.
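  • The acknowledgment pacing just described, in which release times are derived from the flow's target rate and fall on millisecond boundaries, might be sketched as follows; the state layout, function name and the example rate are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      typedef struct {
          uint32_t target_Bps;       /* target rate in bytes per second        */
          uint64_t next_release_ms;  /* earliest time the next ACK may be sent */
      } ack_pacer;

      /* Schedule an ACK that will open the window by `bytes` bytes and
       * return the millisecond boundary at which it should be emitted. */
      static uint64_t schedule_ack(ack_pacer *p, uint64_t now_ms, uint32_t bytes)
      {
          uint64_t interval_ms = ((uint64_t)bytes * 1000 + p->target_Bps - 1)
                                 / p->target_Bps;   /* ceil(bytes/rate), in ms */
          uint64_t when = p->next_release_ms > now_ms ? p->next_release_ms : now_ms;
          p->next_release_ms = when + interval_ms;  /* pace subsequent ACKs    */
          return when;
      }

      int main(void)
      {
          ack_pacer p = { 64000, 0 };   /* hypothetical 64 kB/s target rate */
          for (int i = 0; i < 3; i++)
              printf("ACK %d released at %llu ms\n", i,
                     (unsigned long long)schedule_ack(&p, 0, 1460));
          return 0;
      }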
  • Fig. 5C depicts a flowchart 505 showing processing by bandwidth manager 306 of a packet at the termination of a flow, such as an end flow packet 298.
  • In a step 530, guaranteed information rate resources are restored.
  • The excess information rate resources demanded by the terminating flow 298 are subtracted from the total excess information rate resources demanded.
  • excess information rate resources are redistributed as described in Fig. 5D.
  • the flow is removed from its partition.
  • Fig. 4B depicts specific data structures used in redistributing unreserved service resources among multiple flows in a particular partition.
  • Input rates of particular flows are measured by TCP autobaud component 302 and used to determine an Extrapolated Unreserved Demand (EUD) 424.
  • EUD Extrapolated Unreserved Demand
  • Flows requiring excess information rate resources, such as flows 299 and 298, having gears 410 and 411 (not shown), respectively, have associated with them individual flow demands 420 and 421.
  • the individual flow demands of all flows in a particular partition are added together to form an Aggregate Rate Demand (ARD) 422 at each priority level.
  • Aggregate rate demand is added to extrapolated unreserved demand 424 to arrive at a total instantaneous demand 426.
  • the total instantaneous demand is satisfied from the highest priority, priority 7, downward based upon the available excess information rate resources in the partition 414.
  • This available rate is information rate which has not been used to satisfy guaranteed information rate demands.
  • A satisfaction percentage vector 428 is computed, giving the percentage of the demand at each priority level in the total demand vector 426 that may be satisfied by the available excess information rate resources in partition 414.
  • Individual flow demand vectors 420 and 421 are each multiplied by the satisfaction percentage for each priority 428 in order to arrive at a target rate which is incorporated in gears 410 and 411 corresponding to each flow.
  • the satisfaction percentage vector 428 is also applied to the Extrapolated Unreserved Demand (EUD) 424 to determine unreserved targets.
  • EUD Extrapolated Unreserved Demand
  • Fig. 5D depicts flowchart 507 of the processing steps to redistribute excess information rate resources with respect to reserved service flows within a given partition.
  • In a step 540, a demand satisfaction metric is computed for the bandwidth pool being redistributed.
  • In a decisional step 542, a determination is made whether the demand satisfaction metric computed in step 540 has changed substantially since the last time demand was redistributed. If this is the case, then in a decisional step 544, a loop is entered for each gear associated with the partition owning the bandwidth pool being redistributed.
  • Demand satisfaction parameters are reapplied to the traffic flow associated with each gear, and a new target rate and a new total of excess information rate resources are calculated.
  • Fig. 5E depicts flowchart 509 of the processing steps for reconciling bandwidth among a plurality of flows, requiring reserved service and unreserved service resources.
  • partition 414 of Fig. 4B has unreserved service flow 300 and reserved service flows 299 and 298.
  • In a decisional step 550, a loop is formed with a step 554, in which the individual flow demands of reserved service flows, such as flows 299 and 298 in partition 414, are added together to form an aggregate rate demand (ARD) 422.
  • ARD aggregate rate demand
  • In a step 552, an extrapolated unreserved demand (EUD) 424 is determined from the input rate of unreserved service traffic 300 in partition 414.
  • the EUD must be inflated in order to promote growth in traffic. This is done to accommodate the elastic nature of TCP traffic.
  • In a step 556, the EUD determined in step 552 and the ARD determined in step 554 are combined to form an instantaneous total demand (TD) 426 for the entire partition 414. The total instantaneous demand is satisfied from the highest priority level, level 7, downward based upon the available unreserved resources in the partition 414.
  • a satisfaction percentage vector (SP) 428 is computed for the partition from the total demand vector (TD) 426 determined in step 556.
  • the satisfaction percentage indicates that percentage of the demand at each priority level that can be satisfied from the unreserved resources in the partition.
  • A target rate for reserved, rate-based flows, such as flows 299 and 298, is computed.
  • A target rate for unreserved, priority-based flows, such as flow 300, is likewise computed.
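  • The priority-by-priority demand satisfaction of Figs. 4B and 5E can be sketched in C as follows; the data layout, the sample numbers and the EUD inflation factor are assumptions.
      #include <stdio.h>

      #define NPRI 8   /* priority levels 0..7 */

      /* Compute the satisfaction percentage vector SP from the total demand
       * TD, consuming the rate left after guaranteed (GIR) demands are met,
       * from the highest priority level downward. */
      static void satisfaction_vector(const double td[NPRI], double available,
                                      double sp[NPRI])
      {
          for (int pri = NPRI - 1; pri >= 0; pri--) {
              if (td[pri] <= 0.0) {
                  sp[pri] = 1.0;
              } else if (available >= td[pri]) {
                  sp[pri] = 1.0;                    /* fully satisfied     */
                  available -= td[pri];
              } else {
                  sp[pri] = available / td[pri];    /* partially satisfied */
                  available = 0.0;
              }
          }
      }

      /* Target rate for one flow: its per-priority demand scaled by SP. */
      static double flow_target(const double demand[NPRI], const double sp[NPRI])
      {
          double target = 0.0;
          for (int pri = 0; pri < NPRI; pri++)
              target += demand[pri] * sp[pri];
          return target;
      }

      int main(void)
      {
          /* Hypothetical numbers: two reserved flows plus unreserved demand. */
          double flow_a[NPRI] = {0}, flow_b[NPRI] = {0}, eud[NPRI] = {0};
          flow_a[7] = 40000; flow_b[5] = 60000; eud[3] = 80000 * 1.2; /* inflated */

          double td[NPRI], sp[NPRI];
          for (int pri = 0; pri < NPRI; pri++)
              td[pri] = flow_a[pri] + flow_b[pri] + eud[pri];   /* ARD + EUD */

          satisfaction_vector(td, 100000.0, sp);    /* available excess rate */
          printf("flow A target:     %.0f B/s\n", flow_target(flow_a, sp));
          printf("flow B target:     %.0f B/s\n", flow_target(flow_b, sp));
          printf("unreserved target: %.0f B/s\n", flow_target(eud, sp));
          return 0;
      }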
  • Fig. 5F depicts a flowchart 511 showing the component steps of traffic classification step 504 of Fig. 5A.
  • The processing steps of flowchart 511 determine a class and a policy for a flow, such as new flow 300, by traversing a classification tree such as the classification tree 201 in Fig. 2A. Processing begins with a first node in the classification tree 201, such as department A node 202.
  • In a step 570, a traffic specification of a child node, such as FTP node 206, is compared with a flow specification of the new flow 300.
  • In a decisional step 572, if a match is discovered, then in a step 574, the processing of flowchart 511 is applied to the child node 206 recursively.
  • Processing then determines whether any sibling nodes exist for the child node 206. If processing detects the existence of a sibling of child node 206, such as child node 208, then processing continues on node 208 with step 570. Otherwise, if there are no further child nodes for node 202, then in a decisional step 578, a determination is made whether the node is a leaf node having a policy. If no policy exists, then in a step 580, processing backtracks to a parent node and looks for a policy associated with the parent node to apply to the new flow 300. Otherwise, if a policy is associated with the node 202, then in a step 582, the policy is associated with the new flow 300.
  • Fig. 5G depicts a flowchart 513 showing the component steps of speed scaling step 506 of Fig. 5A.
  • the processing steps of flowchart 513 determine an acceptable allocation of bandwidth to reserved service flows, i.e., GIR and EIR, by allocating rate for each gear, such as gears 410 and 411, based upon individual flow demands, base limits and total limits.
  • In a step 582, scaled rate for GIR is set based upon a connection speed detected by TCP autobaud component 302 for a particular flow, such as flow 299, having a gear, such as gear 410.
  • In a step 584, scaled rate for EIR is set based upon a connection speed detected by TCP autobaud component 302 for a particular flow, such as flow 298, having a gear, such as gear 411.
  • In a step 586, a limit is computed from either the available EIR remaining after the allocation to flow 299 in step 584, or from a total limit, if such a limit is specified.
  • A determination is made whether any extra EIR resource remains after the limit computed in step 586 has been satisfied.
  • The limits computed in step 586 are allocated to flows, such as flow 299 and flow 298, in order of descending priority level.
  • In a decisional step 592, the processing of steps 582 through 592 is applied to subsequent gears, if any.
  • Fig. 5H depicts a flowchart 515 showing the component steps of scaled rate setting steps 582 and 584 of Fig. 5G.
  • the processing steps of flowchart 515 determine whether the rate of the underlying link detected by TCP autobaud 302 exceeds limits imposed on the network.
  • In a decisional step 592, a determination is made whether the detected speed of a flow is less than a base limit minimum. If this is so, then in a step 594, the base limit is returned as the scaled rate. Otherwise, in a decisional step 596, a determination is made whether the total rate of all flows is greater than a maximum limit. If this is so, then in a step 598, the maximum limit is returned as the scaled rate.
  • TCLASSes have component specifications (lists of TSPECs) which are ordered by increasing specificity. The global flow implicitly matches all traffic classes.
  • TCLASSes are organized into an N-ary tree, where children of tclasses are arranged in order of exception, normal, and inheritable.
  • The sorting/searching order is: exception tclasses without applied policy; exception tclasses with applied policy; tclasses without policy; tclasses with policy; inheritable tclasses with policy (an inheritable tclass has a policy by definition). Within each group, tclasses are ordered by decreasing match specificity.
  • A FLOW is matched to a policy as follows:
  • A MATCHALL tspec/policy exists at the top level for both the inbound and outbound tree.
  • tclass_check(tclass, flow): tclasses are checked in the order specified above against the flow. If a tclass matches, recurse by calling tclass_check(match_tclass, flow).
  • #define TCF_EXCEPTION 0x0010, #define TCF_INHERITABLE 0x0020, #define TCF_STANDARD 0x0000, #define TCF_TYPE_MASK (TCF_EXCEPTION+TCF_INHERITABLE)
  • Points to self or the root class if CIR is set or inherited, or NULL if ...
  • WEBNESS may be set for a connection by virtue of that connection being on the well known HTTP port (80), by an explicit user configuration indicating that a certain port is for a web server, or by an automatic recognition process which detects an HTTP 'signature' in traffic destined towards a certain repetitively used (server) port.
  • If a connection has WEBNESS, check to see if it is an HTTP request.
  • Speed Scaling: when a flow is allocated, the GIR and EIR are scaled per the detected (potential maximum) speed. In this embodiment, the speed scaling is done by extrapolating from defined high speed and low speed target rates.
  • CIR scale is specified by the user as two values: a base value (B), under which all speeds get 100%, and a limiting value (L), to cap CIR for high speed users. If a connection's incoming speed is actually between (B) and (L), then a scaled intermediate value for CIR is created (one such interpolation is sketched after this list).
  • EIR scale is used to generate an EIR_VECTOR of allocation limits at each EIR priority level.
  • EIR scale is specified as the highest priority level (HP) and the scaled limit at that priority level (in terms of B and L, as with CIR).
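The CIR scaling rule just described lends itself to a simple linear interpolation between two target rates. The following minimal C sketch shows one plausible reading; the function name, the units (bits per second) and the exact interpolation formula are assumptions made for illustration and are not taken from the patent's code listing.

#include <stdint.h>

/* Scale a reserved rate to the detected connection speed.  At or below the
 * low speed endpoint the low-speed target rate applies; at or above the
 * high speed endpoint the rate is capped; in between it is interpolated.  */
static uint32_t scale_rate(uint32_t detected_bps,
                           uint32_t low_speed_bps, uint32_t rate_at_low_bps,
                           uint32_t high_speed_bps, uint32_t rate_at_high_bps)
{
    if (detected_bps <= low_speed_bps)
        return rate_at_low_bps;                 /* 100% below the base (B)  */
    if (detected_bps >= high_speed_bps)
        return rate_at_high_bps;                /* capped at the limit (L)  */

    uint64_t span   = high_speed_bps - low_speed_bps;
    uint64_t offset = detected_bps - low_speed_bps;
    int64_t  delta  = (int64_t)rate_at_high_bps - (int64_t)rate_at_low_bps;
    return (uint32_t)((int64_t)rate_at_low_bps +
                      (delta * (int64_t)offset) / (int64_t)span);
}

An EIR_VECTOR of per-priority allocation limits could be produced by applying the same interpolation at each priority level from the highest priority level (HP) downward.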

Abstract

In a packet communication environment, a method is provided for classifying packet network flows (202, 204) for use in determining a policy, or rule of assignment of a service level, and enforcing that policy by direct rate control. The method comprises applying individual instances of traffic objects, i.e., packet network flows (202, 204) to a classification model based on selectable information obtained from a plurality of layers of a multi-layered communication protocol, then mapping the flow to the defined traffic classes (206, 208, 210, 212), which are arbitrarily assignable by an offline manager which creates the classification. It is useful to note that the classification need not be a complete enumeration of the possible traffic.

Description

METHOD FOR MANAGING FLOW BANDWIDTH UTILIZATION AT NETWORK, TRANSPORT AND APPLICATION LAYERS IN STORE
AND FORWARD NETWORK
COPYRIGHT NOTICE A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority from the following U.S. Provisional Application:
U.S. Provisional Patent Application Serial No. 60/032,485, Robert L. Packer, entitled, "Method for Managing Flow Bandwidth Utilization at Network, Transport and Application Layers in Store and Forward Network", filed December 9,
1996.
Further, this application makes reference to the following commonly owned U.S. Patent Application:
Copending U.S. Patent Application Serial No. 08/762,828, in the name of Robert L. Packer, entitled "Method for Rapid Data Rate Detection in a Packet
Communication Environment Without Data Rate Supervision, " relates to a technique for automatically determining the data rate of a TCP connection.
Further, this application makes reference to the following U.S. Patent Application: Copending U.S. Patent Application Serial No. 08/742,994, in the name of
Robert L. Packer, entitled "Method for Explicit Data Rate Control in a Packet Communication Environment Without a Data Rate Supervision, " relates to a technique for automatically scheduling TCP packets for transmission. BACKGROUND OF THE INVENTION This invention relates to digital packet telecommunications, and particularly to management of network bandwidth based on information ascertainable from multiple layers of OSI network model. It is particularly useful in conjunction with data flow rate detection and control of a digitally-switched packet telecommunications environment normally not subject to data flow rate control.
The ubiquitous TCP/IP protocol suite, which implements the world-wide data communication network environment called the Internet and is also used in private networks (Intranets), intentionally omits explicit supervisory function over the rate of data transport over the various media which comprise the network. While there are certain perceived advantages, this characteristic has the consequence of juxtaposing very highspeed packet flows and very low-speed packet flows in potential conflict for network resources, which results in inefficiencies. Certain pathological loading conditions can result in instability, overloading and data transfer stoppage. Therefore, it is desirable to provide some mechanism to optimize efficiency of data transfer while minimizing the risk of data loss. Early indication of the rate of data flow which can or must be supported is very useful. In fact, data flow rate capacity information is a key factor for use in resource allocation decisions.
Internet/Intranet technology is based largely on the TCP/IP protocol suite, where IP, or Internet Protocol, is the network layer protocol and TCP, or Transmission
Control Protocol, is the transport layer protocol. At the network level, IP provides a "datagram" delivery service. By contrast, TCP builds a transport level service over the datagram service to provide guaranteed, sequential delivery of a byte stream between two IP hosts. TCP flow control mechanisms operate exclusively at the end stations to limit the rate at which TCP endpoints emit data. However, TCP lacks explicit data rate control. In fact, there is heretofore no concept of coordination of data rates among multiple flows. The basic TCP flow control mechanism is a sliding window, superimposed on a range of bytes beyond the last explicitly-acknowledged byte. Its sliding operation limits the amount of unacknowledged transmissible data that a TCP endpoint can emit.
Another flow control mechanism is a congestion window, which is a refinement of the sliding window scheme, which employs conservative expansion to fully utilize all of the allowable window. A component of this mechanism is sometimes referred to as "slow start".
The sliding window flow control mechanism works in conjunction with the Retransmit Timeout Mechanism (RTO), which is a timeout to prompt a retransmission of unacknowledged data. The timeout length is based on a running average of the Round Trip Time (RTT) for acknowledgment receipt, i.e. if an acknowledgment is not received within (typically) the smoothed RTT + 4*mean deviation, then packet loss is inferred and the data pending acknowledgment is retransmitted. Data rate flow control mechanisms which are operative end-to-end without explicit data rate control draw a strong inference of congestion from packet loss (inferred, typically, by RTO). TCP end systems, for example, will 'back-off, i.e., inhibit transmission in increasing multiples of the base RTT average as a reaction to consecutive packet loss.
Bandwidth Management in TCP/IP Networks
Conventional bandwidth management in TCP/IP networks is accomplished by a combination of TCP end systems and routers which queue packets and discard packets when certain congestion thresholds are exceeded. The discarded, and therefore unacknowledged, packet serves as a feedback mechanism to the TCP transmitter. (TCP end systems are clients or servers running the TCP transport protocol, typically as part of their operating system.)
The term "bandwidth management" is often used to refer to link level bandwidth management, e.g. multiple line support for Point to Point Protocol (PPP). Link level bandwidth management is essentially the process of keeping track of all traffic and deciding whether an additional dial line or ISDN channel should be opened or an extraneous one closed. The field of this invention is concerned with network level bandwidth management, i.e. policies to assign available bandwidth from a single logical link to network flows. Routers support various queuing options. These options are generally intended to promote fairness and to provide a rough ability to partition and prioritize separate classes of traffic. Configuring these queuing options with any precision or without side effects is in fact very difficult, and in some cases, not possible. Seemingly simple things, such as the length of the queue, have a profound effect on traffic characteristics. Discarding packets as a feedback mechanism to TCP end systems may cause large, uneven delays perceptible to interactive users.
In a copending U.S. Patent Application Serial No. 08/742,994, in the name of Robert L. Packer, entitled "Method for Explicit Data Rate Control in a Packet
Communication Environment Without Data Rate Supervision, " a technique for automatically scheduling TCP packets for transmission is disclosed. Furthermore, in a copending U.S. Patent Application Serial No. 08/762,828, in the name of Robert L. Packer, entitled "Method for Rapid Data Rate Detection in a Packet Communication Environment Without Data Rate Supervision," a technique for automatically determining the data rate of a TCP connection is disclosed. While these patent applications teach methods for solving problems associated with scheduling transmissions and for automatically determining a data flow rate on a TCP connection, respectively, there is no teaching in the prior art of methods for explicitly managing TCP packet traffic based upon information about the flow's characteristics at multiple OSI protocol layers.
Bandwidth management is heretofore not known to employ information contained in the packets corresponding to higher OSI protocol layers, even though such information may be extremely useful in making bandwidth allocation and management decisions.
SUMMARY OF THE INVENTION According to the invention, in a packet communication environment, a method is provided for classifying packet network flows for use in determining a policy, or rule of assignment of a service level, and enforcing that policy by direct rate control. The method comprises applying individual instances of traffic objects, i.e., packet network flows to a classification model based on selectable information obtained from a plurality of layers of a multi-layered communication protocol, then mapping the flow to the defined traffic classes, which are arbitrarily assignable by an offline manager which creates the classification. It is useful to note that the classification need not be a complete enumeration of the possible traffic.
In one aspect of the invention, bandwidth may be divided into arbitrary units, partitions, facilitating isolation and allocation. A partition is allocated for a class or set of traffic classes, carving the bandwidth of the associated link into multiple, independent pieces.
In another aspect of the invention, available bandwidth may be allocated among flows according to a policy, which may include any combination of guaranteed information rate and excess information rate, the latter allocated according to a priority. In another aspect of the invention, bandwidth resource needs of multiple heterogeneous requesting flows are reconciled with available bandwidth resources in accordance with the policy of each flow based upon the flow's class. Flows requiring reserved service with guaranteed information rates, excess information rates or unreserved service are reconciled with the available bandwidth resources continuously and automatically.
In another aspect of the invention, an admissions policy is provided which is invoked whenever a request for bandwidth cannot be met consistently with other users of bandwidth.
An advantage of network management techniques according to the present invention is that network managers need only define traffic classes which are of interest.
A further advantage of the present invention is that traffic classes may include information such as a URI for web traffic.
A yet further advantage of the present invention is that service levels may be defined in terms of explicit rates and may be scaled to a remote client or server's network access rate. Different service levels may be specified for high speed and low speed users.
A yet further advantage of the present invention is that service levels may be defined in terms of a guaranteed minimum service level.
The invention will be better understood upon reference to the following detailed description in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1A depicts a representative client server relationship in accordance with a particular embodiment of the invention; Fig. 1B depicts a functional perspective of the representative client server relationship in accordance with a particular embodiment of the invention;
Fig. 1C depicts a representative internetworking environment in accordance with a particular embodiment of the invention; Fig. 1D depicts a relationship diagram of the layers of the TCP/IP protocol suite;
Fig. 1E depicts a two dimensional representation of timing relationships in the exchange of packets between hosts using the TCP protocol;
Figs. 2A-2B depict representative divisions of bandwidth according to a particular embodiment of the invention;
Figs. 2C-2E are flow charts depicting process steps according to a particular embodiment of the invention;
Fig. 3 is a block diagram of a particular embodiment according to the invention; Fig. 4A is a block diagram of a data structure according to a particular embodiment of the invention;
Fig. 4B is a block diagram of data structure according to a particular embodiment of the invention; and
Figs. 5A-5H are flow charts depicting process steps according to a particular embodiment of the invention.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS A preferable embodiment of a flow bandwidth management system according to the invention has been reduced to practice and will be made available under the trade name "PacketShaper™. "
1.0 Introduction
The present invention provides techniques to manage network bandwidth such as on a network access link between a local area network and a wide area network. Systems according to the present invention enable network managers to: define traffic classes; create policies which define service levels for traffic classes; and isolate bandwidth resources associated with certain traffic classes. Inbound as well as outbound traffic may be managed. Table 1 provides a definitional list of terminology used herein. LIST OF DEFINITIONAL TERMS
ADMISSIONS CONTROL A policy invoked whenever a system according to the invention detects that a guaranteed information rate cannot be maintained. An admissions control policy is analogous to a busy signal in the telephone world.
CLASS SEARCH ORDER A search method based upon traversal of a N-ary tree data structure containing classes.
COMMITTED INFORMATION RATE (CIR) A rate of data flow allocated to reserved service traffic for rate based bandwidth allocation for a committed bandwidth. Also called a guaranteed information rate (GIR).
EXCEPTION A class of traffic provided by the user which supersedes an automatically determined classification order.
EXCESS INFORMATION RATE (EIR) A rate of data flow allocated to reserved service traffic for rate based bandwidth allocation for uncommitted bandwidth resources.
FLOW A flow is a single instance of a traffic class. For example, all packets in a TCP connection belong to the same flow, as do all packets in a UDP session.
GUARANTEED INFORMATION RATE (GIR) A rate of data flow allocated to reserved service traffic for rate based bandwidth allocation for a committed bandwidth. Also called a committed information rate (CIR).
HARD ISOLATION Hard isolation results from the creation of an entirely separate logical channel for a designated set of classes.
INSIDE On the system side of an access link. Outside clients and servers are on the other side of the access link.
ISOLATION Isolation is the degree that bandwidth resources are allocable to traffic classes.
OUTSIDE On the opposite side of an access link as viewed from the perspective of the system on which the software resides.
PARTITION Partition is an arbitrary unit of network resources.
POLICY A rule for the assignment of a service level to a flow.
POLICY INHERITANCE A method for assigning policies to flows for which no policy exists in a hierarchical arrangement of policies. For example, if a flow is determined to be comprised of FTP packets for Host A, and no corresponding policy exists, a policy associated with a parent node, such as an FTP policy, may be located and used. See also POLICY SEARCH ORDER.
POLICY BASED SCALING An adjustment of a requested data rate for a particular flow based upon the policy associated with the flow and information about the flow's potential rate.
RESERVED SERVICE Reserved service is a service level intended for traffic which "bursts" or sends chunks of data. Reserved service is defined in terms of a scaled rate.
SCALED RATE Assignment of a data rate based upon detected speed.
SERVICE LEVEL A service paradigm having a combination of characteristics defined by a network manager to handle a particular class of traffic. Service levels may be designated as either reserved or unreserved.
SOFT ISOLATION Restricting GIR allocated for traffic classes in a partition.
TARGET RATE A target rate is a combination of a guaranteed rate and an excess rate. Target rate is a policy-based paradigm. Excess rate is allocated by systems according to the invention from bandwidth that is not consumed by reserved service. Policies will demand excess rate at a given priority and systems according to the invention satisfy this demand by a priority level.
TRAFFIC CLASS All traffic between client and server endpoints. A single instance of a traffic class is called a flow. Traffic classes have properties or class attributes such as directionality, which is the property of traffic to be flowing inbound or outbound.
UNRESERVED SERVICE Unreserved service is a service level defined in terms of priority in which no reservation of bandwidth is made.
Table 1
1.1 Hardware Overview
The method for flow bandwidth management in a packet oriented telecommunications network environment of the present invention is implemented in the
C programming language and is operational on a computer system such as shown in Fig. 1A. This invention may be implemented in a client-server environment, but a client- server environment is not essential. This figure shows a conventional client-server computer system which includes a server 20 and numerous clients, one of which is shown as client 25. The use of the term "server" is used in the context of the invention, wherein the server receives queries from (typically remote) clients, does substantially all the processing necessary to formulate responses to the queries, and provides these responses to the clients. However, server 20 may itself act in the capacity of a client when it accesses remote databases located at another node acting as a database server.
The hardware configurations are in general standard and will be described only briefly. In accordance with known practice, server 20 includes one or more processors 30 which communicate with a number of peripheral devices via a bus subsystem 32. These peripheral devices typically include a storage subsystem 35, comprised of a memory subsystem 35a and a file storage subsystem 35b holding computer programs (e.g., code or instructions) and data, a set of user interface input and output devices 37, and an interface to outside networks, which may employ Ethernet, Token Ring, ATM, IEEE 802.3, ITU X.25, Serial Link Internet Protocol (SLIP) or the public switched telephone network. This interface is shown schematically as a "Network
Interface" block 40. It is coupled to corresponding interface devices in client computers via a network connection 45.
Client 25 has the same general configuration, although typically with less storage and processing capability. Thus, while the client computer could be a terminal or a low-end personal computer, the server computer is generally a high-end workstation or mainframe, such as a SUN SPARC server. Corresponding elements and subsystems in the client computer are shown with corresponding, but primed, reference numerals. Bus subsystem 32 is shown schematically as a single bus, but a typical system has a number of buses such as a local bus and one or more expansion buses (e.g., ADB, SCSI, ISA, EISA, MCA, NuBus, or PCI), as well as serial and parallel ports.
Network connections are usually established through a device such as a network adapter on one of these expansion buses or a modem on a serial port. The client computer may be a desktop system or a portable system.
The user interacts with the system using interface devices 37' (or devices 37 in a standalone system). For example, client queries are entered via a keyboard, communicated to client processor 30', and thence to modem or network interface 40' over bus subsystem 32'. The query is then communicated to server 20 via network connection 45. Similarly, results of the query are communicated from the server to the client via network connection 45 for output on one of devices 37' (say a display or a printer), or may be stored on storage subsystem 35'.
Fig. IB is a functional diagram of a computer system such as that of Fig. 1A. Fig. IB depicts a server 20, and a representative client 25 of a plurality of clients which may interact with the server 20 via the Internet 45 or any other communications method. Blocks to the right of the server are indicative of the processing steps and functions which occur in the server's program and data storage indicated by blocks 35a and 35b in Fig. 1A. A TCP/IP "stack" 44 works in conjunction with Operating System 42 to communicate with processes over a network or serial connection attaching Server 20 to Internet 45. Web server software 46 executes concurrently and cooperatively with other processes in server 20 to make data objects 50 and 51 available to requesting clients. A Common Gateway Interface (CGI) script 55 enables information from user clients to be acted upon by web server 46, or other processes within server 20. Responses to client queries may be returned to the clients in the form of a Hypertext Markup Language (HTML) document outputs which are then communicated via Internet
45 back to the user.
Client 25 in Fig. IB possesses software implementing functional processes operatively disposed in its program and data storage as indicated by block 35a' in Fig. 1A. TCP/IP stack 44', works in conjunction with Operating System 42' to communicate with processes over a network or serial connection attaching Client 25 to Internet 45.
Software implementing the function of a web browser 46' executes concurrently and cooperatively with other processes in client 25 to make requests of server 20 for data objects 50 and 51. The user of the client may interact via the web browser 46' to make such queries of the server 20 via Internet 45 and to view responses from the server 20 via Internet 45 on the web browser 46' .
L2 Network Overview
Fig. 1C is illustrative of the internetworking of a plurality of clients such as client 25 of Figs. 1A and IB and a plurality of servers such as server 20 of Figs. 1A and IB as described herein above. In Fig. 1C, network 70 is an example of a Token
Ring or frame oriented network. Network 70 links host 71, such as an IBM RS6000 RISC workstation, which may be running the AIX operating system, to host 72, which is a personal computer, which may be running Windows 95, IBM OS/2 or a DOS operating system, and host 73, which may be an IBM AS/400 computer, which may be running the OS/400 operating system. Network 70 is internetworked to network 60 via a system gateway which is depicted here as router 75, but which may also be a gateway having a firewall or a network bridge. Network 60 is an example of an Ethernet network that interconnects host 61, which is a SPARC workstation, which may be running SUNOS operating system with host 62, which may be a Digital Equipment VAX6000 computer which may be running the VMS operating system.
Router 75 is a network access point (NAP) of network 70 and network 60. Router 75 employs a Token Ring adapter and Ethernet adapter. This enables router 75 to interface with the two heterogeneous networks. Router 75 is also aware of the Internetwork Protocols, such as ICMP ARP and RIP, which are described herein below. Fig. ID is illustrative of the constituents of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. The base layer of the TCP/IP protocol suite is the physical layer 80, which defines the mechanical, electrical, functional and procedural standards for the physical transmission of data over communications media, such as, for example, the network connection 45 of Fig. 1A. The physical layer may comprise electrical, mechanical or functional standards such as whether a network is packet switching or frame-switching; or whether a network is based on a Carrier Sense Multiple Access/Collision Detection (CSMA/CD) or a frame relay paradigm.
Overlying the physical layer is the data link layer 82. The data link layer provides the function and protocols to transfer data between network resources and to detect errors that may occur at the physical layer. Operating modes at the datalink layer comprise such standardized network topologies as IEEE 802.3 Ethernet, IEEE 802.5 Token Ring, ITU X.25, or serial (SLIP) protocols.
Network layer protocols 84 overlay the datalink layer and provide the means for establishing connections between networks. The standards of network layer protocols provide operational control procedures for internetworking communications and routing information through multiple heterogenous networks. Examples of network layer protocols are the Internet Protocol (IP) and the Internet Control Message Protocol
(ICMP). The Address Resolution Protocol (ARP) is used to correlate an Internet address and a Media Access Address (MAC) for a particular host. The Routing Information Protocol (RIP) is a dynamic routing protocol for passing routing information between hosts on networks. The Internet Control Message Protocol (ICMP) is an internal protocol for passing control messages between hosts on various networks. ICMP messages provide feedback about events in the network environment or can help determine if a path exists to a particular host in the network environment. The latter is called a "Ping". The Internet Protocol (IP) provides the basic mechanism for routing packets of information in the Internet. IP is a non-reliable communication protocol. It provides a "best efforts" delivery service and does not commit network resources to a particular transaction, nor does it perform retransmissions or give acknowledgments. The transport layer protocols 86 provide end-to-end transport services across multiple heterogenous networks. The User Datagram Protocol (UDP) provides a connectionless, datagram oriented service which provides a non-reliable delivery mechanism for streams of information. The Transmission Control Protocol (TCP) provides a reliable session-based service for delivery of sequenced packets of information across the Internet. TCP provides a connection oriented reliable mechanism for information delivery.
The session, or application layer 88 provides a list of network applications and utilities, a few of which are illustrated here. For example, File Transfer Protocol (FTP) is a standard TCP/IP protocol for transferring files from one machine to another. FTP clients establish sessions through TCP connections with FTP servers in order to obtain files. Telnet is a standard TCP/IP protocol for remote terminal connection. A
Telnet client acts as a terminal emulator and establishes a connection using TCP as the transport mechanism with a Telnet server. The Simple Network Management Protocol (SNMP) is a standard for managing TCP/IP networks. SNMP tasks, called "agents", monitor network status parameters and transmit these status parameters to SNMP tasks called "managers. " Managers track the status of associated networks. A Remote
Procedure Call (RPC) is a programming interface which enables programs to invoke remote functions on server machines. The Hypertext Transfer Protocol (HTTP) facilitates the transfer of data objects across networks via a system of uniform resource indicators (URI). The Hypertext Transfer Protocol is a simple protocol built on top of
Transmission Control Protocol (TCP). It is the mechanism which underlies the function of the World Wide Web. The HTTP provides a method for users to obtain data objects from various hosts acting as servers on the Internet. User requests for data objects are made by means of an HTTP request, such as a GET request. A GET request as depicted below is comprised of 1) an HTTP protocol version, such as "http:/1.0"; followed by 2) the full path of the data object; followed by 3) the name of the data object. In the GET request shown below, a request is being made for the data object with a path name of "/pub/" and a name of "MyData.html":
HTTP-Version GET /pub/MyData.html (1)
Processing of a GET request entails the establishing of an TCP/IP connection with the server named in the GET request and receipt from the server of the data object specified. After receiving and interpreting a request message, a server responds in the form of an HTTP RESPONSE message.
Response messages begin with a status line comprising a protocol version followed by a numeric Status Code and an associated textual Reason Phrase. These elements are separated by space characters. The format of a status line is depicted in line (2):
Status-Line = HTTP-Version Status-Code Reason-Phrase (2)
The status line always begins with a protocol version and status code, e.g., "HTTP/1.0
200 " . The status code element is a three digit integer result code of the attempt to understand and satisfy a prior request message. The reason phrase is intended to give a short textual description of the status code.
The first digit of the status code defines the class of response. There are five categories for the first digit. 1XX is an information response. It is not currently used.
2XX is a successful response, indicating that the action was successfully received, understood and accepted. 3XX is a redirection response, indicating that further action must be taken in order to complete the request. 4XX is a client error response. This indicates a bad syntax in the request. Finally, 5XX is a server error. This indicates that the server failed to fulfill an apparently valid request.
Particular formats of HTTP messages are described in Crocker, D., "Standard for the Format of ARPA Internet Text Messages", STD 11, RFC 822, UDEL, August 1982.
1.3 TCP Data Link Rate and Latency Period Determination
Techniques for determining data rates are more fully set forth in co-owned U.S. Patent Application Serial No. 08/762,828, entitled "Method for Rapid Data Rate Detection in a Packet Communication Environment Without Data Rate Supervision". Fig. IE depicts a two-dimensional timing diagram illustrative of one particular method for determining a data link rate and latency period from an exchange of packets between TCP endpoints. According to this method, the initial data packets are examined as they establish a connection. Parameters are developed from which round trip time (RTT) and maximum data rate can be determined. The serialization speed (SS) or data flow rate capacity of a link is given by the relation:
SS = m/T1 (3)
where:
m = number of bytes in the first data packet (data1)
T1 = the arrival time of the first data packet less the time of arrival of the ACK packet (Tdata1 - TACK)
Fig. IE shows a two dimensional timing diagram of the connection protocol of a remote HTTP request between a local TCP endpoint (server) and a remote TCP endpoint (client). The remote endpoint issues a request for connection in the form of a SYN packet in step A. The SYN packet takes a finite but unknown transit time to arrive at the local TCP endpoint. The local TCP endpoint responds by sending its own
SYN packet in step B. This SYN packet is of a known byte length and is issued at a known time, which becomes the reference time, "tbase. " After a brief latency, the remote TCP endpoint issues a standard ACK packet, whose length is likewise known, in step C, and then also issues the first data packet datal in step D. Time T, is computed immediately at the time of arrival of datal by examining the difference in arrival time of the received ACK packet and the datal packet. The value of m is extracted by examining the entire first packet to determine its length or by examining the packet length field, if any, in the header information of the packet. A first estimate of serialization speed SS is given by Equation (3). Serialization speed can be estimated immediately upon interchange of the first packets and used to make important strategic decisions about the nature and speed of the connection about to be established. An alternative method for determining a data link rate and latency period from an exchange of packets between TCP endpoints develops parameters from which round trip time (RTT) and maximum data rate can be determined. Values are obtained for serialization of n, the size (i.e., data length) of the SYN packet in response, plus the size (i.e., data length) of the ACK packet. Serialization time is determined according to equation (4) by dividing the value n by the transit time T0, or
SS(max possible) = n/T0 (4)
where:
n = number of bytes in the SYN packet plus the number of bytes in the ACK packet, and
T0 = the arrival time of the ACK packet less the tbase value.
The round trip time minus the serialization time is the round trip latency, Td.
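As a worked illustration of equation (3) and the latency relation above, the short C program below plugs in timings and packet sizes that are invented for illustration; none of the numbers come from the patent.

#include <stdio.h>

int main(void)
{
    /* Illustrative timings (seconds) and sizes (bytes); not from the patent. */
    double t_base  = 0.000;   /* local SYN sent: the reference time          */
    double t_ack   = 0.120;   /* remote ACK arrives                          */
    double t_data1 = 0.180;   /* first data packet (data1) arrives           */
    double m = 536.0;         /* bytes in data1                              */
    double n = 40.0 + 40.0;   /* bytes in the SYN plus bytes in the ACK      */

    double t1 = t_data1 - t_ack;     /* serialization interval of data1      */
    double ss = m / t1;              /* equation (3): serialization speed    */
    double t0 = t_ack - t_base;      /* ACK arrival relative to t_base       */
    double latency = t0 - n / ss;    /* round trip time less serialization   */

    printf("SS = %.0f bytes/s, round trip latency = %.3f s\n", ss, latency);
    return 0;
}

Equation (4) follows the same arithmetic, dividing n by T0 to bound the maximum possible serialization speed.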
2.0 Traffic Class
A traffic class is broadly defined as traffic between one or more clients and one or more servers. A single instance of a traffic class is called a flow. Traffic classes have the property, or class attribute, of being directional, i.e. all traffic flowing inbound will belong to different traffic classes and be managed separately from traffic flowing outbound. The directional property enables asymmetric classification and control of traffic, i.e., inbound and outbound flows belong to different classes which may be managed independent of one another.
Traffic classes may be defined at any level of the TCP/IP protocol. For example, at the IP level, traffic may be defined as only those flows between a set of inside and outside IP addresses or domain names. An example of such a low level traffic class definition would be all traffic between my network and other corporate offices throughout the Internet. At the application level, traffic classes may be defined for specific URIs within a web server. Traffic classes may be defined having "Web aware" class attributes. For example, a traffic class could be created such as all URIs matching "*.html" for all servers, or all URIs matching "*.gif" for server X, or for access to server Y with URI "/sales/*" from client Z, wherein '*' is a wildcard character, i.e., a character which matches all other character combinations. Traffic class attributes left unspecified will simply match any value for that attribute. For example, a traffic class that accesses data objects within a certain directory path of a web server is specified by a URI of the directory path to be managed, e.g. "/sales/*".
2.1 Classifying Traffic The present invention provides a method for classifying traffic according to a definable set of classification attributes selectable by the manager, including selecting a subset of traffic of interest to be classified. The invention provides the ability to classify and search traffic based upon multiple orthogonal classification attributes. Traffic class membership may be hierarchical. Thus, a flow may be classified by a series of steps through a traffic class tree, with the last step (i.e., at the leaves on the classification tree) mapping the flow to a policy. The policy is a rule of assignment for flows. For example, the first step in classification may be to classify a flow as web traffic, the next may further classify this flow as belonging to server X, and the final classification may be a policy for URI "*.avi". A classification tree is a data structure representing the hierarchical aspect of traffic class relationships. Each node of the classification tree represents a class, and has a traffic specification, i.e., a set of attributes or characteristics describing the traffic, and a mask associated with it. Leaf nodes of the classification tree contain policies. According to a particular embodiment, the classification process checks at each level if the flow being classified matches the attributes of a given traffic class. If it does, processing continues down to the links associated with that node in the tree. If it does not, the class at the level that matches determines the policy for the flow being classified. If no policy specific match is found, the flow is assigned the default policy.
In a preferable embodiment, the classification tree is an N-ary tree with its nodes ordered by specificity. For example, in classifying a particular flow in a classification tree ordered first by organizational departments, the attributes of the flow are compared with the traffic specification in each successive department node and if no match is found, then processing proceeds to the next subsequent department node. If no match is found, then the final compare is a default "match all" category. If, however, a match is found, then classification moves to the children of this department node. The child nodes may be ordered by an orthogonal paradigm such as, for example, "service type. " Matching proceeds according to the order of specificity in the child nodes. Processing proceeds in this manner, traversing downward and from left to right in the classification tree, searching the plurality of orthogonal paradigms. Key to implementing this a hierarchy is that the nodes are arranged in decreasing order of specificity. This permits search to find the most specific class for the traffic before more general.
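The traversal just described amounts to a depth-first search of an N-ary tree whose children are kept in order of specificity, falling back to a parent's policy when no descendant supplies one. The C sketch below is a simplified rendering of that search; the tclass_check name follows the code comments reproduced earlier in this document, but the structure layout and the abstract match predicate are assumptions for illustration.

#include <stddef.h>

typedef struct policy POLICY;                  /* opaque in this sketch      */
typedef struct flow   FLOW;
typedef struct tclass TCLASS;

struct tclass {
    TCLASS *children;   /* first child, kept in order of decreasing specificity */
    TCLASS *sibling;    /* next sibling at the same level                       */
    POLICY *policy;     /* non-NULL at nodes that carry a policy                */
    int   (*match)(const TCLASS *tc, const FLOW *f);   /* traffic spec check    */
};

/* Return the policy of the most specific class matching the flow, or the
 * policy inherited from the deepest matching ancestor when no descendant
 * carries one.  A default "match all" class at the root ensures that some
 * policy is always found. */
static POLICY *tclass_check(const TCLASS *tc, const FLOW *f)
{
    for (const TCLASS *child = tc->children; child != NULL; child = child->sibling) {
        if (child->match(child, f)) {
            POLICY *p = tclass_check(child, f);   /* recurse into the match   */
            if (p != NULL)
                return p;
            break;          /* matching subtree had no policy: backtrack here */
        }
    }
    return tc->policy;      /* leaf policy, or inherited from this node       */
}

Keeping siblings in decreasing order of specificity is what lets the first match terminate the scan at each level.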
Table 2 depicts components from which Traffic classes may be built. Note that the orientation of the server (inside or outside) is specified. And as noted above, any traffic class component may be unspecified, i.e. set to match any value.
Traffic Class Components
Client Side Server Side
IP Address/Domain Name IP Address/Domain Name
TCP or UDP Service, e.g. WWW, FTP, RealAudio, etc.
URI for Web Service, e.g. "*.html", "*.gif", "/sales/*", etc.
Table 2
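Since any unspecified component matches every value, a traffic specification can be represented with optional fields, where NULL stands for "match any". The C sketch below uses invented field names and treats the URI component as a simple prefix, standing in for wildcard patterns such as "/sales/*"; it is an illustration, not the patent's matching code.

#include <string.h>
#include <stdbool.h>
#include <stddef.h>

/* One traffic specification; NULL fields are wildcards matching any value. */
typedef struct {
    const char *client_addr;   /* client IP address or domain name          */
    const char *server_addr;   /* server IP address or domain name          */
    const char *service;       /* TCP or UDP service, e.g. "WWW", "FTP"     */
    const char *uri_prefix;    /* URI prefix, e.g. "/sales/"                */
} TSPEC;

static bool any_or_equal(const char *spec, const char *value)
{
    return spec == NULL || (value != NULL && strcmp(spec, value) == 0);
}

static bool any_or_prefix(const char *spec, const char *value)
{
    return spec == NULL ||
           (value != NULL && strncmp(spec, value, strlen(spec)) == 0);
}

/* True when every specified component of the tspec matches the flow.       */
static bool tspec_matches(const TSPEC *ts, const char *client,
                          const char *server, const char *service,
                          const char *uri)
{
    return any_or_equal(ts->client_addr, client) &&
           any_or_equal(ts->server_addr, server) &&
           any_or_equal(ts->service, service) &&
           any_or_prefix(ts->uri_prefix, uri);
}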
Figs. 2 A and 2B depict representative allocations of bandwidth made by a hypothetical network manager as an example. In Fig. 2A, the network manager has decided to divide her network resources first by allocating bandwidth between Departments A and B. Fig 2A shows the resulting classification tree, in which
Department A bandwidth resources 202 and Department B bandwidth resources 204 each have their own nodes representing a specific traffic class for that department. Each traffic class may have a policy attribute associated with it. For example, in Fig. 2A, the Department A resources node 202 has the policy attribute Inside Host Subnet A associated with it. Next, the network manager has chosen to divide the bandwidth resources of Department A among two applications. She allocates an FTP traffic class 206 and a World Wide Web server traffic class 208. Each of these nodes may have a separate policy attribute associated with them. For example, in Fig. 2A, the FTP node 206 for has an attribute Outside port 20 associated with it. Similarly, the network manager has chosen to divide network bandwidth resources of Department B into an FTP server traffic class 210 and a World Wide Web server traffic class 212. Each may have their own respective policies.
Fig. 2B shows a second example, wherein the network manager has chosen to first divide network bandwidth resource between web traffic and TCP traffic. She creates three traffic nodes, a web traffic node 220, a TCP traffic node 224 and a default node 225. Next, she divides the web traffic among two organizational departments by creating a Department A node 226, and a Department B node 228. Each may have its own associated policy. Similarly, she divides TCP network bandwidth into separate traffic classes by creating a Department A node 230 and a Department B node 232. Each represents a separate traffic class which may have its own policy.
All traffic which does not match any user specified traffic class falls into an automatically created default traffic class which has a default policy. In a preferable embodiment, the default policy treats default class flows as unreserved traffic at the default (medium) priority. In Fig 2A, the default category is depicted by a default node 205, and in Fig. 2B, the default category is depicted by a default node 225.
2.2 Creating Policies
2.2.1 Reserved vs. Unreserved
Network managers create policies in order to assign service levels to traffic classes which are of particular interest. Service levels may be either reserved or unreserved. Reserved service is useful in instances where traffic 'bursts', i.e., sends chunks of data, or in interactive applications, which require a minimum bandwidth in order to function properly.
Reserved service is defined in terms of scaled rate, unreserved service is defined in terms of priority. Allocation of bandwidth resources by scaled rate is accomplished in a process known as "speed scaling" wherein the speed of the underlying network link upon which the flow is carried is detected and available bandwidth is allocated based upon this detected speed. Allocation of bandwidth resources for unreserved service is priority based. The total bandwidth resources for unreserved service demanded by all flows is satisfied according to a prioritized scheme, in which highest priorities are satisfied first, followed by successively lower priority applications. Unreserved service allocations are based upon the available bandwidth after reserved service demands have been satisfied.
Traffic may be a combination of reserved and unreserved services. For example, web traffic may have a service level defined at a web server for "*.html" to designate these types of files as traffic allocated unreserved service, as these are typically small files. In the same web server, "*.gif" files may be designated as a separate traffic class receiving reserved service. TCP connections and UDP sessions may be defined for reserved service.
2.2.2 Reserved Service: GIR vs. EIR
Reserved service may have both guaranteed information rate (GIR) and excess information rate (EIR) components. Guaranteed information rate represents a commitment of bandwidth resources to maintain a specified rate. Excess information rate is an allocation of bandwidth resources on an "as available" basis. Guaranteed information rate prevents evident lack of progress situations, such as stuck progress indicators, from occurring. Excess rate is allocated from available bandwidth, i.e. that which has not been consumed by providing guaranteed rate service. As flows demand excess information rate at different priority levels, their demands are satisfied in order of priority level. For example, consider an application having ten voice over IP sessions, each session requiring a minimum of eight kilobits of bandwidth rate which must be guaranteed and up to eight kilobits of additional bandwidth rate which may be used from time to time. Each of the ten sessions would be allocated reserved service comprised of eight kilobits of guaranteed information rate. Then, from the remaining available bandwidth resources, eight kilobits of excess information rate is allocated to each session based upon the priority of each session, until either all demands have been satisfied or bandwidth resources are completely consumed.
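The ten-session example can be followed arithmetically: 10 x 8 kbps of GIR is committed first, and the remaining capacity is handed out as EIR in priority order, 8 kbps at a time, until it runs out. The C sketch below works through that arithmetic; the 200 kbps link size and the per-session priorities are invented for illustration.

#include <stdio.h>

#define SESSIONS 10
#define GIR_BPS  8000                     /* guaranteed per session          */
#define EIR_BPS  8000                     /* excess demanded per session     */

int main(void)
{
    long link_bps = 200000;                         /* illustrative capacity */
    int  priority[SESSIONS] = { 7, 7, 7, 6, 6, 5, 5, 4, 3, 2 };  /* invented */
    long eir_given[SESSIONS] = { 0 };

    /* Commit guaranteed rate first; an admissions policy would be invoked
     * instead if this commitment could not be satisfied.                    */
    long committed   = (long)SESSIONS * GIR_BPS;
    long excess_pool = link_bps - committed;

    /* Satisfy excess demand from the highest priority level downward.       */
    for (int level = 7; level >= 0 && excess_pool > 0; level--) {
        for (int i = 0; i < SESSIONS && excess_pool > 0; i++) {
            if (priority[i] != level)
                continue;
            long grant = excess_pool < EIR_BPS ? excess_pool : EIR_BPS;
            eir_given[i] = grant;
            excess_pool -= grant;
        }
    }

    for (int i = 0; i < SESSIONS; i++)
        printf("session %d: GIR %d bps, EIR %ld bps (priority %d)\n",
               i, GIR_BPS, eir_given[i], priority[i]);
    printf("unallocated excess: %ld bps\n", excess_pool);
    return 0;
}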
When a guaranteed information rate cannot be provided to a requesting flow due to insufficient bandwidth resources, an admissions policy is invoked. The admissions policy may refuse a connection, display an error message or a combination of both.
Admissions policies prevent service 'brown outs' which would otherwise occur if guaranteed information rate demand exceeds available bandwidth resources. By contrast, excess information rate requirements are not guaranteed. If a flow's request for excess information rate is not able to be satisfied from the available bandwidth resources, it is simply not met. In the previous example, if insufficient bandwidth existed in order to accommodate the ten voice over IP sessions, an admissions policy would be invoked.
Excess information rate (EIR) is redistributed each time a new flow is created, an existing flow is terminated, or a substantial change in demand for unreserved priority level occurs. By contrast, Guaranteed information rate (GIR) is allocated only at the time of creation of a flow and freed when the flow terminates.
In a preferable embodiment, reserved TCP service levels may be enforced by TCP Rate Control for both inbound and outbound traffic. TCP Rate Control is a technique well known to persons of ordinary skill in the art which scales reserved flows as excess rate is available without incurring retransmission.
2.3 Isolating Bandwidth Resources for Traffic Classes
Bandwidth resources may be dedicated to certain traffic classes. There are two degrees of isolation. Hard isolation creates an entirely separate logical channel for designated traffic classes. For example, department A and department B could both be hard isolated, setting up two logical .75 Mbps channels on a Tl access link, one for each department. With hard isolation, unused bandwidth in either channel would not be shared with the other department's traffic. Hard isolation involves allocating arbitrary subsets of access link bandwidth for any traffic class or family of traffic classes. It completely protects the traffic belonging to the respective class or classes, but this protection comes at the cost of forsaken opportunities to share unused bandwidth.
Soft isolation allows sharing of unused bandwidth, but limits the amount of guaranteed bandwidth that can be allocated to any particular traffic class at any one time.
This provides a basic level of fairness between traffic classes which have corresponding guaranteed service policies. But this fairness does not come at the cost of efficiency gains accrued from sharing available bandwidth.
3.0 System Function
3.1 Building a Traffic Classification Tree
Fig. 2C depicts a flowchart 203 showing the process steps in building a classification tree, such as classification tree 201 of Fig. 2 A, by adding traffic class nodes, such as traffic class node 206, in order of specificity of classes. In a step 240, a new tclass node is allocated for the traffic class which is to be added to the classification tree. Next, in a step 242, a traffic specification is inserted for the new traffic class node, allocated in step 240. Then, in a step 244, the new tclass node is inserted into the classification tree. Next, in a step 246, bandwidth resources allocated based upon the traffic class hierarchy is redistributed to reflect the new hierarchy.
Fig. 2D depicts a flowchart 205 showing the component steps of traffic specification creation step 242 of Fig. 2C. The processing steps of flowchart 205 insert a traffic specification, or tspec, into the traffic class allocated in step 240. In a step 250, a current tspec is initialized to the first tspec in the tclass. Next, in a decisional step 252, a determination is made whether the new tspec is less specific than the current tspec, initialized in step 250. If this is not the case, then in a step 258, the new tspec is inserted after the current tspec, and processing returns. Otherwise, in a decisional step 254, a determination is made whether this is the final tspec in the traffic class. If this is so, then in step 258, the new tspec is inserted after the last (current) tspec. Otherwise, in a step 256, the next tspec in the traffic class is examined by repeating processing steps 252, 254, 256 and 258.
Fig. 2E depicts a flowchart 207 showing the component steps of classification tree insertion step 244 of Fig. 2C. The processing steps of flowchart 207 insert the traffic class, or tclass, allocated in step 240 into the classification tree. In a step 260, a current sibling is initialized to the first sibling in the parent tclass. Next, in a decisional step 262, a determination is made whether the new tclass is less specific than the current sibling, initialized in step 260. If this is not the case, then in a step 268, the new tclass is inserted after the current sibling, and processing returns. Otherwise, in a decisional step 264, a determination is made whether this is the final sibling in the parent tclass. If this is so, then in step 268, the new tclass is inserted after the last (current) tclass. Otherwise, in a step 266, the next sibling in the parent traffic class is examined by repeating processing steps 262, 264, 266 and 268.
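Flowcharts 205 and 207 are two instances of the same operation: scan an ordered list and splice the new element in at the point where the scan stops. The C sketch below transcribes those steps for a singly linked list; the node layout and the integer stand-in for a specificity measure are assumptions for illustration.

#include <stddef.h>
#include <stdbool.h>

typedef struct spec_node {
    struct spec_node *next;
    int specificity;              /* illustrative stand-in for a real measure */
} SPEC_NODE;

static bool less_specific(const SPEC_NODE *a, const SPEC_NODE *b)
{
    return a->specificity < b->specificity;
}

/* Insert 'new_node' into the list headed by 'first' (assumed non-NULL),
 * mirroring flowchart 205: advance while the new element is less specific
 * than the current one (steps 252/254/256), then insert it after the element
 * where the scan stops, or after the final element (step 258).             */
static void spec_insert(SPEC_NODE *first, SPEC_NODE *new_node)
{
    SPEC_NODE *cur = first;                               /* step 250 */

    while (less_specific(new_node, cur) && cur->next != NULL)
        cur = cur->next;                                  /* step 256 */

    new_node->next = cur->next;                           /* step 258 */
    cur->next = new_node;
}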
3.2 Allocating Bandwidth based upon Policy
Fig. 3 depicts a functional block diagram 301 of component processes in accordance with a particular embodiment of the present invention. Processing of a new flow, such as new flow 300 as described by a flowchart 501 in Fig. 5 A, begins with a TCP autobaud component 302, which implements the automatic detection of flow speeds as depicted in Fig. IE. TCP autobaud component 302 determines values for selectable information such as flow data rate of new flow 300. A classifier 304 determines the class for the new flow using the information determined by TCP autobaud 302. Having classified the traffic, the classifier next determines an appropriate policy for this particular flow from a classification tree (not shown). Based upon the policy, the classifier then determines a service level for the new flow. A bandwidth manager 306 uses the policy determined by classifier 304 in order to allocate bandwidth according to the service level prescribed by the policy.
Fig. 4A depicts data structures in a particular embodiment according to the invention. The bandwidth management aspect of the present invention is performed between inside clients and servers and the access link. Outside clients and servers are on the other side of the access link. In the embodiment of Fig. 4A, traffic is directional. Inbound (flowing from an outside host to an inside host) traffic is managed entirely separately from outbound traffic.
A connection hash table 402 facilitates indexing into a plurality of TCP connections, each TCP connection having an associated flow, such as flows 300, 299 and 298 of Fig. 3. A plurality of TCP control blocks 404 are used to track connection- related information for each TCP connection. Host-related information is tracked using a plurality of host information control blocks 406, indexed by a host hash table 408. Information about particular hosts corresponding to TCP connections is tracked using the TCP control block 404.
Each TCP connection represented by TCP control block 404 has associated with it a gear 410 representing an outbound flow and a gear 411 representing an inbound flow. Each gear is an internal data object associated with a particular TCP connection or UDP session. Gears are protocol independent and contain rate-based information. Since network resources are managed separately for inbound and outbound traffic in this particular embodiment, there is a bandwidth pool 412 for outbound traffic and a bandwidth pool 413 for inbound traffic. Bandwidth pools represent collections of available network resources managed by bandwidth manager 306. Bandwidth pools are arbitrarily divided into a partition 414 in the outbound path, and a partition 415 in the inbound path. Each partition represents an arbitrarily-sized portion of available network resource. Gears become associated with particular partitions during the beginning of flow management for the new flow in step 518 of Fig. 5 A. A policy 416 in the outbound path and a policy 417 in the inbound path become associated with partition 414 in the outbound path and partition 415 in the inbound path during the beginning of management of a new flow, as shown in Fig. 5 A. These policies are associated with particular flows according to the traffic class that classifier 304 assigns to the particular flow.
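Stripped of detail, these relationships can be pictured as a handful of linked structures: a connection entry located through the connection hash table points at per-direction gears, and each gear points at the partition, within a directional bandwidth pool, from which its rates are drawn. The C sketch below uses invented field names and sizes and is only meant to show the shape of the data, not the embodiment's actual layout.

#include <stdint.h>

#define CONN_HASH_SIZE 1024

typedef struct partition {
    struct partition *parent;      /* partitions subdivide a bandwidth pool  */
    uint32_t size_bps;             /* arbitrary share of the access link     */
    uint32_t gir_in_use_bps;       /* guaranteed rate currently committed    */
} PARTITION;

typedef struct bw_pool {           /* one per direction: inbound, outbound   */
    PARTITION *partitions;
    uint32_t   link_bps;
} BW_POOL;

typedef struct gear {              /* rate state for one direction of a flow */
    PARTITION *partition;
    uint32_t   gir_bps;            /* guaranteed information rate            */
    uint32_t   eir_target_bps;     /* current excess-rate target             */
    int        priority;           /* priority used when redistributing EIR  */
} GEAR;

typedef struct tcp_cb {            /* connection-related state               */
    struct tcp_cb *hash_next;
    GEAR outbound;                 /* cf. gear 410                           */
    GEAR inbound;                  /* cf. gear 411                           */
} TCP_CB;

static TCP_CB *conn_hash[CONN_HASH_SIZE];   /* cf. connection hash table 402 */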
Fig. 5A depicts flowchart 501 of the steps taken in initiating a new flow in accordance with a particular embodiment of the invention. In a step 502, TCP autobaud component 302 detects a TCP data rate of new flow 300. In a step 504, classifier 304 classifies the new flow to determine a policy. Classification is described in greater detail in Fig. 5F herein below. In a step 506, based upon the policy determined in step 504, a guaranteed information rate (GIR) and an excess information rate (EIR) are determined and speed scaled to the incoming TCP data rate detected in step 502. Speed scaling is described in greater detail in Figs. 5G and 5H herein below. In a step 508, bandwidth manager 306 allocates guaranteed information rate resources and excess information rate resources to accommodate an initial target rate for the new flow. In a decisional step 510, bandwidth manager 306 determines whether a partition limit would be exceeded by the allocation determined in step 508. If this is the case, then in a step 512, an admissions policy is invoked. Otherwise, in a step 514, the excess information rate resources demanded by new flow 300 are added to the total excess information rate resources demanded for this bandwidth pool. In a step 516, excess information rate resources are redistributed with respect to all flows within the partition if demand has changed substantially since the last redistribution. (Fig. 5E depicts a flowchart 509 showing the steps for reapplying demand satisfaction parameters to all traffic flows in the bandwidth pool, in which a new target rate and new total excess information rate resources are calculated.) Finally, in a step 518, the new flow is associated with a partition.
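The admission arithmetic of steps 508 through 514 can be illustrated with a minimal, self-contained sketch. The type and function names below (PartitionSketch, admitFlow, and so on) are hypothetical simplifications introduced here only for illustration; they are not the GEARSHIFT, BW_POOL, or bwp* objects of the pseudo code included later in this document.

#include <stdbool.h>

/* Illustrative stand-ins for the objects of Fig. 5A; all names are hypothetical. */
typedef struct {
    long limit;          /* partition limit, in bits per second       */
    long girAllocated;   /* guaranteed rate currently reserved        */
    long eirDemand;      /* total excess-rate demand over all flows   */
} PartitionSketch;

typedef struct {
    long gir;            /* speed-scaled guaranteed rate (step 506)   */
    long eir;            /* speed-scaled excess rate (step 506)       */
} FlowRateSketch;

/* Sketch of the admission arithmetic of steps 508-514: reserve GIR, add the
 * flow's EIR demand to the pool total, and report whether the partition
 * limit would be exceeded (in which case the admission policy of step 512
 * applies instead). */
static bool admitFlow(PartitionSketch *p, const FlowRateSketch *f)
{
    if (p->girAllocated + f->gir > p->limit)
        return false;                 /* step 510: partition limit exceeded */
    p->girAllocated += f->gir;        /* step 508: reserve guaranteed rate  */
    p->eirDemand    += f->eir;        /* step 514: add to total EIR demand  */
    return true;                      /* steps 516-518 then redistribute EIR
                                         and attach the flow to the partition */
}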
Fig. 5B depicts a flowchart 503 showing processing of a packet belonging to an existing flow, such as a forward flow packet 299. In a decisional step 520, a scheduler 308, using policy information determined at flow initiation, filters TCP acknowledgments, which are scheduled for output in a step 522. Output times are scheduled on millisecond boundaries, based upon the target data rate computed in step 508 of Fig. 5A. In a step 524, the remaining packets are delivered immediately to TCP, which performs just-in-time delivery.
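Because acknowledgments are scheduled on millisecond boundaries from the flow's target rate, the spacing between successive acknowledgments follows directly from the amount of data acknowledged and that target rate. The helper below is a self-contained illustration of that arithmetic only; the function name and the round-up choice are assumptions, not part of the patent's pseudo code.

#include <stdint.h>

/* Illustrative only: milliseconds to hold the next TCP acknowledgment so that
 * the sender's data clocks out at approximately targetBps, rounded up to the
 * next millisecond boundary. */
static uint32_t ackSpacingMs(uint32_t bytesAcked, uint32_t targetBps)
{
    if (targetBps == 0)
        return 0;                              /* unmanaged flow: release at once */
    uint64_t bits = (uint64_t)bytesAcked * 8u;
    return (uint32_t)((bits * 1000u + targetBps - 1) / targetBps);
}

For example, acknowledging a 1460-byte segment against a 100 kbit/s target rate yields a spacing of about 117 ms.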
Fig. 5C depicts a flowchart 505 showing processing by bandwidth manager 306 of a packet at the termination of a flow, such as an end flow packet 298. In a step 530, guaranteed information rate resources are restored. In a step 532, excess information rate resources demanded by the terminating flow 298 are subtracted from the total excess information rate resources demanded. In a step 534, excess information rate resources are redistributed as described in Fig. 5D. Finally, in a step 536, the flow is removed from its partition. Fig. 4B depicts specific data structures used in redistributing unreserved service resources among multiple flows in a particular partition. Input rates of particular flows, such as new flow 300, are measured by TCP autobaud component 302 and used to determine an Extrapolated Unreserved Demand (EUD) 424. Flows requiring excess information rate resources, such as flows 299 and 298, having gears 410 and 411 (not shown), respectively, have associated with them individual flow demands 420 and
421. The individual flow demands of all flows in a particular partition are added together to form an Aggregate Rate Demand (ARD) 422 at each priority level. The aggregate rate demand is added to the extrapolated unreserved demand 424 to arrive at a total instantaneous demand 426. The total instantaneous demand is satisfied from the highest priority, priority 7, downward based upon the available excess information rate resources in the partition 414. This available rate is information rate which has not been used to satisfy guaranteed information rate demands. From the available excess information rate resources, a satisfaction percentage vector 428 is computed, giving the percentage of the demand at each priority level in the total demand vector 426 that may be satisfied by the available excess information rate resources in partition 414. Individual flow demand vectors 420 and 421 are each multiplied by the satisfaction percentage vector 428 for each priority in order to arrive at a target rate, which is incorporated in gears 410 and 411 corresponding to each flow. The satisfaction percentage vector 428 is also applied to the Extrapolated Unreserved Demand (EUD) 424 to determine unreserved targets.
3.3 Reconciling Needs for Bandwidth
Fig. 5D depicts flowchart 507 of the processing steps to redistribute excess information rate resources with respect to reserved service flows within a given partition. In a step 540, a demand satisfaction metric is computed for the bandwidth pool being redistributed. In a decisional step 542, a determination is made whether the demand satisfaction metric computed in step 540 has changed substantially since the last time demand was redistributed. If this is the case, then in a decisional step 544, a loop is entered for each gear associated with the partition owning the bandwidth pool being redistributed. In a step 546, demand satisfaction parameters are reapplied to the traffic flow associated with each gear, and a new target rate and new total excess information rate resources are calculated. Processing continues until, in decisional step 544, it is determined that all gears have been processed. Fig. 5E depicts flowchart 509 of the processing steps for reconciling bandwidth among a plurality of flows requiring reserved service and unreserved service resources. For example, partition 414 of Fig. 4B has unreserved service flow 300 and reserved service flows 299 and 298. In a decisional step 550, a loop is formed with a step 554, in which the individual flow demands of reserved service flows, such as flows 299 and 298 in partition 414, are added together to form an aggregate rate demand (ARD) 422. Next, in a step 552, an extrapolated unreserved demand (EUD) 424 is determined from the input rate of unreserved service traffic 300 in partition 414. In a preferred embodiment, the EUD is inflated in order to promote growth in traffic; this accommodates the elastic nature of TCP traffic. Next, in a step 556, the EUD determined in step 552 and the ARD determined in step 554 are combined to form an instantaneous total demand (TD) 426 for the entire partition 414. The total instantaneous demand is satisfied from the highest priority level, level 7, downward based upon the available unreserved resources in the partition 414. In a step 558, a satisfaction percentage vector (SP) 428 is computed for the partition from the total demand vector (TD) 426 determined in step 556. The satisfaction percentage indicates the percentage of the demand at each priority level that can be satisfied from the unreserved resources in the partition. Next, in a step 560, a target rate for reserved rate-based flows, such as flows 299 and 298, is computed. Finally, in a step 562, a target rate for unreserved priority-based flows, such as flow 300, is computed.
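The reconciliation arithmetic of steps 550 through 562 can be summarized in a short, self-contained sketch: reserved demand (ARD) and extrapolated unreserved demand (EUD) are summed per priority level into a total demand, which is satisfied from priority 7 downward until the partition's available unreserved rate is exhausted, so that at most one level is partially satisfied. The names and fixed-size arrays below are illustrative simplifications, not the EIR_VECTOR machinery of the pseudo code later in this document.

#define SKETCH_LEVELS 8   /* priority levels 0 (lowest) .. 7 (highest) */

/* Illustrative sketch: fill satPct[level] with the percentage (0..100) of the
 * demand at each priority level that the partition's available unreserved rate
 * can cover.  ard[] is the aggregate reserved-flow demand and eud[] the
 * extrapolated unreserved demand, both in bits per second. */
static void satisfactionVector(const long ard[SKETCH_LEVELS],
                               const long eud[SKETCH_LEVELS],
                               long available, int satPct[SKETCH_LEVELS])
{
    int level;
    for (level = SKETCH_LEVELS - 1; level >= 0; level--) {
        long demand = ard[level] + eud[level];
        if (demand <= 0 || available <= 0) {
            satPct[level] = 0;                              /* nothing left for this level */
        } else if (available >= demand) {
            satPct[level] = 100;                            /* fully satisfied */
            available -= demand;
        } else {
            satPct[level] = (int)((available * 100) / demand); /* partially satisfied */
            available = 0;                                  /* lower levels get nothing */
        }
    }
}

A reserved flow's target rate (step 560) is then its own demand at each level scaled by satPct, and the unreserved target (step 562) follows from applying the same percentages to the EUD.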
3.4 Searching a Classification Tree
Fig. 5F depicts a flowchart 511 showing the component steps of traffic classification step 504 of Fig. 5 A. The processing steps of flowchart 511 determine a class and a policy for a flow, such as new flow 300, by traversing a classification tree such as the classification tree 201 in Fig. 2 A. Processing begins with a first node in the classification tree 201, such as department A node 202. In a step 570, a traffic specification of a child node, such as FTP node 206 is compared with a flow specification of the new flow 300. In a decisional step 572, if a match is discovered, then in a step 574, the processing of flowchart 511 is applied to the child node 206 recursively. Otherwise, if child node 206 does not match the new flow, then in a decisional step 576, processing determines whether any sibling nodes exist for the child node 206. If processing detects the existence of a sibling of child node 206, such as child node 208, then processing continues on node 208 with step 570. Otherwise, if there are no further child nodes for node 202, then in a decisional step 578, a determination is made whether the node is a leaf node having a policy. If no policy exists, then in a step 580, processing backtracks to a parent node and looks for a policy associated with the parent node to apply to the new flow 300. Otherwise, if a policy is associated with the node 202, then in a step 582, the policy is associated with the new flow 300.
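The traversal of flowchart 511 amounts to a depth-first search with a fallback to an ancestor's policy. The sketch below is a self-contained simplification: the node type, the matches callback, and the classifySketch function are hypothetical stand-ins, not the TCLASS structure or the tclassCheck routine given in the pseudo code later in this document (which additionally searches inheritable policies among a parent's remaining siblings).

#include <stddef.h>

/* Illustrative simplification of a classification tree node. */
typedef struct classNode {
    struct classNode *parent, *child, *sibling;
    int (*matches)(const struct classNode *node, const void *flowSpec);
    const void *policy;                 /* NULL for nodes without a policy */
} ClassNode;

/* Depth-first search mirroring flowchart 511: compare the flow against each
 * child in turn (steps 570-576), recurse on a match (step 574), and if no
 * policy is found at the matched node, backtrack toward the root looking for
 * one (steps 578-582). */
static const void *classifySketch(const ClassNode *node, const void *flowSpec)
{
    const ClassNode *c;
    for (c = node->child; c != NULL; c = c->sibling) {
        if (c->matches(c, flowSpec)) {
            const void *p = classifySketch(c, flowSpec);
            if (p != NULL)
                return p;
        }
    }
    for (; node != NULL; node = node->parent) {
        if (node->policy != NULL)
            return node->policy;
    }
    return NULL;                        /* no applicable policy found */
}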
3.5 Speed Scaling
Fig. 5G depicts a flowchart 513 showing the component steps of speed scaling step 506 of Fig. 5A. The processing steps of flowchart 513 determine an acceptable allocation of bandwidth to reserved service flows, i.e., GIR and EIR, by allocating rate for each gear, such as gears 410 and 411, based upon individual flow demands, base limits and total limits. In a step 582, a scaled rate for GIR is set based upon a connection speed detected by TCP autobaud component 302 for a particular flow, such as flow 299, having a gear, such as gear 410. Then, in a step 584, a scaled rate for EIR is set based upon a connection speed detected by TCP autobaud component 302 for a particular flow, such as flow 298, having a gear, such as gear 411. Next, in a step 586, a limit is computed from either the available EIR remaining after the allocation to flow 299 in step 584, or from a total limit, if such a limit is specified. Then, in a step 588, a determination is made whether any extra EIR resources remain after the limit computed in step 586 has been satisfied. Then, in a step 590, the limits computed in step 586 are allocated to flows, such as flow 299 and flow 298, in order of descending priority level. In a decisional step 592, the processing of steps 582 through 592 is applied to subsequent gears, if any.
Fig. 5H depicts a flowchart 515 showing the component steps of scaled rate setting steps 582 and 584 of Fig. 5G. The processing steps of flowchart 515 determine whether the rate of the underlying link detected by TCP autobaud 302 exceeds limits imposed on the network. In a decisional step 592, a determination is made whether the detected speed of a flow is less than a base limit minimum. If this is so, then in a step 594, the base limit is returned as the scaled rate. Otherwise, in a decisional step 596, a determination is made whether the total rate of all flows is greater than a maximum limit. If this is so, then in a step 598, the maximum limit is returned as the scaled rate.
Otherwise, in a decisional step 600, a determination is made whether the detected speed of a flow is greater than the total rate of all flows. If this is so, then in a step 602, the maximum limit is returned as the scaled rate. Otherwise, in a step 604, a percentage of the total rate, represented by the maximum limit, is computed. Then, in a step 606, the percentage computed in step 604 is applied to the total rate available, i.e., the maximum limit minus the base (minimum) limit. The result is returned as the scaled rate.
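The scaled-rate decision of Fig. 5H interpolates between the base (minimum) limit and the maximum limit according to the detected connection speed. The routine below is a condensed, self-contained restatement of that decision, modeled on the scaleRate routine in the scale.c pseudo code later in this document; floating point is used here only for readability, and folding the cap checks of steps 596 through 602 into a single comparison is a simplification.

/* Illustrative restatement of the Fig. 5H speed-scaling decision.  Inputs are
 * the detected connection speed, the base (minimum) limit, the per-flow
 * maximum limit, and the total rate available, all in bits per second. */
static double scaledRateSketch(double detectedBps, double base,
                               double limit, double total)
{
    if (detectedBps <= base)
        return base;                   /* steps 592-594: slow links get the base */
    if (total > 2.0 * limit)
        total = 2.0 * limit;           /* constrain the scaling range            */
    if (detectedBps >= total)
        return limit;                  /* steps 596-602: fast links are capped   */
    /* steps 604-606: base plus a proportional share of (limit - base) */
    return base + (detectedBps / total) * (limit - base);
}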
4.0 Conclusion
In conclusion, the present invention provides a policy-based bandwidth allocation method for IP packet telecommunications systems wherein bandwidth is allocated to requesting flows according to automatically determined application requirements. An advantage of network management techniques according to the present invention is that network managers need only define traffic classes which are of interest. A further advantage of the present invention is that traffic classes may include information such as a URI for web traffic. A yet further advantage of the present invention is that service levels may be defined in terms of explicit rates, may be scaled to a remote client's or server's network access rate, may be specified separately for high speed and low speed users, and may be defined in terms of a guaranteed minimum service level.
Other embodiments of the present invention and its individual components will become readily apparent to those skilled in the art from the foregoing detailed description. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the spirit and the scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive. It is therefore not intended that the invention be limited except as indicated by the appended claims.
A pseudo code description of a particular embodiment is included herein below:
/*
 * description: BANDWIDTH POOL related functions.
 */
#define EIR_LEVELS 8
#define EIR_HIGHEST 7
#define EIR_LOWEST 0
#define EIR_DEFAULT 3
#define EIR_NONE -1
#define EIR_ALL_LEVELS (0xffff)
/*
* An EIR vector is a demand for Excess rate bandwidth at each of
* the N eir priority levels. */ typedef BPS EIR_VECTOR[EIR_LEVELS];
/*
* Rate Specifier, i.e. CIR + EIR... */ typedef struct {
BPS cirLimit; /* How much CIR I want */
BPS eirDemand; /* total of eirDemandvector */
EIR_VECTOR eirDemandvector; /* How much EIR I want */
/*
* UDP stuff tbd...
*/
INT32 delayBound;
} RATE_SPEC;
#define RATE_UNCONSTRAINED 0xffffffffL
/*
* Latency status record */
#define LATENCY_BIG_TROUGH 0xffffffffUL
typedef struct {
INT32 ewmaValue;
INT32 lastPeak; INT32 lastTrough; INT32 peak; INT32 trough; TIME timeStamp; } LATENCY;
typedef struct _gearshift { DLINK dlink;
#define GF_INITIALIZED 0x0001
#define GF_SCALED 0x0002
#define GF_ACTIVE 0x0004
#define GF_EIR_OVERALLOCATED 0x0008 /* allocated though none available*/
#define GF_CIR_OVERALLOCATED 0x0010 /* allocated though none available*/
#define GF_QUIESCENT 0x0020
#define GF_PRIORITY 0x0040
#define GF_RATE 0x0080
#define GF_DQ 0x0100
#define GF_TCPRC 0x0200
#define GF_FILTER 0x0400
#define GF_NEEDS_REALLOC 0x0800
#define GF_CIR_COMMITTED 0x1000
void* hosts[2]; /* Who the hosts are! */
INT16 flags;
INT16 priCounted;
BPS target;
INT32 cirlnUse;
INT32 eirlnUse;
BPS lastScaledBps;
RATE_SPEC rspec; } GEARSHIFT;
/*
* For scaling CIR to connection speed. See traffic/scale. c.
* If limit is set to 0, no CIR is allocated. */ typedef struct {
BPS base;
BPS limit; } CIR_SPEC;
/*
* For scaling EIR to connection speed, i.e. creating eirLimits vector.
* hp -highest priority, hp-base, and hp-limit and optional total limit. * See traffic/scale.c.
*
* As with CIR Spec, if hpLimit is set to 0, no EIR is allocated. */ typedef struct {
INT8 hp;
BPS hpBase;
BPS hpLimit;
BPS totalLimit; } EIR_SPEC;
** *
* bwpRedistributeEir(bwp, force)
*
* Called when a FLOW is added to or deleted from a tclass belonging to
* this partition (a partition is a bandwidth pool - BW_POOL) *
* Reallocate EIR. If new allocation percentages are significantly different
* for any priority level, then spin through the GEARs to update the
* SATISFACTION percentage level at each GEAR (at each TCLASS) with a
* demand at that level. *
* NOTE: Execution efficiency is critical.
** */ void bwpRedistributeEir(BW_POOL *bwp, int forceAll)
{ whereStr("bwpRedistributeEir"); EIR_MASK changeMask = 0;
EIR_VECTOR eirNewPercentage, urDemand, urAllocated; int i, available, urTotal, totalDemand, demand, diff; int new, old; int dir = bwp->flags & BWF_DIR_MASK;
VALIDATE_BWP(bwp); available = bwp->limit - bwp->cirCurrent; totalDemand = eirVectorSum(bwp->eirDemand); urTotal = bwpUqDemand(bwp, urDemand); bwp->stats.eirRedistributions++; /*
* All SATISFIED ? */ if (available > (totalDemand + urTotal)) { eirVectorSet(eirNewPercentage, 100); bwp->eirCurrent = totalDemand; /*
* And give leftovers to UQ with a little margin unallocated... */ urTotal = max((available - totalDemand) - (bwp->limit >> 3), urTotal); bwpUqSetTarget(bwp, urTotal);
} /*
* None SATISFIED ? */ else if (available <= 0) { /*
* No EIR is allocated... */ bwp->eirCurrent = 0; eirVectorSet(eirNewPercentage, 0); /*
* Priority buckets set to minimal output rate... */ bwpUqSetTarget(&bwp->uq, 0);
}
/*
* Some SATISFIED... */ else { bwpSetEirPct(bwp, available, urDemand, eirNewPercentage); forceAll = TRUE;
}
/*
* OK - now check NEW versus OLD percentages to see if we have to
* run through the gears... */ if (forceAll) changeMask = EIR_ALL_LEVELS; else { for (changeMask = i = 0; i <= EIR_HIGHEST; i++) { new = eirNewPercentage[i]; old = bwp->eirPctSatisfied[i]; if (new == old) continue; else {
/* * Check to see if the % difference is significant... */ diff = abs(new - old); if (diff > (old >> 3)) changeMask |= (1 << i);
} } } eirVectorCopy(eirNewPercentage, bwp->eirPctSatisfied); if (changeMask) bwpUpdateGearEir(bwp, changeMask); }
**
* bwpSetEirPct(bwp, avail, urTotal, urDemand, newPctV)
* Allocate available bandwidth between UQ and EIR. Note only one
* priority level is PARTIALLY SATISFIED.
**
*/ void bwpSetEirPct(BW_POOL *bwp, int avail,
EIR_VECTOR urDemand, /* Unreserved demand vector to be filled in */
EIR VΕCTOR newPct) /* New satisfaction percentage to be filled in*/
{ whereStr("bwpSetEirPctOUT"); int demand, i, pct, urPct, eirPct; EIR_VECTOR urAllocate;
/*
* Go through each priority level from hi to low...
*/ for (i = EIR_HIGHEST; i >= 0; i--) { demand = bwp->eirDemand[i] + urDemand[i]; /*
* None left ? */ if (avail <= 0){ newPct[i] = 0; urAllocate[i] = 0; } /*
* More than enough - 100% for both... */ else if (avail >= demand) { newPct[i] = 100; urAllocate[i] = urDemand[i]; avail -= demand;
} else {
/*
* Have to divide available here between UQ and reserved flow EIR... */ if (bwp->eirDemand[i] == 0){ urAllocate[i] = avail; newPct[i] = 0;
} else if (urDemand[i] == 0){ newPct[i] = (avail * 100)/demand; urAllocate[i] = 0;
} else {
/*
* Need to share. The overall level is satisfied N
* percent. UQ and rate EIR each get their proportional
* share of that N percent... */ pct = max(1, (avail*100)/demand); urPct = max(2, (urDemand[i]*100)/demand); urPct = min(98, urPct); eirPct = 100 - urPct; assert(eirPct >= 2 && eirPct <= 98); newPct[i] = max(2,(eirPct*pct)/100); urPct = max(2,(urPct*pct)/100); urAllocate[i] = (urDemand[i] * urPct)/100;
}
/*
* Only one priority level requires this calculation -
* all lower levels will be 0 percent satisfied... */ avail = 0;
/*
* Set the output target for the unreserved priority traffic based
* on the total unreserved vector... */ bwpUqSetTarget(bwp, eirVectorSum(urAllocate)); ** *
* bwpUpdateGearEir(bwp, changeMask) *
* Cycle through all GEARs using EIR at the specified priority levels and
* change their SATISFACTION percentage allocations.
*
* NOTE: Execution efficiency is critical.
**
*/ void bwpUpdateGearEir(BW_POOL *bwp, EIR_MASK changeMask)
{ whereStr("bwpUpdateGearEIR");
TCLASS *tclass;
GEARSHIFT *g; int i, tclassTotal, poolTotal;
INT32 *evPct = bwp->eirPctSatisfied;
INT32 *evActual, *evDemand; blurt2("pool %s mask %x", bwp->root->name, changeMask); bwp->stats.gearUpdates++; poolTotal = 0; /*
* For all TCLASSs using this BW POOL... */ for (tclass = bwp->tclassList; tclass != NULL; tclass = tclass->bwpThread) { /*
* For all GEARs at this TCLASS... */ tclassTotal = 0; for (g = tclass->activeList; g != NULL; g = (GEARSHIFT *)g->dlink.next) { /*
* Are any of the GEARs EIR levels affected - That is, does
* the flow have a demand at a changed Satisfaction Percentage
* priority level ? */ if (! IS_FLAG_SET(g->flags, GF_DEMANDS_EIR)) continue; if ((changeMask & g->rspec.eirActiveMask) == 0){ tclassTotal += g->rspec.eirTotal; continue;
}
/*
* Can we now allocate for an overalloced flow ? */ if (IS_FLAG_SET(g->flags, GF_EIR_OVERALLOCATED)){ blurt1("recovering from eir-overalloc tclass %s", tclass->name); tclassFreeOverallocEir(tclass, g->rspec.eirTotal); g->rspec.eirTotal = 0; g->flags &= ~GF_EIR_OVERALLOCATED;
}
/*
* OK - allocate our N percentage levels. Note that no more
* than 1 level is not zero or 100. */ evActual = g->rspec.eirActual; evDemand = g->rspec.eirDemand; for (i = 0; i <= EIR_HIGHEST; i++) { if ((changeMask & (1 << i)) == 0) continue;
if (evPct[i] == 100) evActual[i] = evDemand[i]; else if (evPct[i] == 0) evActual[i] = 0; else evActual[i] = (evDemand[i] * evPct[i])/100;
}
/*
* Note we need to update g->rspec.eirTotal and g->target */ g->rspec.eirTotal = eirVectorSum(g->rspec.eirActual); tclassTotal += g->rspec.eirTotal;
/*
* And set target rate... */ g->target.bps = g->rspec.eirTotal + g->rspec.cirTotal; /*
* Check to see if we got ALL of our bandwidth taken away.
* If so OVERALLOCATE some... */ if (g->target.bps == 0) bwpOverAllocEir(bwp, tclass, g); /*
* Update TCLASS total... */ tclass->eirCurrent = tclassTotal; } /* For all GEARs */ poolTotal += tclassTotal; } /* For all TCLASSs */ bwp->eirCurrent = poolTotal; /* Set the total EIR in use for the pool */
}
** *
* bwpUqDemand(bwp) *
* Return current 'demand' for unreserved bandwidth, EXTRAPOLATED from
* current output and demand rates...
** */ int bwpUqDemand(BW_POOL *bwp, EIR_VECTOR demandV)
{ whereStr("bwpUqDemand"); UQ *uq = &bwp->uq; BPS demandRate = 0; int old = uq->target; int i = 0; BPS outputRate = exponential weighted moving average of uq->outputRate if ((bwp->flags & BWF_DIR_MASK) == DIR_INBOUND){ eirVectorSet(demandV, 0); return(outputRate);
} for (i = EIR_HIGHEST; i >= 0; i--){ demandV[i] = exponential weighted moving average of uq->demandRate[i]
Add overstatement factor to demand at this level
demandRate += demandV[i];
} demandRate = max(outputRate, demandRate); uq->target = min(demandRate, bwp->limit); blurt2("new %ld old %ld", uq->target, old); return(uq->target) ;
} TCB - transport control block - TCP State information for both directions
/*
* BCB - Buffer Control Block. Contains packet info, including parsed
* flow spec, as well as pointers to various layers of the actual
* packet buffer.
#define BCB BUF SIZE 1548
#define BCB_ MAGIC Oxdeadbeef typedef struct _bcb { struct _bcb *next; struct _bcb *last;
INT32 linkHeaderLen; /* length of link header */ int frarm eMaxSize; /* Actual buffer memory
*/ INT8 *frameStart; /* Note a pointer to the TOP of the */
/* bcb is embedded at frameStart-4 */ I T8 *dataStart; /* Current Data pointer and length */ int dataLen;
USEC_TIME tick; /* time the packet should be output and
/* timestamp of when it was sent */ FLOW_SPEC fspec; /* Parsed FLOW description...
INT8 >protHdrStart; /* Pointer to and length of whatever is */
INT 16 protLength; /* Encapsulated in the IP packet. */ OBJECT ID tclassld; /* class ID for accounting purposes
*/
}BCB;
/*
* FLOW components and object. */ typedef unsigned char DIRECTION;
#define DIRJNBOUND 0
#define DIR OUTBOUND 1 typedef struct {
INT16 port;
INT32 host; } SOCKET; typedef struct {
SOCKET inside;
SOCKET outside; } CONN; typedef struct _flow_spec { CONN conn;
#define PF INET 0 #define PF IPX 1 #define PF ATALK 2 #define PF NETBIOS 3 #define PF SNA 4 #define PF_FNA 5
INT8 protFamily; INT8 tos; INT8 ipProtocol; INT8 direction;
Servicelndex service[NUM_DIRECTIONS]; INT 16 uriLen; char *uri;
} FLOW_SPEC;
/*
* For storing URIs
*/ #define MAX URI NAME LEN 128 typedef struct _uri_buf { struct _uri_buf *next; char name[MAX_URI_NAME_LEN];
} URI_BUF;
/*
* For matching URIs
*/ typedef struct {
#define URI_FLAG_QUERY (0x01) /* '?' present?
*/
#define URI FLAG SPECIAL EXACT (0x02) /* special-case exact-match? */ #define URI_FLAG_SPECIAL_LEFT (0x04) /* do special-case left-match?
*/
#define URI_FLAG_SPECIAL_RIGHT (0x08) /* do special-case right-match?
*/ unsigned char flag; /* URI_FLAG_* bitmask */ unsigned char count; /* no. of segments in uriBufT
*/
URI_BUF *uriBufCompile; /* the compiled URI (if non-nil) */
URI_BUF *uriBufLiteral; /* the original URI (if non-nil) */
} URI_SPEC;
/*
* Define Traffic Class Specifier...
*/ typedef struct _t_spec {
FLOW_SPEC fspec;
FLOW_SPEC fspecMask;
URI_SPEC uriSpec; /* present if uriSpec.uriBufCompile != 0 */ } TSPEC;
/* * TCLASSs are directional. There is both an INBOUND and OUTBOUND
* root partition. NOTE this accommodates ASYMMETRIC links.
* INBOUND partition refers to direction of data flow, not of connection
* origin
* TCLASSs have a component specifications (lists of TSPECS) which is * ordered by increasing specificity. The global flow implicitly matches * all traffic classes. * * TCLASSs are organized into an N-ARY tree, where children of tclasses * are arranged in order of exception, normal, and inheritable. * * Sorting/searching order is * exception tclasses without applied policy * exception tclasses with applied policy * tclasses without policy * tclass with policy * inheritable tclass with policy (inheritable has policy by definition) * * within each group, tclasses are ordered by Decreasing Match Specificity A FLOW is matched to a policy as follows:
Starting at the global partition (implicit component match), a MATCHALL tspec/policy exists at the top level for both the inbound and outbound tree.
Intermediate traffic classes (those with children) do NOT have associated policies. tclass_check(tclass, flow) tclass are checked in above specified order against flow. If they match, recurse to call tclass_check(match_tclass, flow) *
* OK - we have found the most specific tclass. If it has an * associated policy, we are done. Else we need to check back up
* through the tree for an inherited policy.
* * Checking for inherited policies goes up to the parent, and then * down through the parent's remaining siblings, then recurses up to
* the parent's parent, etc... till a match is found.
*/ typedef struct _tclass { struct _tclass *sibling; struct _tclass *child; struct _tclass *parent;
#define TCF_INBOUND DIR_INBOUND
#define TCF_OUTBOUND DIR_OUTBOUND
#define TCF_DIR_MASK 0x0001
#define TCF_ACTIVE 0x0002
#define TCF_IDLE 0x0000
#define TCF_GLOBAL 0x0004
#define TCF_DISABLED 0x0008
#define TCF_EXCEPTION 0x0010
#define TCF_INHERITABLE 0x0020
#define TCF_STANDARD 0x0000
#define TCF_TYPE_MASK (TCF_EXCEPTION+TCF_INHERITABLE)
#define TCLASS_TYPE(t) ((t)->flags & TCF_TYPE_MASK)
#define TCF_WEB 0x0100
#define TCF_AUTO_CLASSIFYING 0x0200
#define TCF_AUTO_CREATED 0x0400
INT16 flags;
INT8 direction;
OBJECT_ID id;
INT32 hits;
/*
* Rate pool stuff Need to have EIR limits/priorities as well
* as CIR limits.. */
PKT_ACCUM pktAccum; EWMA_STATS traffic;
TSPEC_ITEM *imsTspecList; /* IMS ordered TSPEC_ITEMs */
POLICY_ITEM *policy; /* May be NULL */
/*
* Points to self or root class is CIR set or inherited or NULL if
* CIR is not enforced... */ struct _tclass *cirRoot;
BPS cirRootLimit;
BPS cirRootCurrent;
/*
* Values for this Tclass... */
BPS cirCurrent; BPS eirCurrent; BPS cirOverAllocated; BPS eirOverAllocated; BPS cirCommitted; /* Amount of
CIR committed for outstanding HTTP request */ LATENCY xactLatency;
CIR_STATS cirStats;
/*
* List of active sessions/connections.
*/ DLINK activeList; struct _bw_pool *bwp; struct _tclass *bwpThread;
char name[S YS_OB J_NAME_LEN] ;
} TCLASS; /*
* Abstracted packet path... */ main_pkt_path()
{ if its an IP input packet, it is parsed with parseTspec to embed an fspec inside the bcb the packet is processed by the matching protocol finite state machine }
/************************************************************$************* ** *
* ipParseFlow(bcb)
*
* Build basic flow spec for the given input packet. Just parse basic ip
* info and leave headers in net format.
************************************************************************** **
*/ int ipParseFlow(BCB *bcb)
{
FLOW_SPEC *fspec = &bcb->fspec; CONN *conn = &fspec->conn; INT8 *cp = NULL; INT 16 len, hdrLen; memset(fspec, 0, sizeof(FLOW_SPEC)); fspec->protFamily = PF_INET;
/*
* Extract IP source and destination and protocol type from net header. */ if (bcb->device == INSIDE_IFACE) { conn->inside.host = get32(&bcb->dataStart[12]); conn->outside.host = get32(&bcb->dataStart[16]);
} else { conn->inside.host = get32(&bcb->dataStart[16]); conn->outside.host = get32(&bcb->dataStart[12]);
} fspec->ipProtocol = bcb->dataStart[9]; fspec->tos = bcb->dataStart[1]; /*
* Advance to next protocol header... */ hdrLen = ((bcb->dataStart[0] & 0x0f) << 2); cp = bcb->dataStart + hdrLen;
/*
* If it is TCP or UDP extract ports... */ if (fspec->ipProtocol == TCP_PTCL || fspec->ipProtocol == UDP_PTCL) { if (bcb->device == INSIDE_IFACE) { conn->inside.port = get16(cp); conn->outside.port = get16(cp + 2);
} else { conn->inside.port = get16(cp + 2); conn->outside.port = get16(cp);
} } bcb->protHdrStart = cp; len = get16(&bcb->dataStart[2]); bcb->protLength = len - hdrLen; fspec->direction = (bcb->device == INSIDE_IFACE) ? DIR_OUTBOUND : DIR_INBOUND; return(OK); }
NOTE: WEBNESS may be set for a connection by virtue of that connection being on the well known HTTP port (80), or by an explicit user configuration indicating that a certain port is for a web server, or by an automatic recognition process which detects an HTTP 'signature' in traffic destined toward a certain repetitively used (server) port.
tcp_finite_state_machine()
{ handles connection establishment and clearing
for data on an established connection, if connection has WEBNESS, check to see if it an http request.
} ************************************************************************** ** *
* httpParseRequest(tcb, bcb, tcpHdr) *
* If we find a new URI from a GET operation, re-lookup the policy for the
* OTHER direction flow. So the HTTP BURST will effectively start from the
* time of the GET. When the actual data burst starts in the other
* direction, this policy will already be set, i.e. tmgrSefPolicy() will
* do nothing.
********************************************************$***************** **
*/ int httpParseRequest(TCB *tcb, BCB *bcb, TCP_HDR *tcpHdr)
{ whereStr("httpParseRequest"); int dir = bcb->fspec.direction; int len, re;
CONN *conn = &bcb->fspec.conn; char *uri, *method;
HALF_CONN_INFO *hci = &tcb->halfConns[dir];
HALF_CONN_INFO *otherHci = hci->other;
/* * Is it a GET or POST ?
*/ if ((len = httpGetUri((char *)tcpHdr->dataStart, tcpHdr->dataLen, TRUE, &uri, &method)) > 0) {
/*
* Set URI pointer in flowspec embedded in BCB... * bcb->fspec.uri = uri; bcb->fspec.uriLen = len;
/*
* httpSetPolicies will CLASSIFY this URL and find policy... */ re = httpSetPolicies(tcb, bcb, tcpHdr); if (re = NOTOK) { admitFailHttpGetRequest(tcb, bcb, tcpHdr); return NOTOK;
}
/* * Check for Admission Control... */ if (admitHttpGet(tcb, dir, bcb, tcpHdr, (re == OK_BUT_NO_CIR)) != OK) return NOTOK; bcb->fspec.uri = NULL; /*
* This is the start of a burst on the other connection direction (HCl). */ otherHci->burstState = HCF_BURST_WAITING_FOR_REPLY; otherHci->reqStamp = sysTimeStamp();
} else { if (method && httpValidateMethod(method)){ httpFsmMibOtherOps-H-;
if (tmgrInitTcpHalf(tcb, dir, NULL, 0) != OK |j tmgrInitTcpHalf(tcb, OTHER_DIRECTION(dir), NULL, 0) != OK){ admitFailHttpGetRequest(tcb, bcb, tcpHdr); return(NOTOK);
} } } return(OK); }
/************************************************************************** ** *
* httpGetUri(pkt, pktLen, getOrPostOnly, pUri, pMethod)
*
* Check for a valid HTTP request. If valid, parse URI and method.
************************************************************************** **
*/ int httpGetUri(const char *httppkt, int pktlen, int getOrPostOnly, char **pUri, char **pMethod)
{ whereStrC'httpGetUri"); char *header, *protocol, *method, *cp; int headerlen, methodlen, plen, urilen; *pMethod = NULL; method = NULL; header = NULL; protocol = NULL; headerlen = methodlen = plen = urilen = 0; headerlen = getline(httppkt,pktlen,&header);
if ((header = NULL) |j (headerlen <= 0)) return (-1); method = header; method += strspn(method," \t"); if (method = NULL) return (-1);
*pMethod = method; if (getOrPostOnly) { if (strncmp(method,"GET",3) != 0 && strncmp(method,"POST",4) != 0) return (-1);
} else if (httpValidateMethod(method) = 0) return (-1);
*pUri = strpbrk(method," \t"); methodlen = *pUri - method; *pUri += strspn(*pUri," \t"); protocol - strpbrk(*pUri," \t"); urilen = protocol - *pUri;
cp = strpbrk(protocol,"\r"); plen = cp - protocol; cp += strspn(cp,"\r\n"); if (plen <= 0)
{ attn("no protocol"); urilen = 0;
} return urilen;
} ************************************************************************** ** *
* httpSetPolicies(tcb, getFspec) * ******** ************** ************* ***** **** ******************** ***** ****£ **
*/ httpSetPolicies(TCB *tcb, BCB *bcb, TCP_HDR *tcpHdr)
{ int dir = bcb->fspec.direction;
HALF_CONN_INFO *otherHci = &tcb->halfConns[OTHER_DIRECTION(dir)];
TCLASS* tclass;
POLICYJTEM* pi;
HALF_CONN_INFO *hci = otherHci->other; int setResult, tos;
/*
* Set any service abstraction in bcb.fspec...
*/ serviceSetFlowSpec(&bcb->fspec, tcb->service, tcb->flags & TF_DIR_MASK);
bcb->fspec.direction = OTHER_DIRECTION(dir); /* need correct direction for autoclassification */ setResult = tmgrSetPolicy(&otherHci->policyId,
&otherHci->tclassId,
&bcb->fspec, tcpSpeedGet(tcb),
&otherHci->gears, tcb->hosts,
TRUE);
bcb->fspec.direction = dir;
TCB_SET_POLICY(tcb, OTHER_DIRECTION(dir)); /*
* Now initialize the client->server direction.
*/ if (tmgrInitTcpHalf(tcb, dir, bcb->fspec.uri, bcb->fspec.uriLen) == NOTOK) return NOTOK; return (setResult == OK) ? OK : OK_BUT_NO_CIR; }
/************************************************************************** ** *
* tmgrSetPolicy(pidPtr, tclassIdPtr, fspec, bps, gears, hosts) *
* Called at the appearance of a BURST or an HTTP REQUEST. Called by * both TCP and UDP state machines. *
* Set allowed CIR/EIR based on current link state, traffic class, and
* connection speed...
************************************************************************** **
*/ int tmgrSetPolicy(OBJECT_ID *pidPtr,
OBJECTJD *tclassIdPtr,
FLOW_SPEC *fspec,
SPEED bps,
GEARSHIFT *g, void *hosts, int forceHits)
{ whereStr("tmgrSetPolicy"); POLICY_ITEM *oldPi = NULL;
POLICY_ITEM *pi = NULL; TSPEC_ITEM *ti;
OBJECT_ID oldPid = *pidPtr; OBJECT_ID oldTclassId = *tclassIdPtr; TCLASS *tclass = NULL;
BOOL newUri;
BOOL newPolicyOrTclass = FALSE; int latency, confidence, retval;
/*
* Do we already have a policy and tclass set ? could be a re-lookup, e.g.
* persistent HTTP second click... */ if (pidPtr->val != 0) pi = oldPi = policyIdToPolicy(*pidPtr); if (tclassIdPtr->val != 0) tclass = tclassIdToTclass(*tclassIdPtr);
/*
* If it is URI based, we may be changing policy/tclass. So we
* need to free any bandwidth here we may currently own. */ newUri = (fspec->uri != NULL); /*
* Lookup Traffic Spec if policy is valid... */ if (newUri || oldPi == NULL || tclass == NULL) { newPolicyOrTclass = TRUE; pMatchSpecial(fspec); /*
* Find the class and policy for a flow in this direction... */ pi = tclassCheck(fspec, GlobalTclasses[fspec->direction], &tclass, &ti); *pidPtr = pi->id; *tclassIdPtr = tclass->id; doAccounting = TRUE; } if (newUri) { /*
* If policy changed, clear the gear state. Else gear is simply in
* QUIESCENT state and can be quickly reactivated... */ if (oldPi != pi || oldTclassId.val != tclass->id.val) { /*
* NewPolicy is TRUE...
*/ tmgrMibSwitchedPolicies++; tmgrReleaseBwidth(oldTclassId, g); tmgrResetGear(g);
} else newPolicyOrTclass = FALSE;
}
/*
* If it is an existing policy, check to see if speed has changed... */ if (newPolicyOrTclass == FALSE && g->flags != 0) { /*
* Speed changed ? */ if (g->lastScaledBps != bps) tmgrMibSpeedChanges++; /*
* Re-awakening ? */ else if (IS_FLAG_SET(g->flags, GF_QUIESCENT)) tmgrMibRebursts++; /*
* No change... */ else { tmgrMibNoChanges++; return OK;
} } /*
* Else it is a new policy. Shouldn't be any previous allocation * unless the in force policy was deleted and this gear changed
* rate or P-HTTP... */ else { if (oldPi != pi || oldTclassId.val != tclass->id.val) doAccounting = TRUE; if (IS_FLAG_SET(g->flags, GF_ACTIVE)){ tmgrMibSwitchedPolicies++; tmgrReleaseBwidth(oldTclassId, g); tmgrResetGear(g) ;
} else tmgrMibNewPolicies++;
} if (newUri) { retval = policyCheckBwidth(pi, tclass, g, bps);
} else { policy AllocBwidth(pi, tclass, g, bps); retval = OK; } return retval; } ************************************************************************** ** *
* tclassCheck(flow, tclass, tclassPtr, tspecItemPtr)
*
* Check child tclass tspecs and RECURSE until leaf tclass matches.
* If final traffic class has a policy, apply it. Else search back up
* through the class tree for an inheritable policy at a matching traffic
* class.
*
* We assume direction is already checked.
************************************************************************** **
*/
POLICYJTEM * tclassCheck(FLOW_SPEC *fspec, TCLASS *tclass, TCLASS **tclassPtr, TSPEC JTEM **tiPtr)
{
POLICYJTEM *pi;
TCLASS *child; whereStr("tclassCheck"); /*
* Check Child traffic classes...
*/ for (child = tclass->child; child != NULL; child = child->sibling) {
/*
* Do we match this child's tspec(s) ? */ if (tclassMatchTspec(fspec, child->imsTspecList, tiPtr)) { /*
* We matched. RECURSE to check matching child... */ blurt2("branching to %s from %s", child->name, tclass->name); if ((pi = tclassCheck(fspec, child, tclassPtr, tiPtr)) != NULL) return pi; } } if (tclass->child != NULL) { *tclassPtr = NULL; return NULL;
}
/*
* Is a leaf, belongs to this traffic class. If we have a policy -
* - apply it. Else search upwards till we find a matching
* inheritable policy... */
*tclassPtr = tclass; autoRecordFlow(tclass, fspec); if ((pi = tclass->policy) != NULL) { blurt 1 ("policy at tclass %s", tclass->name); return(pi);
} /*
* No match, find inheritable policy for this flow... */ assert((pi = policyFindDefault(tclass, fspec)) != NULL); return(pi); }
************************************************************************** **
tclassMatchTspec(fspec, tclass, tiPtr)
Check to see if a flow is a component of a traffic class. ************************************************************************** **
*/ int tclassMatchTspec(FLOW_SPEC *fspec, TSPEC JTEM *clist, TSPEC JTEM **tiPtr)
{ TSPEC JTEM *ti;
*tiPtr = NULL; for (ti = clist; ti != NULL; ti = ti->next) { if(tspecIsMatchAll(&ti->tspec)) return(TRUE); if(fspec->protFamily != ti->tspec.fspec.protFamily) continue; if(fspec->protFamily != PFJNET) return(TRUE); if(ti->tspec.fspec.ipProtocol && fspec->ipProtocol != ti->tspec.fspec.ipProtocol) continue;
if (tspecMatch(fspec, &(ti->tspec))) { *tiPtr = ti; return(TRUE);
} } return(FALSE);
}
/************************************************************************** ** *
* tspecMatch(FLOW_SPEC *fl, TSPEC *t2) *
* NOTE fl is the live flow and t2 is the TSPEC.
************************************************************************** **
*/ int tspecMatch(FLOW_SPEC *fl, TSPEC *t2)
{
FLOW_SPEC *f2 = &(t2->fsρec); /* we treat tspec as a 2nd flow */
CONN *mask = &(t2->fspecMask.conn); /* but latch the mask separately */ int prec;
CONN *cl = &fl->conn; CONN *c2 = &f2->conn; INT32 hmask;
if(fl->inService && 12->inService && (fl->inService != fi->inService)) return(FALSE);
if (mask->inside.port && cl->inside.port &&
((cl->inside.port < c2->inside.port) \\ (cl->inside.port > mask->inside.port))) return(FALSE); if (((hmask = mask->inside.host) != 0) &&
((cl->inside.host & hmask) != (c2->inside.host & hmask))) return(FALSE);
if(fl->outService && f2->outService && (fl->outService != fi->outService)) return(FALSE); if (mask->outside.port && cl->outside.port &&
((cl->outside.port < c2->outside.port) " (cl->outside.port > mask->outside.port))) return(FALSE); if (((hmask = mask->outside.host) != 0) &&
((cl->outside.host & hmask) != (c2->outside.host & hmask))) return(FALSE); /*
* No port spec for ICMP... */ if(fl->ipProtocol = ICMP_PTCL
&& (c2->inside.port || c2->outside.port || f2->inService > SVC_CLIENT || fi->outService > SVC_CLIENT)) return(FALSE); /*
* IP Precedence is an orthiginal classification element... */ prec = PREC(fl->tos); if(prec < PREC(f2->tos) |j prec > PREC(t2->fspecMask.tos)) return(FALSE);
/*
* Check for URI here !!!!!! WILDCARD match... */ if (t2->uriSpec.uriBufCompile) { if(fl->uri = NULL) return(FALSE);
return(hrtpIsUriMatched(&(t2->uriSpec), fl->uri, fl->uriLen));
} return(TRUE);
} ************************************************************************** ** *
* httpIsUriMatched(uriSpec,stringBuf,stringLen) *
* This matches the URL in stringBuf[] to the
* (previously-compiled) URI_SPEC *uriSpec.
*
* Returns nonzero on match; zero on no-match.
************************************************************************** **
*/ int httpIsUriMatched(URI_SPEC *uriSpec,char *stringBuf,int stringLen)
{ char *s = stringBuf; char *stringEnd = &stringBuf[ stringLen]; char *listPtr; char *itemPtr; int itemLen; char *itemEnd; int i; char ich; char sch; char *p, *pEnd; char pch;
/*
* Skip over leading "http:" */ if((stringLen >= 6) &&
(!memcmp(s,"http:",5) \\ !memcmp(s,"HTTP:",5))) {
stringLen -= 5; s += 5; } listPtr = uriSpec->uriBufCompile->name;
/*
* Special case? */ if((uriSpec->flag &
(URI_FLAG_SPECIAL_EXACT | URI_FLAG_SPECIAL_LEFT J URI_FLAG_SPECIAL_RIGHT)) != 0) { * <literal> special-case? NOTE: This special-case is REQUIRED, as
* the generic code below does NOT handle the literal-only case
* correctly. Doing so would add more code below. */ if((uriSpec->flag & URI_FLAG_SPECIAL_EXACT) != 0) {
/*
* Length to end of s[] must match exactly. */ if((itemLen = *listPtr++) != stringLen) return(O);
}
/*
* <literal>* special-case? NOTE: This special-case is correctly
* handled below; this is a performance hack. */ else if((uriSpec->flag & URI_FLAG_SPECIAL_LEFT) != 0) {
/*
* s[] must be equal or longer than compare string */ if((itemLen = *listPtr++) > stringLen) return(O);
} /*
* *<literal> special-case? NOTE: This special-case is correctly
* handled below; this is a performance hack. */ else if((uriSpec->flag & URI_FLAG_SPECIAL_RIGHT) != 0) {
-H-listPtr;
/*
* s[] must be equal or longer than compare string. */ if((itemLen = *listPtr-H-) > stringLen) return(O);
/*
* We compare from the right. */ s = (stringEnd - itemLen);
/*
* Case-insensitive compare, stringBufT and listPtr[], itemLen bytes
* (assumes listPtr[] already lower-cased in httpCompileUri()) */ for(; itemLen != 0; itemLen--) { sch = *s++; if(isupper(sch)) sch = tolower(sch); if(sch != *listPtr++) return(O);
/*
* If we did not fail to match, we must match! */ return(l);
/*
* Loop thru segments
*/ for(i = uriSpec->count; i != 0; i~) {
/*
* Latch length and end of this segment.
* If zero length, matches immediately. If nonzero, search for
* match. */ if((itemLen = *HstPtr++) != 0) {
/*
* Cannot match if remaining string is shorter than item */ if(itemLen > (stringEnd - s)) return(O);
/*
* Latch beginning and end of this segment. */ itemPtr = listPtr; itemEnd = (itemPtr + itemLen);
/*
* First segment must match exactly on the front. */ if(i = uriSpec->count) { do { if((ich = *itemPtr) != '?') { sch = *s; if(isupper(sch)) sch = tolower(sch); /* if byte doesn't match, NO MATCH */ if(ich != sch) return(O);
} s++;
} while(-H-itemPtr < itemEnd);
} /*
* Last segment must match exactly on the end. */ else if(i = 1) { s = (stringEnd - itemLen); do { if((ich = *itemPtr) != '?') { sch = *s; if(isupper(sch)) sch = tolower(sch);
/* if byte doesn't match, NO MATCH */ if(ich != sch) return(0); } s++;
} while(++itemPtr < itemEnd);
} /*
* Following segments can accept intervening strings (for '*') */ else {
/*
* End of region where this segment can start */ pEnd = (stringEnd - itemLen + 1); again:
/*
* Search for matching first byte (unless it is '?') */ if((ich = *itemPtr) != '?') { p = s; for(;;) { pch = *p; if(isupper(pch)) pch = tolower(pch); if(ich == pch) { s = p; break; } if(!(++p < pEnd)) return(O);
/*
* Skip over that byte. */
++s; ++itemPtr;
/*
* Match remaining bytes */ for(; itemPtr < itemEnd; s++, itemPtr-H-) {
if((ich = *itemPtr) != '?') { sch = *s; if(isupper(sch)) sch = tolower(sch); if(ich != sch) {
/* if out of string, NO MATCH */ if(itemLen > (stringEnd - ++s)) return(O);
/* if not out of string, try again */ itemPtr = listPtr; goto again;
}
/*
* Go on to next segment.
*/ listPtr = itemPtr; }
/*
* If we got here, MATCH!
*/ return(l); }
For building a classification tree by adding a class or adding an additional tspec to an existing class, and maintaining the specificity order automatically, we have the following...
/******************************************************************** ***** **
* tclassAddTspec(tclass, tspec)
* Allocate a TSPECJTEM and enlist it at this traffic class.
***************************** * * * * ********* ****** * ****** * ****** ** ******£,,,££ **
s aticf void tclassSetupTspec(TCLASS *tclass, TSPECJTEM *ti)
{
TSPECJTEM **lst = &tclass->imsTspecList; while (*lst != NULL && tspecLessSpecific(*lst, ti))
1st = &(*lst)->next; ti->next = *lst; 1st = ti; if (ti = tclass->imsTspecList && ti->next != NULL) { tclassRemoveChild(tclass); tclassInsertChild(tclass->parent, tclass) ;
tcfgConfigChanged = TRUE; }
**
* void
* tclassNew(parent, dir, name, tflags, tspec, tclassPtr)
* Make sure TCLASS and BWP direction are the same.
************************************************************************** **
*/ void t assNew(TCLASS *parent, int dir, char *name, int tflags, TSPEC *tspec, TCLASS **tclassPtr)
{ whereStr("tclassNew");
TCLASS *tclass;
TSPECJTEM *ti; INT32 dummy;
BW_POOL *bwp; int global = (parent = NULL) ? TRUE : FALSE;
if ((tclass = tclassAllocO) = NULL) { fatalfOut of TCLASSs!"); } tclass->flags |= (dir & TCF_DIR_MASK) | tflags; if (global = FALSE) { ti = tclassGetTspec(tclass, tspec);
VALIDATE_TCLASS(parent); tclass->parent = parent; if (tclass ValidateMembership(parent, tspec, TRUE) != OK) { fatal("TSPEC not reachable");
} bwp = parent->bwp;
} else { bwp = GlobalBwPoolsfdir]; tclass->flags j= TCF_GLOBAL+TCF_CIRLIMIT_NONE;
}
/*
* Thread the tclass into its bandwidth pool as well... */ tclass->bwp = bwp; bwpAddTclass(bwp, tclass); /*
* If new tclass is NOT global, thread it into the classifier tree.
* It should be inserted in Match Specificity order onto the
* child.sibling list. */ if (global = FALSE) { tclassSetupTspec(tclass, ti); tclassInsertChild(parent, tclass); bwpRedistributeB width(dir) ;
}
*tclassPtr = tclass;
} /************************************************************************** ** *
* tclassInsertChild(parent, tclass) *
* Insert in child list in DMS order.
************************************************************************** **
*/ void tclassInsertChild(TCLASS *parent, TCLASS *tclass)
{
TCLASS *sibling, *last; unsigned long old, dummy; sibling = parent->child; last = NULL;
LOCK_TASK(&old); while(sibling != NULL && tclassLessSpecific(tclass, sibling))! last = sibling; sibling = sibling->sibling;
} tclass->sibling = sibling; tclass->parent = parent; if (last = NULL) parent->child = tclass; else last->sibling = tclass;
UNLOCK_TASK(old,&dummy);
tcfgConfigChanged = TRUE;
}
/************************************************************************** ** *
* tclassLessSpecific(a, b) *
* Return TRUE if a is less match specific than b, that is if a should
* be ordered AFTER b...
************************************************************************** **
*/ staticf int tclassLessSpecific(TCLASS *a, TCLASS *b)
{ int re; if (IS_FLAG_SET(a->flags, TCF_EXCEPTION)){ if (! IS_FLAG_SET(b->flags, TCF_EXCEPTION)) return(FALSE);
} else if (IS_FLAG_SET(b->flags, TCF_EXCEPTION)) return(TRUE); if (IS_FLAG_SET(a->flags, TCFJNHERITABLE)){ if (! IS_FLAG_SET(b->fiags, TCFJNHERITABLE)) return(TRUE);
} else if (IS_FLAG_SET(b->flags, TCFJNHERITABLE)) return(FALSE); /*
* Ok - need to order by tspec IMS... */ if ((re = tspecLessSpecific(a->imsTspecList, b->imsTspecList)) = TRUE && tclassIsMatchAll(b) && a->child != NULL) { if (b->child != NULL) re = tclassLessSpecific(a->chi!d, b->child); re = FALSE;
}
/*
* Ok - now order by policy/no policy... */ if (re = TRUE && a->policy = NULL && b->policy != NULL) return(FALSE); return(rc); }
/************************************************************************** ** *
* tspecLessSpecific(a, b) *
* Return TRUE if (a) less specific than (b).
************************************************************************** **
*/ #define TSPEC_EQ(a,b) (memcmp((a), (b), sizeof(TSPEC)) == 0)
int tspecLessSpecific(TSPECJTEM *a, TSPECJTEM *b) { int abits, bbits;
FLOW_SPEC *amask, *bmask, *af, *bf; whereStr("tspecLessSpecific"); if(tspecl sMatchAll(&a->tspec)) return(TRUE); else if(tspecIsMatchAll(&b->tspec)) return(FALSE); af = &a->tspec.fspec; bf = &b->tspec.fspec; if(af->protFamily != bf->protFamily) return(af->protFamily > bf->protFamily); if(af->protFamily != PF_INET) return(FALSE);
/*
* OK - Compare host masks... */ amask = &a->tspec.fspecMask; bmask = &b->tspec.fspecMask; abits = tspecHostBits(&amask->conn); bbits = tspecHostBits(&bmask->conn);
if (abits != bbits) return(abits < bbits); /*
* Compare port ranges... */ abits = tspecPortBits(&af->conn, &amask->conn); bbits = tspecPortBits(&bf->conn, &bmask->conn); if (abits != bbits) return(abits > bbits); /* here more "bits" mean less specific */ abits = (af->service[DIRJNBOUND] > SVC_CLIENT); bbits = (bf->service[DIRJNBOUND] > SVC_CLIENT);
if(abits != bbits) return(abits < bbits); abits = (af->service[DIR_OUTBOUND] > SVC_CLIENT); bbits = (bf->service[DIR_OUTBOUND] > SVC_CLIENT);
if(abits != bbits) retura(abits < bbits);
/*
* Compare precedence ranges */ abits = PREC(amask->tos) - PREC(af->tos); bbits = PREC(bmask->tos) - PREC(bf->tos); if(abits != bbits) return(abits > bbits);
/*
* Finally - compare URL if it is present in either... */ if (af->uri j| bf->uri) return(urlLessSpecific(af->uri, bf->uri)); /*
* They are the same - true on a tie... */ return(TRUE); } ************************************************************************** ** *
* urlLessSpecific(a, b) *
************************************************************************** **
*/ int urlLessSpecific(char *a, char *b)
{ int aw, bw; char *cp; if (! a) return(TRUE); else if (!b) return(FALSE); /*
* OK - both have URL specified. Compare number of wildcards... */ for (aw = 0, cp = a; *cp; cp++){ if (*cp == '*') aw++;
} for (bw = 0, cp = b; *cp; cp++){ if (*cp = '*') bw++;
} return(aw > bw ? TRUE : FALSE); }
int hostBits(INT32 host)
{ int bits, i; for (bits = 0, i = 31; i >= 0; i™){ if ((1 « i) & host) bits++; else break;
} return(bits);
} int portBits(INT16 port, INT 16 range)
{ if (port = 0) return(0x00007fff); else if (range = 0) return(l); else return((range-port)+ 1 ) ;
} int tspecHostBits(CONN *conn)
{ return(hostBits(conn->inside.host) + hostBits(conn->outside.host));
} int tspecPortBits(CONN *conn, CONN *mask)
{ return(portBits(conn->inside.port, mask->inside.port) + portBits(conn->outside.port, mask->outside.port)); }
************************************************************************** ** *
* tclassValidateMembership(parent, tspec, verbose) *
* Search recursively through parents to make sure we have a path from the * root down to here. We ignore siblings and uncles, as specificity-order
* should take care of that.
************************************************************************** **
*/ int tclass ValidateMembership(TCL ASS *parent, TSPEC *tspec, int verbose)
{ VALIDATE_TCLASS(parent);
/*
* If we got to the top - it is valid... */ if (IS_FLAG_SET(parent->flags,TCF_GLOBAL)) return(OK); /*
* Is tspec a child of this parent... */ if (! tspecIsChild(tspec, parent->imsTspecList)){ if (verbose) printf("\nNot valid member of traffic class %s", parent->name); return(NOTOK);
} /*
* OK - we fit here. Check at next parent level... */ return(tclassValidateMembership(parent->parent, tspec, verbose));
}
Speed Scaling - when a flow is allocated, the GIR and EIR are scaled per the detected (potential maximum) speed. In this embodiment, the speed scaling is done by extrapolating from defined high speed and low speed target rates.
/************************************************************************** **
* file: scale.c description: CIR and EIR scaling to load.
SCALING CIR and EIR to load potential:
CIR scale is specified by the user as two values: a base value (B) under which all speeds get 100%, and a limiting value (L) to cap CIR for high speed users. If a connection's incoming speed is actually BETWEEN (B) and (L), then we create a scaled intermediate value for CIR.
So CIR_SPEC = {B,L}
EIR scale is used to generate an EIR_VECTOR of allocation limits at each EIR priority level. EIR scale is specified as the highest priority level (HP), the scaled limit at that priority level (in terms of B and L, as with CIR). We then will automatically
* generate the rest of the vector for the lower priority levels, up
* to an optional total limit (TL). * * So EIR_SPEC = {HP, HP-B, HP-L, TL}; *
* 09/18/97 Copyright Packeteer, Inc, 1996-97 @(#)scale.c 1.2
************************************************************************** **
*/
/*
* Rate Specifier, i.e. CIR + EIR...
*/ typedef struct {
BPS cirLimit; /* How much CIR I want */
BPS eirDemand; /* total of eirDemandvector */
EIR VECTOR eirDemandvector; /* How much EIR I want */
/*
* UDP stuff tbd...
*/
INT32 bucketLimit;
} RATE_SPEC;
typedef struct _gearshift { DLINK dlink; #define GF INITIALIZED 0x0001
#define GF SCALED 0x0002
#define GF ACTIVE 0x0004
#define GF_EIR_OVERALLOCATED 0x0008 /* allocated though none available*/
#define GF_CIR_OVERALLOCATED 0x0010 /* allocated though none available*/
#define GF QUIESCENT 0x0020
#define GF PRIORITY 0x0040
#define GF RATE 0x0080
#define GF DQ 1 0x0100
#define GF TCPRC 0x0200
#define GF FILTER 0x0400
#define GF NEEDS REALLOC 0x0800
#define GF_CIR_COMMITTED 0x1000
INT32 flags;
BPS target;
INT32 cirlnUse;
INT32 eirlnUse;
BPS lastScaledBps;
RATE SPEC rspec;
} GEARSHIFT;
/*
* For scaling CIR to connection speed. See traffic/scale.c.
* If limit is set to 0, no CIR is allocated. */ typedef struct {
BPS base;
BPS limit; } CIR_SPEC;
/*
* For scaling EIR to connection speed, i.e. creating eirLimits vector.
* hp -highest priority, hp-base, and hp-limit and optional total limit.
* See traffic/scale.c. *
* As with CIR Spec, if hpLimit is set to 0, no EIR is allocated. */ typedef struct {
INT8 hp;
BPS hpBase;
BPS hpLimit;
BPS totalLimit; } EIR_SPEC; * Policy Item */ typedef struct _policy { struct _policy *next; struct tclass *tclass;
INT32 flags; INT8 direction;
INT8 sessionType;
INT32 hits;
/*
* For scaling EIPJCIR... */
EIR SPEC eirSpec; CIR SPEC cirSpec;
INT32 bucketLimit;
/*
* For Admission Control... */
URI_BUF *redirectUrl;
/*
* For Content Substitution...
* // SUB SPEC *subSpec; POLICYJTEM;
/************************************************************************** ** *
* scaleToLoad(policy, tclass, gears, bps) *
* Set up EIR limit vector as well as appropriate CIR for this specific
* connection speed (bps) for this GEAR (g).
************************************************************************** **
*/ void scaleToLoad(POLICYJTEM *pi, TCLASS *tclass, GEARSHIFT *g, SPEED bps)
{
CIR_SPEC *cs = &pi->cirSpec; EIR_SPEC *es = &pi->eirSpec; BW_POOL *bwp = tclass->bwp; int eirDemand, eirLimit, eirExtra; g->lastScaledBps = bps; /*
* First set CIR...
*/ if (cs->limit > 0) { g->rspec.cirLimit = scaleRate(bps,cs->base,cs->limit, tclass->bwp->limit);
} else g->rspec.cirLimit = g->cirInUse = 0; eirDemand = bps - g->rspec.cirLimit; eirVectorSet(g->rspec.eirDemandVector, 0); /* Zero vector... */ g->rspec.eirDemand = 0; if (es->hpLimit > 0 && eirDemand > 0) { int cap; /*
* Descending priority levels will get exponential chunks of
* bandwidth, increasing in size as priority decreases. */ eirLimit = scaleRate(eirDemand, es->hpBase, es->hpLimit, bwp->limit); /*
* We allocate all of the rest (of BPS) at lower priority
* levels unless a TOTAL LIMIT is set... */ cap = bwp->limit; cap -= g->rspec.cirLimit; if (es->totalLimit > 0 && es->totalLimit < cap) cap = es->totalLimit; /*
* cap is now the largest amount of EIR we can have */ eirExtra = min(eirDemand, cap) - eirLimit; if (eirExtra < 0) eirExtra = 0;
/*
* Assign limits to descending priority levels... */
(void)eirAssignDemand(es->hp, eirLimit, eirExtra, &g->rspec);
} info4("pol %s: gear %d cir=%d, eir=%d", pi->tclass->name, g, g->rspec.cirLimit, g->rspec.eirDemand); }
************************************************************************** ** * * scaleRate(bps, base, limit, total) *
************************************************************************** **
*/
INT32 scaleRate(INT32 bps, INT32 base, INT32 limit, INT32 total)
{
INT32 pct;
/*
* If detected rate is lower than scale-base, just return base. */ if (bps <= base) { bps <<= 2; if (bps > base) bps = base; return bps;
}
/*
* Constrain total...
*/ if (total > limit << 1) total = limit << 1;
/*
* If detected rate higher than total, just return limit */ if (bps >= total) return limit; /*
* OK - we will allocate base + some percentage of (limit - base),
* i.e. up to limit. Since we know bps < total, this will always
* return a number less than or equal to limit. */ pct = (bps << 7) / total; return base + (((limit - base) * pct) >> 7); } /*
/**************************************************************************
 * eirAssignDemand(hp, hpLimit, eirExtra, rspec)
 **************************************************************************/
static void eirAssignDemand(int hp, int eirLimit, int eirExtra, RATE_SPEC *r)
{
    BPS *b = &r->eirDemandVector[hp];
    int i, active;
    int extra, extraChunk;
    int left;

    extra = max(0, (eirExtra - (eirLimit * (hp + 1))));
    left = eirExtra + eirLimit;
    r->eirDemand = 0;
    active = 0;
    /* NOTE: EIR demand has been set to 0... */
    for (i = hp; i >= 0; i--, b--) {
        if (left < EIR_MIN_CHUNK && active != 0)
            break;
        active = 1;
        if (i == 0)
            *b = left;
        else {
            *b = min(left, eirLimit);
            if (extra > 0 && i < hp) {
                extraChunk = extra >> (i + 1);
                *b += extraChunk;
            }
            left -= *b;
        }
        r->eirDemand += *b;
    }
}
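A minimal sketch of how the routines above might be driven when a flow's connection speed is detected or changes follows; detectConnectionSpeed() and the surrounding function name are assumptions made for illustration and are not part of the disclosed source.

/*
 * Sketch only (assumed helper names): when the speed of the link carrying
 * a flow is detected or changes, rescale the flow's gear to that speed.
 */
static void onSpeedChange(POLICY_ITEM *pi, TCLASS *tclass, GEARSHIFT *g)
{
    SPEED bps = detectConnectionSpeed(g);   /* hypothetical speed detector */

    if (bps != g->lastScaledBps)            /* only rescale on a change    */
        scaleToLoad(pi, tclass, g, bps);
}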

Claims

WHAT IS CLAIMED IS:
1. A method for allocating bandwidth in a packet telecommunication system having any number of flows, including zero, using a classification paradigm comprising the steps of: parsing a packet into a flow specification, wherein said flow specification contains one or more instances of any of the following: a protocol family designation, a direction of packet flow designation, a protocol type designation, a plurality of hosts, a plurality of ports, in http protocol packets, a pointer to a URL; thereupon, matching the flow specification of the parsing step to a plurality of hierarchically-recognized classes represented by a plurality of nodes, each node having a traffic specification and a mask, according to the mask; thereupon, having found a matching node in the matching step, associating said flow specification with one of said plurality of hierarchically-recognized classes represented by a plurality of nodes; and allocating bandwidth resources according to a policy associated with said class.
2. The method of claim 1 wherein a node may have one or many traffic specifications.
3. The method of claim 1 wherein said plurality of hierarchically-recognized classes represents a plurality of orthogonal flow attributes.
4. The method of claim 1 wherein allocating bandwidth resources further comprises the steps of: for rate based policies having a guaranteed information rate (GIR) and an excess information rate (EIR), allocating GIR until no further GIR is available; thereupon, allocating EIR; for priority based policies, having a priority, allocating unreserved service according to said priority.
5. The method of claim 1 wherein said allocating bandwidth resources further comprises the step of: allocating a combination of GIR resources, EIR resources and unreserved resources, wherein GIR resources and EIR resources are allocated based upon speed scaling and unreserved resources are allocated based upon a priority.
6. The method of claim 5 wherein allocating guaranteed information rate (GIR) resources and excess information rate (EIR) resources to said flow based upon speed scaling further comprises the steps of: detecting the speed of a network link upon which said flow is transported; allocating available bandwidth resources based upon the speed detected.
7. The method of claim 5 wherein allocating unreserved resources to a particular flow, said flow having an input rate, said flow being one of said plurality of flows, based upon a priority further comprises the steps of: extrapolating from said input rate of the detecting step an extrapolated unreserved demand (EUD); aggregating together a plurality of individual flow demands to form an aggregate rate demand (ARD); combining said EUD and said ARD to form an instantaneous total demand; allocating available bandwidth resources based upon the instantaneous total demand determined in the combining step from highest priority demand first to lower priority demanded to yield a satisfaction percentage of demand; determining a target rate for individual flows by multiplying said individual flow demand for each flow by said satisfaction percentage of demand determined in the allocating step.
8. The method of claim 7 wherein said unreserved resources are comprised of a plurality of individual inputs, each input having a priority, further comprising the step of: determining a target rate for said units according to said priority by multiplying said extrapolated unreserved demand by said satisfaction percentage of demand determined in the allocating step.
9. The method of claim 1 further comprising the step of determining that if guaranteed information rate resources are insufficient to accommodate said flow, invoking an admissions policy.
10. In a packet communication environment allocated into layers, including at least a transport layer, link layer and an application layer, a method for classifying Internet Protocol (IP) flows at the transport layer for use in determining a policy, or rule of assignment of a service level, and enforcing that policy by direct rate control, said method comprising: applying individual instances of traffic classes to a classification tree based on selectable information obtained from an application layer of a communication protocol; and thereupon, mapping the IP flow to the defined traffic classes, said traffic classes having an associated policy.
11. A method for managing bandwidth on Internet Protocol (IP) flows in a packet communication environment allocated into layers, including at least a transport layer, a link layer and an application layer, said method comprising: automatically detecting selectable information about said flow; determining a policy for assigning a service level to said flow based upon said automatically detected selectable information about said flow; and implementing said policy by direct rate control of said flow.
12. The method of claim 11 wherein said determining step further comprises applying individual instances of traffic class specifications, each traffic class specification having an associated policy, to said flow based on selectable information; thereupon, associating with said flow the particular policy associated with said traffic class.
13. The method of claim 12 wherein said traffic classes have a hierarchical relationship to one another.
14. A method for allocating network bandwidth, which bandwidth comprises guaranteed information rate (GIR) resources, excess information rate (EIR) resources and unreserved rate resources, on Internet Protocol (IP) flows in a packet communication environment allocated into layers, including at least a transport layer, a link layer and an application layer, said method comprising: automatically detecting selectable information about said flow; determining a policy for assigning a service level to said flow based upon said automatically detected selectable information about said flow, said policy directing either reserved service or unreserved service; allocating bandwidth representing a combination of GIR resources, EIR resources and unreserved resources to a plurality of flows assigned said service level by said determining step.
15. The method of claim 14 wherein said unreserved resources are comprised of a plurality of individual inputs, each input having a priority, further comprising the step of: determining a target rate for said units according to said priority by multiplying said extrapolated unreserved demand by said satisfaction percentage of demand determined in the allocating step.
16. The method of claim 14 wherein said allocating bandwidth representing a combination of GIR resources, EIR resources and unreserved resources comprises allocating GIR resources and EIR resources based upon a speed scaling and allocating unreserved resources based upon a priority.
17. The method of claim 16 wherein allocating unreserved resources based upon a priority to a plurality of flows, each having an input rate, further comprises the steps of: detecting the input rate of a flow; extrapolating from said input rate of the detecting step an extrapolated unreserved demand (EUD); aggregating together a plurality of individual flow demands to form an aggregate rate demand (ARD); combining said EUD and said ARD to form an instantaneous total demand; allocating available bandwidth resources based upon the instantaneous total demand determined in the combining step from highest priority demand first to lower priority demanded to yield a satisfaction percentage of demand; determining a target rate for individual flows by multiplying said individual flow demand for each flow by said satisfaction percentage of demand determined in the allocating step.
18. The method of claim 17 wherein said unreserved resources are comprised of a plurality of individual inputs, each input having a priority, further comprising the step of: determining a target rate for said units according to said priority by multiplying said extrapolated unreserved demand by said satisfaction percentage of demand determined in the allocating step.
19. The method of claim 14 further comprising the step of determining that if guaranteed bandwidth available is insufficient to accommodate said flow, invoking an admissions policy.
20. The method of claim 14 wherein said determining step further comprises applying individual instances of traffic classes, each traffic class having an associated policy, to said flow based on selectable information; thereupon, associating with said flow the particular policy associated with said traffic class.
21. The method of claim 19 wherein said traffic classes have a hierarchical relationship to one another.
22. The method of claim 14 further comprising allocating said bandwidth representing unreserved data resources among flows based upon a priority associated with said flows.
23. The method of claim 14 further comprising applying a partition to one or more classes, said partition having a limit on aggregate data rate flow.
24. The method of claim 23 wherein said limit is a soft partition limiting EIR for one or more classes.
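Claims 1 and 10 above recite classifying a parsed flow specification against a hierarchy of traffic-class nodes, each carrying a traffic specification and a mask. A minimal sketch of one way such masked, hierarchical matching could look is given below; the FLOW_SPEC fields, the CLASS_NODE structure, and the flowMatches() helper are illustrative assumptions, not the structures actually used in the disclosure.

/* Sketch only: hypothetical structures for masked, hierarchical matching. */
typedef struct {
    INT32 protocolFamily, direction, protocolType;
    INT32 srcHost, dstHost, srcPort, dstPort;
} FLOW_SPEC;

typedef struct classNode {
    FLOW_SPEC tspec;              /* traffic specification for this class    */
    INT32     mask;               /* which FLOW_SPEC fields must match       */
    struct classNode *child;      /* more specific classes under this one    */
    struct classNode *sibling;    /* alternative classes at this level       */
    POLICY_ITEM *policy;          /* policy applied when this class matches  */
} CLASS_NODE;

/* Walk the tree depth-first; return the deepest node whose masked fields match. */
static CLASS_NODE *classify(CLASS_NODE *node, FLOW_SPEC *fs)
{
    for (; node != NULL; node = node->sibling) {
        if (flowMatches(&node->tspec, fs, node->mask)) {   /* assumed helper */
            CLASS_NODE *deeper = classify(node->child, fs);
            return deeper ? deeper : node;
        }
    }
    return NULL;
}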
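Claims 7 and 17 above describe allocating unreserved bandwidth by aggregating per-flow demands into an aggregate rate demand, satisfying demand from highest priority downward, and applying the resulting satisfaction percentage to each flow's extrapolated unreserved demand. A minimal sketch of that computation follows; the structures, the 0..128 fixed-point scale (echoing the >> 7 arithmetic used by scaleRate() above), and the function names are assumptions chosen for illustration only.

/*
 * Sketch only (assumed structures and names): distribute an available
 * unreserved rate across priority levels, then derive a per-flow target
 * rate as (flow demand) * (satisfaction percentage of its priority).
 */
#define NPRI 8

typedef struct {
    INT32 demand[NPRI];        /* aggregate rate demand (ARD) per priority */
    INT32 satisfaction[NPRI];  /* granted/demanded, scaled by 128          */
} UNRESERVED_POOL;

static void allocateUnreserved(UNRESERVED_POOL *p, INT32 available)
{
    int pri;
    for (pri = NPRI - 1; pri >= 0; pri--) {      /* highest priority first */
        INT32 granted = (p->demand[pri] <= available) ? p->demand[pri] : available;
        p->satisfaction[pri] =
            (p->demand[pri] > 0) ? (granted << 7) / p->demand[pri] : 0;
        available -= granted;
    }
}

/* Target rate for one flow with extrapolated unreserved demand eud. */
static INT32 flowTargetRate(UNRESERVED_POOL *p, int pri, INT32 eud)
{
    return (eud * p->satisfaction[pri]) >> 7;
}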
PCT/US1997/022550 1996-12-09 1997-12-08 Method for managing flow bandwidth utilization at network, transport and application layers in store and forward network WO1998026510A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU55967/98A AU5596798A (en) 1996-12-09 1997-12-08 Method for managing flow bandwidth utilization at network, transport and application layers in store and forward network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US3248596P 1996-12-09 1996-12-09
US60/032,485 1996-12-09

Publications (2)

Publication Number Publication Date
WO1998026510A2 true WO1998026510A2 (en) 1998-06-18
WO1998026510A3 WO1998026510A3 (en) 1998-10-01

Family

ID=21865178

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/022550 WO1998026510A2 (en) 1996-12-09 1997-12-08 Method for managing flow bandwidth utilization at network, transport and application layers in store and forward network

Country Status (3)

Country Link
US (2) US6046980A (en)
AU (1) AU5596798A (en)
WO (1) WO1998026510A2 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0971518A2 (en) * 1998-07-09 2000-01-12 Nec Corporation Communication quality control system
WO2000027044A2 (en) * 1998-11-03 2000-05-11 Hughes Electronics Corporation Selective allocation and enforcement of bandwidth usage requirements
US6084855A (en) * 1997-02-18 2000-07-04 Nokia Telecommunications, Oy Method and apparatus for providing fair traffic scheduling among aggregated internet protocol flows
US6560230B1 (en) 1999-02-01 2003-05-06 Redback Networks Inc. Packet scheduling methods and apparatus
US6567408B1 (en) 1999-02-01 2003-05-20 Redback Networks Inc. Methods and apparatus for packet classification with multi-level data structure
WO2004057812A1 (en) * 2002-12-20 2004-07-08 Koninklijke Philips Electronics N.V. Network bandwidth division on a per-user basis
US6795441B1 (en) 2002-08-30 2004-09-21 Redback Networks, Inc. Hierarchy tree-based quality of service classification for packet processing
US6901592B2 (en) 2000-11-27 2005-05-31 Redback Networks, Inc. Method and apparatus for the enumeration of sets of concurrently scheduled events
EP1796304A2 (en) * 1999-07-09 2007-06-13 Malibu Networks Inc. TCP/IP packet-centric wireless transmission system architecture
US7283472B2 (en) 2002-08-30 2007-10-16 Redback Networks Inc. Priority-based efficient fair queuing for quality of service classification for packet processing
US7339953B2 (en) 2002-08-30 2008-03-04 Redback Networks, Inc. Surplus redistribution for quality of service classification for packet processing
EP1990960A1 (en) * 2007-05-09 2008-11-12 Wayport, Inc. System and method for providing application categorization and quality of service in a network with multiple users
US8793390B2 (en) 2006-05-23 2014-07-29 Blue Coat Systems, Inc. Systems and methods for protocol detection in a proxy
US9712289B2 (en) 1998-07-10 2017-07-18 Intellectual Ventures I Llc Transmission control protocol/internet protocol (TCP/IP) packet-centric wireless point to multi-point (PtMP) transmission system architecture

Families Citing this family (218)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6490251B2 (en) * 1997-04-14 2002-12-03 Nortel Networks Limited Method and apparatus for communicating congestion information among different protocol layers between networks
US6341309B1 (en) * 1997-05-27 2002-01-22 Novell, Inc. Firewall system for quality of service management
JP3413788B2 (en) * 1997-09-25 2003-06-09 日本電信電話株式会社 Communication method having communication protocol for performing flow control between layers and data communication terminal
WO1999027684A1 (en) * 1997-11-25 1999-06-03 Packeteer, Inc. Method for automatically classifying traffic in a packet communications network
US6591299B2 (en) 1997-11-25 2003-07-08 Packeteer, Inc. Method for automatically classifying traffic with enhanced hierarchy in a packet communications network
US8060613B2 (en) * 1998-02-10 2011-11-15 Level 3 Communications, Llc Resource invalidation in a content delivery network
US6185598B1 (en) * 1998-02-10 2001-02-06 Digital Island, Inc. Optimized network resource location
US6590885B1 (en) 1998-07-10 2003-07-08 Malibu Networks, Inc. IP-flow characterization in a wireless point to multi-point (PTMP) transmission system
US6680922B1 (en) 1998-07-10 2004-01-20 Malibu Networks, Inc. Method for the recognition and operation of virtual private networks (VPNs) over a wireless point to multi-point (PtMP) transmission system
US6452915B1 (en) * 1998-07-10 2002-09-17 Malibu Networks, Inc. IP-flow classification in a wireless point to multi-point (PTMP) transmission system
US6594246B1 (en) 1998-07-10 2003-07-15 Malibu Networks, Inc. IP-flow identification in a wireless point to multi-point transmission system
US6628629B1 (en) * 1998-07-10 2003-09-30 Malibu Networks Reservation based prioritization method for wireless transmission of latency and jitter sensitive IP-flows in a wireless point to multi-point transmission system
US6640248B1 (en) 1998-07-10 2003-10-28 Malibu Networks, Inc. Application-aware, quality of service (QoS) sensitive, media access control (MAC) layer
US6580726B1 (en) * 1998-09-30 2003-06-17 Hewlett-Packard Development Company, L.P. Multiple access prevention for datagram-based control protocols method
US6167445A (en) 1998-10-26 2000-12-26 Cisco Technology, Inc. Method and apparatus for defining and implementing high-level quality of service policies in computer networks
US6286052B1 (en) 1998-12-04 2001-09-04 Cisco Technology, Inc. Method and apparatus for identifying network data traffic flows and for applying quality of service treatments to the flows
US6430155B1 (en) * 1998-11-30 2002-08-06 Cisco Technology, Inc. Congestion avoidance on communications networks
US6633563B1 (en) 1999-03-02 2003-10-14 Nortel Networks Limited Assigning cell data to one of several processors provided in a data switch
US6466976B1 (en) * 1998-12-03 2002-10-15 Nortel Networks Limited System and method for providing desired service policies to subscribers accessing the internet
US6614774B1 (en) * 1998-12-04 2003-09-02 Lucent Technologies Inc. Method and system for providing wireless mobile server and peer-to-peer services with dynamic DNS update
US20020188720A1 (en) * 1998-12-28 2002-12-12 William F. Terrell Method and apparatus for dynamically controlling the provision of differentiated services
US6483805B1 (en) * 1998-12-28 2002-11-19 Nortel Networks Limited Internet differentiated services service for transaction applications
US6611872B1 (en) * 1999-01-11 2003-08-26 Fastforward Networks, Inc. Performing multicast communication in computer networks by using overlay routing
US6438105B1 (en) * 1999-02-08 2002-08-20 3Com Corporation Reliable internet facsimile protocol
US6560243B1 (en) * 1999-04-30 2003-05-06 Hewlett-Packard Development Company System and method for receiver based allocation of network bandwidth
US6735633B1 (en) * 1999-06-01 2004-05-11 Fast Forward Networks System for bandwidth allocation in a computer network
US6275470B1 (en) 1999-06-18 2001-08-14 Digital Island, Inc. On-demand overlay routing for computer-based communication networks
US6959006B1 (en) 1999-06-29 2005-10-25 Adc Telecommunications, Inc. Service delivery unit for an enterprise network
US6847609B1 (en) 1999-06-29 2005-01-25 Adc Telecommunications, Inc. Shared management of a network entity
US7346677B1 (en) 1999-07-02 2008-03-18 Cisco Technology, Inc. Method and apparatus for creating policies for policy-based management of quality of service treatments of network data traffic flows
GB2353172B (en) * 1999-08-04 2001-09-26 3Com Corp Network switch including bandwidth allocation controller
US8135842B1 (en) 1999-08-16 2012-03-13 Nvidia Corporation Internet jack
US6587432B1 (en) * 1999-08-31 2003-07-01 Intel Corporation Method and system for diagnosing network congestion using mobile agents
US6785704B1 (en) * 1999-12-20 2004-08-31 Fastforward Networks Content distribution system for operation over an internetwork including content peering arrangements
US6647419B1 (en) * 1999-09-22 2003-11-11 Hewlett-Packard Development Company, L.P. System and method for allocating server output bandwidth
US8543901B1 (en) 1999-11-01 2013-09-24 Level 3 Communications, Llc Verification of content stored in a network
GB2356104B (en) * 1999-11-04 2001-10-10 3Com Corp Network switch including bandwidth controller
US6788647B1 (en) 1999-11-19 2004-09-07 Cisco Technology, Inc. Automatically applying bi-directional quality of service treatment to network data flows
US7010611B1 (en) * 1999-12-21 2006-03-07 Converged Access, Inc. Bandwidth management system with multiple processing engines
US7149222B2 (en) * 1999-12-21 2006-12-12 Converged Access, Inc. Integrated access point network device
US7126915B1 (en) * 1999-12-28 2006-10-24 At&T Corp. Network traffic regulation
AU2001238322A1 (en) 2000-02-18 2001-08-27 Cedere Corporation Multiple management system and method
US7444669B1 (en) 2000-05-05 2008-10-28 Microsoft Corporation Methods and systems for providing variable rates of service for accessing networks, methods and systems for accessing the internet
US7032241B1 (en) * 2000-02-22 2006-04-18 Microsoft Corporation Methods and systems for accessing networks, methods and systems for accessing the internet
US6834341B1 (en) 2000-02-22 2004-12-21 Microsoft Corporation Authentication methods and systems for accessing networks, authentication methods and systems for accessing the internet
US6687224B1 (en) * 2000-02-28 2004-02-03 Orckit Communications, Ltd. Bandwidth sharing method
US6992983B1 (en) 2000-05-05 2006-01-31 Macromedia, Inc. Bandwidth detection in a heterogeneous network with parallel and proxy modes
US7499453B2 (en) * 2000-05-19 2009-03-03 Cisco Technology, Inc. Apparatus and methods for incorporating bandwidth forecasting and dynamic bandwidth allocation into a broadband communication system
US6823385B2 (en) * 2000-05-19 2004-11-23 Scientifc Atlanta, Inc. Allocating access across a shared communications medium to user classes
US7032031B2 (en) * 2000-06-23 2006-04-18 Cloudshield Technologies, Inc. Edge adapter apparatus and method
US8204082B2 (en) 2000-06-23 2012-06-19 Cloudshield Technologies, Inc. Transparent provisioning of services over a network
US7046658B1 (en) 2000-06-23 2006-05-16 At & T Corp. Method and system for customer selected direct dialed voice-over-internet protocol (VOIP)
US6728785B1 (en) 2000-06-23 2004-04-27 Cloudshield Technologies, Inc. System and method for dynamic compression of data
US7114008B2 (en) 2000-06-23 2006-09-26 Cloudshield Technologies, Inc. Edge adapter architecture apparatus and method
US6829654B1 (en) * 2000-06-23 2004-12-07 Cloudshield Technologies, Inc. Apparatus and method for virtual edge placement of web sites
US9444785B2 (en) 2000-06-23 2016-09-13 Cloudshield Technologies, Inc. Transparent provisioning of network access to an application
US7003555B1 (en) * 2000-06-23 2006-02-21 Cloudshield Technologies, Inc. Apparatus and method for domain name resolution
US7111163B1 (en) 2000-07-10 2006-09-19 Alterwan, Inc. Wide area network using internet with quality of service
US6959332B1 (en) 2000-07-12 2005-10-25 Cisco Technology, Inc. Basic command representation of quality of service policies
US6842463B1 (en) * 2000-07-14 2005-01-11 Nortel Networks Limited Automated and adaptive management of bandwidth capacity in telecommunications networks
US7099932B1 (en) 2000-08-16 2006-08-29 Cisco Technology, Inc. Method and apparatus for retrieving network quality of service policy information from a directory in a quality of service policy management system
US7412514B2 (en) * 2000-08-17 2008-08-12 Hoshiko Llc Method and apparatus for improving bandwidth efficiency in a computer network
US8619793B2 (en) * 2000-08-21 2013-12-31 Rockstar Consortium Us Lp Dynamic assignment of traffic classes to a priority queue in a packet forwarding device
WO2002019132A1 (en) * 2000-09-01 2002-03-07 Tut Systems, Inc. A method and system to pre-compile configuration information for a data communications device
US7096260B1 (en) * 2000-09-29 2006-08-22 Cisco Technology, Inc. Marking network data packets with differentiated services codepoints based on network load
US6822940B1 (en) 2000-09-29 2004-11-23 Cisco Technology, Inc. Method and apparatus for adapting enforcement of network quality of service policies based on feedback about network conditions
US6988133B1 (en) 2000-10-31 2006-01-17 Cisco Technology, Inc. Method and apparatus for communicating network quality of service policy information to a plurality of policy enforcement points
EP1204248A1 (en) * 2000-11-06 2002-05-08 Agilent Technologies, Inc. (a Delaware corporation) Monitoring traffic in telecommunications networks
US6807156B1 (en) 2000-11-07 2004-10-19 Telefonaktiebolaget Lm Ericsson (Publ) Scalable real-time quality of service monitoring and analysis of service dependent subscriber satisfaction in IP networks
US7039717B2 (en) * 2000-11-10 2006-05-02 Nvidia Corporation Internet modem streaming socket method
US7050396B1 (en) 2000-11-30 2006-05-23 Cisco Technology, Inc. Method and apparatus for automatically establishing bi-directional differentiated services treatment of flows in a network
US20020075901A1 (en) * 2000-12-19 2002-06-20 Bruce Perlmutter Bandwidth management for tunneling servers
US7937470B2 (en) 2000-12-21 2011-05-03 Oracle International Corp. Methods of determining communications protocol latency
US20020126672A1 (en) * 2001-01-10 2002-09-12 Nelson Chow Method and apparatus for a flexible and reconfigurable packet classifier using content addressable memory
US7143154B2 (en) * 2001-01-26 2006-11-28 Lucent Technologies Inc. Internet protocol security framework utilizing predictive security association re-negotiation
US7379475B2 (en) * 2002-01-25 2008-05-27 Nvidia Corporation Communications processor
US6987734B2 (en) * 2001-04-20 2006-01-17 Clear Channel Wireless, Inc. Provision of digital data via multiple broadcasts
US8218555B2 (en) 2001-04-24 2012-07-10 Nvidia Corporation Gigabit ethernet adapter
US7042848B2 (en) * 2001-05-04 2006-05-09 Slt Logic Llc System and method for hierarchical policing of flows and subflows of a data stream
US6944168B2 (en) * 2001-05-04 2005-09-13 Slt Logic Llc System and method for providing transformation of multi-protocol packets in a data stream
US6904057B2 (en) * 2001-05-04 2005-06-07 Slt Logic Llc Method and apparatus for providing multi-protocol, multi-stage, real-time frame classification
US6901052B2 (en) 2001-05-04 2005-05-31 Slt Logic Llc System and method for policing multiple data flows and multi-protocol data flows
US7082502B2 (en) * 2001-05-15 2006-07-25 Cloudshield Technologies, Inc. Apparatus and method for interfacing with a high speed bi-directional network using a shared memory to store packet data
US7210022B2 (en) * 2001-05-15 2007-04-24 Cloudshield Technologies, Inc. Apparatus and method for interconnecting a processor to co-processors using a shared memory as the communication interface
US7116636B2 (en) * 2001-05-15 2006-10-03 Northrop Grumman Corporation Data rate adjuster using transport latency
US20030007485A1 (en) * 2001-05-22 2003-01-09 Motorola, Inc Method and system for operating an edge router
US7272645B2 (en) * 2001-05-25 2007-09-18 Sbc Technology Resources, Inc. Method of improving the reliability of peer-to-peer network downloads
GB2375927B (en) * 2001-05-26 2004-09-29 Cambridge Broadband Ltd Method and apparatus for communications bandwidth allocation
US6922732B2 (en) * 2001-06-08 2005-07-26 Mobixell Networks Inc. Class-based per-flow queuing across multiple hierarchical link-sharing structures
US7072291B1 (en) 2001-08-23 2006-07-04 Cisco Technology, Inc. Devices, softwares and methods for redundantly encoding a data stream for network transmission with adjustable redundant-coding delay
US7406522B2 (en) * 2001-09-26 2008-07-29 Packeteer, Inc. Dynamic partitioning of network resources
US7860964B2 (en) 2001-09-28 2010-12-28 Level 3 Communications, Llc Policy-based content delivery network selection
JP4160506B2 (en) * 2001-09-28 2008-10-01 レヴェル 3 シーディーエヌ インターナショナル インコーポレーテッド. Configurable adaptive wide area traffic control and management
US7373644B2 (en) * 2001-10-02 2008-05-13 Level 3 Communications, Llc Automated server replication
US7389537B1 (en) 2001-10-09 2008-06-17 Juniper Networks, Inc. Rate limiting data traffic in a network
US8914480B1 (en) * 2001-10-15 2014-12-16 6020356 Canada Inc. Method and device for transparent interception of socket connections
US20080279222A1 (en) * 2001-10-18 2008-11-13 Level 3 Communications Llc Distribution of traffic across a computer network
US20030079027A1 (en) * 2001-10-18 2003-04-24 Michael Slocombe Content request routing and load balancing for content distribution networks
US7688863B2 (en) 2001-10-19 2010-03-30 Renesas Technology America, Inc. Methods and apparatus for sharing network bandwidth
US20030081599A1 (en) * 2001-10-30 2003-05-01 Chui-Tsang Wu System and method for data transmission control
JP2003158543A (en) * 2001-11-22 2003-05-30 Anritsu Corp Relaying device and relaying method
US20030101260A1 (en) * 2001-11-29 2003-05-29 International Business Machines Corporation Method, computer program element and system for processing alarms triggered by a monitoring system
US6999417B1 (en) * 2001-11-30 2006-02-14 Cisco Technology, Inc. Devices, softwares and methods for incorporating burstiness of packet loss metric in QoS based network routing
US20030103453A1 (en) * 2001-12-05 2003-06-05 Broadweb Corporation System and method for managing flow bandwidth utilization in a packet communication environment
US7032072B1 (en) * 2001-12-31 2006-04-18 Packeteer, Inc. Method and apparatus for fast lookup of related classification entities in a tree-ordered classification hierarchy
US7299277B1 (en) 2002-01-10 2007-11-20 Network General Technology Media module apparatus and method for use in a network monitoring environment
US6801940B1 (en) * 2002-01-10 2004-10-05 Networks Associates Technology, Inc. Application performance monitoring expert
US7333432B1 (en) 2002-02-12 2008-02-19 Cisco Technology, Inc. Method and apparatus for configuring network elements to support real time applications
US7477600B1 (en) 2002-02-12 2009-01-13 Cisco Technology, Inc. Method and apparatus for configuring network elements to support real time applications based on meta-templates
US9167036B2 (en) 2002-02-14 2015-10-20 Level 3 Communications, Llc Managed object replication and delivery
US7535913B2 (en) * 2002-03-06 2009-05-19 Nvidia Corporation Gigabit ethernet adapter supporting the iSCSI and IPSEC protocols
US7227840B1 (en) * 2002-03-18 2007-06-05 Juniper Networks, Inc. High performance probabilistic rate policer
US7236459B1 (en) 2002-05-06 2007-06-26 Packeteer, Inc. Method and apparatus for controlling data transmission volume using explicit rate control and queuing without data rate supervision
US6912574B2 (en) * 2002-05-29 2005-06-28 Dhc, Llc System and process for allocating flow capacity in a network
US7293091B2 (en) * 2002-05-30 2007-11-06 Intel Corporation Method and apparatus for disruption sensitive network data management
US7161904B2 (en) * 2002-06-04 2007-01-09 Fortinet, Inc. System and method for hierarchical metering in a virtual router based network switch
US7450507B2 (en) * 2002-06-05 2008-11-11 Lucent Technologies Inc. Hierarchal rate-limiting at a network node that utilizes an infinity rate-limit check
US7305464B2 (en) * 2002-09-03 2007-12-04 End Ii End Communications, Inc. Systems and methods for broadband network optimization
US7433304B1 (en) * 2002-09-06 2008-10-07 Packeteer, Inc. Classification data structure enabling multi-dimensional network traffic classification and control schemes
US10051092B2 (en) * 2002-10-15 2018-08-14 Rockwell Collins, Inc. Method and device for transparent interception of socket connections
US6996763B2 (en) * 2003-01-10 2006-02-07 Qualcomm Incorporated Operation of a forward link acknowledgement channel for the reverse link data
US8391249B2 (en) 2003-02-18 2013-03-05 Qualcomm Incorporated Code division multiplexing commands on a code division multiplexed channel
US8023950B2 (en) 2003-02-18 2011-09-20 Qualcomm Incorporated Systems and methods for using selectable frame durations in a wireless communication system
US8081598B2 (en) * 2003-02-18 2011-12-20 Qualcomm Incorporated Outer-loop power control for wireless communication systems
US20040160922A1 (en) 2003-02-18 2004-08-19 Sanjiv Nanda Method and apparatus for controlling data rate of a reverse link in a communication system
US7155236B2 (en) 2003-02-18 2006-12-26 Qualcomm Incorporated Scheduled and autonomous transmission and acknowledgement
US7660282B2 (en) * 2003-02-18 2010-02-09 Qualcomm Incorporated Congestion control in a wireless data network
US8150407B2 (en) * 2003-02-18 2012-04-03 Qualcomm Incorporated System and method for scheduling transmissions in a wireless communication system
IL154739A0 (en) * 2003-03-04 2003-10-31 Bamboo Mediacasting Ltd Segmented data delivery over non-reliable link
US8705588B2 (en) * 2003-03-06 2014-04-22 Qualcomm Incorporated Systems and methods for using code space in spread-spectrum communications
US7215930B2 (en) * 2003-03-06 2007-05-08 Qualcomm, Incorporated Method and apparatus for providing uplink signal-to-noise ratio (SNR) estimation in a wireless communication
US8477592B2 (en) * 2003-05-14 2013-07-02 Qualcomm Incorporated Interference and noise estimation in an OFDM system
US8239516B2 (en) * 2003-05-15 2012-08-07 At&T Intellectual Property I, L.P. Methods, systems and computer program products for proactively offering a network turbo boost service to end users
US8521889B2 (en) * 2003-05-15 2013-08-27 At&T Intellectual Property I, L.P. Methods, systems, and computer program products for modifying bandwidth and/or quality of service for a user session in a network
US20050021739A1 (en) * 2003-05-15 2005-01-27 Carter Sharon E. Methods, systems and computer program products for communicating the expected efficacy of invoking a network turbo boost service
US8204042B2 (en) 2003-05-15 2012-06-19 At&T Intellectual Property I, L.P. Methods, systems, and computer program products for establishing VoIP service in a network
US8489949B2 (en) * 2003-08-05 2013-07-16 Qualcomm Incorporated Combining grant, acknowledgement, and rate control commands
US20050039135A1 (en) * 2003-08-11 2005-02-17 Konstantin Othmer Systems and methods for navigating content in an interactive ticker
US20050210391A1 (en) * 2003-08-11 2005-09-22 Core Mobility, Inc. Systems and methods for navigating content in an interactive ticker
US7343564B2 (en) 2003-08-11 2008-03-11 Core Mobility, Inc. Systems and methods for displaying location-based maps on communication devices
US7430724B2 (en) * 2003-08-11 2008-09-30 Core Mobility, Inc. Systems and methods for displaying content in a ticker
US7370283B2 (en) * 2003-08-11 2008-05-06 Core Mobility, Inc. Systems and methods for populating a ticker using multiple data transmission modes
US20060236258A1 (en) 2003-08-11 2006-10-19 Core Mobility, Inc. Scheduling of rendering of location-based content
US20080089347A1 (en) * 2003-08-29 2008-04-17 End Ii End Communications Inc. Systems and methods for broadband network optimization
IL157885A0 (en) * 2003-09-11 2004-03-28 Bamboo Mediacasting Ltd Iterative forward error correction
IL157886A0 (en) * 2003-09-11 2009-02-11 Bamboo Mediacasting Ltd Secure multicast transmission
IL158158A (en) 2003-09-29 2012-05-31 Bamboo Mediacasting Ltd Distribution of multicast data to users
US7385924B1 (en) 2003-09-30 2008-06-10 Packeteer, Inc. Enhanced flow data records including traffic type data
US7664048B1 (en) * 2003-11-24 2010-02-16 Packeteer, Inc. Heuristic behavior pattern matching of data flows in enhanced network traffic classification
US7899913B2 (en) * 2003-12-19 2011-03-01 Nvidia Corporation Connection management system and method for a transport offload engine
US7260631B1 (en) 2003-12-19 2007-08-21 Nvidia Corporation System and method for receiving iSCSI protocol data units
US8176545B1 (en) 2003-12-19 2012-05-08 Nvidia Corporation Integrated policy checking system and method
US7624198B1 (en) 2003-12-19 2009-11-24 Nvidia Corporation Sequence tagging system and method for transport offload engine data lists
US8065439B1 (en) 2003-12-19 2011-11-22 Nvidia Corporation System and method for using metadata in the context of a transport offload engine
US7272400B1 (en) 2003-12-19 2007-09-18 Core Mobility, Inc. Load balancing between users of a wireless base station
US8549170B2 (en) * 2003-12-19 2013-10-01 Nvidia Corporation Retransmission system and method for a transport offload engine
US7543052B1 (en) 2003-12-22 2009-06-02 Packeteer, Inc. Automatic network traffic discovery and classification mechanism including dynamic discovery thresholds
US8275865B2 (en) * 2004-02-05 2012-09-25 International Business Machines Corporation Methods, systems and computer program products for selecting among alert conditions for resource management systems
US7249306B2 (en) * 2004-02-20 2007-07-24 Nvidia Corporation System and method for generating 128-bit cyclic redundancy check values with 32-bit granularity
US7206872B2 (en) * 2004-02-20 2007-04-17 Nvidia Corporation System and method for insertion of markers into a data stream
US7774456B1 (en) * 2004-02-27 2010-08-10 Packeteer, Inc. Methods, apparatuses and systems facilitating classification of web services network traffic
US7457870B1 (en) * 2004-02-27 2008-11-25 Packeteer, Inc. Methods, apparatuses and systems facilitating classification of web services network traffic
US7698413B1 (en) 2004-04-12 2010-04-13 Nvidia Corporation Method and apparatus for accessing and maintaining socket control information for high speed network connections
US7853676B1 (en) 2004-06-10 2010-12-14 Cisco Technology, Inc. Protocol for efficient exchange of XML documents with a network device
US7640317B2 (en) * 2004-06-10 2009-12-29 Cisco Technology, Inc. Configuration commit database approach and session locking approach in a two-stage network device configuration process
US7660882B2 (en) * 2004-06-10 2010-02-09 Cisco Technology, Inc. Deploying network element management system provisioning services
US20060034167A1 (en) * 2004-07-30 2006-02-16 International Business Machines Corporation Communication resource reservation system for improved messaging performance
US7444588B2 (en) * 2004-08-05 2008-10-28 At&T Intellectual Property, I.L.P. Methods, systems, and storage mediums for providing multi-media content storage and management services
US7545748B1 (en) 2004-09-10 2009-06-09 Packeteer, Inc. Classification and management of network traffic based on attributes orthogonal to explicit packet attributes
US7953845B2 (en) * 2004-09-30 2011-05-31 Microsoft Corporation Network-specific estimation in network node discovery
US7957379B2 (en) * 2004-10-19 2011-06-07 Nvidia Corporation System and method for processing RX packets in high speed network applications using an RX FIFO buffer
JP2008524112A (en) * 2004-12-20 2008-07-10 コーニング インコーポレイテッド How to make a glass envelope
US20060140193A1 (en) * 2004-12-29 2006-06-29 Nokia Corporation Optimization of a TCP connection
US7822837B1 (en) 2004-12-30 2010-10-26 Packeteer, Inc. Adaptive correlation of service level agreement and network application performance
US7453804B1 (en) * 2005-02-08 2008-11-18 Packeteer, Inc. Aggregate network resource utilization control scheme
DE102005010690B4 (en) * 2005-03-09 2007-04-12 Knorr-Bremse Systeme für Schienenfahrzeuge GmbH Oil-injected compressor with temperature switch
US20060217110A1 (en) * 2005-03-25 2006-09-28 Core Mobility, Inc. Prioritizing the display of non-intrusive content on a mobile communication device
US7580356B1 (en) 2005-06-24 2009-08-25 Packeteer, Inc. Method and system for dynamically capturing flow traffic data
US7639613B1 (en) 2005-06-24 2009-12-29 Packeteer, Inc. Adaptive, flow-based network traffic measurement and monitoring system
US7895295B1 (en) 2006-01-19 2011-02-22 Sprint Communications Company L.P. Scoring data flow characteristics to assign data flows to storage systems in a data storage infrastructure for a communication network
US7797395B1 (en) 2006-01-19 2010-09-14 Sprint Communications Company L.P. Assignment of data flows to storage systems in a data storage infrastructure for a communication network
US8510429B1 (en) 2006-01-19 2013-08-13 Sprint Communications Company L.P. Inventory modeling in a data storage infrastructure for a communication network
US7801973B1 (en) 2006-01-19 2010-09-21 Sprint Communications Company L.P. Classification of information in data flows in a data storage infrastructure for a communication network
US7788302B1 (en) 2006-01-19 2010-08-31 Sprint Communications Company L.P. Interactive display of a data storage infrastructure for a communication network
US7970899B2 (en) * 2006-03-03 2011-06-28 Barracuda Networks Inc Integrated data flow packet admission and traffic management apparatus
US7869366B1 (en) 2006-03-28 2011-01-11 Packeteer, Inc. Application-aware rate control
US9003292B2 (en) 2006-07-06 2015-04-07 LiveAction, Inc. System and method for network topology and flow visualization
CN101491028A (en) * 2006-07-12 2009-07-22 华为技术有限公司 Method for controlling congestion
US20080052284A1 (en) * 2006-08-05 2008-02-28 Terry Stokes System and Method for the Capture and Archival of Electronic Communications
CN101529866A (en) * 2006-08-17 2009-09-09 核心移动公司 Presence-based communication between local wireless network access points and mobile devices
US8745185B1 (en) 2006-10-12 2014-06-03 Timothy J. Salo Method and apparatus for providing semantically aware network services
US8654638B2 (en) * 2006-12-19 2014-02-18 Marcin Godlewski Dynamically adjusting bandwidth usage among subscriber streams
US20080285496A1 (en) * 2007-05-14 2008-11-20 Bamboo Mediacasting Ltd. Data download in wireless network
US8549135B1 (en) 2007-05-18 2013-10-01 Raytheon Company Method and apparatus for performing quality of service in secure networks
US8116337B2 (en) * 2007-07-27 2012-02-14 Marcin Godlewski Bandwidth requests transmitted according to priority in a centrally managed network
US7742945B2 (en) * 2007-08-27 2010-06-22 At&T Intellectual Property, I,L.P. Methods, systems and computer products to incentivize high speed internet access
US8917598B2 (en) 2007-12-21 2014-12-23 Qualcomm Incorporated Downlink flow control
US8699487B2 (en) * 2008-02-04 2014-04-15 Qualcomm Incorporated Uplink delay budget feedback
US8656239B2 (en) * 2008-02-12 2014-02-18 Qualcomm Incorporated Control of data transmission based on HARQ in a wireless communication system
US9391921B1 (en) 2008-03-18 2016-07-12 Packeteer, Inc. Virtual bandwidth management deployment architectures
CA2720353C (en) 2008-04-04 2016-01-19 Level 3 Communications, Llc Handling long-tail content in a content delivery network (cdn)
US9762692B2 (en) 2008-04-04 2017-09-12 Level 3 Communications, Llc Handling long-tail content in a content delivery network (CDN)
US10924573B2 (en) 2008-04-04 2021-02-16 Level 3 Communications, Llc Handling long-tail content in a content delivery network (CDN)
US8958363B2 (en) 2008-10-15 2015-02-17 Viasat, Inc. Profile-based bandwidth scheduler
US9331914B2 (en) * 2009-02-26 2016-05-03 Telefonaktiebolaget L M Ericsson (Publ) Service specific bandwidth policy configuration in data networks
US8972596B2 (en) * 2009-04-28 2015-03-03 The Boeing Company System and method for effecting communications among devices in different domains employing different operating protocols
US7957319B2 (en) 2009-05-08 2011-06-07 Blue Coat Systems, Inc. Classification techniques for encrypted network traffic
US8503428B2 (en) * 2010-03-18 2013-08-06 Juniper Networks, Inc. Customized classification of host bound traffic
US10187353B2 (en) 2010-06-02 2019-01-22 Symantec Corporation Behavioral classification of network data flows
US9398103B2 (en) * 2011-04-15 2016-07-19 Qualcomm Incorporated Methods and apparatus for enhancing device performance through flow control
US9894397B2 (en) 2011-12-15 2018-02-13 Cisco Technology, Inc. Controlling bit-rates for media streaming sessions
US8996674B2 (en) * 2012-03-19 2015-03-31 Salesforce.Com, Inc. System, method and computer program product for SNMP based mobile device management
CN104396187B (en) * 2012-07-31 2017-08-25 慧与发展有限责任合伙企业 The method and device that Bandwidth guaranteed and work are kept
US10033644B2 (en) 2013-02-12 2018-07-24 Adara Networks, Inc. Controlling congestion controlled flows
US10027562B2 (en) * 2014-09-12 2018-07-17 Cisco Technology, Inc. Detecting network services based on network flow data
US10630600B2 (en) * 2017-10-11 2020-04-21 Nicira, Inc. Adaptive network input-output control in virtual environments
US10833923B2 (en) * 2017-10-26 2020-11-10 Skylo Technologies Inc. Dynamic multiple access for distributed device communication networks with scheduled and unscheduled transmissions
US10306442B1 (en) 2018-01-16 2019-05-28 Skylo Technologies Inc. Devices and methods for specialized machine-to-machine communication transmission network modes via edge node capabilities

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5157659A (en) * 1990-06-05 1992-10-20 Canai Computer And Network Architecture Inc. Packet communication system and method of clearing communication bus
US5224099A (en) * 1991-05-17 1993-06-29 Stratacom, Inc. Circuitry and method for fair queuing and servicing cell traffic using hopcounts and traffic classes
US5408465A (en) * 1993-06-21 1995-04-18 Hewlett-Packard Company Flexible scheme for admission control of multimedia streams on integrated networks
US5557609A (en) * 1993-12-01 1996-09-17 Kabushiki Kaisha Toshiba Switching apparatus for ATM
US5557611A (en) * 1994-07-26 1996-09-17 Cselt - Centro Studi E Laboratori Telecomunicazioni S.P.A. ATM cross-connect node utilizing two passes through the connection network with shaping between passes for optimal resource allocation in a bursty environment
US5572522A (en) * 1994-04-28 1996-11-05 Hewlett-Packard Company Asynchronous transfer mode switch with multicasting ability

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4870641A (en) * 1988-03-30 1989-09-26 Bell Communications Research, Inc. Multichannel bandwidth allocation
US5442630A (en) * 1991-05-24 1995-08-15 Gagliardi; Ugo O. ISDN interfacing of local area networks
JP3278865B2 (en) * 1991-06-28 2002-04-30 日本電気株式会社 Traffic control method
US5347511A (en) * 1993-06-07 1994-09-13 International Business Machines Corp. Traffic management in packet communications networks
US5581703A (en) * 1993-06-29 1996-12-03 International Business Machines Corporation Method and apparatus for reserving system resources to assure quality of service
US5694548A (en) * 1993-06-29 1997-12-02 International Business Machines Corporation System and method for providing multimedia quality of service sessions in a communications network
JP3457367B2 (en) * 1993-12-01 2003-10-14 富士通株式会社 Communication device
JP2928452B2 (en) * 1994-03-17 1999-08-03 富士通株式会社 ATM switch and call receiving apparatus and method in ATM switch
US5530852A (en) * 1994-12-20 1996-06-25 Sun Microsystems, Inc. Method for extracting profiles and topics from a first file written in a first markup language and generating files in different markup languages containing the profiles and topics for use in accessing data described by the profiles and topics
US5732219A (en) * 1995-03-17 1998-03-24 Vermeer Technologies, Inc. Computer system and computer-implemented process for remote editing of computer files
US5956342A (en) * 1995-07-19 1999-09-21 Fujitsu Network Communications, Inc. Priority arbitration for point-to-point and multipoint transmission
US5764645A (en) * 1996-06-12 1998-06-09 Microsoft Corporation IP/ATM network adaptation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5157659A (en) * 1990-06-05 1992-10-20 Canai Computer And Network Architecture Inc. Packet communication system and method of clearing communication bus
US5224099A (en) * 1991-05-17 1993-06-29 Stratacom, Inc. Circuitry and method for fair queuing and servicing cell traffic using hopcounts and traffic classes
US5408465A (en) * 1993-06-21 1995-04-18 Hewlett-Packard Company Flexible scheme for admission control of multimedia streams on integrated networks
US5557609A (en) * 1993-12-01 1996-09-17 Kabushiki Kaisha Toshiba Switching apparatus for ATM
US5572522A (en) * 1994-04-28 1996-11-05 Hewlett-Packard Company Asynchronous transfer mode switch with multicasting ability
US5557611A (en) * 1994-07-26 1996-09-17 Cselt - Centro Studi E Laboratori Telecomunicazioni S.P.A. ATM cross-connect node utilizing two passes through the connection network with shaping between passes for optimal resource allocation in a bursty environment

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6473793B1 (en) 1994-06-08 2002-10-29 Hughes Electronics Corporation Method and apparatus for selectively allocating and enforcing bandwidth usage requirements on network users
WO2000056114A1 (en) * 1997-02-18 2000-09-21 Nokia Networks Oy Method and apparatus for providing fair traffic scheduling among aggregated internet protocol flows
US6084855A (en) * 1997-02-18 2000-07-04 Nokia Telecommunications, Oy Method and apparatus for providing fair traffic scheduling among aggregated internet protocol flows
EP0971518A2 (en) * 1998-07-09 2000-01-12 Nec Corporation Communication quality control system
EP0971518A3 (en) * 1998-07-09 2005-07-13 Nec Corporation Communication quality control system
US9712289B2 (en) 1998-07-10 2017-07-18 Intellectual Ventures I Llc Transmission control protocol/internet protocol (TCP/IP) packet-centric wireless point to multi-point (PtMP) transmission system architecture
WO2000027044A3 (en) * 1998-11-03 2000-10-05 Hughes Electronics Corp Selective allocation and enforcement of bandwidth usage requirements
AU750222B2 (en) * 1998-11-03 2002-07-11 Hughes Electronics Corporation Method and apparatus for selectively allocating and enforcing bandwidth usage requirements on network users
WO2000027044A2 (en) * 1998-11-03 2000-05-11 Hughes Electronics Corporation Selective allocation and enforcement of bandwidth usage requirements
US6560230B1 (en) 1999-02-01 2003-05-06 Redback Networks Inc. Packet scheduling methods and apparatus
US6567408B1 (en) 1999-02-01 2003-05-20 Redback Networks Inc. Methods and apparatus for packet classification with multi-level data structure
EP1796304A3 (en) * 1999-07-09 2009-11-18 Malibu Networks Inc. TCP/IP packet-centric wireless transmission system architecture
EP1796304A2 (en) * 1999-07-09 2007-06-13 Malibu Networks Inc. TCP/IP packet-centric wireless transmission system architecture
US6901592B2 (en) 2000-11-27 2005-05-31 Redback Networks, Inc. Method and apparatus for the enumeration of sets of concurrently scheduled events
US6795441B1 (en) 2002-08-30 2004-09-21 Redback Networks, Inc. Hierarchy tree-based quality of service classification for packet processing
US7339953B2 (en) 2002-08-30 2008-03-04 Redback Networks, Inc. Surplus redistribution for quality of service classification for packet processing
US7283472B2 (en) 2002-08-30 2007-10-16 Redback Networks Inc. Priority-based efficient fair queuing for quality of service classification for packet processing
WO2004057812A1 (en) * 2002-12-20 2004-07-08 Koninklijke Philips Electronics N.V. Network bandwidth division on a per-user basis
US8793390B2 (en) 2006-05-23 2014-07-29 Blue Coat Systems, Inc. Systems and methods for protocol detection in a proxy
EP1990960A1 (en) * 2007-05-09 2008-11-12 Wayport, Inc. System and method for providing application categorization and quality of service in a network with multiple users
US7958256B2 (en) 2007-05-09 2011-06-07 Wayport, Inc. System and method for providing application categorization and quality of service in a network with multiple users

Also Published As

Publication number Publication date
WO1998026510A3 (en) 1998-10-01
US6285658B1 (en) 2001-09-04
AU5596798A (en) 1998-07-03
US6046980A (en) 2000-04-04

Similar Documents

Publication Publication Date Title
WO1998026510A2 (en) Method for managing flow bandwidth utilization at network, transport and application layers in store and forward network
US6591299B2 (en) Method for automatically classifying traffic with enhanced hierarchy in a packet communications network
US6412000B1 (en) Method for automatically classifying traffic in a packet communications network
Kumar et al. Beyond best effort: Router architectures for the differentiated services of tomorrow's internet
Wroclawski Specification of the controlled-load network element service
US6466984B1 (en) Method and apparatus for policy-based management of quality of service treatments of network data traffic flows by integrating policies with application programs
US7853952B2 (en) Dynamic partitioning of network resources
US7970893B2 (en) Method and apparatus for creating policies for policy-based management of quality of service treatments of network data traffic flows
US6959332B1 (en) Basic command representation of quality of service policies
US8031601B2 (en) Slow-start adaptive mechanisms to improve efficiency of bandwidth allocation
US6292465B1 (en) Linear rule based method for bandwidth management
US6456630B1 (en) Method for data rate control for heterogenous or peer internetworking
US7778194B1 (en) Examination of connection handshake to enhance classification of encrypted network traffic
US7991907B2 (en) Method and apparatus for communicating COPS protocol policies to non-COPS-enabled network devices
US7519067B2 (en) Method, system, and computer product for controlling input message priority
CA2302218C (en) Packet network
Wroclawski RFC2211: Specification of the controlled-load network element service
US7107344B2 (en) Connection allocation technology
EP1063818A2 (en) System for multi-layer provisioning in computer networks
JP4474286B2 (en) Quality of service for iSCSI
US20070043738A1 (en) Methods and systems for reputation based resource allocation for networking
CN1433623A (en) Method and apparatus for packet delay reduction using scheduling and header compression
CN101404650A (en) Method and system for sub-dividing network application service quality
Wang et al. Experimental QoS performances of multimedia applications
WO2002051077A1 (en) A method and system for distinguishing higher layer protocols of the internet traffic

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AU CN IL JP KR NZ

AK Designated states

Kind code of ref document: A3

Designated state(s): AU CN IL JP KR NZ

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)