WO1996024212A1 - Bandwidth management and access control for an ATM network - Google Patents

Bandwidth management and access control for an ATM network

Info

Publication number
WO1996024212A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
queue
virtual connections
cells
Prior art date
Application number
PCT/US1996/001208
Other languages
French (fr)
Inventor
Wei Chen
Original Assignee
Bell Communications Research, Inc.
Priority date
Filing date
Publication date
Application filed by Bell Communications Research, Inc. filed Critical Bell Communications Research, Inc.
Priority to EP96906219A priority Critical patent/EP0754383B1/en
Priority to CA002186449A priority patent/CA2186449C/en
Priority to JP08523661A priority patent/JP3088464B2/en
Priority to KR1019960705640A priority patent/KR100222743B1/en
Publication of WO1996024212A1 publication Critical patent/WO1996024212A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 12/5602 Bandwidth control in ATM networks, e.g. leaky bucket
    • H04L 2012/5638 Services, e.g. multimedia, GOS, QOS
    • H04L 2012/5646 Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L 2012/5651 Priority, marking, classes
    • H04L 2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L 2012/5679 Arbitration or scheduling

Definitions

  • This invention relates to the field of asynchronous data communication and more particularly to the field of managing the flow rate of data through an asynchronous data communication network.
  • the next generation public backbone network capable of supporting voice, video, image, data, and multi-media services is envisioned as a broadband Integrated Services Digital Network (ISDN) using an asynchronous transfer mode (ATM) to transmit data.
  • the perceived advantages of ATM technology include flexibility and simplicity of asynchronously multiplexing traffic sources with a very broad range of source parameters and service quality requirements with information loss and delay performance ranging from close to that of the synchronous transfer mode to that of best-effort service in today's packet-switched networks.
  • Broadband ISDN is therefore a promising network technology to support emerging needs of high performance computing and communications.
  • the goal of bandwidth management and traffic control of broadband ISDN is to provide consistent and predictable end-to-end performance, while at the same time optimizing network resource utilization for end-users with a wide range of performance requirements and traffic flows, including some which are very bursty.
  • bandwidth management and traffic control of broadband ISDN becomes complex due to factors such as the diversity of traffic flow characteristics (much of which is unknown even at the time of transmission), connection performance requirements, the impacts of end-to-end propagation delay and the processing delay at the network elements due to increased switching and transmission speeds.
  • These factors cause existing traffic control approaches (like the X.25 data communications flow control designed for traditionally low-speed networks) to become ineffective.
  • new, robust (to cope with unknown and changing traffic characteristics), low-cost and scalable (to increasingly higher speeds) traffic control strategies are needed for Broadband ISDN.
  • the real-time (typically several milliseconds or less) traffic control requirements of broadband ISDN are drastically different from those of existing networks due to the factors discussed above.
  • the real-time traffic control may consist of well-coordinated components such as access control, flow control, reactive congestion control, and error control.
  • Access control, including connection admission control/bandwidth allocation, traffic shaping, and bandwidth enforcement, can achieve user objectives at the user-network interface.
  • Other controls include flow control, reactive congestion control, and error control.
  • the central goal of access control is to achieve, at the user-network interface, objectives such as predictable information throughput, connection blocking probability, cell loss ratio, and cell transfer delay/cell delay variation, among others.
  • the access control mechanism should be robust, fast, scalable (from 155 to 622 Mbps, or higher), and low-cost to implement.
  • connection admission control decides connection acceptance, given traffic characteristics and performance requirements (e.g. peak rate, cell loss rate), based on the current network state. Generally, a decision needs to be made and network resources allocated in no more than a few seconds. Some key issues include what traffic descriptors should be used (i.e., peak data rate, average data rate, priority, burst length, etc), how to determine effective bandwidth of a bursty connection, how to predict performance (given underlying traffic shaping/bandwidth enforcement mechanisms), what acceptance/rejection criteria to use, and how fast can algorithms be executed.
  • approaches to connection admission control range from peak rate allocation (simple to implement, but bandwidth inefficient for variable bit-rate services) to more sophisticated algorithms which determine the "admissible acceptance region" in an N-service space.
  • the connection admission control will be implemented in software, which makes it possible to start with a simple and robust algorithm during the early stages of broadband ISDN and yet allows evolution towards more sophisticated (and effective) algorithms as they become available.
  • a distinguishing feature of access control schemes is their capability to provide statistical multiplexing for variable bit rate services. A number of factors significantly impact the statistical multiplexing efficiency including, among others, variable bit-rate transmission peak rate (relative to the link capacity) and the burst duration distribution.
  • variable bit-rate connections may result in a low (e.g. 20-25%) network utilization, if no traffic shaping or resource allocation (such as the fast reservation protocol) techniques are used.
  • By proper modification of the cell arrival process (i.e., traffic shaping), higher statistical multiplexing efficiency may be achieved.
  • modification techniques employed by the pacing mechanism may include cell jitter smoothing, peak cell rate reduction, message burst length limiting, and service scheduling of variable bit rate virtual circuit connections (VCs), among others.
  • traffic shaping may possibly be used in conjunction with a network bandwidth enforcement mechanism by rescheduling a cell's service (in addition to cell discarding or tagging) when a non-compliant cell is observed.
  • this option is appealing in order to minimize the cell loss ratio.
  • applicability of pacing techniques to cells with stringent delay requirements (such as interactive variable bit rate video) requires further study. From a network operation point of view, source shaping/pacing is also desirable to prevent overwhelming the system with more data than the system can handle.
  • the purpose of network bandwidth enforcement is to monitor a connection's bandwidth usage for compliance with appropriate bandwidth limits and to impose a policing action on observed violations of those limits.
  • the enforcement control works on the time scale of a cell emission time (i.e., about 2.7 μsec for 155 Mb/s service or about 0.7 μsec for 622 Mb/s service).
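  • As a quick check of those figures, a 53-byte ATM cell carries 424 bits, so the per-cell emission time follows directly from the link rate. The short Python calculation below assumes the nominal SONET rates of 155.52 and 622.08 Mb/s:

```python
# Per-cell emission time = cell size in bits / link rate in bits per second.
CELL_BITS = 53 * 8  # an ATM cell is 53 bytes (5-byte header + 48-byte payload)

for rate_bps in (155.52e6, 622.08e6):
    emission_time_us = CELL_BITS / rate_bps * 1e6
    print(f"{rate_bps / 1e6:.2f} Mb/s -> {emission_time_us:.2f} microseconds per cell")
# Prints roughly 2.73 us at 155 Mb/s and 0.68 us at 622 Mb/s,
# matching the ~2.7 us and ~0.7 us figures quoted above.
```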
  • Key issues of bandwidth enforcement include the design of the monitoring algorithm and policing actions to be taken on non-compliant cells. Other issues include handling of traffic parameter value uncertainty, the effectiveness in terms of the percentage of erroneous police actions upon compliant cells and the percentage of non-compliant cells undetected, and the detection time of a given violation.
  • the bandwidth enforcement mechanism operates upon network-measured traffic parameter values, which may include a connection's peak cell rate, its average cell rate, and its peak burst length.
  • a few bandwidth enforcement algorithms have been proposed using, for example, single or dual leaky buckets, jumping, or sliding windows.
  • the leaky-bucket algorithms using peak/average rates and average burst length may still not be robust enough for various variable bit rate traffic mixes.
  • the studies also show that the leaky-bucket algorithms tend to complicate performance prediction at the connection admission control level.
  • accurate estimation of average cell rate and burst length by users may be very difficult in practice. This suggests the need for the exploration of alternative approaches to this problem, in order to bring about an early deployment of broadband ISDN.
  • a number of access control approaches have been proposed, but there is no standard consensus among the vendor and research communities.
  • An approach has been proposed that accepts connections based on bandwidth pools dedicated to several traffic classes respectively and uses a leaky bucket type algorithm for monitoring connection bandwidth utilization with immediate cell discarding of non-compliant cells.
  • Another proposal also uses a leaky bucket type monitoring algorithm with tagging of non-compliant cells for possible later cell discarding.
  • Another proposed approach uses a rate-based time window approach.
  • the leaky bucket algorithm is superior to time-window based approaches under certain traffic patterns studied. However, the same study also revealed difficulty in its dimensioning (e.g., the counter limit). Further, the performance of the leaky bucket algorithm is also found to be far below optimal, in terms of non-compliant cell detection and false alarms (the long term probability of declaring a bandwidth violation for compliant cells).
  • a bandwidth management system manages a plurality of virtual data connections within a communications network.
  • the system includes an input for receiving data cells, wherein each cell is associated with a particular one of the virtual connections.
  • the system also includes a cell pool, coupled to the input, for storing the cells, a first and second queue for ordering the virtual connections, and an output for transmitting cells from the cell pool.
  • the relative position of a virtual connection in the first queue is determined by an eligibility variable that varies according to an anticipated data rate associated with the particular virtual connection and according to an amount of time that the particular virtual connection has been in the first queue.
  • the relative position of a virtual connection in the second queue varies according to a predetermined quality of service that is assigned to each of the virtual connections.
  • the output transmits a cell from the cell pool corresponding to a virtual connection at the front of the second queue.
  • Virtual connections having eligibility variables with equal values can be ordered in the first queue according to the predetermined quality of service.
  • the system can use four different values for quality of service.
  • Virtual connections having equal quality of service values can be ordered in the second queue according to a priority computed for each of the virtual connections.
  • Credit variables for each virtual connection can indicate allocated time slots provided to each of the virtual connections.
  • the priority can vary according to one or more of: one or more anticipated data rates of each of the virtual connections, the value of one or more of the credit variables, and the number of backlogged cells that are awaiting transmission.
  • the data rates, credit variables, and backlog values can be weighted prior to determining the priority.
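  • Purely for illustration, one way such a weighted priority might be formed is sketched below; the linear form and the specific weights are assumptions, since the summary above does not spell out the formula:

```python
def priority(rate1, rate2, credit1, credit2, backlog, weights):
    """Illustrative weighted priority for one virtual connection.

    rate1, rate2     : anticipated data rates (e.g., average and burst rate)
    credit1, credit2 : credit variables (allocated but unused cell slots)
    backlog          : number of cells awaiting transmission
    weights          : five weights (w1..w5), one per term -- hypothetical values
    """
    w1, w2, w3, w4, w5 = weights
    return w1 * rate1 + w2 * rate2 + w3 * credit1 + w4 * credit2 + w5 * backlog

# Example: a bursty, backlogged connection ends up with a larger priority value.
print(priority(1.0, 4.0, 2.0, 1.0, 10, (1, 1, 1, 1, 0.5)))
```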
  • the system can use a burst bit indicator to determine if virtual connections associated with cells received at the input should transmit their cells in burst mode.
  • the bandwidth management system can be one of: a pacing unit and an enforcement unit.
  • a pacing unit receives cells from a data source node and provides cells to a communication link.
  • An enforcement unit receives data from a communication link and provides data to a data sink node.
  • FIG. 1 shows a data communications network having a plurality of physically interconnected nodes.
  • FIG. 2 shows a plurality of communication nodes interconnected by a plurality of virtual connections.
  • FIG. 3 is a data flow diagram showing different states of data within a bandwidth management unit according to the present invention.
  • FIG. 4 is a functional block diagram of a bandwidth management unit according to the present invention.
  • FIG. 5 is a state diagram showing operation of a bandwidth management unit according to the present invention.
  • FIG. 6 is a state diagram illustrating setting up a virtual connection.
  • FIG. 7 is a flow diagram illustrating cell admission for a virtual connection.
  • FIG. 8 is a flow diagram illustrating initializing the value of an eligibility timer for a virtual connection.
  • FIG. 9 is a flow diagram illustrating resetting the value of an eligibility timer for a virtual connection.
  • FIG. 10 is a flow diagram illustrating updating credit variables of a virtual connection.
  • FIG. 11 is a flow diagram illustrating updating an eligibility timer of a virtual connection.
  • FIG. 12 is a flow diagram illustrating computing values for credit variables and a priority variable for a virtual connection.
  • FIG. 13 is a flow diagram illustrating updating a priority variable for a virtual connection.
  • FIG. 14 is a schematic diagram illustrating hardware implementation of a bandwidth management unit.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents which operate in a similar manner to accomplish a similar purpose.
  • a communications network 30 comprises a plurality of communication nodes 32-50 that are physically interconnected.
  • Each of the nodes 32-50 represents conventional communications equipment for sending and receiving communications data.
  • the lines drawn between the nodes 32-50 represent physical communications links for transmitting data between the nodes 32-50.
  • the communications links can be any one of a plurality of conventional mediums for communication data transmission including fiber optics, twisted pair, microwave links, etc.
  • it is possible for two nodes to be directly connected to each other, which facilitates communication between the directly connected nodes.
  • the node 34 is shown in FIG. 1 as being directly connected to the node 35. Accordingly, communication between the node 34 and the node 35 is facilitated via the direct physical communication link.
  • a "virtual connection" between the node 34 and the node 49 is established to facilitate communication between the nodes 34, 49 and to establish desired quality of service for the communication.
  • a virtual connection is established between two nodes when one of the nodes has data that is to be sent to another node.
  • a virtual connection can be terminated after all of the data has been sent. For example, if the node 34 represents a user computer terminal and the node 49 represents a mainframe computer, then a virtual connection can be established between the node 34 (terminal) and the node 49 (mainframe) whenever a user logs on to the computer system. The virtual connection could then be terminated when the user logs off the system.
  • a schematic diagram 60 shows a plurality of hosts 62- 67, a pair of private ATM (asynchronous transfer mode) mux/switches 71, 72, and a public ATM mux/switch 74.
  • Each of the hosts 62-67 represents one of a plurality of possible sources of data, such as a telephone, a fax machine, a computer terminal, a video teleconference terminal, etc.
  • the host 62 transmits data to the private ATM mux/switch 71 via a virtual connection set up between the host 62 and the private ATM mux/switch 71.
  • the host 63 transmits data to the mux/switch 71 via a second virtual connection between the host 63 and the private ATM mux/switch 71.
  • a virtual connection can span a single direct physical link or can span a plurality of consecutive physical connections through intermediate network nodes, not shown in FIG. 2.
  • the hosts 64, 65 are connected to the mux/switch 72 via separate virtual connections therebetween.
  • the mux/switches 71, 72 and the hosts 66, 67 are connected to the public ATM mux/switch 74 via additional virtual connections.
  • the public ATM mux/switch 74 provides data to a trunk (not shown).
  • the diagram 60 shows a general flow of data from the hosts 62-67 to the trunk via the intermediate devices 71, 72, 74.
  • the virtual connections shown in the diagram 60 are set up at the time that the hosts 62-67 begin transmitting data. For example, if the host 62 represents a telephone, then a virtual connection between the host 62 and the private ATM mux/switch 71 can be established by a connection admission procedure when the user picks up the handset of the telephone and begins dialing. The virtual connection can be terminated when the user hangs up the telephone.
  • each output port of the hosts 62-67 is provided with one of a plurality of pacing units 82a-87a.
  • the pacing unit 82a is connected to the output port of the host 62
  • the pacing unit 83a is connected to the output port of the host 63, etc.
  • the pacing units 82a-87a control the data transmitted from the hosts 62-67 by limiting all data transmitted on the associated connection. For example, the pacing unit 82a limits data transmitted from the host 62 to the private ATM mux/switch 71. Data is first passed from the host 62 to the pacing unit 82a via the output port of the host 62. The pacing unit 82a then provides the data to the communication link.
  • the pacing unit 82a provides the data on the communication link at an agreed upon rate, as described in more detail hereinafter. Note that the switches 71, 72 also transmit data via pacing units 91a, 92a connected to the output ports thereof.
  • corresponding to each of the pacing units 82a-87a, 91a, 92a are enforcement units 82b-87b, 91b, 92b.
  • the enforcement units 82b-87b, 91b, 92b are at the receiving ends of the communication links to ensure that the agreed upon data rate of each of the connections is maintained.
  • the enforcement units 82b-87b, 91b, 92b are connected to input ports of corresponding devices so that, for example, the enforcement unit 82b is connected to the input port of the mux/switch 71. If a pacing unit provides data at greater than the agreed upon data rate, the corresponding enforcement unit discards the excess data.
  • Pacing units are provided at data source nodes while enforcement units are provided at data sink (i.e., receiving) nodes.
  • the pacing unit at the data source node and the enforcement unit at the data sink node initially agree upon a data rate and a quality of service.
  • Quality of service relates to a variety of characteristics, including the acceptable delay between generation and reception of the data, as described in more detail hereinafter.
  • the requested data rate and quality of service are functions of the type of data being sent. For example, real time voice data may require a different data rate than would real time video data, but both may require a relatively high quality of service.
  • a pacing unit requests an appropriate set of traffic parameters (such as data rate, buffer space, etc.) and requests the desired quality of service.
  • the corresponding enforcement unit at the other end of the virtual connection can accept or deny the request of the pacing unit. A request is denied if the node to which the enforcement unit is attached is not capable of handling the requested amount of data to meet the requested quality of service.
  • Reasons for denying the request include insufficient capacity of the data sink node to handle the requested data rate and insufficient capacity of the communications link to handle the requested data rate. If the request is denied by the enforcement unit, then the virtual connection is not established. Assuming however that the request is accepted, the virtual connection between the nodes is established. Once established, the pacing unit operates so as to send data according to the initially agreed upon traffic parameters.
  • Data is communicated over the communications links via packets or fixed-size ATM cells.
  • Each cell contains one or more bytes of digital data along with header information identifying the virtual connection and other appropriate information, discussed in detail hereinafter.
  • the pacing units 82a-87a, 91a, 92a and enforcement units 82b-87b, 91b, 92b can be used to prevent the trunk, and hence the network, from being overloaded with too much data.
  • the pacing and enforcement units receive cells at their respective inputs and transmit cells from their respective outputs.
  • the operation of each is substantially identical. Accordingly, the detailed discussion which follows is equally applicable to both types of units, which are referred to generically as bandwidth management units.
  • a data flow diagram 100 illustrates different states of a virtual connection handled by a bandwidth management unit. Note that a single bandwidth management unit can handle multiple virtual connections even though the bandwidth management unit is physically connected to only a single transmission link.
  • the node to which the bandwidth management unit is connected sends or receives cells for the various connections and the bandwidth management unit outputs the cells in a particular order.
  • a cell is provided to the bandwidth management unit from either the output port of the associated data source node (for a pacing unit) or from the communication link (for an enforcement unit).
  • the associated virtual connection is provided to the bandwidth management unit from either the output port of the associated data source node (for a pacing unit) or from the communication link (for an enforcement unit).
  • a VC begins in an unscheduled state 102.
  • a cell corresponding to a VC in the unscheduled state 102 is deemed not to be ready to transmit by the bandwidth management unit.
  • the VC transitions out of the unscheduled state 102 by becoming eligible and having at least one data cell ready to transmit. Eligibility of a VC is controlled externally by the data node connected to the bandwidth management unit in a manner described in more detail hereinafter. A VC which is deemed immediately eligible by the data node will transfer out of the unscheduled state 102.
  • the VC can transition from the unscheduled state 102 to an eligibility state 104.
  • the eligibility state 104 indicates that the cells associated with the VC are waiting to become eligible for output, but the VC is waiting for particular conditions to be met, which are described in more detail hereinafter.
  • the VC can transition from the eligibility state 104 to a departure state 106.
  • a VC in the departure state 106 has at least one cell associated therewith which is ready to be output by the bandwidth management unit.
  • the cells of the VC in the departure state are transmitted by the bandwidth management unit in a specific order, as described in more detail hereinafter.
  • a VC which is in the eligibility state 104 or in the departure state 106 can be transitioned back to the unscheduled state 102 when the data node connected to the bandwidth management unit deems the VC to be ineligible or when other conditions, described in more detail hereinafter, are met. Furthermore, it is possible for a VC in the departure state 106 to be transitioned back to the eligibility state 104 under certain conditions, also described in detail hereinafter. Under certain other conditions, a VC in the unscheduled state 102 can transition directly to the departure state 106 without going through the eligibility state 104. Transitions between various states and the conditions therefor are described in more detail hereinafter.
  • the ordering of the VC's in a queue associated with the departure state 106 is a function of the Quality of Service (QOS) of the virtual connection.
  • the QOS of a virtual connection is established when the virtual connection is initialized by the data source node.
  • the source node for the data can request one of four possible values for the QOS: high level QOS, mid level QOS, low level QOS, and best effort QOS.
  • High level QOS is for real time traffic that requires bounded delay and virtually no cell loss.
  • Mid level QOS is for real time traffic that requires bounded delay and little cell loss.
  • Low level QOS is for delay- tolerant data traffic having controllable cell loss.
  • Best effort QOS is for data which can be transmitted at a very low rate and which can share available bandwidth not being used by the high level, mid level, and low level QOS cells.
  • the ordering of the cells in the queue is based on the QOS associated with the virtual connection and upon other parameters which are discussed in detail hereinafter. Note that, although the invention is illustrated herein with four separate levels for the QOS, the invention can be practiced with a different number of levels.
  • the number of levels chosen for the implementation is a design choice based on a variety of functional factors known to one of ordinary skill in the art.
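  • For reference, the four service levels described above can be represented as a simple ordered enumeration; the sketch below uses an illustrative numeric encoding, which is not prescribed by the description:

```python
from enum import IntEnum

class QOS(IntEnum):
    """Four quality-of-service levels, ordered from highest to lowest."""
    HIGH = 3         # real-time traffic, bounded delay, virtually no cell loss
    MID = 2          # real-time traffic, bounded delay, little cell loss
    LOW = 1          # delay-tolerant data traffic, controllable cell loss
    BEST_EFFORT = 0  # very low rate, shares bandwidth left over by the others

# Higher-valued levels are served ahead of lower-valued ones in the departure queue.
assert QOS.HIGH > QOS.BEST_EFFORT
```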
  • a schematic diagram 120 functionally illustrates operation of a bandwidth management unit.
  • An input 122 accepts cells from a data node (not shown) connected to the bandwidth management unit.
  • the cells that are received are provided to a demultiplexer 124.
  • the demultiplexer 124 separates the input cells according to the VC associated therewith and places the cells in one of a plurality of appropriate queues 126-128, so that, for example, cells associated with virtual connection VC_i are placed in the queue 126, cells associated with virtual connection VC_j are placed in the queue 127, and cells associated with the virtual connection VC_k are placed in the queue 128.
  • a cell admission unit 130 controls placing the cells in the queues 126-128 in a manner described in more detail hereinafter.
  • associated with each virtual connection, VC_i, VC_j, and VC_k, is a set of variables 131-133 that controls when the cells for each of the VC's will be transmitted out of the bandwidth management unit. Also, when a cell is input to the bandwidth management unit, the cell admission unit 130 examines the variables 131-133 to determine if the cell should be admitted. As shown in FIG. 4, the virtual connection VC_i has the variable set 131 associated therewith, the virtual connection VC_j has the variable set 132 associated therewith, and the virtual connection VC_k has the variable set 133 associated therewith.
  • the variable sets 131-133 are provided as inputs to a mux control 135 which controls which cell from the head of one of the queues 126-128 is next to be transmitted out of the bandwidth management unit.
  • a multiplexor 136 is connected to each of the queues 126-128 and, in conjunction with the mux control 135, outputs the selected cell from the bandwidth management unit.
  • VARIABLE NAME   DESCRIPTION
    scd_i           state of VC_i
    qos_i           quality of service for VC_i
    e_i             indicates if transmission for VC_i is enabled
    s_i             eligibility timer for VC_i
    r_i1            first rate variable for VC_i
    r_i2            second rate variable for VC_i
    c_i1            first credit variable for VC_i
    c_i2            second credit variable for VC_i
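  • Gathering the per-connection variables listed in the table, plus the backlog l_i and priority P_i referenced elsewhere in the description, into one record might look like the sketch below; the field names are transliterations of the patent's notation, not an official data structure:

```python
from dataclasses import dataclass

@dataclass
class VCState:
    """Per-virtual-connection variable set (variables 131-133 in FIG. 4)."""
    scd: int = 0           # scd_i: 0 while unscheduled, 1 in any other state
    qos: int = 0           # qos_i: quality of service level
    e: int = 1             # e_i: 1 if transmission for the VC is enabled
    s: float = 0.0         # s_i: eligibility timer
    r1: float = 0.0        # r_i1: first (e.g., average) rate variable
    r2: float = 0.0        # r_i2: second (e.g., burst) rate variable
    c1: float = 0.0        # c_i1: first credit variable
    c2: float = 0.0        # c_i2: second credit variable
    backlog: int = 0       # l_i: cells received but not yet output
    priority: float = 0.0  # P_i: departure-queue priority
```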
  • a state diagram 140 illustrates operation of a bandwidth management unit. Although operation is illustrated with regard to a single VC, VC_i, the operation of other VC's of the bandwidth management unit is identical.
  • Circles on the diagram 140 show the three possible states of VC_i: the unscheduled state, the eligibility state, and the departure state.
  • a VC is always in one of those three states, although for a bandwidth management unit servicing multiple VC's, the state of any one of the VC's is independent of the state of any of the other VC's.
  • the state diagram 140 also shows rectangles having rounded corners which represent operations performed by the bandwidth management unit with respect to VC_i. Diamonds on the diagram show decision branches where the flow is conditioned upon a particular variable or variables associated with VC_i being in a particular state or having a particular value or values.
  • the arrows between the various circles, rounded rectangles, and diamonds illustrate the flow through the state diagram.
  • the annotation indicates the condition causing the transition from one portion of the state diagram 140 to another.
  • Arrows having no annotation associated therewith represent an unconditional flow from one portion of the state diagram 140 to another.
  • Initialization of a VC includes having a pacing unit request a particular set of traffic parameters and quality of service (QOS) from the associated enforcement unit (i.e., the enforcement unit at the other end of the communication link). That is, VC_i is initialized when the pacing unit sends a command to the associated enforcement unit requesting creation of VC_i and specifying the set of traffic parameters and QOS.
  • the enforcement unit either accepts or rejects the request depending on the existing data loading on both the communication link and the receiving node. The decision is based on the loading for the node attached to the enforcement unit. If the request from the pacing unit to the enforcement unit is denied, then VC_i is not initialized and no processing (i.e., no data transmission) occurs for VC_i.
  • Assuming that the VC_i initialization request is approved by the enforcement unit, then following the initialization step 142, VC_i enters an unscheduled state 144. While VC_i is in the unscheduled state 144, the variable scd_i is set to zero. VC_i remains in the unscheduled state 144 until the first cell of data for VC_i is received by the bandwidth management unit.
  • the bandwidth management unit transitions from the unscheduled state 144 to a cell admission step 146, where the cell is either accepted or rejected by the bandwidth management unit.
  • the cell admission step 146 is described in more detail hereinafter.
  • a test step 148 determines whether VC_i transitions from the unscheduled state 144 to another state.
  • At the test step 148, the variables l_i and e_i are examined.
  • the variable l_i represents the backlog, in number of cells, for VC_i. That is, l_i equals the number of cells received by the bandwidth management unit for VC_i that have not been output by the bandwidth management unit.
  • the variable e_i is a variable indicating whether VC_i has been enabled by the data node that is being serviced by the bandwidth management unit. That is, the data node that is connected to the bandwidth management unit can either enable or disable VC_i. For a virtual connection that is not enabled, the bandwidth management unit receives inputted cells but does not output the cells.
  • VC_i transitions out of the unscheduled state 144 if the backlog of cells for VC_i is at least one (i.e., there is at least one cell associated with VC_i stored in the bandwidth management unit) and if VC_i is enabled. If one or both of those conditions are not met, then VC_i returns to the unscheduled state 144.
  • At a step 150, e_i is set to either one or zero according to an enable or disable command provided by the data node.
  • Note, in connection with the test step 148, therefore, that it is possible for the bandwidth management unit to receive one or more cells for VC_i prior to VC_i being enabled and then for the bandwidth management unit to receive an enable signal from the data node so that VC_i transitions from the unscheduled state 144.
  • While in the unscheduled state 144, it is possible for the bandwidth management unit to receive a signal from the data node requesting termination of VC_i. Termination can occur for a variety of reasons, including there being no more data to transmit over the virtual connection. In that case, control transfers from the unscheduled state 144 to a step 152 where VC_i is terminated. Once VC_i has been terminated, the bandwidth management unit does no further processing for VC_i and accepts no more cells for transmission via VC_i.
  • While in the unscheduled state 144, the bandwidth management unit can also receive a request from the data node to update parameters associated with VC_i, in which case the parameters are updated at a step 154.
  • the parameters that can be associated with VC_i are discussed elsewhere herein.
  • Note that one of the parameters that can be updated at the step 154 is the quality of service associated with VC_i, qos_i. That is, the data node can request that the quality of service associated with VC_i be changed to a second value after VC_i has been initialized with a qos_i having a first value.
  • Following the update, VC_i returns to the unscheduled state 144.
  • If it is determined at the test step 148 that the backlog for the VC is at least one (i.e., l_i is greater than or equal to one) and that VC_i is enabled (i.e., e_i equals one), then control transfers from the step 148 to a step 156 where scd_i is set to one.
  • the scd_i variable associated with VC_i equals zero when VC_i is in the unscheduled state and equals one when VC_i is in any of the other states.
  • Following the step 156 is a step 158 where the credit variables associated with VC_i, c_i1 and c_i2, are updated.
  • the credit variables, c_i1 and c_i2, represent the amount of transmission channel bandwidth "credited" to VC_i.
  • the credit variables are described in more detail hereinafter.
  • Following the step 158 is a step 160 where the value of s_i is initialized.
  • the value of s_i represents the amount of elapsed time that VC_i is to spend in the eligibility queue before transitioning to the departure queue. That is, after exiting from the unscheduled state, VC_i is placed in the eligibility queue until an amount of time indicated by s_i has elapsed, after which VC_i is moved from the eligibility queue to the departure queue for output by the bandwidth management unit.
  • the initial value of s_i is determined by the associated data node and varies according to the initial traffic parameters specified when VC_i is initialized.
  • the estimated values of the credits, c_i1 and c_i2, and the estimated value of the priority, P_i, associated with VC_i are then computed.
  • the priority, P_i, is used to order the VC's in the departure queue. Computing the priority is described in more detail hereinafter.
  • a test step 164 determines if s_i was set to zero at the step 160.
  • a VC initializes s_i according to values of the initial traffic parameters and the credits. If s_i is set to zero at the step 160, then VC_i is placed immediately on the departure queue. Otherwise, VC_i is placed in the eligibility queue for an amount of time represented by the initial setting of s_i provided at the step 160.
  • If at the test step 164 it is determined that s_i does not equal zero, then control passes from the test step 164 to a step 166 where the value of s_i is incremented by s(t).
  • the quantity s(t) varies according to elapsed system time and is updated every cycle that the bandwidth management unit software is executed. In other words, the quantity s(t) varies according to the value of a real time system clock.
  • Since s_i is set at the step 160 to represent the amount of time VC_i remains on the eligibility queue, adding the present value of s(t) to s_i sets the value of s_i to the value that s(t) will equal when VC_i is to be moved from the eligibility queue to the departure queue.
  • Suppose, for example, that the bandwidth management unit runs once every microsecond and that s(t) is incremented once every cycle.
  • If VC_i is to remain in the eligibility queue for one hundred microseconds, the value of s_i will be set to one hundred at the step 160.
  • the present value of s(t) is added to the one hundred in order to provide a value of s_i that is tested against the real time clock so as to determine when to move the VC from the eligibility queue to the departure queue.
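  • In other words, s_i starts as a relative delay and is converted to an absolute wake-up time by adding the running clock s(t). A minimal sketch of that conversion and the corresponding "ready" test, assuming a one-microsecond scheduling cycle as in the example above:

```python
def schedule_eligibility(delay_cycles, s_t):
    """Convert a relative delay (in scheduler cycles) to an absolute deadline (step 166)."""
    return delay_cycles + s_t          # s_i <- s_i + s(t)

def ready_to_depart(s_i, s_t):
    """The VC leaves the eligibility (A) queue once the clock reaches s_i."""
    return s_t >= s_i

s_t = 5000                              # current value of the real-time counter s(t)
s_i = schedule_eligibility(100, s_t)    # wait 100 cycles (100 us at 1 us per cycle)
print(ready_to_depart(s_i, s_t), ready_to_depart(s_i, s_t + 100))  # False True
```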
  • Following the step 166, VC_i is placed in the eligibility queue.
  • the eligibility queue is referred to as the "A" queue.
  • VC's in the eligibility queue are sorted by the value of s_i for each of the VC's and by the value of the quality of service for each of the VC's.
  • the VC with the lowest value for s_i is at the head of the queue,
  • the VC with the second lowest value for s_i is second in the queue, etc.
  • Among VC's having equal values of s_i, the VC with the highest QOS is first,
  • the VC with the second highest QOS is second, and so forth.
  • Among VC's having equal values of s_i and equal QOS, the VC's are sorted in the queue in a last in first out order. The significance of the ordering of the eligibility queue is discussed in detail hereinafter.
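  • Put differently, the eligibility ("A") queue is kept sorted by eligibility time first, then by QOS, with last-in-first-out ordering breaking the remaining ties. A sketch of such a sort key follows; the arrival_seq counter is only an illustrative way to obtain LIFO order and is not named in the patent:

```python
def a_queue_key(vc):
    """Sort key for the eligibility (A) queue.

    vc is assumed to expose: s (eligibility timer), qos (larger = better),
    and arrival_seq (a counter stamped when the VC entered the queue).
    """
    return (
        vc.s,             # lowest s_i first
        -vc.qos,          # among equal s_i, highest QOS first
        -vc.arrival_seq,  # among equal s_i and QOS, last in, first out
    )

# Usage: a_queue.sort(key=a_queue_key); the VC at index 0 is at the head of the queue.
```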
  • VC_i enters the eligibility state at a step 168.
  • Note that scd_i is set to one at the step 168.
  • While in the eligibility state 168, the bandwidth management unit can receive a signal to set e_i.
  • a step 170 sets e_i to either one or zero according to an enable or disable command provided by the node attached to the bandwidth management unit.
  • Following the step 170, VC_i returns to the eligibility state 168 without performing any further tests on the value of e_i.
  • the value of e_i is tested at a later point in the processing (discussed below), at which time VC_i is returned to the unscheduled state if e_i does not equal one.
  • the bandwidth management unit can receive a request to update VC_i, in which case control transfers from the eligibility state 168 to a step 172 where the VC_i parameters are updated. Following the step 172, VC_i returns to the eligibility state 168.
  • the bandwidth management unit may receive a command, while in the eligibility state 168, to update s_i for VC_i.
  • In that case, VC_i transitions to a step 174 where the value of s_i is updated.
  • Following the step 174 is a test step 176 which determines if the update of s_i has been accepted. If not, then control passes from the step 176 back to the eligibility state 168.
  • Otherwise, the system transitions from the step 176 to a step 180 where the priority variable, P_i, is updated. Following the step 180, VC_i returns to the eligibility state 168.
  • N is a predetermined constant value, such as eight.
  • If it is determined at the test step 184 that the backlog does not contain at least one cell or that VC_i is disabled, then control passes from the test step 184 to a step 186 where scd_i is set to zero.
  • Following the step 186, VC_i returns to the unscheduled state 144 and waits for either a cell admission or a signal to set e_i to one, as described above in connection with the description of the steps 146, 150.
  • If at the step 184 it is determined that the backlog of cells is at least one and that VC_i is enabled, then control passes from the step 184 to a step 190 where VC_i is placed in the departure queue, designated as the "B" queue on the state diagram 140. Note that the step 190 also follows the step 164, discussed above, when it is determined at the test step 164 that s_i equals zero.
  • the VC's in the departure queue are sorted primarily in order of the quality of service (QOS) for the VC's. That is, VC's with higher requested quality of service are ahead of VC's with lower requested qualities of service.
  • VC's in the queue having the same quality of service are sorted in order of priority, P_i.
  • VC's having both the same quality of service and the same value for priority are sorted on a first in first out basis.
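  • The departure ("B") queue ordering can be sketched the same way: QOS first, then priority, then first-in-first-out among full ties. Again, arrival_seq is only an illustrative tie-breaker:

```python
def b_queue_key(vc):
    """Sort key for the departure (B) queue.

    vc is assumed to expose: qos (larger = better), priority (P_i, larger = better),
    and arrival_seq (a counter stamped when the VC entered the queue).
    """
    return (
        -vc.qos,         # highest quality of service first
        -vc.priority,    # among equal QOS, highest priority P_i first
        vc.arrival_seq,  # among equal QOS and priority, first in, first out
    )

# Usage: b_queue.sort(key=b_queue_key); the head VC supplies the next cell to transmit.
```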
  • Following the step 190, VC_i enters a departure state 192.
  • While in the departure state 192, VC_i can receive signals to set e_i and update the parameters of VC_i.
  • Setting e_i is performed at a step 194 and updating the parameters for VC_i is performed at a step 196.
  • the steps 194, 196 are analogous to the steps 170, 172 discussed above in connection with the eligibility state 168. Note that following either the step 194 or the step 196, VC_i returns to the departure state 192.
  • VC_i reaches the head of the departure queue when VC_i has the highest QOS and highest priority of all of the VC's in the departure queue.
  • VC_i then transitions from the departure state 192 to a step 198 where VC_i is removed from the departure queue.
  • Following the step 198 is a step 200 where the head of the line cell (i.e., the oldest cell) corresponding to VC_i is output by the bandwidth management system.
  • At the step 200, the value of the backlog, l_i, is decremented by one and the credit variables, c_i1 and c_i2, are also decremented.
  • Following the step 200 is a step 202 where the credit variables, c_i1 and c_i2, are updated. Updating the credit variables is discussed in more detail hereinafter.
  • Following the step 202 is a step 204 where the value of s_i is reset. Resetting the value of s_i is discussed in more detail hereinafter.
  • Following the step 204 is a step 206 where the variables c_i1, c_i2, and P_i are computed in a manner discussed in more detail hereinafter.
  • Following the step 206 is a test step 208 where the value of s_i is examined. If the value of s_i is not zero, then control transfers from the test step 208 to the step 166, discussed above, where the value of s(t) is added to s_i and where VC_i is added to the eligibility queue.
  • If the value of s_i is zero, then control passes from the step 208 to the test step 184, discussed above, where the backlog variable for VC_i, l_i, and the eligibility variable, e_i, are examined. If the backlog of VC_i is at least one and if VC_i is eligible, then control passes from the step 184 to the step 190, discussed above, where VC_i is placed in the departure queue. Otherwise, if the backlog number of cells for VC_i is not at least one or if VC_i is not eligible, then control passes from the step 184 to the step 186, discussed above, where the value of scd_i is set to zero.
  • Following the step 186, VC_i enters the unscheduled state 144. Note that, prior to transitioning to the step 184, it is possible to first run the cell admission check to determine if a cell has arrived since the last time the cell admission procedure has been run.
  • a flow diagram 220 illustrates in detail the step 142 of FIG. 5 for setting up VC_i.
  • At a step 222, the rate variables r_i1 and r_i2 are set.
  • the variables r_i1 and r_i2 represent data transmission rates.
  • the variable r_i1 can represent the average data transmission rate for the virtual data connection.
  • the variable r_i2 can represent the burst transmission rate for the data connection.
  • the burst transmission rate is the maximum bandwidth allowed for sending the data over the virtual connection (VC).
  • Following the step 222 is a step 223 where the weights w_i1-w_i5 are set.
  • the weights are used in the calculation of the priority, P_i, in a manner which is described in more detail hereinafter.
  • Following the step 223 is a step 224 where the quality of service, qos_i, is set.
  • the quality of service is provided by the sending node to the bandwidth management unit and depends upon the nature of the data being transmitted on the virtual connection, VC_i. The different possible values for qos_i are discussed above.
  • Following the step 224 is a step 225 where c_i1 and c_i2 are set to zero and c_i1m and c_i2m are set.
  • the variables c_i1 and c_i2 represent bandwidth credits that are allocated to VC_i.
  • the credit variable c_i1 corresponds to the rate variable r_i1 and the credit variable c_i2 corresponds to the rate variable r_i2.
  • the variable c_i1m is the maximum value that the variable c_i1 is allowed to reach.
  • the variable c_i2m is the maximum value that the variable c_i2 is allowed to reach.
  • the credit variables represent the number of cell slots allocated to VC_i during the operation of the bandwidth management unit.
  • the credit variables are incremented whenever an amount of time has passed corresponding to the associated rate variable. For example, if r_i1 equals a rate corresponding to one hundred cells per second, then the credit variable c_i1 would be incremented once every one hundredth of a second.
  • the credit mechanism controls the amount of bandwidth used by each of the VC's. Also, note that since r_i1 and r_i2 can be different, the credit variables c_i1 and c_i2 can be incremented at different rates. Updating the credit variables is described in more detail hereinafter.
  • Following the step 225 is a step 226 where s_im1 and s_im2 are set.
  • the variables s_im1 and s_im2 represent the maximum values that s_i is allowed to reach.
  • Following the step 226 is a step 227 where l_i, the backlog of cells for VC_i, is set to zero, and the variables l_im, l_iml, and l_imh, which relate to l_i in a manner discussed in more detail hereinafter, are all set to predetermined constant values that vary according to the traffic parameters provided when VC_i is initialized.
  • the variables adm_i, n_i, and n_im, which are discussed in more detail hereinafter, are also set.
  • Following the step 227 is a step 228 where the variable e_i, which determines whether VC_i is enabled, is set to one, thus enabling VC_i, at least initially.
  • Following the step 228 is a step 229 where the variable scd_i is set to zero, thus indicating that VC_i is initially in the unscheduled state.
  • Following the step 229 is a step 230 where the variable t_in is set to zero.
  • the variable t_in indicates the value of the system time clock when the credit variables, c_i1 and c_i2, were most recently updated.
  • Following the step 230 is a step 231 where the priority variable, P_i, is set to zero.
  • Following the step 231 is a step 232 where the burst bit indicator, b_i, is initially set to zero.
  • a diagram 240 illustrates in detail the cell admission step 146 of FIG. 5.
  • the cell admission procedure shown in the diagram 240 is deemed "block mode" admission.
  • Block mode admission involves potentially dropping a block of contiguous cells for a single VC in order to concentrate cell loss in that VC. That is, whenever a cell arrives for a VC having a full queue, the block mode admission algorithm drops that cell and may also drop a predetermined number of cells that follow the first dropped cell. The number of dropped cells is one of the initial traffic parameters provided when the VC is first established.
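  • A stripped-down sketch of that block-dropping idea (ignoring the CLP handling and the dual backlog limits walked through below) might look like the following; n_max plays the role of the configured block length n_im:

```python
class BlockModeAdmission:
    """Simplified block-mode admission: once a cell is dropped because the queue
    is full, also drop the next n_max cells for the same VC so that cell loss is
    concentrated in one contiguous block."""

    def __init__(self, queue_limit, n_max):
        self.queue_limit = queue_limit  # maximum backlog (roughly l_im)
        self.n_max = n_max              # cells to drop after the first drop (roughly n_im)
        self.backlog = 0                # l_i: cells buffered for this VC
        self.dropping = 0               # n_i: greater than zero while a block drop is in progress

    def admit(self, cell):
        if self.dropping > 0:           # continue an in-progress block drop
            self.dropping -= 1
            return False
        if self.backlog >= self.queue_limit:
            self.dropping = self.n_max  # queue full: start dropping a block of cells
            return False
        self.backlog += 1               # accept: the cell joins the VC's queue
        return True
```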
  • Processing for acceptance of cells begins at a current state 241 with a transition to a test step 242 when a new cell for VC_i is provided to the bandwidth management unit by the data node. If n_i equals zero at the test step 242, control transfers to a test step 243 where the number of backlogged cells, l_i, is compared with the maximum allowable number of backlogged cells, l_im.
  • CLP is a variable indicating the cell loss priority of a cell. If a cell has a CLP of zero, that indicates that the cell is relatively less loss tolerant (i.e., the cell should not be discarded if possible). A cell having a CLP other than zero is relatively more loss tolerant (i.e., could be discarded if necessary).
  • If the CLP is not zero at a test step 244, then control transfers to a step 245 where the newly arrived cell is discarded. Following the step 245, control transfers back to the current state 241. If it is determined at the test step 243 that the number of backlogged cells, l_i, is less than the maximum allowable number of backlogged cells, l_im, then control transfers from the step 243 to a step 246 to test if the value of CLP is zero.
  • If CLP is zero, control transfers to a step 247 where the new cell is added to the cell buffer at the end of a queue of cells for VC_i.
  • Following the step 247 is a step 248 where the variable representing the backlogged cells for VC_i, l_i, is incremented.
  • If at the step 246 it is determined that the CLP for the cell does not equal zero, then control transfers from the step 246 to a step 250 where adm_i is tested.
  • the variable adm_i is used to indicate which limit variable, l_iml or l_imh, will be used.
  • If adm_i indicates the first limit, then control transfers from the step 250 to a test step 251 to determine if the number of backlogged cells for VC_i, l_i, is less than the first backlog cell limit for VC_i, l_imh. If so, then control transfers from the step 251 to a step 252 where the new cell is added to the queue for VC_i.
  • Following the step 252 is a step 253 where the variable indicating the number of backlogged cells, l_i, is incremented. Following the step 253, control transfers back to the current state 241.
  • Otherwise, at a test step 255, the number of backlogged cells for VC_i is compared with the second limit for the number of backlogged cells for VC_i, l_iml. If at the test step 255 it is determined that l_i is not less than l_iml, then control transfers from the step 255 to a step 256 where the newly arrived cell is discarded. Following the step 256, control transfers back to the current state 241.
  • If l_i is less than l_iml, control transfers to the step 252 where the new cell is added to the queue of cells for VC_i,
  • and the variable representing the number of backlogged cells for VC_i, l_i, is incremented at the step 253.
  • If at the test step 251 it is determined that l_i is not less than l_imh, then control transfers from the step 251 to a step 260 where adm_i is set to zero.
  • Following the step 260 is the step 245 where the newly arrived cell is discarded.
  • Following the step 245, control transfers back to the current state 241.
  • If n_i does not equal zero at the test step 242, or if CLP does equal zero at the step 244, then control transfers to a step 262 where the newly arrived cell is discarded.
  • Following the step 262 is a step 263 where n_i is incremented.
  • Following the step 263 is a test step 264 where n_i is compared to the limit for n_i, n_im. If at the step 264 n_i is less than or equal to n_im, control transfers from the step 264 back to the current state 241. Otherwise, control transfers to a step 265 where n_i is set to zero. Following the step 265, control transfers back to the current state 241.
  • a flow diagram 280 illustrates the step 160 of FIG. 5 where the value of s_i is initialized.
  • Flow begins when VC_i is in the unscheduled state 282.
  • At a test step 284, the values of c_i1 and c_i2 are examined.
  • the variables c_i1 and c_i2 represent the credits for VC_i. If the values of both of the credit variables, c_i1 and c_i2, are greater than or equal to one, then control passes from the test step 284 to a step 286 where the value of s_i is set to zero. Following the step 286, VC_i enters a next state 288.
  • In this case, the next state 288 is the departure state 192, as shown on FIG. 5. This occurs because at the test step 164 of FIG. 5, the value of s_i is checked to see if s_i equals zero. If so, then VC_i goes from the unscheduled state 144 directly to the departure state 192, as shown in FIG. 5 and as described above.
  • If at the test step 284 it is determined that either c_i1 or c_i2 is less than one, then control transfers from the step 284 to a step 290 where s_i is set to s_im1.
  • the variable s_im1 is a variable that represents the amount of time between cell transmissions for cells transmitted at a rate r_i1.
  • Following the step 290 is a step 292 where ΔC is computed.
  • the variable ΔC represents the amount of additional credit that will be added to c_i2 after an amount of time corresponding to s_i has passed.
  • the value of ΔC is computed by multiplying s_i by r_i2.
  • Following the step 292 is a test step 294 which determines if the existing value of the credit variable, c_i2, plus the value of ΔC, is less than one. Note that the variable c_i2 is not updated at this step. If the sum of c_i2 and ΔC is not less than one, then control transfers from the step 294 to the next state 288. Note that, in this case, the next state will be the eligibility state 168 shown in FIG. 5 because s_i will not equal zero at the step 164 of FIG. 5.
  • If the sum of c_i2 and ΔC is less than one, then control transfers from the step 294 to a step 296 where s_i is set equal to s_im2. The variable s_im2 represents the amount of time between cell transmissions for cells transmitted at a rate r_i2. Following the step 296, control transfers to the next state 288, discussed above.
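  • Collected into one place, the FIG. 8 initialization of s_i reduces to roughly the following sketch; the treatment of borderline values is my reading of the flow, not a verbatim transcription:

```python
def init_eligibility_delay(c1, c2, r2, s_m1, s_m2):
    """Initialize s_i per FIG. 8 (step 160 of FIG. 5).

    c1, c2 : current credit variables c_i1, c_i2
    r2     : second rate variable r_i2
    s_m1   : inter-cell time at rate r_i1 (s_im1)
    s_m2   : inter-cell time at rate r_i2 (s_im2)
    """
    if c1 >= 1 and c2 >= 1:
        return 0.0        # enough credit: go straight to the departure queue
    s = s_m1              # otherwise start from the r_i1 inter-cell time ...
    delta_c = s * r2      # ... and estimate the credit c_i2 will earn while s_i elapses
    if c2 + delta_c < 1:
        s = s_m2          # still short of one credit: use the r_i2 inter-cell time instead
    return s
```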
  • a flow diagram 300 illustrates the reset s_i value step 204 shown in FIG. 5.
  • At a test step 302, the burst bit variable for VC_i, b_i, is tested.
  • the burst bit variable, b_i, indicates that VC_i should transmit in burst mode.
  • Burst mode is used for a virtual connection where the cells of the virtual connection should be transmitted as close together as possible. That is, the data node sets the virtual connection for burst mode for data traffic that should be transmitted as close together as possible.
  • For example, although an e-mail message may have a low priority, it may be advantageous to transmit the entire e-mail message once the first cell of the e-mail message has been sent.
  • If the burst bit indicator, b_i, is set at the step 302 but VC_i does not have enough bandwidth credits or the backlog l_i is zero at a test step 304, then control passes from the step 304 to a step 310 where s_i is set equal to s_im1. Note that the step 310 is also reached if it is determined at the test step 302 that the burst bit indicator, b_i, does not equal one (i.e., VC_i is not transmitting in burst mode).
  • Following the step 310 is a step 312 where ΔC is computed in a manner similar to that illustrated in connection with the step 292 of FIG. 8.
  • Following the step 312 is a step 314 where it is determined if the sum of the credit variable c_i2 and ΔC is less than one. If not, then control passes from the step 314 to the next state 308.
  • In effect, the test step 314 determines if the credit variable c_i2 will be greater than one after an amount of time corresponding to s_i has passed (i.e., after the s_i counter has timed out). If not, then c_i2 plus ΔC will be less than one when s_i times out and control transfers from the step 314 to a step 316 where s_i is set equal to s_im2. Following the step 316, VC_i enters the next state 308.
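  • The FIG. 9 reset differs from the FIG. 8 initialization mainly in the burst-bit branch. The sketch below assumes that when the burst bit is set and the VC has both credit and backlog, s_i is reset to zero so that the next cell can follow immediately; that success branch is implied rather than spelled out in the description above, so treat it as an assumption:

```python
def reset_eligibility_delay(burst_bit, c1, c2, backlog, r2, s_m1, s_m2):
    """Reset s_i per FIG. 9 (step 204 of FIG. 5) after a cell departs."""
    if burst_bit and c1 >= 1 and c2 >= 1 and backlog > 0:
        return 0.0        # assumption: burst mode with credit sends cells back-to-back
    s = s_m1              # step 310: default to the r_i1 inter-cell time
    delta_c = s * r2      # step 312: credit c_i2 will earn while s_i elapses
    if c2 + delta_c < 1:
        s = s_m2          # step 316: use the r_i2 inter-cell time instead
    return s
```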
Referring to FIG. 10, a flow diagram 320 illustrates the steps for updating c_i1 and c_i2, the credit variables for VC_i. Updating c_i1 and c_i2 occurs at the step 158 and the step 202 shown in FIG. 5, discussed above. Note that the credit variables are set according to the current system time but that, for the step 202 of FIG. 5, the time used is the beginning of the next cell time slot in order to account for the amount of time it takes to transmit a cell at the steps preceding the step 202.
Processing begins at a current state 322 which is followed by a step 324 where a value for c_i1 is computed. At the step 324, c_i1 is increased by an amount equal to the product of r_i1 (one of the rate variables for VC_i) and the difference between the current system time, t, and the time at which the credit variables, c_i1 and c_i2, were last updated, t_in. Following the step 324 is a test step 326 which determines if c_i1 is greater than c_i1m. The variable c_i1m is the maximum value that the credit variable c_i1 is allowed to equal and is set at the time VC_i is initialized. If at the test step 326 c_i1 is greater than c_i1m, then control transfers from the step 326 to a step 328 where c_i1 is set equal to c_i1m. The steps 326, 328 thus limit c_i1 to the maximum value c_i1m.
Following the step 328, or the step 326 if c_i1 is not greater than c_i1m, is a step 330 where a value for c_i2 is computed in a manner similar to the computation of c_i1 at the step 324. Following the step 330 are steps 332, 334 which limit c_i2 to the maximum value c_i2m in a manner similar to the steps 326, 328. Following the step 334, or the step 332 if c_i2 is not greater than c_i2m, is a step 336 where t_in is set equal to t, the current system time. The variable t_in represents the value of the system time, t, when the credit variables, c_i1 and c_i2, were last updated. Following the step 336 is a step 338 where VC_i enters the next appropriate state.
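In software, the credit update of FIG. 10 reduces to accruing each credit at its rate variable for the elapsed time and clipping the result at the configured maximum. A brief sketch in Python (a per-unit-time interpretation of the rate variables is assumed):

    def update_credits(c1, c2, r1, r2, c1_max, c2_max, t, t_in):
        """Return the updated (c_i1, c_i2, t_in) for system time t."""
        c1 = min(c1 + r1 * (t - t_in), c1_max)   # steps 324, 326, 328
        c2 = min(c2 + r2 * (t - t_in), c2_max)   # steps 330, 332, 334
        return c1, c2, t                         # step 336: record when the update occurred

For example, with r_i1 equal to 0.01 credits per microsecond and 200 microseconds elapsed since t_in, c_i1 grows by 2.0, subject to the cap c_i1m.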
Referring to FIG. 11, a flow diagram 340 illustrates the step 174 of FIG. 5 where s_i is updated by ΔS. The step 174 of FIG. 5 is executed whenever the data node associated with the bandwidth management unit requests changing the delay of sending data via VC_i by changing the value of s_i for VC_i.
Processing begins with VC_i in an eligibility state 342. Control transfers from the state 342 to a test step 344 which determines if the value of ΔS, provided by the associated data node, equals zero. If so, control transfers from the step 344 back to the eligibility state 342 and no processing occurs. Otherwise, if ΔS does not equal zero, then control transfers from the test step 344 to a step 346 where VC_i is located in the eligibility queue. Note that, as shown in FIG. 5, the step 174 where s_i is updated only occurs when VC_i is in the eligibility state.
Following the step 346 is a test step 348 where it is determined if ΔS is greater than zero or less than zero. If ΔS is greater than zero, then control transfers from the step 348 to a step 350 where ΔS is added to s_i. Following the step 350 is a step 352 where VC_i is repositioned in the eligibility queue according to the new value of s_i. Note that, as discussed above in connection with FIG. 5, the position of VC_i in the eligibility queue is a function of the values of s_i and qos_i.
If at the test step 348 it is determined that ΔS is less than zero, then control transfers from the step 348 to a step 354 where the credit variables, c_i1 and c_i2, are examined. If at the test step 354 it is determined that either c_i1 or c_i2 is less than one, then control transfers from the step 354 back to the eligibility state 342 and no update of s_i is performed. The steps 348, 354 ensure that the value of s_i is not decreased if either of the credit variables is less than one.
If at the step 354 it is determined that the credit variables, c_i1 and c_i2, are both greater than or equal to one, then control transfers from the step 354 to a step 356 where the value of s_i is incremented by the amount ΔS. Note that, in order to reach the step 356, ΔS must be less than zero so that at the step 356, the value of s_i is in fact decreased. Following the step 356 is a test step 358 where the value of s_i is compared to s(t). If the value of s_i has been decreased at the step 356 by an amount that would make s_i less than s(t), then control transfers from the step 358 to a step 360 where s_i is set equal to s(t). The steps 358, 360 serve to set the value of s_i to the greater of either s_i or s(t). VC_i is then repositioned in the eligibility queue (the position of VC_i in the eligibility queue being a function of the values of s_i and qos_i), and VC_i returns to the eligibility state 342.
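The acceptance rule of FIG. 11 for a requested change ΔS can be sketched as follows (Python, for illustration only); locating and repositioning VC_i in the eligibility queue at the steps 346 and 352 is omitted, and a return value of None stands for a rejected update:

    def update_eligibility_timer(s_i, delta_s, c1, c2, s_t):
        if delta_s == 0:
            return None                    # step 344: nothing to do
        if delta_s > 0:                    # step 348: an increase is always accepted
            return s_i + delta_s           # step 350
        if c1 < 1.0 or c2 < 1.0:           # step 354: no decrease without sufficient credits
            return None
        return max(s_i + delta_s, s_t)     # steps 356, 358, 360: decrease, but never below s(t)
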
Referring to FIG. 12, a flow diagram 370 illustrates the compute steps 162, 206 of FIG. 5 where estimated values of c_i1 and c_i2 and the priority P_i are computed. Processing begins at a current state 372 which is followed by a step 374 where the estimated value of c_i1 is computed. The estimated value equals the value that the credit variable c_i1 will have when s_i times out; that is, it equals the future value of c_i1 at the time VC_i is removed from the eligibility queue. Because P_i is a function of the credit variables, c_i1 and c_i2, calculating the future values of c_i1 and c_i2 is useful for anticipating the value of P_i when VC_i is removed from the eligibility queue and placed in the departure queue.
The priority variable, P_i, is determined using the following equation:
P_i = w_i1*c_i1 + w_i2*c_i2 + w_i3*r_i1 + w_i4*r_i2 + w_i5*l_i
Thus, P_i is a function of the credit variables, the rate variables, and the number of cells backlogged in the queue.
The node requesting initialization of VC_i specifies the weights, w_i1 through w_i5, for the VC. Accordingly, it is possible for the requesting node to weight the terms differently, depending upon the application. For example, w_i2 and w_i4, the weights for the terms corresponding to the burst data rate, may be set greater than w_i1 and w_i3, the weights for the terms corresponding to the average data rate, and w_i5, the weight corresponding to the number of backlogged cells, may be chosen according to how strongly the backlog should influence the priority.
At the step 374, the estimated value of c_i1 is calculated by adding the current value of the credit variable c_i1 and the product of r_i1 and s_i. Note that, if the credit variables were updated on each iteration while VC_i was in the eligibility queue, then this estimated value is the value c_i1 would have when s_i equalled zero. Note also that c_i1 itself is not updated at the step 374.
Following the step 374 is a test step 376 where the estimated value of c_i1 is compared to c_i1m. If the estimated value is greater than c_i1m (the maximum value that c_i1 is allowed to take on), then control transfers from the test step 376 to a step 378 where the estimated value is set equal to c_i1m. The steps 376, 378 thus limit the estimated value of c_i1 to the maximum c_i1m. Note that, for the steps 376, 378, c_i1 itself is not updated. Following the step 378, or the step 376, are steps 380-382 which compute the estimated value of c_i2 in a manner similar to the computation for c_i1, described above.
Following the steps 380-382 is a step 384 where the weights w_i1, w_i2, w_i3, w_i4, and w_i5 are fetched. As discussed above, the weights are used to calculate the priority, P_i. Following the step 384 is a step 386 where the priority for VC_i, P_i, is calculated in the manner shown on the flow diagram 370. Note that the weights are specified at the time that VC_i is initialized and that the importance of each of the terms used in the calculation of P_i can be emphasized or deemphasized by setting the values of the weights. The values of one or more of the weights can be set to zero, thus causing a term not to be used in the calculation of the priority.
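The FIG. 12 computation can be expressed compactly in software. The sketch below (Python, for illustration) projects the credits forward over s_i without modifying c_i1 or c_i2 themselves, clips the projections at the maxima, and evaluates the weighted sum given above; the argument names follow the table of per-connection variables.

    def compute_priority(c1, c2, r1, r2, backlog, s_i, c1_max, c2_max, w):
        c1_hat = min(c1 + r1 * s_i, c1_max)    # steps 374, 376, 378: estimated c_i1 at timeout
        c2_hat = min(c2 + r2 * s_i, c2_max)    # steps 380-382: estimated c_i2 at timeout
        w1, w2, w3, w4, w5 = w                 # step 384: fetch the weights w_i1..w_i5
        return (w1 * c1_hat + w2 * c2_hat      # step 386: weighted sum
                + w3 * r1 + w4 * r2 + w5 * backlog)

Setting w_i5 relatively large favors connections with many backlogged cells, while setting any weight to zero removes the corresponding term, as noted above.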
Referring to FIG. 13, a flow diagram 390 illustrates in detail the step 180 of FIG. 5 where the priority variable, P_i, is updated. P_i is updated only in response to the value of s_i being updated at the step 174; otherwise, the value of P_i remains constant while VC_i is in the eligibility state. That is, since P_i is a function of c_i1, c_i2, r_i1, r_i2, and l_i, and since the estimated values of c_i1 and c_i2 are a function of s_i, P_i only changes when and if s_i changes at the step 174 shown in FIG. 5.
Flow begins at a current state 392 which is followed by a step 394 where s_i is accessed. Following the step 394 are steps 396-398 where the estimated value of c_i1 is computed in a manner similar to that illustrated in FIG. 12. Note, however, that at the step 396, s(t) is subtracted from s_i since, at this stage of the processing, s(t) has already been added to s_i at the step 166 shown in FIG. 5, and since c_i1 is being predicted for the time when s_i times out. Following the step 398 are steps 400-402 where the estimated value of c_i2 is computed in a manner similar to the computation for c_i1. Following the step 402 are steps 404, 405 where the weights are fetched and P_i is computed in a manner similar to that illustrated in connection with FIG. 12, described above.
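The only difference from the FIG. 12 computation is the projection horizon: because s(t) was already added to s_i at the step 166, the remaining time is s_i - s(t). A minimal sketch under the same assumptions as the previous example:

    def compute_priority_in_eligibility_state(c1, c2, r1, r2, backlog,
                                              s_i, s_t, c1_max, c2_max, w):
        horizon = s_i - s_t                        # step 396: time left before s_i expires
        c1_hat = min(c1 + r1 * horizon, c1_max)    # steps 396-398
        c2_hat = min(c2 + r2 * horizon, c2_max)    # steps 400-402
        w1, w2, w3, w4, w5 = w                     # step 404
        return (w1 * c1_hat + w2 * c2_hat
                + w3 * r1 + w4 * r2 + w5 * backlog)   # step 405
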
Referring to FIG. 14, a schematic diagram 420 illustrates in detail one possible hardware implementation of the bandwidth management unit. An input 422 connects the bandwidth management unit 420 to the source of the data cells. The input 422 is connected to a cell pool 424 which is implemented using memory. The cell pool 424 stores all of the cells waiting for departure from the bandwidth management unit. The size of the cell pool 424 equals the number of VC's that the system can accommodate multiplied by the maximum allowable backlog per VC. For example, for a bandwidth management unit designed to handle 4,096 VC's with a maximum backlog length of 30 cells, the size of the cell pool is 4096 x 30 cells.
A cell pool manager 426 handles the cells within the bandwidth management unit 420 in a manner described above in connection with the state diagrams of FIG.'s 5-13. A parameter buffer 428 holds all of the variables associated with each of the VC's and is implemented using memory. The size of the parameter buffer 428 is the number of variables per VC multiplied by the maximum number of VC's handled by the bandwidth management unit 420.
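As a rough illustration of the memory sizing rules just described (Python): the 53-byte ATM cell size is standard, while the figure of roughly 17 parameters per VC (taken from the variable table) and 4 bytes per parameter are assumptions made only for this example.

    MAX_VCS         = 4096
    MAX_BACKLOG     = 30     # cells per VC
    CELL_BYTES      = 53     # standard ATM cell size
    PARAMS_PER_VC   = 17     # approximate count from the variable table (assumption)
    BYTES_PER_PARAM = 4      # assumed storage per parameter

    cell_pool_cells = MAX_VCS * MAX_BACKLOG                          # 122,880 cells
    cell_pool_bytes = cell_pool_cells * CELL_BYTES                   # about 6.5 MB
    param_buffer_bytes = MAX_VCS * PARAMS_PER_VC * BYTES_PER_PARAM   # about 280 KB

    print(cell_pool_cells, cell_pool_bytes, param_buffer_bytes)
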
A parameter update unit 430 is connected to the parameter buffer 428 and updates the variables for each of the VC's in a manner described above in connection with the state diagrams of FIG.'s 5-13. A cell clock 432 computes the value of s(t) which, as described above, is used for determining when a VC transitions from the eligibility queue to the departure queue. The cell clock 432 also provides the system time, t.
An eligibility queue 431 is implemented using memory and stores the VC's that are in the eligibility state. As discussed above in connection with FIG.'s 5-13, VC's in the eligibility queue are sorted in order of the values of s and qos for each VC. A comparator 434 compares the value of s(t) from the cell clock 432 with the value of s for each of the VC's and determines when a VC should transition from the eligibility state to the departure state. Note that, as discussed above in connection with FIG.'s 5-13, VC_i transitions from the eligibility state to the departure state when s(t) is greater than or equal to s_i.
A departure queue 436 is implemented using memory and stores the VC's that are awaiting departure from the bandwidth management unit 420. As discussed above in connection with FIG.'s 5-13, the departure queue 436 contains VC's that are sorted in order of the quality of service (qos) for each VC and the priority that is computed for each VC. A select sequencer unit 433 selects, for each VC, which of the two queues 431, 436 the VC is placed in, according to the algorithm described above and shown in FIG.'s 5-13. A priority computation unit 438 computes the priority for each of the VC's during each iteration. Note that, as discussed above in connection with FIG.'s 5-13, once a VC has entered the eligibility queue, the priority can be updated by simply adding the weight w_i5 to the value of the priority variable each time a cell arrival is admitted, since each admitted cell increments the backlog l_i by one.
The device 420 shown in FIG. 14 can be implemented in a straightforward manner using conventional VLSI architecture. The cell pool 424, eligibility queue 431, departure queue 436, and parameter buffer 428 can be implemented as memories. The cell pool manager 426, the parameter update unit 430, the select sequencer 433, the priority computation unit 438, and the comparator 434 can be implemented using VLSI logic that provides the functionality described above in connection with FIG.'s 5-13. That is, one of ordinary skill in the art can implement the bandwidth management unit 420 by using the state diagrams and current custom VLSI design techniques. Note also that, based on the description contained herein, one of ordinary skill in the art could implement the bandwidth management unit 420 in a variety of other manners, including entirely as software on a very fast processor or as a conventional combination of hardware and software.
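As a pointer toward such a software implementation, the following Python sketch models only the two queues and the per-cycle selection; it is a simplification, not the unit of FIG. 14. Cell admission, credit updates, and parameter handling are omitted, the last-in-first-out tie-break of the eligibility queue is approximated by simple arrival order, and numeric QOS values (larger meaning higher) are an assumption.

    import heapq
    import itertools

    class BandwidthManager:
        def __init__(self):
            self._eligibility = []          # "A" queue: ordered by s, then highest qos
            self._departure = []            # "B" queue: ordered by highest qos, then highest P
            self._seq = itertools.count()   # tie-break counter

        def schedule(self, vc_id, s_i, qos):
            """Place a VC in the eligibility queue until s(t) reaches s_i."""
            heapq.heappush(self._eligibility, (s_i, -qos, next(self._seq), vc_id))

        def make_ready(self, vc_id, qos, priority):
            """Place a VC in the departure queue, ordered by QOS and then priority."""
            heapq.heappush(self._departure, (-qos, -priority, next(self._seq), vc_id))

        def cycle(self, s_t, priority_of):
            """One iteration: move timed-out VC's to the departure queue, then pick the VC
            whose head-of-line cell departs.  priority_of(vc_id) stands in for the
            priority computation unit 438."""
            while self._eligibility and self._eligibility[0][0] <= s_t:   # comparator 434
                _, neg_qos, _, vc_id = heapq.heappop(self._eligibility)
                self.make_ready(vc_id, -neg_qos, priority_of(vc_id))
            if self._departure:
                _, _, _, vc_id = heapq.heappop(self._departure)
                return vc_id
            return None

For example, after mgr.schedule("VC1", s_i=150.0, qos=3), a later call mgr.cycle(s_t=151.0, priority_of=lambda vc: 0.0) moves VC1 to the departure queue and returns it as the connection whose cell is transmitted next.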

Abstract

A bandwidth management system (120) manages a plurality of virtual data connections within a communications network. The system includes an input (122) for receiving data cells, wherein each cell is associated with a particular one of the virtual connections. The system also includes a cell pool, coupled to the input for storing the cells, first and second queues (126-128) for ordering the virtual connections, and an output (137) for transmitting cells from the cell pool. The relative position of a virtual connection in the first queue is determined by an eligibility variable that varies according to an anticipated data rate associated with the particular virtual connection and according to an amount of time that the particular virtual connection has been in the first queue. The relative position of a virtual connection in the second queue varies according to a predetermined quality of service that is assigned to each of the virtual connections.

Description

BANDWIDTH MANAGEMENT AND ACCESS CONTROL FOR AN ATM NETWORK
Government Rights
This invention was made with Government support under Contract DABT63-93-C-0013 awarded by the Department of the Army. The Government has certain rights in the invention.
Field of the Invention
This invention relates to the field of asynchronous data communication and more particularly to the field of managing the flow rate of data through an asynchronous data communication network.
Related Art
The next generation public backbone network capable of supporting voice, video, image, data, and multi-media services is envisioned as a broadband Integrated Services Digital Network (ISDN) using an asynchronous transfer mode (ATM) to transmit data. The perceived advantages of ATM technology include flexibility and simplicity of asynchronously multiplexing traffic sources with a very broad range of source parameters and service quality requirements with information loss and delay performance ranging from close to that of the synchronous transfer mode to that of best-effort service in today's packet-switched networks. Broadband ISDN is therefore a promising network technology to support emerging needs of high performance computing and communications. The major technical impediment to practical large scale deployment of ATM-based broadband ISDN networks, however, is the lack of practical real-time bandwidth management that will guarantee predictable end-to-end grades of service over a multi-node, multi-carrier network to end application platforms, such as high performance workstations and supercomputers.
In ATM-based broadband ISDN, information from the application layer is processed at the ATM Adaptation layer into fixed size ATM cells, which are in turn multiplexed/switched at the ATM layer, and transported in payload envelopes at the physical layer. Performance degradation caused by congestion due to insufficient network resources at certain parts of the network leads to perceptible performance degradation to the end-users, unless robust bandwidth management policies are defined and implemented. The overall objective of bandwidth management and traffic control of broadband ISDN is to provide consistent and predictable end-to-end performance, while at the same time optimizing network resource utilization for end-users with a wide range of performance requirements and traffic flows, including some which are very bursty.
The problem of bandwidth management and traffic control of broadband ISDN becomes complex due to factors such as the diversity of traffic flow characteristics (much of which is unknown even at the time of transmission), connection performance requirements, the impacts of end-to-end propagation delay, and the processing delay at the network elements due to increased switching and transmission speeds. The very high speed of ATM networks, coupled with propagation delay, results in large numbers of ATM cells en route, while the (relatively) large cell processing time at intermediate nodes imposes severe limitations on intermediate node protocol processing. These factors cause existing traffic control approaches (like the X.25 data communications flow control designed for traditionally low-speed networks) to become ineffective. Hence new, robust (to cope with unknown and changing traffic characteristics), low-cost and scalable (to increasingly higher speeds) traffic control strategies are needed for broadband ISDN.
The real-time (typically several milliseconds or less) traffic control requirements of broadband ISDN are drastically different from those of existing networks due to the factors discussed above. According to functionality, the real-time traffic control may consist of well-coordinated components such as access control, flow control, reactive congestion control, and error control. Access control, including connection admission control/bandwidth allocation, traffic shaping, and bandwidth enforcement, can achieve user objectives at the user-network interface. Other controls (flow control, reactive congestion control, error control) achieve destination buffer protection, network internal congestion handling, and information integrity guarantee, respectively.
The central goal of access control is to achieve, at the user-network interface, objectives such as predictable information throughput, connection blocking probability, cell loss ratio, and cell transfer delay/cell delay variation, among others. In view of uncertain and changing traffic characteristics/parameters and increased transmission/switching speeds in broadband ISDN, the access control mechanism should be robust, fast, scalable (from 155 to 622 Mbps, or higher), and low-cost to implement.
There are three related aspects of access control: connection admission control, bandwidth shaping and pacing, and bandwidth enforcement. Connection admission control decides connection acceptance, given traffic characteristics and performance requirements (e.g., peak rate, cell loss rate), based on the current network state. Generally, a decision needs to be made and network resources allocated in no more than a few seconds. Some key issues include what traffic descriptors should be used (i.e., peak data rate, average data rate, priority, burst length, etc.), how to determine the effective bandwidth of a bursty connection, how to predict performance (given underlying traffic shaping/bandwidth enforcement mechanisms), what acceptance/rejection criteria to use, and how fast algorithms can be executed.
Currently, different solutions for connection admission control exist, ranging from peak rate allocation (simple to implement, but bandwidth inefficient for variable bit-rate services) to more sophisticated algorithms which determine the "admissible acceptance region" in an N-service space. Typically, the connection admission control will be implemented in software, which makes it possible to start with a simple and robust algorithm during the early stages of broadband ISDN and yet allows evolution towards more sophisticated (and effective) algorithms as they become available. A distinguishing feature of access control schemes is their capability to provide statistical multiplexing for variable bit rate services. A number of factors significantly impact the statistical multiplexing efficiency including, among others, variable bit-rate transmission peak rate (relative to the link capacity) and the burst duration distribution. Studies have shown that pure statistical multiplexing of variable bit-rate connections may result in a low (e.g. 20-25%) network utilization, if no traffic shaping or resource allocation (such as the fast reservation protocol) techniques are used. By proper modification of the cell arrival process (i.e., traffic shaping), higher statistical multiplexing efficiency may be achieved. Such modification techniques employed by the pacing mechanism may include cell jitter smoothing, peak cell rate reduction, message burst length limiting, and service scheduling of variable bit rate virtual circuit connections (VCs), among others.
Further, traffic shaping may possibly be used in conjunction with a network bandwidth enforcement mechanism by rescheduling a cell's service (in addition to cell discarding or tagging) when a non-compliant cell is observed. For non-delay sensitive but loss sensitive variable bit rate services, this option is appealing in order to minimize the cell loss ratio. However, applicability of pacing techniques to cells with stringent delay requirement (such as interactive variable bit rate video) requires further study. From a network operation point of view, source shaping/pacing is also desirable to prevent overwhelming the system with more data than the system can handle.
The purpose of network bandwidth enforcement is to monitor a connection's bandwidth usage for compliance with appropriate bandwidth limits and to impose a policing action on observed violations of those limits. The enforcement control works on the time scale of a cell emission time (i.e., about 2.7 μsec for 155 Mb/s service or about 0.7 μsec for 622 Mb/s service). Key issues of bandwidth enforcement include the design of the monitoring algorithm and the policing actions to be taken on non-compliant cells. Other issues include handling of traffic parameter value uncertainty, the effectiveness in terms of the percentage of erroneous policing actions upon compliant cells and the percentage of non-compliant cells undetected, and the detection time of a given violation.
The bandwidth enforcement mechanism operates upon network-measured traffic parameter values, which may include a connection's peak cell rate, its average cell rate, and its peak burst length. Currently, a few bandwidth enforcement algorithms have been proposed using, for example, single or dual leaky buckets, jumping windows, or sliding windows. However, some studies have shown that the leaky-bucket algorithms, using peak/average rates and average burst length, may still not be robust enough for various variable bit rate traffic mixes. The studies also show that the leaky-bucket algorithms tend to complicate performance prediction at the connection admission control level. Furthermore, accurate estimation of average cell rate and burst length by users may be very difficult in practice. This suggests the need for the exploration of alternative approaches to this problem, in order to bring about an early deployment of broadband ISDN. A number of access control approaches have been proposed, but there is no standard consensus among the vendor and research communities. An approach has been proposed that accepts connections based on bandwidth pools dedicated to several traffic classes respectively and uses a leaky bucket type algorithm for monitoring connection bandwidth utilization with immediate discarding of non-compliant cells. Another proposal also uses a leaky bucket type monitoring algorithm with tagging of non-compliant cells for possible later cell discarding. Another proposed approach uses a rate-based time window approach.
One study has indicated that the leaky bucket algorithm is superior to the time-window based approaches under certain traffic patterns studied. However, the same study also revealed difficulty in its dimensioning (e.g., the counter limit). Further, the performance of the leaky bucket algorithm is also found to be far below optimal, in terms of non-compliant cell detection and false alarms (the long term probability of declaring a bandwidth violation for compliant cells).
This seems to indicate that enforcement and pacing near the average cell rate for variable bit-rate services are much more complex than has been generally recognized. For example, enforcement of a variable bit-rate traffic stream near the average rate generally leads to enormous buffer requirements (hence unacceptable response time) in order to keep the false alarm rate to an acceptable level (say 10^-7).
More importantly, the effectiveness of a monitoring algorithm (such as the leaky bucket) is critically dependent on the source model behavior which is unknown for many applications and is difficult to estimate accurately at the time that the connection is made. Other reservation based schemes are more suitable for data traffic, but their performance characteristics have not been fully evaluated.
SUMMARY OF THE INVENTION
According to the present invention, a bandwidth management system manages a plurality of virtual data connections within a communications network. The system includes an input for receiving data cells, wherein each cell is associated with a particular one of the virtual connections. The system also includes a cell pool, coupled to the input, for storing the cells, a first and second queue for ordering the virtual connections, and an output for transmitting cells from the cell pool. The relative position of a virtual connection in the first queue is determined by an eligibility variable that varies according to an anticipated data rate associated with the particular virtual connection and according to an amount of time that the particular virtual connection has been in the first queue. The relative position of a virtual connection in the second queue varies according to a predetermined quality of service that is assigned to each of the virtual connections. The output transmits a cell from the cell pool corresponding to a virtual connection at the front of the second queue.
Virtual connections having eligibility variables with equal values can be ordered in the first queue according to the predetermined quality of service. The system can use four different values for quality of service. Virtual connections having equal quality of service values can be ordered in the second queue according to a priority computed for each of the virtual connections. Credit variables for each virtual connection can indicate allocated time slots provided to each of the virtual connections. The priority can vary according to one or more of: one or more anticipated data rates of each of the virtual connections, the value of one or more of the credit variables, and the number of backlogged cells that are awaiting transmission. The data rates, credit variables, and backlog values can be weighted prior to determining the priority. The system can use a burst bit indicator to determine if virtual connections associated with cells received at the input should be placed immediately on the second queue.
The bandwidth management system can be one of: a pacing unit and an enforcement unit. A pacing unit receives cells from a data source node and provides cells to a communication link. An enforcement unit receives data from a communication link and provides data to a data sink node. A virtual connection can be established by specifying initially agreed-upon traffic parameters and dropping cells that are received at a rate that exceeds that specified by the initial traffic parameters.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is better understood by reading the following Detailed Description of the Preferred Embodiments with reference to the accompanying drawing figures, in which like reference numerals refer to like elements throughout, and in which:
FIG. 1 shows a data communications network having a plurality of physically interconnected nodes.
FIG. 2 shows a plurality of communication nodes interconnected by a plurality of virtual connections.
FIG. 3 is a data flow diagram showing different states of data within a bandwidth management unit according to the present invention.
FIG. 4 is a functional block diagram of a bandwidth management unit according to the present invention.
FIG. 5 is a state diagram showing operation of a bandwidth management unit according to the present invention.
FIG. 6 is a state diagram illustrating setting up a virtual connection.
FIG. 7 is a flow diagram illustrating cell admission for a virtual connection.
FIG. 8 is a flow diagram illustrating initializing the value of an eligibility timer for a virtual connection.
FIG. 9 is a flow diagram illustrating resetting the value of an eligibility timer for a virtual connection.
FIG. 10 is a flow diagram illustrating updating credit variables of a virtual connection.
FIG. 11 is a flow diagram illustrating updating an eligibility timer of a virtual connection.
FIG. 12 is a flow diagram illustrating computing values for credit variables and a priority variable for a virtual connection.
FIG. 13 is a flow diagram illustrating updating a priority variable for a virtual connection.
FIG. 14 is a schematic diagram illustrating a hardware implementation of a bandwidth management unit.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In describing preferred embodiments of the present invention illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the invention is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents which operate in a similar manner to accomplish a similar purpose.
Referring to FIG. 1, a communications network 30 comprises a plurality of communication nodes 32-50 that are physically interconnected. Each of the nodes 32-50 represents conventional communications equipment for sending and receiving communications data. The lines drawn between the nodes 32-50 represent physical communications links for transmitting data between the nodes 32-50. The communications links can be any one of a plurality of conventional media for communication data transmission, including fiber optics, twisted pair, microwave links, etc.
It is possible for two nodes to be directly connected to each other, which facilitates communication between the directly connected nodes. For example, the node 34 is shown in FIG. 1 as being directly connected to the node 35. Accordingly, communications between the node 34 and the node 35 is facilitated via the direct physical communication link.
Often, it is desirable to communicate between two nodes which are not directly connected via a single physical communication link. For example, it may be desirable to send data from the node 34 to the node 49 of FIG. 1. In that case, communication is facilitated by passing messages through intermediate nodes between the node 34 and the node 49. Data from the node 34 to the node 49 can be sent by first transmitting the data from the node 34 to the node 35, then from the node 35 to the node 36, then from the node 36 to the node 45, and finally from the node 45 to the node 49.
In the example above, a "virtual connection" between the node 34 and the node 49 is established to facilitate communication between the nodes 34, 49 and to establish desired quality of service for the communication. A virtual connection is established between two nodes when one of the nodes has data that is to be sent to another node. A virtual connection can be terminated after all of the data has been sent. For example, if the node 34 represents a user computer terminal and the node 49 represents a mainframe computer, then a virtual connection can be established between the node 34 (terminal) and the node 49 (mainframe) whenever a user logs on to the computer system. The virtual connection could then be terminated when the user logs off the system.
Referring to FIG. 2, a schematic diagram 60 shows a plurality of hosts 62-67, a pair of private ATM (asynchronous transfer mode) mux/switches 71, 72, and a public ATM mux/switch 74. Each of the hosts 62-67 represents one of a plurality of possible sources of data, such as a telephone, a fax machine, a computer terminal, a video teleconference terminal, etc. The host 62 transmits data to the private ATM mux/switch 71 via a virtual connection set up between the host 62 and the private ATM mux/switch 71. The host 63 transmits data to the mux/switch 71 via a second virtual connection between the host 63 and the private ATM mux/switch 71. A virtual connection can span a single direct physical link or can span a plurality of consecutive physical connections through intermediate network nodes, not shown in FIG. 2. Similarly, the hosts 64, 65 are connected to the mux/switch 72 via separate virtual connections therebetween. The mux/switches 71, 72 and the hosts 66, 67 are connected to the public ATM mux/switch 74 via additional virtual connections. The public ATM mux/switch 74 provides data to a trunk (not shown).
The diagram 60 shows a general flow of data from the hosts 62-67 to the trunk via the intermediate devices 71, 72, 74. The virtual connections shown in the diagram 60 are set up at the time that the hosts 62-67 begin transmitting data. For example, if the host 62 represents a telephone, then a virtual connection between the host 62 and the private ATM mux/switch 71 can be established by a connection admission procedure when the user picks up the handset of the telephone and begins dialing. The virtual connection can be terminated when the user hangs up the telephone.
In order to avoid exceeding the data-handling capacity of the trunk, the switches 71, 72, 74, or the links therebetween, it is desirable to control the amount of data that flows from the hosts 62-67 to the switches 71, 72, 74 and to control the amount of data that flows from the switches 71, 72, 74 to the trunk. Accordingly, each output port of the hosts 62-67 is provided with one of a plurality of pacing units 82a-87a. The pacing unit 82a is connected to the output port of the host 62, the pacing unit 83a is connected to the output port of the host 63, etc.
The pacing units 82a-87a control the data transmitted from the hosts 62-67 by limiting all data transmitted on the associated connection. For example, the pacing unit 82a limits data transmitted from the host 62 to the private ATM mux/switch 71. Data is first passed from the host 62 to the pacing unit 82a via the output port of the host 62. The pacing unit 82a then provides the data to the communication link.
The pacing unit 82a provides the data on the communication link at an agreed upon rate, as described in more detail hereinafter. Note that the switches
71, 72 also transmit data via pacing units 91a, 92a connected to the output ports thereof.
Associated with each of the pacing units 82a-87a, 91a, 92a are corresponding enforcement units 82b-87b, 91b, 92b. The enforcement units 82b-87b, 91b, 92b are at the receiving ends of the communication links to ensure that the agreed upon data rate of each of the connections is maintained. The enforcement units 82b-87b, 91b, 92b are connected to input ports of corresponding devices so that, for example, the enforcement unit 82b is connected to the input port of the mux/switch 71. If a pacing unit provides data at greater than the agreed upon data rate, the corresponding enforcement unit discards the excess data.
Pacing units are provided at data source nodes while enforcement units are provided at data sink (i.e., receiving) nodes.
When a virtual connection is established between two nodes, the pacing unit at the data source node and the enforcement unit at the data sink node initially agree upon a data rate and a quality of service. Quality of service relates to a variety of characteristics, including the acceptable delay between generation and reception of the data, as described in more detail hereinafter. The requested data rate and quality of service are functions of the type of data being sent. For example, real time voice data may require a different data rate than would real time video data, but both may require a relatively high quality of service. In
contrast, an E-Mail message would probably require a relatively low quality of service.
During initialization of the virtual connection, a pacing unit requests an appropriate set of traffic parameters (such as data rate, buffer space, etc.) and requests the desired quality of service. The corresponding enforcement unit at the other end of the virtual connection can accept or deny the request of the pacing unit. A request is denied if the node to which the enforcement unit is attached is not capable of handling the requested amount of data to meet the requested quality of service. Reasons for denying the request include insufficient capacity of the data sink node to handle the requested data rate and insufficient capacity of the communications link to handle the requested data rate. If the request is denied by the enforcement unit, then the virtual connection is not established. Assuming however that the request is accepted, the virtual connection between the nodes is established. Once established, the pacing unit operates so as to send data according to the initially agreed upon traffic parameters.
Data is communicated over the communications links via packets or fixed-size ATM cells. Each cell contains one or more bytes of digital data along with header information identifying the virtual connection and other appropriate information, discussed in detail hereinafter. Note that, for the network shown in FIG. 2, data rate limiting all of the connections between the nodes will, of necessity, limit the total amount of data that is provided to the trunk. Accordingly, the pacing units 82a-87a, 91a, 92a and enforcement units 82b-87b, 91b, 92b can be used to prevent the trunk, and hence the network, from being overloaded with too much data.
The pacing and enforcement units receive cells at their respective inputs and transmit cells from their respective outputs. The operation of each is substantially identical. Accordingly, the detailed discussion which follows is equally applicable to both types of units, which are referred to generically as bandwidth management units.
Referring to FIG. 3, a data flow diagram 100 illustrates different states of a virtual connection handled by a bandwidth management unit. Note that a single bandwidth management unit can handle multiple virtual connections even though the bandwidth management unit is physically connected to only a single transmission link. The node to which the bandwidth management unit is connected sends or receives cells for the various connections and the bandwidth management unit outputs the cells in a particular order.
Initially, a cell is provided to the bandwidth management unit from either the output port of the associated data source node (for a pacing unit) or from the communication link (for an enforcement unit). The associated virtual connection
(VC) begins in an unscheduled state 102. A cell corresponding to a VC in the unscheduled state 102 is deemed not to be ready to transmit by the bandwidth management unit.
The VC transitions out of the unscheduled state 102 by becoming eligible and having at least one data cell ready to transmit. Eligibility of a VC is controlled externally by the data node connected to the bandwidth management unit in a manner described in more detail hereinafter. A VC which is deemed immediately eligible by the data node will transfer out of the unscheduled state 102.
The VC can transition from the unscheduled state 102 to an eligibility state
104. The eligibility state 104 indicates that the cells associated with the VC are waiting to become eligible for output, but the VC is waiting for particular conditions to be met, which are described in more detail hereinafter.
When the conditions are met, the VC can transition from the eligibility state 104 to a departure state 106. A VC in the departure state 106 has at least one cell associated therewith which is ready to be output by the bandwidth management unit. The cells of the VC in the departure state are transmitted by the bandwidth management unit in a specific order, as described in more detail hereinafter.
A VC which is in the eligibility state 104 or in the departure state 106 can be transitioned back to the unscheduled state 102 when the data node connected to the bandwidth management unit deems the VC to be ineligible or when other conditions, described in more detail hereinafter, are met. Furthermore, it is possible for a VC in the departure state 106 to be transitioned back to the eligibility state 104 under certain conditions, also described in detail hereinafter. Under certain other conditions, a VC in the unscheduled state 102 can transition directly to the departure state 106 without going through the eligibility state 104. Transitions between various states and the conditions therefor are described in more detail hereinafter.
The ordering of the VC's in a queue associated with the departure state 106 is a function of the Quality of Service (QOS) of the virtual connection. The QOS of a virtual connection is established when the virtual connection is initialized by the data source node. The source node for the data can request one of four possible values for the QOS: high level QOS, mid level QOS, low level QOS, and best effort QOS. High level QOS is for real time traffic that requires bounded delay and virtually no cell loss. Mid level QOS is for real time traffic that requires bounded delay and little cell loss. Low level QOS is for delay-tolerant data traffic having controllable cell loss. Best effort QOS is for data which can be transmitted at a very low rate and which can share available bandwidth not being used by the high level, mid level, and low level QOS cells. The ordering of the cells in the queue is based on the QOS associated with the virtual connection and upon other parameters which are discussed in detail hereinafter. Note that, although the invention is illustrated herein with four separate levels for the QOS, the invention can be practiced with a different number of levels. The number of levels chosen for the implementation is a design choice based on a variety of functional factors known to one of ordinary skill in the art.
Referring to FIG. 4, a schematic diagram 120 functionally illustrates operation of a bandwidth management unit. An input 122 accepts cells from a data node (not shown) connected to the bandwidth management unit. The cells that are received are provided to a demultiplexer 124. The demultiplexer 124 separates the input cells according to the VC associated therewith and places the cells in one of a plurality of appropriate queues 126-128, so that, for example, cells associated with the virtual connection VC_i are placed in the queue 126, cells associated with the virtual connection VC_j are placed in the queue 127, and cells associated with the virtual connection VC_k are placed in the queue 128. A cell admission unit 130 controls placing the cells in the queues 126-128 in a manner described in more detail hereinafter.
Associated with each virtual connection, VC_i, VC_j, and VC_k, is a set of variables 131-133 that controls when the cells for each of the VC's will be transmitted out of the bandwidth management unit. Also, when a cell is input to the bandwidth management unit, the cell admission unit 130 examines the variables 131-133 to determine if the cell should be admitted. As shown in FIG. 4, the virtual connection VC_i has the variable set 131 associated therewith, the virtual connection VC_j has the variable set 132 associated therewith, and the virtual connection VC_k has the variable set 133 associated therewith. The variable sets 131-133 are described in more detail hereinafter.
The variable sets 131-133 are provided as inputs to a mux control 135 which controls which cell from the head of one of the queues 126-128 is next to be transmitted out of the bandwidth management unit. A multiplexor 136 is connected to each of the queues 126-128 and, in conjunction with the mux control 135, provides a cell from one of the queues 126-128 to an output 137 for transmission out of the bandwidth management unit.
The table below facilitates the discussion which follows by showing the parameters that are used for each virtual connection of the bandwidth management unit:
VARIABLE NAME    DESCRIPTION
scd_i            state of VC_i
qos_i            quality of service for VC_i
e_i              indicates if transmission for VC_i is enabled
s_i              eligibility timer for VC_i
r_i1             first rate variable for VC_i
r_i2             second rate variable for VC_i
c_i1             first credit variable for VC_i
c_i2             second credit variable for VC_i
l_i              number of backlogged cells for VC_i
w_i1             first weight variable for VC_i
w_i2             second weight variable for VC_i
w_i3             third weight variable for VC_i
w_i4             fourth weight variable for VC_i
w_i5             fifth weight variable for VC_i
b_i              burst bit indicator for VC_i
P_i              priority variable for VC_i
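In a software realization, this per-connection variable set maps naturally onto a single record. A minimal sketch in Python is shown below; the field names are plain-text stand-ins for the subscripted variables in the table and have no other significance.

    from dataclasses import dataclass

    @dataclass
    class VCParams:
        scd: int = 0                             # scd_i: state of VC_i (0 = unscheduled)
        qos: int = 0                             # qos_i: quality of service for VC_i
        e: int = 0                               # e_i: indicates if transmission is enabled
        s: float = 0.0                           # s_i: eligibility timer
        r1: float = 0.0                          # r_i1: first rate variable
        r2: float = 0.0                          # r_i2: second rate variable
        c1: float = 0.0                          # c_i1: first credit variable
        c2: float = 0.0                          # c_i2: second credit variable
        backlog: int = 0                         # l_i: number of backlogged cells
        w: tuple = (1.0, 1.0, 1.0, 1.0, 1.0)     # w_i1..w_i5: weight variables
        b: int = 0                               # b_i: burst bit indicator
        P: float = 0.0                           # P_i: priority variable
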
Referring to FIG. 5, a state diagram 140 illustrates operation of a bandwidth management unit. Although operation is illustrated with regard to a single VC, VC_i, the operation of the other VC's of the bandwidth management unit is identical.
Circles on the diagram 140 show the three possible states of VC_i: the unscheduled state, the eligibility state, and the departure state. A VC is always in one of those three states, although for a bandwidth management unit servicing multiple VC's, the state of any one of the VC's is independent of the state of any of the other VC's. The state diagram 140 also shows rectangles having rounded corners which represent operations performed by the bandwidth management unit with respect to VC_i. Diamonds on the diagram show decision branches where the flow is conditioned upon a particular variable or variables associated with VC_i being in a particular state or having a particular value or values. The arrows between the various circles, rounded rectangles, and diamonds illustrate the flow through the state diagram. For an arrow having an annotation associated therewith, the annotation indicates the condition causing the transition from one portion of the state diagram 140 to another. Arrows having no annotation associated therewith represent an unconditional flow from one portion of the state diagram 140 to another.
Flow begins at an initial step 142 where VC_i is initialized. Initialization of a VC includes having a pacing unit request a particular set of traffic parameters and quality of service (QOS) from the associated enforcement unit (i.e., the enforcement unit at the other end of the communication link). That is, VC_i is initialized when the pacing unit sends a command to the associated enforcement unit requesting creation of VC_i and specifying the set of traffic parameters and QOS for VC_i. As discussed above, the enforcement unit either accepts or rejects the request depending on the existing data loading on both the communication link and the receiving node. The decision is based on the loading for the node attached to the enforcement unit. If the request from the pacing unit to the enforcement unit is denied, then VC_i is not initialized and no processing (i.e., no data transmission) occurs for VC_i.
Assuming that the VC_i initialization request is approved by the enforcement unit, then following the initialization step 142, VC_i enters an unscheduled state 144. While VC_i is in the unscheduled state 144, the variable scd_i is set to zero. VC_i remains in the unscheduled state 144 until the first cell of data for VC_i is received by the bandwidth management unit.
When a cell associated with VC_i arrives, the bandwidth management unit transitions from the unscheduled state 144 to a cell admission step 146, where the cell is either accepted or rejected by the bandwidth management unit. The cell admission step 146 is described in more detail hereinafter.
Following the cell admission step 146 is a test step 148 which determines whether VC_i transitions from the unscheduled state 144 to another state. At the test step 148, the variables l_i and e_i are examined. The variable l_i represents the backlog, in number of cells, for VC_i. That is, l_i equals the number of cells received by the bandwidth management unit for VC_i that have not been output by the bandwidth management unit. The variable e_i indicates whether VC_i has been enabled by the data node that is being serviced by the bandwidth management unit. That is, the data node that is connected to the bandwidth management unit can either enable or disable VC_i. For a virtual connection that is not enabled, the bandwidth management unit receives inputted cells but does not output the cells.
At the test step 148, it is determined if the backlog of cells for VC_i is at least one (i.e., there is at least one cell associated with VC_i stored in the bandwidth management unit) and if VC_i is enabled. If one or both of those conditions are not met, then VC_i returns to the unscheduled state 144.
Note also that while in the unscheduled state 144, it is possible for the bandwidth management unit to receive a signal from the data source node requesting that e_i be set. In that case, control transfers from the unscheduled state 144 to a step 150 where e_i is set. Following the step 150 is the test step 148, discussed above. Note, therefore, that it is possible for the bandwidth management unit to receive one or more cells for VC_i prior to VC_i being enabled and then for the bandwidth management unit to receive an enable signal from the data node so that VC_i transitions from the unscheduled state 144.
While in the unscheduled state 144, it is possible for the bandwidth management unit to receive a signal from the data node requesting termination of VC_i. Termination can occur for a variety of reasons, including there being no more data to transmit over the virtual connection. In that case, control transfers from the unscheduled state 144 to a step 152 where VC_i is terminated. Once VC_i has been terminated, the bandwidth management unit does no further processing for VC_i and accepts no more cells for transmission via VC_i.
It is also possible that, while in the unscheduled state 144, the bandwidth management unit receives a request from the data node to update parameters associated with VC_i. In that case, control transfers from the state 144 to a step 154 where the parameters associated with VC_i are modified. The parameters that can be associated with VC_i are discussed elsewhere herein. Note, however, that one of the parameters that can be updated at the step 154 is the quality of service associated with VC_i, qos_i. That is, the data node can request that the quality of service associated with VC_i be changed to a second value after VC_i has been initialized with a qos_i having a first value. Following the update step 154, VC_i returns to the unscheduled state 144.
If at the test step 148 it is determined that the backlog for VC_i is at least one (i.e., l_i is greater than or equal to one) and that VC_i is enabled (i.e., e_i equals one), then control transfers from the step 148 to a step 156 where scd_i is set to one. The scd_i variable associated with VC_i equals zero when VC_i is in the unscheduled state and equals one when VC_i is in any of the other states.
Following the step 156 is a step 158 where the credit variables associated with VC_i, c_i1 and c_i2, are updated. The credit variables, c_i1 and c_i2, represent the amount of transmission channel bandwidth "credited" to VC_i. The credit variables are described in more detail hereinafter.
Following the step 158 is a step 160 where the value of s_i is initialized. The value of s_i represents the amount of elapsed time that VC_i is to spend in the eligibility queue before transitioning to the departure queue. That is, after exiting from the unscheduled state, VC_i is placed in the eligibility queue until an amount of time indicated by s_i has elapsed, after which VC_i is moved from the eligibility queue to the departure queue for output by the bandwidth management unit. The initial value of s_i is determined by the associated data node and varies according to the initial traffic parameters specified when VC_i is initialized.
Following the step 160 is a step 162 where the estimated values of the credits, c_i1 and c_i2, and the estimated value of the priority, P_i, associated with VC_i are computed. The priority, P_i, is used to order the VC's in the departure queue. Computing the priority is described in more detail hereinafter.
Following the step 162 is a test step 164 which determines if s_i was set to zero at the step 160. As discussed above, a VC initializes s_i according to values of the initial traffic parameters and the credits. If s_i is set to zero at the step 160, then VC_i is placed immediately on the departure queue. Otherwise, VC_i is placed in the eligibility queue for an amount of time represented by the initial setting of s_i provided at the step 160.
If at the test step 164 it is determined that s_i does not equal zero, then control passes from the test step 164 to a step 166 where the value of s_i is incremented by s(t). The quantity s(t) varies according to elapsed system time and is updated every cycle that the bandwidth management unit software is executed. In other words, the quantity s(t) varies according to the value of a real time system clock. Since s_i is set at the step 160 to represent the amount of time VC_i remains on the eligibility queue, adding the present value of s(t) to s_i sets the value of s_i to the value that s(t) will equal when VC_i is to be moved from the eligibility queue to the departure queue. For example, assume that the bandwidth management unit runs once every microsecond and that s(t) is incremented once every cycle. Further assume that a cell corresponding to a particular VC has been admitted and that the VC is to be moved from the eligibility queue to the departure queue in one hundred microseconds. Accordingly, the value of s_i will be set to one hundred at the step 160. At the step 166, the present value of s(t) is added to one hundred in order to provide a value of s_i that is tested against the real time clock so as to determine when to move the VC from the eligibility queue to the departure queue.
Also at the step 166, VC_i is placed in the eligibility queue. For purposes of the state diagram 140, the eligibility queue is referred to as the "A" queue. The VC's in the eligibility queue are sorted by the value of s for each of the VC's and by the value of the quality of service for each of the VC's. In other words, the VC with the lowest value for s is at the head of the queue, the VC with the second lowest value for s is second in the queue, etc. For VC's that have the same value of s, the VC with the highest QOS is first, the VC with the second highest QOS is second, and so forth. If two or more VC's have an identical value for both s and QOS, then the VC's are sorted in the queue in a last in first out order. The significance of the order of the VC's in the eligibility queue is discussed in detail hereinafter.
Following the step 166, VC_i enters the eligibility state at a step 168. Note that scd_i is set to one at the step 168. From the eligibility state 168, it is possible for VC_i to transition to a step 170, which sets e_i to either one or zero according to an enable or disable command provided by the node attached to the bandwidth management unit. Note that after setting e_i at the step 170, VC_i returns to the eligibility state 168 without performing any further tests on the value of e_i. The value of e_i is tested at a later point in the processing (discussed below), at which time VC_i is returned to the unscheduled state if e_i does not equal one. The bandwidth management unit can also receive a request to update VC_i, in which case control transfers from the eligibility state 168 to a step 172 where the VC_i parameters are updated. Following the step 172, VC_i returns to the eligibility state 168.
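The sorting of the eligibility queue described above (lowest s first, then highest QOS, then last in first out among exact ties) can be illustrated with a small Python sketch; the arrival counter is an implementation convenience, not something drawn from the figures.

    import bisect
    import itertools

    _arrivals = itertools.count()

    def eligibility_entry(vc_id, s, qos):
        """Build a sortable queue entry: lowest s first, then highest QOS, then LIFO."""
        return (s, -qos, -next(_arrivals), vc_id)

    a_queue = []
    bisect.insort(a_queue, eligibility_entry("VC1", 100.0, 2))
    bisect.insort(a_queue, eligibility_entry("VC2", 100.0, 3))   # same s, higher QOS
    assert a_queue[0][3] == "VC2"                                # VC2 is at the head
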
It is also possible for the bandwidth management unit to receive a command, while in the eligibility state 168, to update s_i for VC_i. When this occurs, VC_i transitions to a step 174 where the value of s_i is updated. Following the step 174 is a test step 176 which determines if the update of s_i has been accepted. If not, then control passes from the step 176 back to the eligibility state 168. Otherwise, the system transitions from the step 176 to a step 180 where the priority variable, P_i, is updated. Following the step 180, VC_i returns to the eligibility state 168.
Whenever VC_i is in the eligibility state 168 and a new cell for VC_i arrives, control transfers to a step 181 where the priority for VC_i, P_i, is updated. Calculating and updating P_i is described in more detail hereinafter.
Every cycle that the bandwidth management unit runs, the first N VC's at the head of the queue are tested to determine if s(t) is greater than or equal to s_i. N is a predetermined constant value, such as eight. When the value of s(t) is greater than or equal to the value of s_i, the system transitions from the eligibility state 168 to a step 182 where VC_i is removed from the eligibility queue (i.e., the "A" queue). Following the step 182 is a test step 184 which determines if the backlog for VC_i is at least one (i.e., l_i is greater than or equal to one) and if VC_i is enabled (i.e., e_i equals one). Note that it is possible for VC_i to have become disabled while in the eligibility state 168 if the bandwidth management unit received a signal to set e_i to zero at the step 170, discussed above.
If at die test step 184 it is determined diat die backlog does not contain at least one cell or diat VC, is disabled, tiien control passes from the test step 184 to a step 186 where scd, is set to zero. Following die step 186, VC, returns to die unscheduled state 144, and waits for either a cell admission or a signal to set e, to one, as described above in connection widi die description of die steps 146, 150.
If at die step 184 it is determined diat die backlog of cells is at least one and diat VC, is enabled, tiien control passes from the step 184 to a step 190 where VC, is placed in die departure queue, designated as die "B" queue on die state diagram 140. Note mat the step 190 also follows the step 164, discussed above, when it is determined at die test step 164 diat s, equals zero.
The VC's in die departure queue are sorted primarily in order of the quality of service (QOS) for die VC's. That is, VC's widi higher requested quality of service are ahead of VC's witii lower requested qualities of service. VC's in the queue having the same quality of service are sorted in order of priority, P. VC's having botii the same quality of service and die same value for priority are sorted in a first in first out basis. Following die step 190, VC, enters a departure state 192.
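The "B" queue ordering can be sketched the same way; again the field names and the FIFO counter are assumptions made for illustration.

```python
from types import SimpleNamespace

def departure_key(vc):
    # highest QOS first; equal QOS -> highest priority P first; full ties -> first in, first out
    return (-vc.qos, -vc.priority, vc.seq)

b_queue = sorted(
    [SimpleNamespace(vc_id=7, qos=3, priority=4.0, seq=0),
     SimpleNamespace(vc_id=9, qos=3, priority=6.5, seq=1)],
    key=departure_key,
)   # b_queue[0] (VC 9) supplies the next head-of-line cell to transmit
```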
While in the departure state 192, VCi can receive signals to set ei and to update the parameters of VCi. Setting ei is performed at a step 194 and updating the parameters for VCi is performed at a step 196. The steps 194, 196 are analogous to the steps 170, 172 discussed above in connection with the eligibility state 168. Note that following either the step 194 or the step 196, VCi returns to the departure state 192.
VCi reaches the head of the departure queue when VCi has the highest QOS and highest priority of all of the VC's in the departure queue. In that case, VCi transitions from the departure state 192 to a step 198 where VCi is removed from the departure queue. Following the step 198 is a step 200 where the head of the line cell (i.e., the oldest cell) corresponding to VCi is output by the bandwidth management system. Also at the step 200, the value of the backlog, li, is decremented by one and the credit variables, ci1 and ci2, are also decremented. Following the step 200 is a step 202 where the credit variables, ci1 and ci2, are updated. Updating the credit variables is discussed in more detail hereinafter.
Following the step 202 is a step 204 where the value of si is reset. Resetting the value of si is discussed in more detail hereinafter. Following the step 204 is a step 206 where the variables ci1, ci2, and Pi are computed in a manner discussed in more detail hereinafter.
Following the step 206 is a test step 208 where the value of si is examined. If the value of si is not zero, then control transfers from the test step 208 to the step 166, discussed above, where the value of s(t) is added to si and where VCi is added to the eligibility queue.
If at the step 208 the value of si is determined to be zero, then control passes from the step 208 to the test step 184, discussed above, where the backlog variable for VCi, li, and the eligibility variable, ei, are examined. If the backlog of VCi is at least one and if VCi is eligible, then control passes from the step 184 to the step 190, discussed above, where VCi is placed in the departure queue. Otherwise, if the backlog number of cells for VCi is not at least one or if VCi is not eligible, then control passes from the step 184 to the step 186, discussed above, where the value of scdi is set to zero. Following the step 186, VCi enters the unscheduled state 144. Note that, prior to transitioning to the step 184, it is possible to first run the cell admission check to determine if a cell has arrived since the last time the cell admission procedure has been run.
Referring to FIG. 6, a flow diagram 220 illustrates in detail the step 142 of FIG. 5 for setting up VCi. At a first step 222, ri1 and ri2 are set. The variables ri1 and ri2 represent data transmission rates. The variable ri1 can represent the average data transmission rate for the virtual data connection. The variable ri2 can represent the burst transmission rate for the data connection. The burst transmission rate is the maximum bandwidth allowed for sending the data over the virtual connection, VCi, by the requesting node. The system uses both the average rate and the burst rate in a manner described in more detail hereinafter.
Following the step 222 is a step 223 where the weights, wi1-wi5, are set. The weights are used in the calculation of the priority, Pi, in a manner which is described in more detail hereinafter.
Following the step 223 is a step 224 where the quality of service, qosi, is set. The quality of service is provided by the sending node to the bandwidth management unit and depends upon the nature of the data being transmitted on the virtual connection, VCi. The different possible values for qosi are discussed above.
Following the step 224 is a step 225 where ci1 and ci2 are set to zero and ci1m and ci2m are set. The variables ci1 and ci2 represent bandwidth credits that are allocated to VCi. The credit variable ci1 corresponds to the rate variable ri1 and the credit variable ci2 corresponds to the rate variable ri2. The variable ci1m is the maximum value that the variable ci1 is allowed to reach. The variable ci2m is the maximum value that the variable ci2 is allowed to reach.
The credit variables represent the number of cell slots allocated to VCi during the operation of the bandwidth management unit. The credit variables are incremented whenever an amount of time has passed corresponding to the associated rate variable. For example, if ri1 equals a rate corresponding to one hundred cells per second, then the credit variable ci1 would be incremented once every one hundredth of a second. The credit mechanism controls the amount of bandwidth used by each of the VC's. Also, note that since ri1 and ri2 can be different, the credit variables ci1 and ci2 can be incremented at different rates. Updating the credit variables is described in more detail hereinafter.
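As a small numeric illustration of the credit mechanism (the rate values are hypothetical):

```python
def credit_interval(rate_cells_per_second: float) -> float:
    """Seconds between successive one-credit increments for a given rate variable."""
    return 1.0 / rate_cells_per_second

print(credit_interval(100.0))    # 0.01   -> ci1 gains one credit every hundredth of a second
print(credit_interval(2500.0))   # 0.0004 -> a faster burst rate accrues ci2 credits more often
```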
Following the step 225 is a step 226 where sim1 and sim2 are set. The variables sim1 and sim2 represent the maximum values that si is allowed to reach. Following the step 226 is a step 227 where li, the backlog of cells for VCi, is set to zero, and the variables li,m, lm,l, and lm,h, which relate to li in a manner discussed in more detail hereinafter, are all set to predetermined constant values that vary according to the traffic parameters provided when VCi is initialized. Also at the step 227, the variables admi, ni, and nm, which are discussed in more detail hereinafter, are also set.
Following the step 227 is a step 228 where the variable ei, which determines whether VCi is enabled, is set to one, thus enabling VCi, at least initially. Following the step 228 is a step 229 where the variable scdi is set to zero, thus indicating that VCi is initially in the unscheduled state.
Following the step 229 is a step 230 where the variable ti,n is set to zero. The variable ti,n indicates the value of the system time clock when the credit variables, ci1 and ci2, were most recently updated.
Following the step 230 is a step 231 where the priority variable, Pi, is set to zero. Following the step 231 is a step 232 where the burst bit indicator, bi, is initially set to zero.
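Gathering the values set in FIG. 6 into one structure may help keep track of them. The sketch below is only a paraphrase of the per-VC variables; the field names and the default caps are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class VCState:
    """Per-VC parameters as initialized in FIG. 6 (names are illustrative)."""
    r1: float                 # average rate ri1
    r2: float                 # burst rate ri2
    w: tuple                  # weights wi1..wi5 used in the priority computation
    qos: int                  # requested quality of service
    s_m1: float               # interval value sim1 associated with r1
    s_m2: float               # interval value sim2 associated with r2
    c1: float = 0.0           # credit ci1
    c2: float = 0.0           # credit ci2
    c1_max: float = 10.0      # assumed cap ci1m
    c2_max: float = 10.0      # assumed cap ci2m
    backlog: int = 0          # li, cells queued for this VC
    enabled: bool = True      # ei = 1
    scheduled: bool = False   # scdi = 0
    t_last_credit: float = 0.0  # ti,n
    priority: float = 0.0     # Pi
    burst: bool = False       # burst bit bi
```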
Referring to FIG. 7, a diagram 240 illustrates in detail the cell admission step 146 of FIG. 5. The cell admission procedure shown in the diagram 240 is deemed "block mode" admission. Block mode admission involves potentially dropping a block of contiguous cells for a single VC in order to concentrate cell loss in that VC. That is, whenever a cell arrives for a VC having a full queue, the block mode admission algorithm drops that cell and may also drop a predetermined number of cells that follow the first dropped cell. The number of dropped cells is one of the initial traffic parameters provided when the VC is first established.
Processing for acceptance of cells begins at a current state 241 and transitions to a test step 242 when a new cell for VCi is provided to the bandwidth management unit by the data node. If ni equals zero at the test step 242, control transfers to another test step 243. If it is determined at the test step 243 that the number of backlogged cells for VCi, li, is not less than the maximum allowable backlog of cells, li,m, then control transfers from the step 243 to a step 244 to determine if CLP equals zero. CLP is a variable indicating the cell loss priority of a cell. If a cell has a CLP of zero, that indicates that the cell is relatively less loss tolerant (i.e., the cell should not be discarded if possible). A cell having a CLP other than zero is relatively more loss tolerant (i.e., could be discarded if necessary).
If CLP is not zero at the step 244, then control transfers to a step 245 where the newly arrived cell is discarded. Following the step 245, control transfers back to the current state 241. If it is determined at the test step 243 that the number of backlogged cells, li, is less than the maximum allowable number of backlogged cells, li,m, then control transfers from the step 243 to a step 246 to test if the value of CLP is zero.
If CLP equals zero at the step 246, then control transfers from the step 246 to a step 247 where the new cell is added to the cell buffer at the end of a queue of cells for VCi. Following the step 247 is a step 248 where the variable representing the backlogged cells for VCi, li, is incremented. Following the step 248, control transfers back to the current state 241 of VCi.
If at the step 246 it is determined that CLP for the cell does not equal zero, then control transfers from the step 246 to a step 250 where admi is tested. The variable admi is used to indicate which limit variable, lm,l or lm,h, will be used. If at the test step 250 it is determined that admi equals one, then control transfers from the step 250 to a test step 251 to determine if the number of backlogged cells for VCi, li, is less than the first backlog cell limit for VCi, lm,h. If so, then control transfers from the step 251 to a step 252 where the new cell is added to the queue for VCi.
Following the step 252 is a step 253 where the variable indicating the number of backlogged cells, li, is incremented. Following the step 253, control transfers back to the current state 241.
If it is determined at the step 250 that admi does not equal one, then control transfers from the step 250 to a step 255 where the number of backlogged cells for VCi is compared with the second limit for the number of backlogged cells for VCi, lm,l. If at the test step 255 it is determined that li is not less than lm,l, then control transfers from the step 255 to a step 256 where the newly arrived cell is discarded. Following the step 256, control transfers back to the current state 241.
If at the test step 255 it is determined that the number of backlogged cells for VCi, li, is less than the limit variable lm,l, then control transfers from the step 255 to a step 258 where admi is set to one. Following the step 258 is the step 252 where the new cell is added to the queue of cells for VCi. Following the step 252, the variable representing the number of backlogged cells for VCi, li, is incremented at the step 253. Following the step 253, control transfers back to the current state 241.
If at the test step 251 it is determined that the number of backlogged cells for VCi is not less than the limit variable lm,h, then control transfers from the step 251 to a step 260 where admi is set to zero. Following the step 260 is the step 245 where the newly arrived cell is discarded. Following the step 245, control transfers back to the current state 241. If at the test step 242 it is determined that ni does not equal zero, or if CLP does equal zero at the step 244, then control transfers to a step 262, where the newly arrived cell is discarded. Following the step 262 is a step 263 where ni is incremented. Following the step 263 is a test step 264, where ni is compared to the limit for ni, nm. If at the step 264 ni is less than or equal to nm, control transfers from the step 264 back to the current state 241. Otherwise, control transfers to a step 265, where ni is set to zero. Following the step 265, control transfers back to the current state 241.
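One possible reading of the FIG. 7 flow as code is sketched below. The vc object and its field names (n, n_max, backlog, l_max, l_high, l_low, adm, queue) are assumptions standing in for ni, nm, li, li,m, lm,h, lm,l, admi, and the per-VC cell queue; this is a sketch of the flow as described, not the disclosed hardware.

```python
def admit_cell(vc, cell, clp: int) -> bool:
    """Returns True if the cell is queued for vc, False if it is discarded."""
    def block_drop() -> bool:          # steps 262-263: discard and count toward the drop block
        vc.n += 1
        if vc.n > vc.n_max:            # steps 264-265: the block is complete, reset the counter
            vc.n = 0
        return False

    if vc.n != 0:                      # step 242: a drop block is already in progress for this VC
        return block_drop()
    if vc.backlog >= vc.l_max:         # step 243: the per-VC queue is full
        if clp == 0:                   # step 244: a loss-sensitive cell must be dropped, start a block
            return block_drop()
        return False                   # step 245: discard only this loss-tolerant cell
    if clp == 0:                       # steps 246-248: loss-sensitive cell and room is available
        vc.queue.append(cell); vc.backlog += 1
        return True
    if vc.adm:                         # steps 250-251: loss-tolerant cell judged against the high limit
        if vc.backlog < vc.l_high:
            vc.queue.append(cell); vc.backlog += 1     # steps 252-253
            return True
        vc.adm = False                 # step 260, then step 245
        return False
    if vc.backlog < vc.l_low:          # steps 255, 258: switch back to the high limit
        vc.adm = True
        vc.queue.append(cell); vc.backlog += 1
        return True
    return False                       # step 256
```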
Referring to FIG. 8, a flow diagram 280 illustrates the step 160 of FIG. 5 where the value of si is initialized. Flow begins when VCi is in the unscheduled state 282. At a first test step 284, the values of ci1 and ci2 are examined. The variables ci1 and ci2 represent the credits for VCi. If the values of both of the credit variables, ci1 and ci2, are greater than or equal to one, then control passes from the test step 284 to a step 286 where the value of si is set to zero. Following the step 286, VCi enters a next state 288. Note that, if si is set to zero at the step 286, then the next state 288 is the departure state 192, as shown in FIG. 5. This occurs because at the test step 164 of FIG. 5, the value of si is checked to see if si equals zero. If so, then VCi goes from the unscheduled state 144 directly to the departure state 192, as shown in FIG. 5 and as described above.
If at the test step 284 it is determined that either ci1 or ci2 is less than one, then control transfers from the step 284 to a step 290 where si is set to sim1. The variable sim1 represents the amount of time between cell transmissions for cells transmitted at the rate ri1.
Following the step 290 is a step 292 where ΔC is computed. The variable ΔC represents the amount of additional credit that will be added to ci2 after an amount of time corresponding to si has passed. The value of ΔC is computed by multiplying si by ri2.
Following the step 292 is a test step 294 which determines if the existing value of the credit variable, ci2, plus the value of ΔC, is less than one. Note that the variable ci2 is not updated at this step. If the sum of ci2 and ΔC is not less than one, then control transfers from the step 294 to the next state 288. Note that, in this case, the next state will be the eligibility state 168 shown in FIG. 5 because si will not equal zero at the step 164 of FIG. 5.
If the sum of ci2 and ΔC is less than one at the step 294, then control transfers from the step 294 to a step 296 where the value of si is set to sim2. The variable sim2 represents the amount of time between cell transmissions for cells transmitted at the rate ri2. Following the step 296, control transfers to the next state 288, discussed above.
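A compact sketch of this initialization, under the same assumed per-VC field names used earlier:

```python
def initialize_s(vc) -> float:
    """Sketch of the FIG. 8 flow; returns the initial value of si."""
    if vc.c1 >= 1.0 and vc.c2 >= 1.0:   # steps 284, 286: credit available on both rates
        return 0.0                       # si = 0 routes the VC straight to the departure queue
    s = vc.s_m1                          # step 290: interval associated with the average rate ri1
    delta_c = s * vc.r2                  # step 292: burst credit that will accrue while waiting s
    if vc.c2 + delta_c < 1.0:            # step 294: still short of one burst credit after s elapses
        s = vc.s_m2                      # step 296: use the burst-rate interval instead
    return s
```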
Referring to FIG. 9, a flow diagram 300 illustrates the reset si value step 204 shown in FIG. 5. At a first test step 302, the burst bit variable for VCi, bi, is tested. The burst bit variable, bi, indicates that VCi should transmit in burst mode. Burst mode is used for a virtual connection where the cells of the virtual connection should be transmitted as close together as possible. That is, the data node sets the virtual connection for burst mode for data traffic that should be transmitted as close together as possible. For example, although an e-mail message may have a low priority, it may be advantageous to transmit the entire e-mail message once the first cell of the e-mail message has been sent.
If the burst bit for VCi, bi, is determined to be set at the step 302, control passes from the test step 302 to a test step 304 where the credit variables, ci1 and ci2, and the backlog variable, li, for VCi are tested. If the credit variables and the backlog are all greater than or equal to one at the step 304, then control passes from the step 304 to a step 306 where si is set to zero. Following the step 306, VCi enters the next state 308. Note that, since si is set to zero at the step 306, the next state for VCi will be the departure state 192 shown in FIG. 5.
If the burst bit indicator, bi, is set at the step 302 but VCi does not have enough bandwidth credits or the backlog li is zero at the step 304, then control passes from the step 304 to a step 310 where si is set equal to sim1. Note that the step 310 is also reached if it is determined at the test step 302 that the burst bit indicator, bi, does not equal one (i.e., VCi is not transmitting in burst mode).
Following the step 310 is a step 312 where ΔC is computed in a manner similar to that illustrated in connection with the step 292 of FIG. 8. Following the step 312 is a step 314 where it is determined if the sum of the credit variable ci2 and ΔC is less than one. If not, then control passes from the step 314 to the next state 308. The test step 314 determines if the credit variable ci2 will be at least one after an amount of time corresponding to si has passed (i.e., after the si counter has timed out). If not, then ci2 plus ΔC will be less than one when si times out and control transfers from the step 314 to a step 316 where si is set equal to sim2. Following the step 316, VCi enters the next state 308.
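The reset differs from the FIG. 8 initialization only in the burst-mode shortcut, as the following sketch (same assumed fields) suggests:

```python
def reset_s(vc) -> float:
    """Sketch of the FIG. 9 reset of si after a cell departs."""
    if vc.burst and vc.c1 >= 1.0 and vc.c2 >= 1.0 and vc.backlog >= 1:
        return 0.0                       # steps 302-306: burst mode keeps the VC in the "B" queue
    s = vc.s_m1                          # step 310
    delta_c = s * vc.r2                  # step 312
    if vc.c2 + delta_c < 1.0:            # step 314
        s = vc.s_m2                      # step 316
    return s
```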
Referring to FIG. 10, a flow diagram 320 illustrates the steps for updating ci1 and ci2, the credit variables for VCi. Updating ci1 and ci2 occurs at the step 158 and the step 202 shown in FIG. 5, discussed above. Note that the credit variables are set according to the current system time but that, for the step 202 of FIG. 5, the time used is the beginning of the next cell time slot in order to account for the amount of time it takes to transmit a cell at the steps preceding the step 202.
Processing begins at a current state 322 which is followed by a step 324 where a value for ci1 is computed. At the step 324, ci1 is increased by an amount equal to the product of ri1 (one of the rate variables for VCi) and the difference between the current system time, t, and the time at which the credit variables, ci1 and ci2, were last updated, ti,n. Following the step 324 is a test step 326 to determine if ci1 is greater than ci1m. The variable ci1m is the maximum value that the credit variable ci1 is allowed to equal and is set up at the time VCi is initialized. If at the test step 326 ci1 is greater than ci1m, then control transfers from the step 326 to a step 328 where ci1 is set equal to ci1m. The steps 326, 328 effectively cap ci1 at its maximum allowed value, ci1m. Following the step 328, or the step 326 if ci1 is not greater than ci1m, is a step 330 where a value for ci2 is computed in a manner similar to the computation of ci1 at the step 324. Following the step 330 are steps 332, 334 which cap ci2 at ci2m in a manner similar to the steps 326, 328 where ci1 is capped at ci1m.
Following the step 334, or the step 332 if ci2 is not greater than ci2m, is a step 336 where ti,n is set equal to t, the current system time. The variable ti,n represents the value of the system time, t, when the credit variables, ci1 and ci2, were last updated. Following the step 336 is a step 338 where VCi enters the next appropriate state.
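A minimal sketch of this clamped, rate-proportional update, assuming the per-VC fields introduced earlier:

```python
def update_credits(vc, t_now: float) -> None:
    """Sketch of the FIG. 10 credit update."""
    elapsed = t_now - vc.t_last_credit                 # time since the last update, t - ti,n
    vc.c1 = min(vc.c1 + vc.r1 * elapsed, vc.c1_max)    # steps 324-328: accrue at ri1, cap at ci1m
    vc.c2 = min(vc.c2 + vc.r2 * elapsed, vc.c2_max)    # steps 330-334: accrue at ri2, cap at ci2m
    vc.t_last_credit = t_now                           # step 336
```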
Referring to FIG. 11, a flow diagram 340 illustrates the step 174 of FIG. 5 where si is updated by ΔS. The step 174 of FIG. 5 is executed whenever the data node associated with the bandwidth management unit requests changing the delay of sending data via VCi by changing the value of si for VCi.
Processing begins with VCi in an eligibility state 342. When the data source node provides VCi with a command to update si, control transfers from the state 342 to a test step 344 to determine if the value of ΔS, provided by the associated data node, equals zero. If so, control transfers from the step 344 back to the eligibility state 342 and no processing occurs. Otherwise, if ΔS does not equal zero, then control transfers from the test step 344 to a step 346 where VCi is located in the eligibility queue. Note that, as shown in FIG. 5, the step 174 where si is updated only occurs when VCi is in the eligibility state.
Following the step 346 is a test step 348 where it is determined if ΔS is greater than zero or less than zero. If ΔS is greater than zero, then control transfers from the step 348 to a step 350 where ΔS is added to si. Following the step 350 is a step 352 where VCi is repositioned in the eligibility queue according to the new value of si. Note that, as discussed above in connection with FIG. 5, the position of VCi in the eligibility queue is a function of the values of si and qosi.
If at the test step 348 it is determined that ΔS is less than zero, then control transfers from the step 348 to a step 354 where the credit variables, ci1 and ci2, are examined. If at the test step 354 it is determined that either ci1 or ci2 is less than one, then control transfers from the step 354 back to the eligibility state 342 and no update of si is performed. The steps 348, 354 indicate that the value of si is not decreased if either of the credit variables is less than one.
If at the step 354 it is determined that the credit variables, ci1 and ci2, are both greater than or equal to one, then control transfers from the step 354 to a step 356 where the value of si is incremented by the amount ΔS. Note that, in order to reach the step 356, ΔS must be less than zero, so that at the step 356 the value of si is in fact decreased.
Following the step 356 is a step 358 where the value of si is compared to s(t). If the value of si has been decreased at the step 356 by an amount that would make si less than s(t), then control transfers from the step 358 to a step 360 where si is set equal to s(t). The steps 358, 360 serve to set the value of si to the greater of either si or s(t).
Following the step 360, or following the step 358 if si is greater than or equal to s(t), control transfers to a step 362 where VCi is repositioned in the eligibility queue. As discussed above, the position of VCi in the eligibility queue is a function of the values of si and qosi. Following the step 362, VCi returns to the eligibility state 342.
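The ΔS adjustment can be summarized as follows; reposition_in_eligibility_queue is a hypothetical helper standing in for the resorting at the steps 352 and 362, and vc.s stands in for si.

```python
def update_s(vc, delta_s: float, s_clock: float) -> None:
    """Sketch of the FIG. 11 adjustment of si by ΔS while the VC is in the eligibility queue.
    s_clock stands in for s(t)."""
    if delta_s == 0:                      # step 344: nothing to do
        return
    if delta_s > 0:                       # steps 350-352: lengthen the delay unconditionally
        vc.s += delta_s
    else:
        if vc.c1 < 1.0 or vc.c2 < 1.0:    # step 354: no decrease without credit on both rates
            return
        vc.s += delta_s                   # step 356: delta_s < 0, so si decreases
        vc.s = max(vc.s, s_clock)         # steps 358-360: never schedule earlier than s(t)
    reposition_in_eligibility_queue(vc)   # steps 352, 362 (hypothetical helper)
```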
Referring to FIG. 12, a flow diagram 370 illustrates the compute steps 162, 206 of FIG. 5 where the values of ci1, ci2, and Pi are computed. Processing begins at a current state 372 which is followed by a step 374 where a value for ci1 is computed. The value computed at the step 374 is the future value that the credit variable ci1 will have when si times out, that is, the value of ci1 when VCi is removed from the eligibility queue. Since, as discussed in more detail hereinafter, Pi is a function of the credit variables, ci1 and ci2, calculating the future values of ci1 and ci2 is useful for anticipating the value of Pi when VCi is removed from the eligibility queue and placed in the departure queue.
The value of the priority variable, Pi, is determined using the following equation:
Pi = wi1*ci1 + wi2*ci2 + wi3*ri1 + wi4*ri2 + wi5*li
Pi is a function of the credit variables, the rate variables, and the number of cells backlogged in the queue. The node requesting initialization of VCi specifies the weights, wi1-wi5, for the VC. Accordingly, it is possible for the requesting node to weight the terms differently, depending upon the application. For example, if the data is relatively bursty, then perhaps wi2 and wi4 (the weights for the terms corresponding to the burst data rate) are made larger than wi1 and wi3 (the weights for the terms corresponding to the average data rate). Similarly, if it is desirable to minimize the backlog for the data, then wi5 (the weight corresponding to the number of backlogged cells) may be made relatively large. It may be desirable for some applications to assign zero to some of the weights.
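For illustration, the weighted sum and two possible weightings (all numeric values below are hypothetical):

```python
def priority(w: tuple, c1: float, c2: float, r1: float, r2: float, backlog: int) -> float:
    """Pi = w1*c1 + w2*c2 + w3*r1 + w4*r2 + w5*backlog, with weights chosen by the requesting node."""
    w1, w2, w3, w4, w5 = w
    return w1 * c1 + w2 * c2 + w3 * r1 + w4 * r2 + w5 * backlog

# A weighting that favors the burst-rate terms for bursty traffic:
bursty = priority((0.0, 1.0, 0.0, 1.0, 0.1), c1=2.0, c2=4.0, r1=100.0, r2=400.0, backlog=6)
# A weighting that emphasizes draining the backlog:
low_backlog = priority((0.5, 0.5, 0.0, 0.0, 5.0), c1=2.0, c2=4.0, r1=100.0, r2=400.0, backlog=6)
```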
At the step 374, the future value of ci1 is calculated by adding the current value of the credit variable ci1 to the product of ri1 and si. Note that, if the credit variables were updated for each iteration while VCi was in the eligibility queue, then this future value would equal ci1 when si equalled zero. Note that ci1 itself is not updated at the step 374.
Following the step 374 is a test step 376 where the future value of ci1 is compared to ci1m. If it is greater than ci1m (the maximum value that ci1 is allowed to take on), then control transfers from the test step 376 to a step 378 where the future value is set equal to ci1m. The steps 376, 378 serve to cap the future value at ci1m. Note that, for the steps 376, 378, ci1 itself is not updated.
Following the step 378, or following the step 376 if the future value of ci1 is not greater than ci1m, are steps 380-382, which compute a future value for ci2 in a manner similar to the computation for ci1, described above.
Following the step 382, or following the step 381 if the future value of ci2 is not greater than ci2m, is a step 384 where the weights wi1, wi2, wi3, wi4, and wi5 are fetched. As discussed above, the weights are used to calculate the priority, Pi.
Following the step 384 is a step 386 where the priority for VCi, Pi, is calculated in the manner shown on the flow diagram 370. Note that the weights are specified at the time that VCi is initialized and that the importance of each of the terms used in the calculation of Pi can be emphasized or deemphasized by setting the values of the weights. The values of one or more of the weights can be set to zero, thus causing a term not to be used in the calculation of the priority. Note that Pi is updated, Pi = Pi + wi5, after each VCi cell admission while VCi is in the eligibility state, as discussed above. This allows for a piecewise computation of Pi that decreases the amount of computation required to compute all of the priorities for all of the VC's within one time slot when the VC's are transferred to the departure queue (the "B" queue).
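The piecewise update amounts to adding wi5 once per admitted cell, since only the backlog term of the equation changes; a sketch under the earlier assumed fields:

```python
def priority_on_admission(vc) -> None:
    """Incremental (piecewise) priority update applied when a new cell is admitted for this VC.
    Equivalent to recomputing Pi after the backlog grows by one."""
    vc.priority += vc.w[4]   # wi5, the weight on the backlog term (tuple index is an assumption)
```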
Referring to FIG. 13, a flow diagram 390 illustrates in detail the step 180 of FIG. 5 where the priority variable, Pi, is updated. Note that Pi is updated only in response to the value of si being updated at the step 174. Otherwise, the value of Pi remains constant while VCi is in the eligibility state. That is, since Pi is a function of ci1, ci2, ri1, ri2, and li, and since ci1 and ci2 are a function of si, Pi only changes when and if si changes at the step 174 shown in FIG. 5.
Flow begins at a current state 392 which is followed by a step 394 where si is accessed. Following the step 394 are steps 396-398 where ci1 is computed in a manner similar to that illustrated in FIG. 12. Note, however, that at the step 396, s(t) is subtracted from si since, at this stage of the processing, s(t) has already been added to si at the step 166 shown in FIG. 5, and since ci1 is being predicted for when si times out.
Following the step 398 are steps 400-402 where ci2 is computed in a manner similar to the computation of ci1. Following the step 402, or following the step 401 if ci2 is not greater than ci2m, are the steps 404, 405 where the weights are fetched and Pi is computed in a manner similar to that illustrated in connection with FIG. 12, described above.
Referring to FIG. 14, a schematic diagram 420 illustrates in detail one possible hardware implementation for the bandwidth management unit. An input 422 connects the bandwidth management unit 420 to the source of the data cells.
The input 422 is connected to a cell pool 424 which is implemented using memory. The cell pool 424 stores all of the cells waiting for departure from the bandwidth management unit. In the embodiment illustrated herein, the size of the cell pool 424 equals the number of VC's that the system can accommodate multiplied by the maximum allowable backlog per VC. For example, for a bandwidth management unit designed to handle 4,096 VC's with a maximum backlog length of 30 cells, the size of the cell pool is 4096 x 30 cells.
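Working out the example configuration (assuming standard 53-byte ATM cells):

```python
# Sizing the cell pool for the example configuration
num_vcs, max_backlog, cell_bytes = 4096, 30, 53
slots = num_vcs * max_backlog      # 122,880 cell slots
print(slots, slots * cell_bytes)   # 122880 slots, 6,512,640 bytes (about 6.2 MiB) of cell memory
```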
A cell pool manager 426 handles the cells within the bandwidth management unit 420 in a manner described above in connection with the state diagrams of FIG.'s 5-13.
A parameter buffer 428 is used to handle all of the variables associated with each of the VC's and is implemented using memory. The size of the parameter buffer 428 is the number of variables per VC multiplied by the maximum number of VC's handled by the bandwidth management unit 420.
A parameter update unit 430 is connected to the parameter buffer 428 and updates the variables for each of the VC's in a manner described above in connection with the state diagrams of FIG.'s 5-13.
A cell clock 432 computes the value of s(t) which, as described above, is used for determining when a VC transitions from the eligibility queue to the departure queue. The cell clock 432 also provides the system time, t.
An eligibility queue 431 is implemented using memory and stores the VC's that are in the eligibility state. As discussed above in connection with FIG.'s 5-13, VC's in the eligibility queue are sorted in order of the values of s and qos for each VC. A comparator 434 compares the value of s(t) from the cell clock 432 with the value of s for each of the VC's and determines when a VC should transition from the eligibility state to the departure state. Note that, as discussed above in connection with FIG.'s 5-13, VCi transitions from the eligibility state to the departure state when s(t) ≥ si.
A departure queue 436 is implemented using memory and stores the VC's that are awaiting departure from the bandwidth management unit 420. As discussed above in connection with FIG.'s 5-13, the departure queue 436 contains VC's that are sorted in order of the quality of service (qos) for each VC and the priority that is computed for each VC. A select sequencer unit 433 selects, for each VC, which of the two queues 431, 436 the VC is placed in, according to the algorithm described above and shown in FIG.'s 5-13.
A priority computation unit 438 computes the priority for each of the VC's during each iteration. Note that, as discussed above in connection with FIG.'s 5-13, once a VC has entered the eligibility queue, the priority can be updated by simply summing the value of the priority variable and the weight w5 after each cell arrival is admitted.
The device 420 shown in FIG. 14 can be implemented in a straightforward manner using conventional VLSI architecture. The cell pool 424, eligibility queue 431, departure queue 436, and parameter buffer 428 can be implemented as memories. The cell pool manager 426, the parameter update unit 430, the select sequencer 433, the priority computation unit 438, and the comparator 434 can be implemented using VLSI logic to provide the functionality described above in connection with FIG.'s 5-13. That is, one of ordinary skill in the art can implement the bandwidth management unit 420 by using the state diagrams and current custom VLSI design techniques. Note also that, based on the description contained herein, one of ordinary skill in the art could implement the bandwidth management unit 420 in a variety of other manners, including entirely as software on a very fast processor or as a conventional combination of hardware and software.
Although the invention has been shown and described with respect to exemplary embodiments thereof, it should be understood by those skilled in the art that various changes, omissions and additions may be made therein and thereto, without departing from the spirit and the scope of the invention.

Claims

WHAT IS CLAIMED IS:
1. A bandwidth management system, for managing a plurality of virtual data connections within a communications network, comprising:
an input for receiving data cells, wherein each cell is associated with a particular one of the virtual connections; a cell pool, coupled to said input, for storing said cells; a first queue containing particular ones of the virtual connections, wherein a relative position of a virtual connection in said first queue is determined by an eligibility value that varies according to an anticipated data rate associated with the particular virtual connection and according to an amount of time that the particular virtual connection has been in said first queue; a second queue, coupled to said first queue and containing particular other ones of the virtual connections, wherein a relative position of a virtual connection in said second queue varies according to a predetermined quality of service that is assigned to each of the virtual connections; and an output, coupled to said second queue and said cell pool, for transmitting a cell from said cell pool corresponding to a virtual connection at the front of said second queue.
2. A bandwidth management system, according to claim 1, wherein virtual connections having equal eligibility values are ordered in said first queue according to said predetermined quality of service.
3. A bandwidth management system, according to claim 1, wherein virtual connections having equal quality of service values are ordered in said second queue according to a priority assigned to each of said virtual connections.
4. A bandwidth management system, according to claim 3, wherein said priority varies according to one or more anticipated data rates of each of the virtual connections.
5. A bandwidth management system, according to claim 3, wherein said priority varies according to one or more credit variables assigned to each of the virtual connections, said credit variables being indicative of allocated time slots provided to each of the virtual connections.
6. A bandwidth management system, according to claim 3, wherein said priority varies according to a backlog of cells for each virtual connection that are awaiting transmission.
7. A bandwidth management system, according to claim 3, wherein said priority varies according to one or more anticipated data rates of each of the virtual connections, one or more credit variables assigned to each of the virtual connections, said credit variables being indicative of allocated time slots provided to each of the virtual connections, and to a backlog of cells for each virtual connection that are awaiting transmission.
8. A bandwidth management system, according to claim 7, wherein said data rates, said credit variables, and said backlog are weighted prior to determining said priority.
9. A bandwidth management system, according to claim 1, wherein said system is one of: a pacing unit and an enforcement unit, wherein said pacing unit receives cells from a data source node and provides cells to a communication link, and wherein an enforcement unit receives data from a communication link and provides data to a data sink node.
10. A bandwidth management system, according to claim 1, further comprising: means for establishing a virtual connection by specifying initial traffic parameters; and means for dropping cells that are received at a rate that exceeds that specified by said initial traffic parameters.
11. A bandwidth management system, according to claim 1, wherein said predetermined quality of service has four possible values.
12. A bandwidth management system, according to claim 11, wherein virtual connections having equal quality of service values are ordered in said second queue according to a priority assigned to each of said virtual connections.
13. A bandwidth management system, according to claim 12, wherein said priority varies according to one or more anticipated data rates of each of the virtual connections, one or more credit variables assigned to each of the virtual connections, said credit variables being indicative of allocated time slots provided to each of the virtual connections, and to a backlog of cells for each virtual connection that are awaiting transmission.
14. A bandwidth management system, according to claim 13, wherein said data rates, said credit variables, and said backlog are weighted prior to determining said priority.
15. A bandwidth management system, according to claim 1, further comprising:
a burst bit indicator for indicating whether a virtual connection associated with a cell that has been received by said input should be placed directly in said second queue.
16. A bandwidth management system, according to claim 15, wherein virtual connections having equal quality of service values are ordered in said second queue according to a priority assigned to each of said virtual connections.
17. A bandwidth management system, according to claim 16, wherein said priority varies according to one or more anticipated data rates of each of the virtual connections, one or more credit variables assigned to each of the virtual connections, said credit variables being indicative of allocated time slots provided to each of the virtual connections, and to a backlog of cells for each virtual connection that are awaiting transmission.
18. A bandwidth management system, according to claim 17, wherein said data rates, said credit variables, and said backlog are weighted prior to determining said priority.
19. A method of managing a plurality of virtual data connections within a communications network, comprising the steps of: receiving data cells, wherein each cell is associated with a particular one of the virtual connections; storing the received cells in a cell pool; providing a first queue containing particular ones of the virtual connections, wherein a relative position of a virtual connection in the first queue is determined by an eligibility value that varies according to an anticipated data rate associated with the particular virtual connection and according to an amount of time that the particular virtual connection has been in the first queue; providing a second queue, coupled to the first queue and containing particular other ones of the virtual connections, wherein a relative position of a virtual connection in the second queue varies according to a predetermined quality of service that is assigned to each of the virtual connections; and transmitting a cell from the cell pool corresponding to a virtual connection at the front of the second queue.
20. A method, according to claim 19, further comprising the step of: ordering virtual connections having equal eligibility values in the first queue according to the predetermined quality of service.
21. A method, according to claim 19, further comprising the step of: ordering virtual connections having equal quality of service values in the second queue according to a priority assigned to each of the virtual connections.
22. A method, according to claim 21, further including the step of: varying the priority according to one or more anticipated data rates of each of the virtual connections.
23. A method, according to claim 21, further including the step of: varying the priority according to one or more credit variables assigned to each of the virtual connections, wherein the credit variables are indicative of allocated time slots provided to each of the virtual connections.
24. A method, according to claim 21, further including the step of: varying the priority according to a backlog of cells, for each virtual connection, that are awaiting transmission.
25. A method, according to claim 21, further including the step of: varying the priority according to one or more anticipated data rates of each of the virtual connections, one or more credit variables assigned to each of the virtual connections, wherein the credit variables are indicative of allocated time slots provided to each of the virtual connections, and according to a backlog of cells, for each virtual connection, that are awaiting transmission.
26. A method, according to claim 25, further including the step of: varying the priority according to weights assigned to the data rates, the credit variables, and the backlog.
27. A method, according to claim 19, further comprising the steps of: establishing a virtual connection by specifying initial traffic parameters; and dropping cells that are received at a rate that exceeds that specified by the initial traffic parameters.
28. A method, according to claim 19, further including the step of: assigning one of four possible values to the predetermined quality of service.
29. A method, according to claim 28, further comprising the step of: ordering virtual connections having equal quality of service values in the second queue according to a priority assigned to each of the virtual connections.
30. A method, according to claim 29, further including the step of: varying the priority according to one or more anticipated data rates of each of the virtual connections, one or more credit variables assigned to each of the virtual connections, wherein the credit variables are indicative of allocated time slots provided to each of the virtual connections, and according to a backlog of cells, for each virtual connection, that are awaiting transmission.
31. A method, according to claim 30, further including the step of: varying the priority according to weights assigned to the data rates, the credit variables, and the backlog.
32. A method, according to claim 19, further comprising the step of:
placing a virtual connection associated with an admitted cell directly on the second queue in response to a burst indicator for the virtual connection being set.
33. A method, according to claim 32, further comprising the step of: ordering virtual connections having equal quality of service values in the second queue according to a priority assigned to each of the virtual connections.
34. A method, according to claim 33, further including the step of: varying the priority according to one or more anticipated data rates of each of the virtual connections, one or more credit variables assigned to each of the virtual connections, wherein the credit variables are indicative of allocated time slots provided to each of the virtual connections, and according to a backlog of cells, for each virtual connection, that are awaiting transmission.
35. A method, according to claim 34, further including the step of: varying the priority according to weights assigned to the data rates, the credit variables, and the backlog.
PCT/US1996/001208 1995-02-03 1996-01-23 Bandwidth management and access control for an atm network WO1996024212A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP96906219A EP0754383B1 (en) 1995-02-03 1996-01-23 Bandwidth management and access control for an atm network
CA002186449A CA2186449C (en) 1995-02-03 1996-01-23 Bandwidth management and access control for an atm network
JP08523661A JP3088464B2 (en) 1995-02-03 1996-01-23 ATM network bandwidth management and access control
KR1019960705640A KR100222743B1 (en) 1995-02-03 1996-01-23 Bandwidth management and access control for atm network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/383,400 1995-02-03
US08/383,400 US5533009A (en) 1995-02-03 1995-02-03 Bandwidth management and access control for an ATM network

Publications (1)

Publication Number Publication Date
WO1996024212A1 true WO1996024212A1 (en) 1996-08-08

Family

ID=23512968

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/001208 WO1996024212A1 (en) 1995-02-03 1996-01-23 Bandwidth management and access control for an atm network

Country Status (8)

Country Link
US (1) US5533009A (en)
EP (1) EP0754383B1 (en)
JP (1) JP3088464B2 (en)
KR (1) KR100222743B1 (en)
CA (1) CA2186449C (en)
MY (1) MY112027A (en)
TW (1) TW344920B (en)
WO (1) WO1996024212A1 (en)

Families Citing this family (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5696764A (en) * 1993-07-21 1997-12-09 Fujitsu Limited ATM exchange for monitoring congestion and allocating and transmitting bandwidth-guaranteed and non-bandwidth-guaranteed connection calls
US5446726A (en) * 1993-10-20 1995-08-29 Lsi Logic Corporation Error detection and correction apparatus for an asynchronous transfer mode (ATM) network device
JPH0823332A (en) * 1994-07-05 1996-01-23 Mitsubishi Electric Corp Access controller for ring type atm node
EP0702472A1 (en) * 1994-09-19 1996-03-20 International Business Machines Corporation A method and an apparatus for shaping the output traffic in a fixed length cell switching network node
JPH08139737A (en) * 1994-11-14 1996-05-31 Nec Corp Congestion control system
EP0717532A1 (en) * 1994-12-13 1996-06-19 International Business Machines Corporation Dynamic fair queuing to support best effort traffic in an ATM network
US5713043A (en) * 1995-01-04 1998-01-27 International Business Machines Corporation Method and system in a data processing system for efficient determination of quality of service parameters
DE19507570C2 (en) * 1995-03-03 1997-08-21 Siemens Ag Method and circuit arrangement for forwarding message cells transmitted via an ATM communication device to a customer line
EP0813783A1 (en) * 1995-03-08 1997-12-29 Oxford Brookes University Broadband switching system
CA2216171A1 (en) * 1995-03-24 1996-10-03 Ppt Vision, Inc. High speed digital video serial link
US5586121A (en) * 1995-04-21 1996-12-17 Hybrid Networks, Inc. Asymmetric hybrid access system and method
US5996019A (en) 1995-07-19 1999-11-30 Fujitsu Network Communications, Inc. Network link access scheduling using a plurality of prioritized lists containing queue identifiers
AU6502096A (en) * 1995-07-19 1997-02-18 Ascom Nexion Inc. Minimum guaranteed cell rate method and apparatus
JPH11511303A (en) * 1995-07-19 1999-09-28 フジツウ ネットワーク コミュニケーションズ,インコーポレイテッド Method and apparatus for sharing link buffer
JPH11510005A (en) * 1995-07-19 1999-08-31 フジツウ ネットワーク コミュニケーションズ,インコーポレイテッド Method and system for controlling network service parameters in cell-based communication network
EP0873611A1 (en) 1995-09-14 1998-10-28 Fujitsu Network Communications, Inc. Transmitter controlled flow control for buffer allocation in wide area atm networks
US6445708B1 (en) * 1995-10-03 2002-09-03 Ahead Communications Systems, Inc. ATM switch with VC priority buffers
US5751708A (en) * 1995-10-25 1998-05-12 Lucent Technologies Inc. Access method for broadband and narrowband networks
KR100278573B1 (en) * 1995-12-13 2001-01-15 포만 제프리 엘 Connection admission control in high-speed packet switched networks
US5781531A (en) * 1995-12-27 1998-07-14 Digital Equipment Corporation Method and apparatus for hierarchical relative error scheduling
US6130878A (en) 1995-12-27 2000-10-10 Compaq Computer Corporation Method and apparatus for rate-based scheduling using a relative error approach
US5751709A (en) * 1995-12-28 1998-05-12 Lucent Technologies Inc. Adaptive time slot scheduling apparatus and method for end-points in an ATM network
US5991298A (en) 1996-01-16 1999-11-23 Fujitsu Network Communications, Inc. Reliable and flexible multicast mechanism for ATM networks
US5920561A (en) * 1996-03-07 1999-07-06 Lsi Logic Corporation ATM communication system interconnect/termination unit
US6535512B1 (en) * 1996-03-07 2003-03-18 Lsi Logic Corporation ATM communication system interconnect/termination unit
US5841772A (en) * 1996-03-07 1998-11-24 Lsi Logic Corporation ATM communication system interconnect/termination unit
US5848068A (en) * 1996-03-07 1998-12-08 Lsi Logic Corporation ATM communication system interconnect/termination unit
US5982749A (en) * 1996-03-07 1999-11-09 Lsi Logic Corporation ATM communication system interconnect/termination unit
US6373846B1 (en) 1996-03-07 2002-04-16 Lsi Logic Corporation Single chip networking device with enhanced memory access co-processor
EP0798897A3 (en) * 1996-03-26 1999-07-14 Digital Equipment Corporation Method and apparatus for relative error scheduling using discrete rates and proportional rate scaling
FR2746992B1 (en) * 1996-03-27 1998-09-04 Quinquis Jean Paul LOCAL MOBILE ACCESS NETWORK
GB9606708D0 (en) * 1996-03-29 1996-06-05 Plessey Telecomm Bandwidth bidding
DK174882B1 (en) * 1996-04-12 2004-01-19 Tellabs Denmark As Method and network element for transmitting data packets in a telephony transmission network
US6058114A (en) * 1996-05-20 2000-05-02 Cisco Systems, Inc. Unified network cell scheduler and flow controller
US5668738A (en) * 1996-05-31 1997-09-16 Intel Corporation Dynamic idle aggregation in digital signal transmission
FR2750283B1 (en) * 1996-06-20 1998-07-31 Quinquis Jean Paul LOCAL MOBILE ACCESS NETWORK PROVIDED WITH MEANS FOR MANAGING RESOURCES IN SUCH A NETWORK
US5748905A (en) 1996-08-30 1998-05-05 Fujitsu Network Communications, Inc. Frame classification using classification keys
JP2882384B2 (en) * 1996-09-27 1999-04-12 日本電気株式会社 Traffic shaping device
WO1998019427A1 (en) * 1996-10-31 1998-05-07 Siemens Aktiengesellschaft Method of routing asynchronously transferred message cells with a 100 % power capacity utilization
US6335927B1 (en) * 1996-11-18 2002-01-01 Mci Communications Corporation System and method for providing requested quality of service in a hybrid network
JP2964968B2 (en) * 1996-12-06 1999-10-18 日本電気株式会社 Shaping processing apparatus and shaping processing method
WO1998025378A1 (en) 1996-12-06 1998-06-11 Fujitsu Network Communications, Inc. Method for flow controlling atm traffic
CH690887A5 (en) * 1996-12-13 2001-02-15 Alcatel Sa Shaper for a stream of data packets
US5850398A (en) * 1996-12-30 1998-12-15 Hyundai Electronics America Method of scheduling data cell transmission in an ATM network
US6091455A (en) * 1997-01-31 2000-07-18 Hughes Electronics Corporation Statistical multiplexer for recording video
US6188436B1 (en) 1997-01-31 2001-02-13 Hughes Electronics Corporation Video broadcast system with video data shifting
US6005620A (en) * 1997-01-31 1999-12-21 Hughes Electronics Corporation Statistical multiplexer for live and pre-compressed video
US6097435A (en) * 1997-01-31 2000-08-01 Hughes Electronics Corporation Video system with selectable bit rate reduction
US6084910A (en) * 1997-01-31 2000-07-04 Hughes Electronics Corporation Statistical multiplexer for video signals
US6078958A (en) * 1997-01-31 2000-06-20 Hughes Electronics Corporation System for allocating available bandwidth of a concentrated media output
US6026075A (en) 1997-02-25 2000-02-15 International Business Machines Corporation Flow control mechanism
US5844890A (en) * 1997-03-25 1998-12-01 International Business Machines Corporation Communications cell scheduler and scheduling method for providing proportional use of network bandwith
US5864540A (en) * 1997-04-04 1999-01-26 At&T Corp/Csi Zeinet(A Cabletron Co.) Method for integrated traffic shaping in a packet-switched network
US7103050B1 (en) * 1997-04-10 2006-09-05 International Business Machines Corporation Method and means for determining the used bandwidth on a connection
JP2865139B2 (en) * 1997-04-18 1999-03-08 日本電気株式会社 ATM cell buffer circuit and arbitrary priority allocation method in ATM exchange
US6041059A (en) * 1997-04-25 2000-03-21 Mmc Networks, Inc. Time-wheel ATM cell scheduling
US6014367A (en) * 1997-04-25 2000-01-11 Mmc Networks, Inc Method for weighted fair queuing for ATM cell scheduling
EP0886403B1 (en) * 1997-06-20 2005-04-27 Alcatel Method and arrangement for prioritised data transmission of packets
GB2327317B (en) 1997-07-11 2002-02-13 Ericsson Telefon Ab L M Access control and resourse reservation in a communications network
US6147998A (en) * 1997-08-26 2000-11-14 Visual Networks Technologies, Inc. Method and apparatus for performing in-service quality of service testing
US6310886B1 (en) * 1997-08-28 2001-10-30 Tivo, Inc. Method and apparatus implementing a multimedia digital network
US6198724B1 (en) 1997-10-02 2001-03-06 Vertex Networks, Inc. ATM cell scheduling method and apparatus
US5999963A (en) * 1997-11-07 1999-12-07 Lucent Technologies, Inc. Move-to-rear list scheduling
US6256308B1 (en) * 1998-01-20 2001-07-03 Telefonaktiebolaget Lm Ericsson Multi-service circuit for telecommunications
US6181684B1 (en) * 1998-02-02 2001-01-30 Motorola, Inc. Air interface statistical multiplexing in communication systems
US6483850B1 (en) 1998-06-03 2002-11-19 Cisco Technology, Inc. Method and apparatus for routing cells having different formats among service modules of a switch platform
US6967961B1 (en) * 1998-06-03 2005-11-22 Cisco Technology, Inc. Method and apparatus for providing programmable memory functions for bi-directional traffic in a switch platform
US6438102B1 (en) 1998-06-03 2002-08-20 Cisco Technology, Inc. Method and apparatus for providing asynchronous memory functions for bi-directional traffic in a switch platform
US6463485B1 (en) 1998-06-03 2002-10-08 Cisco Technology, Inc. System for providing cell bus management in a switch platform including a write port cell count in each of a plurality of unidirectional FIFO for indicating which FIFO be able to accept more cell
US6041048A (en) * 1998-06-12 2000-03-21 Motorola, Inc. Method for providing information packets from a packet switching network to a base site and corresponding communication system
US6446122B1 (en) * 1998-06-24 2002-09-03 Cisco Technology, Inc. Method and apparatus for communicating quality of service information among computer communication devices
US6298071B1 (en) * 1998-09-03 2001-10-02 Diva Systems Corporation Method and apparatus for processing variable bit rate information in an information distribution system
US6498782B1 (en) 1999-02-03 2002-12-24 International Business Machines Corporation Communications methods and gigabit ethernet communications adapter providing quality of service and receiver connection speed differentiation
US6477168B1 (en) * 1999-02-03 2002-11-05 International Business Machines Corporation Cell/frame scheduling method and communications cell/frame scheduler
US6765911B1 (en) 1999-02-03 2004-07-20 International Business Machines Corporation Communications adapter for implementing communications in a network and providing multiple modes of communications
JP2001127762A (en) * 1999-10-25 2001-05-11 Matsushita Electric Ind Co Ltd Communication control method and system
US6820128B1 (en) * 1999-11-04 2004-11-16 Nortel Networks Limited Method and apparatus of processing packets having varying priorities by adjusting their drop functions according to a predefined fairness relationship
US6625122B1 (en) 1999-11-24 2003-09-23 Applied Micro Circuits Corporation Selection of data for network transmission
US7315901B1 (en) * 2000-04-13 2008-01-01 International Business Machines Corporation Method and system for network processor scheduling outputs using disconnect/reconnect flow queues
US7142558B1 (en) * 2000-04-17 2006-11-28 Cisco Technology, Inc. Dynamic queuing control for variable throughput communication channels
WO2002003263A2 (en) * 2000-06-29 2002-01-10 Object Reservoir, Inc. Method and system for coordinate transformation to model radial flow near a singularity
US6658512B1 (en) * 2000-09-28 2003-12-02 Intel Corporation Admission control method for data communications over peripheral buses
GB2375927B (en) * 2001-05-26 2004-09-29 Cambridge Broadband Ltd Method and apparatus for communications bandwidth allocation
US7370096B2 (en) * 2001-06-14 2008-05-06 Cariden Technologies, Inc. Methods and systems to generate and implement a changeover sequence to reconfigure a connection-oriented network
KR100429795B1 (en) * 2001-07-21 2004-05-04 삼성전자주식회사 Method for managing bandwidth of serial bus and apparatus thereof
US7280476B2 (en) * 2002-06-04 2007-10-09 Lucent Technologies Inc. Traffic control at a network node
US7949871B2 (en) * 2002-10-25 2011-05-24 Randle William M Method for creating virtual service connections to provide a secure network
US7477603B1 (en) * 2003-07-08 2009-01-13 Cisco Technology, Inc. Sharing line bandwidth among virtual circuits in an ATM device
US7443857B1 (en) * 2003-07-09 2008-10-28 Cisco Technology Inc. Connection routing based on link utilization
ES2245590B1 (en) * 2004-03-31 2007-03-16 Sgc Telecom - Sgps, S.A. RADIO FREQUENCY COMMUNICATIONS SYSTEM WITH USE OF TWO FREQUENCY BANDS.
US7539176B1 (en) 2004-04-02 2009-05-26 Cisco Technology Inc. System and method for providing link, node and PG policy based routing in PNNI based ATM networks
JPWO2006093221A1 (en) * 2005-03-04 2008-08-07 ヒューレット−パッカード デベロップメント カンパニー エル.ピー. Transmission control apparatus and method
CN100452892C (en) * 2005-07-11 2009-01-14 华为技术有限公司 Method for carrying out playback on conversation
US20080165685A1 (en) * 2007-01-09 2008-07-10 Walter Weiss Methods, systems, and computer program products for managing network bandwidth capacity
US10135917B2 (en) 2017-04-20 2018-11-20 At&T Intellectual Property I, L.P. Systems and methods for allocating customers to network elements
CN109582719B (en) * 2018-10-19 2021-08-24 国电南瑞科技股份有限公司 Method and system for automatically linking SCD file of intelligent substation to virtual terminal
JP2021170289A (en) * 2020-04-17 2021-10-28 富士通株式会社 Information processing system, information processing device and program


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5237564A (en) * 1990-04-06 1993-08-17 France Telecom Frame switching relay for asynchronous digital network
US5280475A (en) * 1990-08-17 1994-01-18 Hitachi, Ltd. Traffic shaping method and circuit
US5271004A (en) * 1990-09-05 1993-12-14 Gpt Limited Asynchronous transfer mode switching arrangement providing broadcast transmission
US5276676A (en) * 1990-10-29 1994-01-04 Siemens Aktiengesellschaft Method for monitoring a bit rate of at least one virtual connection
US5278828A (en) * 1992-06-04 1994-01-11 Bell Communications Research, Inc. Method and system for managing queued cells
US5313579A (en) * 1992-06-04 1994-05-17 Bell Communications Research, Inc. B-ISDN sequencer chip device
US5390184A (en) * 1993-09-30 1995-02-14 Northern Telecom Limited Flexible scheduling mechanism for ATM switches

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Peha, J. M., "The Priority Token Bank: Integrated Scheduling and Admission Control for an Integrated-Services Network", Proceedings of the International Conference on Communications, 23 May 1993 (1993-05-23), pages 345-351
See also references of EP0754383A4

Also Published As

Publication number Publication date
CA2186449A1 (en) 1996-08-08
EP0754383A1 (en) 1997-01-22
CA2186449C (en) 2000-06-20
KR970702642A (en) 1997-05-13
KR100222743B1 (en) 1999-10-01
US5533009A (en) 1996-07-02
TW344920B (en) 1998-11-11
JPH09505720A (en) 1997-06-03
JP3088464B2 (en) 2000-09-18
EP0754383A4 (en) 2000-09-27
MY112027A (en) 2001-03-31
EP0754383B1 (en) 2011-05-25

Similar Documents

Publication Publication Date Title
US5533009A (en) Bandwidth management and access control for an ATM network
US5831971A (en) Method for leaky bucket traffic shaping using fair queueing collision arbitration
US5818815A (en) Method and an apparatus for shaping the output traffic in a fixed length cell switching network node
EP0749668B1 (en) Broadband switching network
US6504820B1 (en) Method and system for connection admission control
KR100293920B1 (en) Apparatus and method for controlling traffic of ATM user network interface
KR100431191B1 (en) An apparatus and method for scheduling packets by using a round robin based on credit
US6392994B1 (en) ATM adaption layer traffic scheduling
US6396843B1 (en) Method and apparatus for guaranteeing data transfer rates and delays in data packet networks using logarithmic calendar queues
EP0862299A2 (en) Multi-class connection admission control method for asynchronous transfer mode (ATM) switches
US6442164B1 (en) Method and system for allocating bandwidth and buffer resources to constant bit rate (CBR) traffic
US6452905B1 (en) Broadband switching system
JP3338000B2 (en) Real-time traffic monitoring and control method in ATM switching node
EP0936834A2 (en) Method and apparatus for controlling traffic flows in a packet-switched network
US6504824B1 (en) Apparatus and method for managing rate band
WO2000067402A1 (en) Methods and apparatus for managing traffic in an atm network
EP0818098B1 (en) Method for rejecting cells at an overloaded node buffer
Vickers et al. Congestion control and resource management in diverse ATM environments
US7295558B2 (en) ATM adaption layer traffic scheduling
Awater et al. Optimal queueing policies for fast packet switching of mixed traffic
Katevenis et al. Multi-queue management and scheduling for improved QoS in communication networks
US6735214B1 (en) Method and system for a hierarchical traffic shaper
JPH05122242A (en) Band management and call reception control system in ATM exchange
KR960003782B1 (en) Scheduling controller and its method of subscriber access device of ISDN
KR100580864B1 (en) A scheduling method for guarantee of CDV and fairness of real-time traffic in ATM networks

Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A1
Designated state(s): CA JP KR SG

AL Designated countries for regional patents
Kind code of ref document: A1
Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

WWE WIPO information: entry into national phase
Ref document number: 1996906219
Country of ref document: EP

WWE WIPO information: entry into national phase
Ref document number: 2186449
Country of ref document: CA

121 EP: The EPO has been informed by WIPO that EP was designated in this application

WWP WIPO information: published in national office
Ref document number: 1996906219
Country of ref document: EP