WO1997025831A1 - Per channel frame queuing and servicing in the egress direction of a communications network - Google Patents

Per channel frame queuing and servicing in the egress direction of a communications network

Info

Publication number
WO1997025831A1
Authority
WO
WIPO (PCT)
Prior art keywords
queues
egress
queuing
servicing
frames
Prior art date
Application number
PCT/US1997/000278
Other languages
French (fr)
Inventor
Homayoun S. Valizadeh
Original Assignee
Cisco Systems, Inc.
Priority date
1996-01-11
Filing date
1997-01-07
Publication date
1997-07-17
Application filed by Cisco Systems, Inc.
Priority to AU15307/97A
Publication of WO1997025831A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q11/00 Selecting arrangements for multiplex systems
    • H04Q11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428 Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478 Provisions for broadband connections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5614 User Network Interface
    • H04L2012/5615 Network termination, e.g. NT1, NT2, PBX
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5638 Services, e.g. multimedia, GOS, QOS
    • H04L2012/5645 Connectionless
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5638 Services, e.g. multimedia, GOS, QOS
    • H04L2012/5646 Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L2012/5652 Cell construction, e.g. including header, packetisation, depacketisation, assembly, reassembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681 Buffer or queue management

Abstract

A method for queuing and servicing egress traffic of a network. A first set of n egress queues that are each coupled to store frames received from a corresponding one of a set of n receive channels are provided, and a distinct set of queuing parameters is maintained for each of the first set of n egress queues. A second set of m egress queues each coupled to store frames for transmission to a corresponding one of a set of m transmit channels are also provided, wherein m is less than n. The first set of n egress queues are serviced to fill the second set of m egress queues using a first service algorithm and the sets of queuing parameters for each of the first set of n egress queues.

Description

PER CHANNEL FRAME QUEUING AND SERVICING IN THE EGRESS DIRECTION OF A COMMUNICATIONS NETWORK
FIELD OF THE INVENTION
The present invention pertains to the field of communication systems and more particularly to per channel frame queuing and servicing in the egress direction of a communications network.
BACKGROUND
Asynchronous Transfer Mode (ATM) networks are often used by telecommunication service providers to transfer digital information over long distances on a demand driven basis. ATM networks are cell switching networks that transfer fixed length packets or "cells" in a time multiplexed manner using a plurality of virtual paths ("VPs") and virtual channels ("VCs") defined within the physical transmission medium of the network.
Communication controllers act as end nodes of common carrier ATM networks and provide entry points so that customers may use the ATM networks. A communication controller connects to an ATM network using a set of one or more common carrier communication links such as high speed T3 or E3 lines, wherein customer premises equipment (i.e. user networks and devices) is typically connected to a communication controller using lower speed links such as T1 or E1 lines. A common carrier ATM network and customer premises equipment operate asynchronously, typically at different data rates, and often using different communications protocols. Therefore, communication controllers must provide the necessary services and facilities 1) for ensuring that data received from the customer premises equipment (i.e. "ingress traffic") is transmitted correctly over the ATM network and 2) for ensuring that data received from the common carrier ATM network (i.e. "egress traffic") is transmitted correctly to the customer premises equipment. Three basic services that a typical communication controller will provide are segmentation and reassembly, multiplexing and demultiplexing, and storing and forwarding (buffering). Another type of service that is often provided is congestion control.
A communication controller typically includes a segmentation and reassembly (SAR) unit that packs data into cells for transmission over the ATM network and that packs data into appropriately sized units or "frames" for transmission to the customer premises equipment. For ingress traffic, the SAR unit segments frames into cells wherein each cell created by the SAR unit is specified for transmission over a particular VC or VP by appropriately setting the virtual channel identifier (VCI) or virtual path identifier (VPI) in the header of the cell. In this manner, the ingress traffic is multiplexed for transmission over the ATM network. Ingress queues are provided for each VC to store cells until they may be transmitted. For egress traffic, the SAR unit reassembles previously segmented frames using the received cells. Each frame reassembled by the SAR unit is stored in a selected one of a set of egress queues, wherein each egress queue typically corresponds to a logical channel or port of the customer premises equipment. In this manner, the egress traffic is demultiplexed for transmission to the customer premises equipment. One problem with prior approaches to egress queuing and demultiplexing is that multiple VCs may be associated with a particular logical channel of the customer premises equipment, and fair access to that logical channel for all of the VCs is not assured.
SUMMARY AND OBJECTS OF THE INVENTION
Therefore, it is an object of the present invention to provide an improved buffering and servicing scheme in the egress direction.
This and other objects of the invention are provided by a method for queuing and servicing egress traffic of a network. A first set of n egress queues that are each coupled to store frames received from a corresponding one of a set of n receive channels are provided, and a distinct set of queuing parameters is maintained for each of the first set of n egress queues. A second set of m egress queues each coupled to store frames for transmission to a corresponding one of a set of m transmit channels are also provided, wherein m is less than n. The first set of n egress queues are serviced to fill the second set of m egress queues using a first service algorithm and the sets of queuing parameters for each of the first set of n egress queues. In this manner, a certain level of fairness may be maintained as between multiple receive channels that are mapped for output via the same transmit channel.
According to an alternative embodiment, only one set of m egress queues is provided to store frames for transmission to a corresponding one of a set of m transmit channels. A distinct set of queuing parameters are maintained for each of a set of n receive channels, and the set of m egress queues is filled using a first service algorithm and the sets of queuing parameters for each of the set of n receive channels. Again, a certain level of fairness is provided between multiple receive channels that are mapped for output via the same transmit channel.
Other objects, features and advantages of the present invention will be apparent from the accompanying drawings, and from the detailed description that follows below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements, and in which:
FIGURE 1 shows a network that includes a communications controller that interconnects a common carrier network and customer premises equipment.
FIGURE 2 shows a communications controller of one embodiment.
FIGURE 3 shows a port card that uses a two-level queuing and servicing scheme of one embodiment.
FIGURE 4 shows queuing parameters of one embodiment as maintained by the admin processor.
FIGURE 5 shows an alternative embodiment wherein a common receive queue stores received and reassembled frames.
DETAILED DESCRIPTION
Schemes for queuing digital information in a communications controller are described wherein an additional set of egress queues called "channel" queues that each buffer digital information received from a corresponding one of n receive channels of a network are introduced in the egress path between the receive processor and the m port queues that buffer digital information for transmission over the m transmit channels of the customer premises equipment. Each port queue is filled using digital information buffered by a selected subset of the channel queues. Each channel queue has its own set of queuing parameters that may be defined to ensure that each channel of a subset of channels that are mapped to the same port is given a fair opportunity to transmit to the corresponding port queue. According to an alternative embodiment, only port queues are provided, but a distinct set of queuing parameters are provided for each receive channel such that access to the port queues may be controlled. According to the presently described embodiments, the network is an ATM network, and the customer premises equipment comprises a Frame Relay network. Each channel queue corresponds to one of the n VCs of the ATM network, and each port queue corresponds to one of the m logical ports that services a logical channel of the Frame Relay network. For the sake of simplifying discussion, the number of VCs n is assumed to exceed the number of logical ports m such that each logical port receives frames received from multiple VCs. The queuing and servicing schemes described herein may be readily adapted to any system wherein additional granularity is desired for ensuring fairness when data received from multiple receive channels is to be transmitted over a single transmit channel.
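As a reading aid only, the two-level layout just described can be sketched in a few lines of Python; the queue counts, the names, and the modulo mapping of VCs onto logical ports below are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of the two-level egress layout: n per-VC "channel" queues
# feeding m port queues, with several VCs mapped onto each logical port.
from collections import deque

N_VCS = 8       # n receive channels (VCs) -- assumed value
M_PORTS = 2     # m logical ports, m < n  -- assumed value

channel_queues = {vc: deque() for vc in range(N_VCS)}      # one queue per VC
port_queues = {port: deque() for port in range(M_PORTS)}   # one queue per logical port
vc_to_port = {vc: vc % M_PORTS for vc in range(N_VCS)}     # which port each VC feeds

if __name__ == "__main__":
    # A reassembled frame from VC 3 is buffered in its channel queue and will
    # later be transferred to port_queues[vc_to_port[3]] by the servicer.
    channel_queues[3].append(b"reassembled frame")
    print(vc_to_port[3], len(channel_queues[3]))
```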
Figure 1 shows a network 10 that comprises a communications controller 15 that interconnects a common carrier ATM network 20 and a customer premises equipment (CPE) 30 that operates according to the Frame Relay standard protocol. Communications controller 15 is connected to ATM network 20 by common carrier link 25, which may be, for example, a T3 line. CPE 30 is connected to communication controller 15 by a private link 35, which may be, for example, a T1 line. Communications controller 15 may also be linked to other CPEs (not shown).
Figure 2 shows a communications controller in greater detail. According to the present embodiment, communications controller 15 includes a bi-directional cell bus 50 that operates according to the ATM standard protocol. Cell bus 50 is implemented as a backplane bus to provide scalability of communications controller 15. A trunk card 55 that operates as a network interface is coupled to common carrier link 25 and cell bus 50 for receiving ATM cells from ATM network 20. Trunk card 55 transmits received cells to the port cards using cell bus 50.
Port cards 60 and 65 operate as CPE interfaces for coupling to CPEs via communication links 75. For example, port card 60 is coupled to CPE 30 via private link 35. The port cards distribute cells received from cell bus 50 to the appropriate CPEs by using communication links 75. It is possible that each communication link of a port card is time division multiplexed into a plurality of "logical" communication channels, and each physical port that is connected to receive a communication link may be viewed as a multiplicity of logical ports coupled to service the logical channels of the communication link.
One or more expansion slots 70 may be provided to receive additional port cards. Furthermore, new port cards may be swapped with port cards 60 and 65 should port cards 60 and 65 fail or the configuration of the CPEs change. For example, port card 60 includes multiple physical ports for linking to CPEs that operate according to the Frame Relay standard, and a user may replace port card 60 with an alternative port card that includes multiple physical ports for linking to CPEs that operate according to the ATM standard protocol should the user's needs change. Although communication between CPEs and the ATM network is bi-directional, the following discussion only details traffic flow and queuing schemes for use in the egress direction from the ATM network 20 to the CPE 30. Queuing and servicing in the ingress direction may be done according to any appropriate method.
Figure 3 shows a port card in greater detail. As shown, port card 60 includes a receive processor 75 that is coupled to receive cells from cell bus 50 in a first in, first out manner. Receive processor 75 is configured to perform the SAR unit function of reassembling received cells into frames. According to the present embodiment, each frame is a T1 frame. A set of channel queues 80, one queue for each of n VCs, is provided, and receive processor 75 buffers ("enqueues") a reassembled frame into the channel queue that corresponds to the VC from which the frame was received. For example, channel queue 81 buffers frames received from VC0, channel queue 82 buffers frames received from VC1, channel queue 83 buffers frames received from VC2, and channel queue 84 buffers frames received from VCn. If a channel queue is full when receive processor 75 attempts to enqueue a frame, that frame is dropped. As previously described, cell bus 50 operates according to the ATM standard protocol, and receive processor 75 sorts reassembled frames into the appropriate channel queues according to the values stored in the VCI and VPI fields of the cell headers.
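A minimal sketch of the enqueue step just described, assuming a byte-based depth limit per channel queue; the class and parameter names are hypothetical.

```python
# Sketch: enqueue a reassembled frame into its channel queue, dropping the
# frame when the queue's byte depth would be exceeded.
from collections import deque

class ChannelQueue:
    def __init__(self, queue_depth_bytes: int):
        self.frames = deque()
        self.depth = queue_depth_bytes   # maximum bytes available for buffering
        self.used = 0                    # bytes currently buffered

    def enqueue(self, frame: bytes) -> bool:
        """Buffer a frame; return False (frame dropped) if the queue is full."""
        if self.used + len(frame) > self.depth:
            return False
        self.frames.append(frame)
        self.used += len(frame)
        return True

if __name__ == "__main__":
    q = ChannelQueue(queue_depth_bytes=64)
    print(q.enqueue(b"x" * 40))   # True  -- fits within the depth
    print(q.enqueue(b"y" * 40))   # False -- would exceed the depth, so dropped
```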
An administrative ("admin") processor 85 is coupled to transmit buffered frames from the channel queues 80 to the port queues 95, wherein each port queue corresponds to a logical port of port card 60. Admin processor 85 services each channel queue by implementing a service algorithm that uses queuing parameters 90 specific to that channel queue. Any desired service algorithm may be used. The queuing parameters 90 for all the channel queues are maintained by admin processor 85 and stored in memory (not shown), and a single microprocessor may perform the functions of both the receive processor 75 and the admin processor 85.
Direct memory access (DMA) engine 100 is responsible for servicing the port queues 95 according to queuing parameters 105 specified for each logical port queue. DMA engine 100 may service the port queues 95 using any desired servicing method or algorithm, including those found in the prior art.
The channel queues 80 and the port queues 95 may be implemented using any reasonable method. According to the present embodiment, the channel and port queues are maintained by forming linked lists of buffers from a common free buffer pool. The buffer pool is maintained using one or more random access memory (RAM) devices. A separate buffer pool may be maintained for each of the set of channel queues and the set of port queues.
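A minimal sketch of a shared free buffer pool managed as a singly linked free list, which is one way the linked-list queues described above could draw their buffers; buffer size, pool size, and all names are assumptions.

```python
# Sketch: fixed-size buffers handed out from a common free list; queues would
# chain allocated buffer indices together in the same way.
class BufferPool:
    def __init__(self, num_buffers: int, buf_size: int = 256):
        self.data = [bytearray(buf_size) for _ in range(num_buffers)]
        self.next = list(range(1, num_buffers)) + [-1]   # singly linked free list
        self.free_head = 0

    def alloc(self) -> int:
        """Take one buffer off the free list; return its index, or -1 if exhausted."""
        idx = self.free_head
        if idx != -1:
            self.free_head = self.next[idx]
            self.next[idx] = -1
        return idx

    def free(self, idx: int) -> None:
        """Return a buffer to the head of the free list."""
        self.next[idx] = self.free_head
        self.free_head = idx

if __name__ == "__main__":
    pool = BufferPool(num_buffers=4)
    a, b = pool.alloc(), pool.alloc()
    pool.free(a)
    print(a, b, pool.alloc())   # the freed buffer (index 0) is handed out again
```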
Figure 4 shows an exemplary set of queuing parameters maintained by admin processor 85. Admin processor 85 maintains a distinct set of queuing parameters 90 for each channel queue. Each set of queuing parameters 90 includes a queue depth parameter 110, a discard eligible threshold parameter 111, an error correction notification threshold parameter 112, and a set of queue status bits 113. The queue depth parameter 110 indicates a maximum number of bytes in the corresponding channel queue that are available for buffering frames received from the corresponding virtual channel.
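The parameter set just described might be modelled as follows; the field names, the use of bytes as the unit, and the types are assumptions made for illustration only.

```python
# Sketch: one per-channel-queue parameter set (queue depth 110, discard eligible
# threshold 111, notification threshold 112, and queue status bits 113).
from dataclasses import dataclass, field

@dataclass
class QueueStatusBits:            # status bits 113
    de: bool = False              # discard eligible
    fecn: bool = False            # forward explicit congestion notification
    becn: bool = False            # backward explicit congestion notification

@dataclass
class ChannelQueueParams:
    queue_depth: int              # max bytes available for buffering frames (110)
    de_threshold: int             # discard eligible threshold (111)
    notify_threshold: int         # error correction notification threshold (112)
    status: QueueStatusBits = field(default_factory=QueueStatusBits)

if __name__ == "__main__":
    params = ChannelQueueParams(queue_depth=4096, de_threshold=3072, notify_threshold=2048)
    print(params.status.de, params.queue_depth)
```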
For one embodiment, the queue status bits 113 include a discard eligible (DE) status bit along with forward explicit congestion notification (FECN) and backward explicit congestion notification (BECN) status bits for the corresponding VC. The admin processor 85 updates the DE, FECN, and BECN status bits on a per-VC basis as the corresponding threshold parameters are exceeded in the corresponding channel queue. For example, the admin processor 85 sets the DE status bit for a VC if the corresponding discard eligible threshold for that VC is exceeded as the admin processor 85 transfers a frame from the channel queue of that VC to the appropriate port queue. The admin processor 85 similarly updates the FECN and BECN status bits for a particular VC as the error correction notification threshold for that VC is exceeded.
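A minimal sketch of the threshold comparison just described, assuming the thresholds are expressed in queued bytes; the function name and the returned mapping are hypothetical.

```python
# Sketch: compute the DE / FECN / BECN bits for one VC from its channel queue fill.
def update_status_bits(bytes_queued: int, de_threshold: int, notify_threshold: int) -> dict:
    """DE is set when the discard eligible threshold is exceeded; FECN and BECN
    are set when the notification threshold is exceeded."""
    congested = bytes_queued > notify_threshold
    return {"DE": bytes_queued > de_threshold, "FECN": congested, "BECN": congested}

if __name__ == "__main__":
    print(update_status_bits(3500, de_threshold=3072, notify_threshold=2048))
    # {'DE': True, 'FECN': True, 'BECN': True}
```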
Queuing and servicing frames on a per-channel (or per-VC) basis provides greater granularity when implementing congestion control such that fairness may be ensured as between multiple channels (or VCs) that are mapped to a single logical port. For example, according to one implementation, VCs 0-2 might be mapped to logical port 0. According to prior single level queuing schemes, all traffic from VCs 0-2 would be reassembled and buffered in the port queues on a first come, first served basis. It is possible that VC0 could be transmitting high amounts of communications traffic to logical port 0 such that frames received from VC0 alone would fill the port queue of logical port 0, and frames from VCs 1 and 2 would be dropped. By providing the intermediate level of channel queuing and servicing described herein, congested VCs can be prevented from blocking other VCs from accessing the same logical port.
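The scenario above can be made concrete with a short sketch contrasting a single shared port queue, filled first come, first served, with per-VC channel queues serviced round-robin; the arrival pattern, queue limit, and names are all assumptions.

```python
from collections import deque

arrivals = ["vc0"] * 4 + ["vc1", "vc2"]   # VC0 floods before VC1/VC2 arrive
PORT_QUEUE_LIMIT = 4                      # small port queue for logical port 0

# Prior single-level scheme: first come, first served into one port queue.
single_level = deque()
for vc in arrivals:
    if len(single_level) < PORT_QUEUE_LIMIT:
        single_level.append(vc)           # the vc1 and vc2 frames are dropped
print(list(single_level))                 # ['vc0', 'vc0', 'vc0', 'vc0']

# Two-level scheme: per-VC channel queues serviced round-robin into the port queue.
channel = {"vc0": deque(), "vc1": deque(), "vc2": deque()}
for vc in arrivals:
    channel[vc].append(vc)
two_level = deque()
while len(two_level) < PORT_QUEUE_LIMIT and any(channel.values()):
    for vc in ("vc0", "vc1", "vc2"):
        if channel[vc] and len(two_level) < PORT_QUEUE_LIMIT:
            two_level.append(channel[vc].popleft())
print(list(two_level))                    # ['vc0', 'vc1', 'vc2', 'vc0']
```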
Figure 5 shows an alternative embodiment wherein a common receive queue 120 stores received and reassembled frames. Because frames are not stored in separate channel queues, each frame includes a logical identifier comprising a combination of the VPI and the VCI found in each cell header for the segmented frame. Admin processor 85 continues to maintain queuing parameters 90 on a per-channel basis such that a certain level of additional granularity in congestion control is maintained; however, the use of channel queues provides additional flexibility at the expense of additional processing overhead.
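A minimal sketch of this alternative embodiment, assuming the per-channel parameters reduce to a byte limit that is consulted when the common queue is serviced; the drop-on-limit behaviour and all names are assumptions.

```python
# Sketch: one shared receive queue of (VPI, VCI, frame) entries, policed
# per channel at service time using per-channel limits.
from collections import deque

receive_queue = deque()

def enqueue(vpi: int, vci: int, frame: bytes) -> None:
    """Tag each reassembled frame with its VPI/VCI so it can be policed per channel."""
    receive_queue.append((vpi, vci, frame))

def service(per_channel_bytes: dict, depth_limits: dict):
    """Service the head entry, enforcing the byte limit of its channel."""
    vpi, vci, frame = receive_queue.popleft()
    key = (vpi, vci)
    if per_channel_bytes.get(key, 0) + len(frame) > depth_limits[key]:
        return None                        # over this channel's limit: drop
    per_channel_bytes[key] = per_channel_bytes.get(key, 0) + len(frame)
    return frame

if __name__ == "__main__":
    limits, counts = {(0, 32): 100}, {}
    enqueue(0, 32, b"x" * 60)
    enqueue(0, 32, b"y" * 60)
    print(len(service(counts, limits)))    # 60   -- first frame accepted
    print(service(counts, limits))         # None -- second frame exceeds the limit
```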
In the foregoing specification the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims

CLAIMS
What is claimed is:
1. A method for queuing and servicing egress traffic of a network, comprising: providing a first set of n egress queues each coupled to store frames received from a corresponding one of a set of n receive channels; maintaining a distinct set of queuing parameters for each of the first set of n egress queues; providing a second set of m egress queues each coupled to store frames for transmission to a corresponding one of a set of m transmit channels, wherein m is less than n; servicing the first set of n egress queues to fill the second set of m egress queues using a first service algorithm and the sets of queuing parameters for each of the first set of n egress queues.
2. The method of claim 1, further comprising: maintaining a set of queuing parameters for each of the second set of m egress queues; and servicing the second set of m egress queues to transmit frames via the m transmit channels using a second service algorithm and the sets of queuing parameters for each of the second set of m egress queues.
3. A method for queuing and servicing egress traffic of an asynchronous transfer mode (ATM) network, comprising: providing a set of n channel queues each coupled to store reassembled frames originally received from a corresponding one of n virtual channels; maintaining a distinct set of queuing parameters for each of the n channel queues; providing a set of m port queues each coupled to store frames for transmission over a corresponding one of m logical ports, wherein m is less than n; and servicing the set of n channel queues according to a first service algorithm and using the sets of queuing parameters to transfer frames from the n channel queues to the m port queues.
4. The method of claim 3, further comprising: maintaining a distinct set of queuing parameters for each of the set of m port queues; and servicing the set of m port queues according to a second service algorithm and the queuing parameters of the port queues to transmit frames via the m logical ports.
5. An arrangement for queuing and servicing egress traffic directed from a network to customer premises equipment, comprising: a receive processor coupled to receive packets from the network, the receive processor reassembling packets into frames; a first set of n egress queues each storing frames received from a corresponding one of a set of n receive channels of the network; a second set of m egress queues each storing frames for transmission over a corresponding one of a set of m ports that transmits data over a corresponding one of a set of m transmit channels of the customer premises equipment, wherein m is less than n; and an administrative processor coupled to the first set of n egress queues, the administrative processor maintaining a distinct set of queuing parameters for each of the first set of egress queues and servicing the first set of egress queues using the sets of queuing parameters to transfer frames to the second set of egress queues.
6. The arrangement of claim 5 further comprising: a direct memory access (DMA) engine coupled to the set of port queues, the DMA engine maintaining a distinct set of queuing parameters for each of the second set of egress queues and servicing the second set of egress queues using the sets of queuing parameters for the second set of egress queues to transfer frames from the second set of egress queues via the transmit channels.
7. An arrangement for queuing and servicing egress traffic directed from an asynchronous transfer mode (ATM) network to customer premises equipment, comprising: a receive processor coupled to receive cells from the ATM network, the receive processor reassembling cells into frames; a set of channel queues each storing frames received from a corresponding one of a set of virtual channels of the ATM network; a set of port queues each storing frames for transmission over a corresponding one of a set of logical ports that transmits data over a corresponding one of a set of logical channels of the customer premises equipment; and an administrative processor coupled to the set of channel queues, the administrative processor maintaining a distinct set of queuing parameters for each channel queue and servicing the set of channel queues using the sets of queuing parameters to fill the port queues.
8. The arrangement of claim 7 further comprising: a direct memory access (DMA) engine coupled to the set of port queues, the DMA engine maintaining a distinct set of queuing parameters for each port queue and servicing the set of port queues using the sets of queuing parameters to transfer frames from the port queues via the logical ports.
9. A method for queuing and servicing egress traffic of a network, comprising: maintaining a distinct set of queuing parameters for each of a set of n receive channels; providing a set of m egress queues each coupled to store frames for transmission to a corresponding one of a set of m transmit channels, wherein m is less than n; filling the set of m egress queues using a first service algorithm and the sets of queuing parameters for each of the set of n receive channels.
PCT/US1997/000278 1996-01-11 1997-01-07 Per channel frame queuing and servicing in the egress direction of a communications network WO1997025831A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU15307/97A AU1530797A (en) 1996-01-11 1997-01-07 Per channel frame queuing and servicing in the egress direction of a communications network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/586,939 US5765032A (en) 1996-01-11 1996-01-11 Per channel frame queuing and servicing in the egress direction of a communications network
US08/586,939 1996-01-11

Publications (1)

Publication Number Publication Date
WO1997025831A1 true WO1997025831A1 (en) 1997-07-17

Family

ID=24347705

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/000278 WO1997025831A1 (en) 1996-01-11 1997-01-07 Per channel frame queuing and servicing in the egress direction of a communications network

Country Status (3)

Country Link
US (1) US5765032A (en)
AU (1) AU1530797A (en)
WO (1) WO1997025831A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999021326A2 (en) * 1997-10-21 1999-04-29 Nokia Networks Oy Resource optimization in a multiprocessor system for a packet network

Families Citing this family (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6034945A (en) 1996-05-15 2000-03-07 Cisco Technology, Inc. Method and apparatus for per traffic flow buffer management
US6058114A (en) * 1996-05-20 2000-05-02 Cisco Systems, Inc. Unified network cell scheduler and flow controller
US6151325A (en) * 1997-03-31 2000-11-21 Cisco Technology, Inc. Method and apparatus for high-capacity circuit switching with an ATM second stage switch
US6041059A (en) * 1997-04-25 2000-03-21 Mmc Networks, Inc. Time-wheel ATM cell scheduling
EP0886403B1 (en) * 1997-06-20 2005-04-27 Alcatel Method and arrangement for prioritised data transmission of packets
US6430191B1 (en) 1997-06-30 2002-08-06 Cisco Technology, Inc. Multi-stage queuing discipline
US6201813B1 (en) 1997-06-30 2001-03-13 Cisco Technology, Inc. Method and apparatus for using ATM queues for segmentation and reassembly of data frames
US6487202B1 (en) 1997-06-30 2002-11-26 Cisco Technology, Inc. Method and apparatus for maximizing memory throughput
US6016511A (en) * 1997-09-12 2000-01-18 Motorola Inc. Apparatus and method for interfacing protocol application data frame operation requests with a data frame input/output device
US6005851A (en) * 1997-10-10 1999-12-21 Nortel Networks Corporation Adaptive channel control for data service delivery
US6252878B1 (en) 1997-10-30 2001-06-26 Cisco Technology, Inc. Switched architecture access server
US6526060B1 (en) * 1997-12-05 2003-02-25 Cisco Technology, Inc. Dynamic rate-based, weighted fair scheduler with explicit rate feedback option
US6148004A (en) * 1998-02-11 2000-11-14 Mcdata Corporation Method and apparatus for establishment of dynamic ESCON connections from fibre channel frames
US6738814B1 (en) * 1998-03-18 2004-05-18 Cisco Technology, Inc. Method for blocking denial of service and address spoofing attacks on a private network
US6092108A (en) * 1998-03-19 2000-07-18 Diplacido; Bruno Dynamic threshold packet filtering of application processor frames
US6535520B1 (en) 1998-08-14 2003-03-18 Cisco Technology, Inc. System and method of operation for managing data communication between physical layer devices and ATM layer devices
US6269096B1 (en) 1998-08-14 2001-07-31 Cisco Technology, Inc. Receive and transmit blocks for asynchronous transfer mode (ATM) cell delineation
US6292491B1 (en) 1998-08-25 2001-09-18 Cisco Technology, Inc. Distributed FIFO queuing for ATM systems
US6381245B1 (en) 1998-09-04 2002-04-30 Cisco Technology, Inc. Method and apparatus for generating parity for communication between a physical layer device and an ATM layer device
US6430153B1 (en) 1998-09-04 2002-08-06 Cisco Technology, Inc. Trunk delay simulator
US6584108B1 (en) 1998-09-30 2003-06-24 Cisco Technology, Inc. Method and apparatus for dynamic allocation of multiple signal processing resources among multiple channels in voice over packet-data-network systems (VOPS)
US7339924B1 (en) * 1998-09-30 2008-03-04 Cisco Technology, Inc. Method and apparatus for providing ringing timeout disconnect supervision in remote telephone extensions using voice over packet-data-network systems (VOPS)
US6535505B1 (en) 1998-09-30 2003-03-18 Cisco Technology, Inc. Method and apparatus for providing a time-division multiplexing (TDM) interface among a high-speed data stream and multiple processors
US6611531B1 (en) 1998-09-30 2003-08-26 Cisco Technology, Inc. Method and apparatus for routing integrated data, voice, and video traffic
US7009962B1 (en) 1998-09-30 2006-03-07 Cisco Technology, Inc. Method and apparatus for providing forwarding on ring-no-answer for remote telephone extensions using voice over packet-data-network systems (VOPS)
US6763017B1 (en) 1998-09-30 2004-07-13 Cisco Technology, Inc. Method and apparatus for voice port hunting of remote telephone extensions using voice over packet-data-network systems (VOPS)
US6560196B1 (en) 1998-11-19 2003-05-06 Cisco Technology, Inc. Method and apparatus for controlling the transmission of cells across a network
US6700872B1 (en) 1998-12-11 2004-03-02 Cisco Technology, Inc. Method and system for testing a utopia network element
US6917617B2 (en) * 1998-12-16 2005-07-12 Cisco Technology, Inc. Use of precedence bits for quality of service
US6643260B1 (en) 1998-12-18 2003-11-04 Cisco Technology, Inc. Method and apparatus for implementing a quality of service policy in a data communications network
US6453357B1 (en) * 1999-01-07 2002-09-17 Cisco Technology, Inc. Method and system for processing fragments and their out-of-order delivery during address translation
US6535511B1 (en) 1999-01-07 2003-03-18 Cisco Technology, Inc. Method and system for identifying embedded addressing information in a packet for translation between disparate addressing systems
US6449655B1 (en) 1999-01-08 2002-09-10 Cisco Technology, Inc. Method and apparatus for communication between network devices operating at different frequencies
US7068594B1 (en) 1999-02-26 2006-06-27 Cisco Technology, Inc. Method and apparatus for fault tolerant permanent voice calls in voice-over-packet systems
US6657970B1 (en) 1999-02-26 2003-12-02 Cisco Technology, Inc. Method and apparatus for link state determination in voice over frame-relay networks
US6614794B1 (en) * 1999-03-03 2003-09-02 Conexant Systems, Inc. System and method for multiple modem traffic redirection
US7006493B1 (en) 1999-03-09 2006-02-28 Cisco Technology, Inc. Virtual voice port configured to connect a switched voice call to a permanent voice call
US6331978B1 (en) * 1999-03-09 2001-12-18 Nokia Telecommunications, Oy Generic label encapsulation protocol for carrying label switched packets over serial links
US6778555B1 (en) 1999-05-28 2004-08-17 Cisco Technology, Inc. Voice over packet system configured to connect different facsimile transmission protocols
US6977898B1 (en) 1999-10-15 2005-12-20 Cisco Technology, Inc. Method for supporting high priority calls on a congested WAN link
US6484224B1 (en) 1999-11-29 2002-11-19 Cisco Technology Inc. Multi-interface symmetric multiprocessor
US6810044B1 (en) * 1999-12-17 2004-10-26 Texas Instruments Incorporated Order broadcast management (IOBMAN) scheme
US6798746B1 (en) 1999-12-18 2004-09-28 Cisco Technology, Inc. Method and apparatus for implementing a quality of service policy in a data communications network
US6775292B1 (en) 2000-01-24 2004-08-10 Cisco Technology, Inc. Method for servicing of multiple queues carrying voice over virtual circuits based on history
CA2301973A1 (en) * 2000-03-21 2001-09-21 Spacebridge Networks Corporation System and method for adaptive slot-mapping input/output queuing for tdm/tdma systems
US6977895B1 (en) 2000-03-23 2005-12-20 Cisco Technology, Inc. Apparatus and method for rate-based polling of input interface queues in networking devices
US6657962B1 (en) * 2000-04-10 2003-12-02 International Business Machines Corporation Method and system for managing congestion in a network
US7142558B1 (en) 2000-04-17 2006-11-28 Cisco Technology, Inc. Dynamic queuing control for variable throughput communication channels
US6424657B1 (en) * 2000-08-10 2002-07-23 Verizon Communications Inc. Traffic queueing for remote terminal DSLAMs
US7801158B2 (en) * 2000-10-16 2010-09-21 Verizon Communications Inc. Congestion and thru-put visibility and isolation
US20020174246A1 (en) * 2000-09-13 2002-11-21 Amos Tanay Centralized system for routing signals over an internet protocol network
US7627870B1 (en) 2001-04-28 2009-12-01 Cisco Technology, Inc. Method and apparatus for a data structure comprising a hierarchy of queues or linked list data structures
US7289513B1 (en) * 2001-06-15 2007-10-30 Cisco Technology, Inc. Switching fabric port mapping in large scale redundant switches
US7225271B1 (en) 2001-06-29 2007-05-29 Cisco Technology, Inc. System and method for recognizing application-specific flows and assigning them to queues
US6862293B2 (en) 2001-11-13 2005-03-01 Mcdata Corporation Method and apparatus for providing optimized high speed link utilization
US7075940B1 (en) 2002-05-06 2006-07-11 Cisco Technology, Inc. Method and apparatus for generating and using dynamic mappings between sets of entities such as between output queues and ports in a communications system
KR100564743B1 (en) * 2002-12-18 2006-03-27 한국전자통신연구원 Multi-functional switch fabric apparatus and control method of the same
US7555002B2 (en) * 2003-11-06 2009-06-30 International Business Machines Corporation Infiniband general services queue pair virtualization for multiple logical ports on a single physical port
US7778169B2 (en) * 2005-09-02 2010-08-17 Cisco Technology, Inc. Packetizing media for a time slotted communication system
JP4516999B2 (en) * 2008-03-28 2010-08-04 富士通株式会社 Data communication control device, data communication control method, and program therefor
US8472482B2 (en) * 2008-10-27 2013-06-25 Cisco Technology, Inc. Multiple infiniband ports within a higher data rate port using multiplexing
US20110129201A1 (en) * 2009-11-30 2011-06-02 International Business Machines Corporation Customized playback of broadcast media
US9183057B2 (en) * 2013-01-21 2015-11-10 Micron Technology, Inc. Systems and methods for accessing memory

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5453981A (en) * 1990-10-16 1995-09-26 Kabushiki Kaisha Toshiba Method of controlling communication network incorporating virtual channels exchange nodes and virtual paths exchange nodes
WO1992016066A1 (en) * 1991-02-28 1992-09-17 Stratacom, Inc. Method and apparatus for routing cell messages using delay
US5224099A (en) * 1991-05-17 1993-06-29 Stratacom, Inc. Circuitry and method for fair queuing and servicing cell traffic using hopcounts and traffic classes
JP3262142B2 (en) * 1992-01-16 2002-03-04 富士通株式会社 ATM cell forming apparatus, ATM cell forming method, node, and multiplexing method in node
JPH06335079A (en) * 1993-05-19 1994-12-02 Fujitsu Ltd Cell multiplexer in atm network
US5359592A (en) * 1993-06-25 1994-10-25 Stratacom, Inc. Bandwidth and congestion control for queue channels in a cell switching communication controller
US5390175A (en) * 1993-12-20 1995-02-14 At&T Corp Inter-cell switching unit for narrow band ATM networks
US5528592A (en) * 1994-01-27 1996-06-18 Dsc Communications Corporation Method and apparatus for route processing asynchronous transfer mode cells
JPH07297830A (en) * 1994-04-21 1995-11-10 Mitsubishi Electric Corp Multiplexer, non-multiplexer, switching device, and network adapter
EP0700186B1 (en) * 1994-08-31 2005-04-13 Hewlett-Packard Company, A Delaware Corporation Method and apparatus for regulating virtual-channel cell transmission
EP0717532A1 (en) * 1994-12-13 1996-06-19 International Business Machines Corporation Dynamic fair queuing to support best effort traffic in an ATM network
WO1997000278A1 (en) * 1995-06-16 1997-01-03 Hoechst Celanese Corporation Process for preparing polyhydroxystyrene with a novolak type structure

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5313454A (en) * 1992-04-01 1994-05-17 Stratacom, Inc. Congestion control for cell networks
EP0593843A2 (en) * 1992-10-22 1994-04-27 Roke Manor Research Limited Improvements in frame relay data transmission systems
WO1994014263A1 (en) * 1992-12-14 1994-06-23 Nokia Telecommunications Oy A method for congestion management in a frame relay network and a node in a frame relay network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999021326A2 (en) * 1997-10-21 1999-04-29 Nokia Networks Oy Resource optimization in a multiprocessor system for a packet network
WO1999021326A3 (en) * 1997-10-21 1999-07-29 Nokia Telecommunications Oy Resource optimization in a multiprocessor system for a packet network

Also Published As

Publication number Publication date
AU1530797A (en) 1997-08-01
US5765032A (en) 1998-06-09

Similar Documents

Publication Publication Date Title
US5765032A (en) Per channel frame queuing and servicing in the egress direction of a communications network
EP0763915B1 (en) Packet transfer device and method adaptive to a large number of input ports
EP0577269B1 (en) Arrangement for bounding jitter in a priority-based switching system
US6754206B1 (en) Distributed telecommunications switching system and method
EP1151556B1 (en) Method of inverse multiplexing for atm
US6526060B1 (en) Dynamic rate-based, weighted fair scheduler with explicit rate feedback option
AU695106B2 (en) Method and equipment for prioritizing traffic in an ATM network
US6535484B1 (en) Method and apparatus for per traffic flow buffer management
US6879590B2 (en) Methods, apparatuses and systems facilitating aggregation of physical links into logical link
US6430187B1 (en) Partitioning of shared resources among closed user groups in a network access device
US6587437B1 (en) ER information acceleration in ABR traffic
EP0944976A2 (en) Distributed telecommunications switching system and method
EP1067737B1 (en) A traffic shaper that accommodates maintenance cells without causing jitter or delay
US5956322A (en) Phantom flow control method and apparatus
US7508761B2 (en) Method, communication arrangement, and communication device for transmitting message cells via a packet-oriented communication network
US7433365B1 (en) System architecture for linking channel banks of a data communication system
US5978357A (en) Phantom flow control method and apparatus with improved stability
Chao Architecture design for regulating and scheduling user's traffic in ATM networks
EP0481447B1 (en) Method of controlling communication network incorporating virtual channels exchange nodes and virtual paths exchange nodes, and the said communication network
US6396807B1 (en) Method for the control of flows of digital information
EP1381192A1 (en) Improved phantom flow control method and apparatus
JP2001148698A (en) Atm switch having lowest band assuring function
Alaiwan IBM 8265 ATM backbone switch hardware architecture
WO1998043395A9 (en) Improved phantom flow control method and apparatus
Chandan Performance evaluation of an ATM multiplexer.

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97525372

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase