CA2313771C - Networking systems - Google Patents

Networking systems

Info

Publication number
CA2313771C
Authority
CA
Canada
Prior art keywords
packet
data
cell
switch
cells
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002313771A
Other languages
French (fr)
Other versions
CA2313771A1 (en)
Inventor
Zbigniew Opalka
Vijay Aggarwal
Thomas Kong
Christopher Firth
Carl Constantino
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nexabit Networks Inc
Original Assignee
Nexabit Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nexabit Networks Inc
Publication of CA2313771A1
Application granted
Publication of CA2313771C
Anticipated expiration
Current status: Expired - Fee Related


Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 12/00 Data switching networks
            • H04L 12/54 Store-and-forward switching systems
              • H04L 12/56 Packet switching systems
                • H04L 12/5601 Transfer mode dependent, e.g. ATM
                  • H04L 2012/5638 Services, e.g. multimedia, GOS, QOS
                  • H04L 2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
                    • H04L 2012/5679 Arbitration or scheduling
                    • H04L 2012/5681 Buffer or queue management
          • H04L 49/00 Packet switching elements
            • H04L 49/20 Support for services
              • H04L 49/201 Multicast operation; Broadcast operation
                • H04L 49/203 ATM switching fabrics with multicast or broadcast capabilities
            • H04L 49/60 Software-defined switches
              • H04L 49/606 Hybrid ATM switches, e.g. ATM&STM, ATM&Frame Relay or ATM&IP

Abstract

A novel networking architecture and technique for transmitting both cells and packets or frames across a common switch fabric, effected, at least in part, by utilizing a common set of algorithms for the forwarding engine (the ingress side) and a common set of algorithms for the QoS management (the egress part) that are provided for each I/O module to process packet/cell information without impacting the correct operation of ATM switching and without transforming packets into cells for transfer across the switch fabric.

Description

NETWORKING SYSTEMS
The present invention relates to networking systems and the forwarding and routing of information therein, being more particularly directed to the problems of a common method for managing both cell and packet or frame switching in the same device, having common hardware, common QoS (Quality of Service) algorithms, and common forwarding algorithms; and of building a switch that handles frame switching without interfering with cell switching.
Background of Invention

Two architectures driving networking solutions are cell switching and frame forwarding. Cell switching involves the transmission of data in fixed size units called cells. This is based on technology referred to as Asynchronous Transfer Mode (ATM). Frame forwarding transmits data in arbitrary size units referred to either as frames or packets. Frame forwarding is used by a variety of protocols, the most noteworthy being the Internet Protocol (IP) suite.
The present invention is concerned with forwarding cells and frames in a common system utilizing common forwarding algorithms.
Most traditional Internet-style host-to-host data communication is carried out in variable size packet format, interconnected by networks (defined as a collection of switches) using packet switches called routers. Recently, ATM has become widely available as a technology to move data between hosts, having been developed to provide a common method for sending traditional telephony data as well as data for computer-to-computer communication.
The previous method employed was to apply Time Division Multiplexing (TDM) to telephony data, with each circuit allocated a fixed amount of time on a channel. For example, circuit A may be allocated x amount of time (and thus data), followed by y and z and then x again, as later described in connection with hereinafter discussed Fig. 3. Thus each circuit is completely synchronous. This method, however, has intrinsic limitations with bandwidth utilization, since if a circuit has nothing to send, its allocated bandwidth is not used on the line. ATM addresses this bandwidth issue by allowing the circuits to be asynchronous. Though bandwidth is still divided among fixed length data items, any circuit can transmit at any point in time.
The ITU-T (International Telecommunications Union - Telecommunications, formerly the CCITT), an organization chartered by the United Nations to provide telecommunications standards, defined four classes of service: 1) Constant Bit Rate for Circuit Emulation, i.e. constant-rate voice and video; 2) Variable Bit Rate for certain voice and video applications; 3) Data for Connection-Oriented Traffic; and 4) Data for Connectionless-Oriented Traffic. These services, in turn, are supported by certain "classes" of ATM traffic. ATM moves data in fixed size units called cells. There are several ATM "types", referred to as ATM Adaptation Layers (AAL), defined in ITU-T Recommendation I.363. There are three defined types: AAL1, AAL3/4 and AAL5. AAL2 has never been defined in the ITU-T recommendations, and AAL3 and AAL4 were combined into one type. With respect to the ATM cell make-up, there is no way to distinguish cells that belong to one layer as opposed to cells that belong to another layer.
The adaptation layer is determined during circuit setup, i.e. when a host computer communicates to the network. At this time, the host computer informs the network of the layer it will use for a specific virtual circuit.
AAL1 has been defined to be used for real-time applications such as voice or video, while AAL5 has been defined for use by traditional datagram oriented services such as forwarding IP datagrams. A series of AAL5 cells are defined to make up a packet. An AAL5 packet consists of a stream of cells with the PTI bit set to 0, except for the last one (as later illustrated in Fig. 1). This is referred to as a segmented packet.
Thus, in current networking technology, data is transported in either variable size packets or fixed size cells depending on the types of switching devices installed in the network. Routers can be connected to each other directly or through ATM networks. If connected directly, then packets are arbitrary size; but if connected by ATM switches, then all packets exiting the router are chopped into fixed size cells of 53 bytes.
Network architectures based on the Internet Protocol (IP) technology are designed as a "best effort" service. This means that if bandwidth is available, the data gets through. If, on the other hand, bandwidth is not available, then the data is dropped. This works well with most computer data applications such as file transfers or remote terminal access. It does not work well with applications that cannot retransmit, or where retransmission is of no value, such as with video and voice. Getting a video frame out of order makes no sense, whereas file transfer applications can tolerate such anomalies. Since the packet size is arbitrary at any point in time, making specific delay variation commitments between any two frames is almost impossible, as there is no way of predicting what type and size of traffic is ahead of any other type of traffic. The buffers that must handle the data, moreover, must be able to receive the maximum data size, meaning that the buffering scheme must be optimized to handle larger data packets while at the same time not wasting too much memory on smaller packets.
ATM is designed to provide several service categories for different applications. These include Constant Bit Rate (CBR), Available Bit Rate (ABR), Unspecified Bit Rate (UBR) and two versions of Variable Bit Rate (VBR), real-time and non-real-time. These service categories are defined in terms of Traffic Parameters and QoS Parameters.
Traffic Parameters include Peak Cell Rate (constant bandwidth), Sustainable Cell Rate (SCR), Maximum Burst Size (MBS), Minimum Cell Rate (MCR) and Cell Delay Variance Tolerance. QoS parameters include Cell Delay Variation (CDV), Cell Loss Ratio (CLR) and maximum Cell Transfer Delay (maxCTD). As an example, Constant Bit Rate CBR (e.g. the service used for voice and video applications) is defined as a service category that allows the user at call setup time to specify the PCR (peak cell rate, essentially the bandwidth), the CDV, maxCTD and CLR. The network must then ensure that the values requested by the user and accepted by the network are met; if they are met, the network is said to be supporting CBR.
The various classes of service direct the network to provide better service for some traffic as opposed to other types of traffic. In ATM, with fixed length cells, switches manage bandwidth utilization on a line effectively by controlling the amount of data each traffic flow is allowed to put on a line at any moment in time. They generally have simpler buffer techniques arising from the fact that there is but one size of data unit. Another advantage is predictable network delays, especially queuing latencies at each switch. Since all data units are the same size, this helps to ensure that such traffic QoS parameters as CDV are easily measurable in the network. In non-ATM networks (i.e. frame-based networks), frames can range anywhere from, say, 40 bytes to thousands of bytes, rendering it difficult to ensure a consistent CDV (or PDV, Packet Delay Variation) since it is impossible to predict the delays in the network, lacking consistent transfer times of individual packets.
By carving data into smaller units, ATM can increase the ability of the network to decrease the latency of transmitting data from one host to another. This also allows for easier queue and buffer management at each hop through the network. A disadvantage, however, is that a header is added to each cell, making the effective bandwidth of the network less than if the network had a larger transmission unit. For example, if 1,000 bytes are to be transferred from one host to another, then a frame-based solution would append a header (approximately 4 bytes) and transmit the entire frame at once. In ATM, the 1,000 bytes are chopped into 48-byte units: i.e. 1,000/48 = 20.833 (or 21 cells). Each cell is then given a 5-byte header, increasing the bytes to be transmitted by 5 × 21 = 105 extra bytes. Thus ATM effectively decreases the bandwidth available to the actual data by approximately 100 bytes (or about 10%); the decreasing of end-to-end latency also decreases the available bandwidth for data transmission.
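As a concrete check of this arithmetic, the following sketch (hypothetical Python; the function name and structure are illustrative, not part of the patent) computes the cell count and header overhead for an arbitrary payload:

```python
import math

ATM_PAYLOAD = 48  # user bytes per cell
ATM_HEADER = 5    # header bytes per cell

def atm_overhead(data_bytes):
    """Cells needed and header 'tax' for sending data_bytes over ATM."""
    cells = math.ceil(data_bytes / ATM_PAYLOAD)
    header_bytes = cells * ATM_HEADER
    wire_bytes = cells * (ATM_PAYLOAD + ATM_HEADER)  # last cell is padded
    return cells, header_bytes, data_bytes / wire_bytes

# The 1,000-byte example from the text: 21 cells and 105 extra header bytes.
print(atm_overhead(1000))  # (21, 105, ~0.90 payload efficiency)
```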
For some applications, such as video and voice, latency is more important than bandwidth, while for other applications, such as file transfers, better bandwidth utilization increases performance more than decreased hop-by-hop latency.
Recently, the demands for more bandwidth and QoS have grown many fold due to new applications for multimedia services, including the before described video and voice. This is forcing the growth of ATM networks in the core of traditional packet-based networks. ATM, because of its fixed packet size, brings reduced processing time in networks and hence faster forwarding (i.e. lower latency). It also brings with it the ability to take advantage of traffic classification. Since the cells, as earlier pointed out, are of fixed size, traffic patterns can be controlled through QoS assignments: i.e. networks can carry traditional packets (in cell format) and constant bandwidth stream data (e.g. voice/video based data).
As will subsequently be demonstrated, most conventional networking systems inherently are designed for either forwarding frames or cells but not both. In accordance with the present invention, on the other hand, through use of novel search algorithms, QoS management and management of packet/cell architecture, both cells and frames can be transmitted in the same device and with significant advantage over the prior techniques, as later more fully explained.
Objects of Invention

An object of the present invention, accordingly, is to provide a novel system architecture and method, useful with any technique, for processing data cells simultaneously with data packets, and without impacting the performance aspects of cell forwarding characteristics.
A further object is to provide such a novel architecture in which the architected switch can serve as a packet switch in one application and as a cell switch in another application, using the same hardware and software.
Still a further object is to provide such a system wherein improved results are achieved in managing QoS characteristics for both cells and data packets simultaneously, based on a common cell/data packet algorithm.
An additional object is to provide a common parsing algorithm for forwarding cells and data packets using common and similar techniques.
Other and further objects will be explained hereinafter, and are more particularly delineated in the appended claims.
Summary

In summary, from one of its important viewpoints, the invention encompasses, in a data networking system wherein data is received as either ATM cells or arbitrarily-sized multi-protocol frames from a plurality of I/O modules any of which can be cell or frame interfaces, a method of processing both ATM cells and such frames in a native mode, i.e. not transforming frames to cells, using common algorithms for forwarding based on control information contained in the cell or frame and in such a manner as to preserve the QoS characteristics necessary for correct operation of cell forwarding; processing the packet/cell control information in a forwarding engine with common algorithms not dependent on context-sensitive information contained in the cell or packet, and passing results including QoS information to an egress queue manager; passing the cell/packet to the egress I/O transmit facility in such a manner as to provide a minimal cell delay variation (CDV) so as not to impact correct cell forwarding characteristics; and controlling the transmit facility so as to provide a common bandwidth management algorithm for both cells and packets, all without impacting the correct operation of either cells or packets.
Preferred and best mode designs and techniques are hereinafter presented in detail.
Drawings

The invention will now be described in connection with the accompanying drawings, in which the before-mentioned Fig. 1 is a diagram illustrating an ATM (Asynchronous Transfer Mode) cell format;
Fig. 2 is a similar diagram of an Internet Protocol (IP) frame format for 32 bit words;
Fig. 3 is a flowchart comparing Time-Division Multiplexing (TDM), ATM and Packet Data frame forwarding;
Fig. 4 is a block diagram of the switch of the invention with the cell and packet interfaces;
Fig. 5 is a block diagram of a traditional prior art bus-based switching architecture, and Fig. 6, its memory-based switch data flow diagram;
Fig. 7 is a block diagram of a traditional prior art cross-bar type switching architecture, and Fig. 8, its cross-bar data flow diagram;
Figs. 9-11 are interface diagrams illustrating, respectively, a cell switch with a native interface card, a packet interface on a cell switch, and an AAL5 packet interface on a cell switch, all with a cross-bar or memory switch;
Figs. 12 and 13 are similar diagrams of a packet switch with native packet interface cards and with AAL5 interface, respectively, for NxN memory connection buses;
Fig. 14 is a block diagram of the switch architecture of the present invention, using the word "NeoN" in connection with the packet and cell data switch as a trade name of NeoNET LLC, the assignee of the present application;
Figs. 15 and 16 are diagrams respectively of extended parsing function flows for forwarding decisions and an overview of such functions, and Fig. 17 is a diagram of the forwarding elements;
Fig. 18 is a first stage parse graph tree lookup block diagram, and Fig. 19 is a second stage forwarding table lookup (FLT) diagram;
Figs. 20 and 21 are respective diagrams of parse graph memory on power up and of a simple illustrative IP multicast packet;
Fig. 22 presents an initialized lookup table, with all entries pointing to unknown route/cell forwarding information, and Fig. 23 illustrates the lookup table after adding an illustrative IP address (209.6.34.224/32); and Fig. 24 is a queuing diagram for scheduling system operation.
Further Background to Preferred Embodiments of Invention

Before proceeding to illustrate the preferred architecture of the invention, it is believed necessary to review the limitations of prior and current network systems, which the present invention admirably overcomes.
Current networking solutions are designed either for switching data packets or cells. As before stated, all types of data networking switches must receive data on an ingress port, make a forwarding decision, transfer data from the ingress port to the egress port and transmit that data on the appropriate egress port physical interface.
Beyond the basic data forwarding aspects, there are different requirements for cell switching versus frame forwarding. As before stated, all current technology divides switching elements into three types: bridges, routers and switches, in particular, ATM switches. The distinction between bridges and routers is blurred in that both forward datagrams, and typically most routers also perform bridging functions as well; thus the discussion focuses on datagram switches (i.e. routers) and ATM switches.
It is in order first to investigate the basic architectural requirements for these two types of switching devices based on current solutions, and then to present the reasons why current solutions do not provide mechanisms to allow simultaneous transfer of cells and frames without severely impacting the correct operations of either ATM switching or frame forwarding. The novel solution based on the present invention will then be clear.
Routers typically have a wide variety of physical interfaces: LAN interfaces, such as Ethernet, Token Ring and FDDI, and wide-area interfaces, such as Frame Relay, X.25, T1 and ATM. A router has methods for receiving frames from these various interfaces, and each interface has different frame characteristics. For example, an Ethernet frame may be anywhere from 64 bytes to 1500 bytes, and an FDDI frame can be anywhere from 64 bytes to 4500 bytes (including header and trailer). The router's I/O module strips the header that is associated with the physical interface and presents the resulting frame, such as an IP datagram, to the forwarding engine. The forwarding engine looks at the IP destination address, Fig. 2, and makes an appropriate forwarding decision. The result of a forwarding decision is to send the datagram to the egress port as determined by the forwarding tables. The egress port then attaches the appropriate network-dependent header and transmits the frame out the physical interface. Since different interfaces may have different frame size requirements, a router may be required to "fragment" a frame, i.e. "chop" the datagram into useable size. For example, a 2000 byte FDDI frame must be fragmented into frames of 1500 bytes or less before being sent out on an Ethernet interface.
Current router technology offers "best effort" service. This means that there are no guarantees that datagrams will not be dropped in a router-based network. Furthermore, because routers transfer datagrams of varying sizes, there are no per datagram delay variation or latency guarantees. Typically a router is characterized by its ability to transfer datagrams of a certain size. Thus, the capacity of a router may be characterized by its ability to transfer 64 byte frames in one second, or the latency to transfer a 1500 byte frame from an ingress port to an egress port. This latency is characterized by last bit in, first bit out.
An ATM switch, by comparison, has only one type of interface, i.e. ATM. An ATM switch makes forwarding decisions by looking at a forwarding table based on VPI/VCI numbers, Fig. 1. The forwarding table is typically indexed by physical port number, i.e. an incoming cell with a VPI/VCI on ingress port N gets mapped to an egress port M with a new VPI/VCI pair. The table is managed by software elsewhere in the system. All cells, no matter what the ATM Adaptation Layer (AALx), have the same structure, so that if ATM switches can forward one AAL type, they can forward any type.
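A toy model of such a table (hypothetical Python; the tuple layout is invented for illustration and is not the patent's data structure) makes the port-indexed VPI/VCI mapping concrete:

```python
# (ingress_port, vpi, vci) -> (egress_port, new_vpi, new_vci)
forwarding_table = {
    (1, 0, 100): (4, 0, 37),
    (1, 0, 101): (2, 5, 42),
}

def forward_cell(ingress_port, vpi, vci):
    """Map an incoming cell to its egress port and rewrite its VPI/VCI pair."""
    entry = forwarding_table.get((ingress_port, vpi, vci))
    if entry is None:
        return None  # unknown circuit: hand the cell to control software
    return entry

print(forward_cell(1, 0, 100))  # (4, 0, 37): out port 4 with new VPI/VCI 0/37
```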
In order to switch ATM cells, several fundamental criteria must be met. The switch must be able to make forwarding decisions based on control information provided in the ATM header, specifically the VPI/VCI. The switch must provide appropriate QoS functions. The switch must provide for specific service types, in particular Constant Bit Rate (CBR) traffic and Variable Bit Rate (VBR). CBR (voice or video) traffic is characterized by low latency and, more importantly, low or guaranteed Cell Delay Variation (CDV) and guaranteed bandwidth.
The three main requirements for implementing CBR type connections over a traditional packet switch are low CDV, small Delay and guaranteed bandwidth. Voice, for example, consumes a fixed amount of bandwidth, based on the fundamental Nyquist sampling theorem. CDV is also part of a CBR contract, and plays a role in the overall Delay. CDV is the total worst case variance between the expected arrival time and the actual arrival time of a packet/cell. In so far as an application is concerned, it wants to see data arrive equidistant in time. If, however, the network cannot guarantee this equidistant requirement, some hardware has to buffer data - equal to or more than the worst case CDV amount introduced by the network. The higher the CDV, the higher the buffer requirement and hence the higher the Delay; and, as illustrated earlier, Delay is not good for CBR type circuits.
Packet-based networks traditionally queue data at the egress based on priority of traffic. Regardless of how data is queued, traffic with low delay variation requirements will get queued behind one or more packets. Each of them could be of maximum packet size, and this inherently contributes the most to delay variation on a packet-based network.
There are many methodologies used to manage bandwidth and priorities. From a Network Management point of view, a network manager usually likes to carve out the total egress bandwidth into priorities. There are several reasons for carving this bandwidth: e.g. it ensures the manager that control traffic (Higher Priority and Low Bandwidth) always has room on the wire even during very high link bandwidth utilization, or perhaps that CBR (Constant Bit Rate) traffic will be guaranteed on the wire, etc.
There are numerous methods to address bandwidth per traffic priority. Broad classes of these mechanisms are Round Robin Queuing, Weighted Fair Queuing and Priority Queuing. Each methodology will be explained for the sake of discussion and completeness of this document. In all cases of queuing, traffic is put into queues based on priorities, usually by a hardware engine that looks at a cell/packet header or control information associated with the cell/packet as the cell/packet arrives from the backplane. It is how data is extracted/de-queued from these queues that differentiates one queuing mechanism from another.
Simple Round Robin Queuing

This queuing mechanism empties all queues in a round robin fashion. This means that traffic is divided into queues and each queue gets the same fixed bandwidth. A major disadvantage of this queuing technique is that it completely loses the concept of priority; priority must then be managed by buffer allocation mechanisms. The only clear advantage is simplicity of implementation.
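A minimal de-queuing sketch (hypothetical Python, for illustration only) shows the behavior: each queue is visited in turn and yields one unit, so a high-priority queue gets no more service than any other:

```python
from collections import deque

def round_robin_dequeue(queues):
    """Visit each queue in turn, sending one unit per visit; every queue
    receives the same share, so priority is invisible to the scheduler."""
    while any(queues):
        for q in queues:
            if q:
                yield q.popleft()

queues = [deque(["hi-1", "hi-2"]), deque(["lo-1", "lo-2"])]
print(list(round_robin_dequeue(queues)))  # ['hi-1', 'lo-1', 'hi-2', 'lo-2']
```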
Weighted Round Robin

This queuing mechanism is an enhancement of Simple Round Robin Queuing, where a weight is placed on each queue by the network manager at initialization time. In this mechanism, each priority queue is serviced based on its weight. If one queue is allocated 10% of the bandwidth, it will be serviced 10% of the time. Another queue may have 50% of the allocated bandwidth, and will be serviced 50% of the time. The major drawback is that there is unused bandwidth on the wire when a queue has no traffic to fill its allocation; this results in wasted bandwidth. There is, moreover, no association of packet size in the de-queuing algorithm, which is crucial for packet-based switches. Giving equal weight to all packet sizes throws off the bandwidth allocation scheme.
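The following sketch (again hypothetical Python) adds per-queue weights and makes both drawbacks visible in the code: a slot allocated to an empty queue is simply skipped (wasted), and each packet consumes one slot regardless of its size:

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Service each queue in proportion to its configured weight."""
    while any(queues):
        for q, weight in zip(queues, weights):
            for _ in range(weight):
                if not q:
                    break  # the allocated slot goes unused: wasted bandwidth
                yield q.popleft()  # one slot per packet, whatever its size

queues = [deque(["A0", "A1", "A2", "A3"]), deque(["B0", "B1", "B2", "B3"])]
print(list(weighted_round_robin(queues, weights=[3, 1])))
# ['A0', 'A1', 'A2', 'B0', 'A3', 'B1', 'B2', 'B3']
```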
Priority Queuing

In this queuing mechanism, output queues are serviced purely based on priority. The Highest Priority Queue gets serviced first, and the Lowest Priority Queue gets serviced last. In this mechanism, Higher Priority Traffic always preempts the Lower Priority Queue. The drawback of this type of mechanism is that the Lower Priority Queue may end up with zero bandwidth. The advantage of this mechanism, besides being simple, is that bandwidth is not wasted: so long as there is data to send, it will be sent. There is, however, no association of packet size in the de-queuing algorithm, which is crucial for packet-based switches. Giving equal weight to all packet sizes throws off the bandwidth allocation scheme, as before noted.
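A sketch of strict priority de-queuing (hypothetical Python) follows; note that the scan restarts from the top after every unit, which is exactly what lets a busy high-priority queue starve the rest:

```python
from collections import deque

def priority_dequeue(queues):
    """Strict priority: queues[0] is highest; lower queues run only when
    every queue above them is empty, so they may see zero bandwidth."""
    while any(queues):
        for q in queues:          # scan from highest to lowest priority
            if q:
                yield q.popleft()
                break             # restart the scan after every unit sent

queues = [deque(["hi-1", "hi-2"]), deque(["lo-1"])]
print(list(priority_dequeue(queues)))  # ['hi-1', 'hi-2', 'lo-1']
```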
From the above examples, there is a need to strike a balance between Priority Queuing and Weighted Round Robin Queuing, taking packet size into account as well. This calls for a solution, provided by the present invention, where high priority traffic is serviced before lower priority traffic, but each queue is serviced at least within its bandwidth allocation. In addition, the output buffer should be filled with data from a queue even when the bandwidth of that queue is exhausted, using other bandwidth-eligible queue data. This technique enforces the per-traffic-queue bandwidth requirement and also does not waste bandwidth on the wire, and is embodied in the invention, as sketched below.
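The sketch below (hypothetical Python; an illustration of the balance just described, not the patent's actual de-queuing logic) scans queues in priority order, charges each queue's byte allocation for the packets it sends, and rolls any unused allocation down to lower-priority queues so the wire is never left idle:

```python
from collections import deque

def balanced_dequeue(queues, byte_credits):
    """One scheduling round: priority order, per-queue byte budgets, and
    work-conserving reuse of any budget a queue leaves unspent."""
    sent, leftover = [], 0
    for q, credit in zip(queues, byte_credits):
        budget = credit + leftover
        while q and len(q[0]) <= budget:  # packet size counts against budget
            pkt = q.popleft()
            budget -= len(pkt)
            sent.append(pkt)
        leftover = budget                 # unused bytes roll down, not away
    return sent

queues = [deque([b"x" * 300]), deque([b"y" * 400, b"y" * 200])]
print([len(p) for p in balanced_dequeue(queues, byte_credits=[500, 400])])
# [300, 400, 200]: queue 1 sends 600 bytes by absorbing queue 0's unused 200
```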
Architectural Issues in Switch Design

Current switching solutions employ two distinct approaches: 1) memory and 2) cross-bar. These solutions are illustrated in Figs. 5 and 6, showing a traditional bus-based and memory-based architecture, and in Fig. 7, showing a traditional cross-bar switching architecture.
In the traditional memory-based solutions represented by Fig. 5, data must first be placed inside of main memory from the I/O card. This data transfer takes several cycles as bits are moved from the I/O module to the main memory. Since several different I/O modules must transfer data to common memory, contention for this resource occurs. Main memory provides both a buffering mechanism and a transfer mechanism for data from one physical port to another physical port. The rate of transfer is then highly dependent on the speed of the egress port, the ability of the system to move data in and out of main memory, and the number of interfaces that must access main memory.
As more fully shown in Fig. 6, the CPU interfaces through a common bus, with memory access, with a plurality of data-receiving and transmitting I/O ports #1, #2, etc., with the various dotted and dashed lines showing the interfacing paths and the shared memory, as is well known. As pointed out previously, the various accesses of the shared memory result in substantial contention, increasing the latency and unpredictability, which is already substantial in this kind of architecture because the processing of the control information cannot begin until the entire packet/cell is received. Furthermore, as the accesses to the shared memory are increased, so is the contention; and as the contention is increased, the latency of the system increases.
In the traditional memory-based switch data flow diagram of Fig. 6, thus, where the access time per read or write to the memory is equal to M, and the number of bits for a memory access is W, the following functions occur: There is the write of data from receive port #1 to shared memory. The time to transfer a packet or cell is equal to ((B × 8)/W) × M, where B is equal to the number of bytes for the packet or cell, M is the access time per read or write to the memory, and W is the number of bits for a memory access. As the packet gets larger, so does the time to write it to memory.
This means that if a packet is destined to an ATM interface as in Fig. 5, followed by a cell, the cell is delayed by the amount of transfer time from main memory, and in the worst case this could be N packets (where N is the number of packet, non-ATM interfaces), including the contention among other reads and writes on the bus. If, for example, B = 4000 bytes and M is 80 nanoseconds (for a 64 bit-wide bus for DRAM access), then ((4000 × 8)/64) × 80 = 40,000 nanoseconds for a packet transfer queued before a cell can be sent, whereas OC-48 is 170 nanoseconds per 64 byte cell. This is only if there is no contention on the bus whatsoever.
In the worst case, if a switch has 16 ports and all the ports are contending simultaneously, then to transfer the same packet would require 640,000 nanoseconds just to get into the memory, and the same amount to get out - a total time of about 1.3 milliseconds. This occurs if between each write into memory, another port has to write to memory as well. So for n = 16 ports, n - 1, or 15 ports, have to gain access to memory. This means that 15 ports × 80 nanoseconds = 1200 nanoseconds are used by the system before the next transfer into memory of the original port can occur.
Since there are (4000 bytes × 8 bits/byte)/64 bits = 500 accesses, each access is separated by 1200 nanoseconds, and the full transfer takes 500 × 1200 = 600,000 nanoseconds. So the total is system time plus actual transfer time, which is 600,000 nanoseconds + 40,000 nanoseconds = 640,000 nanoseconds for the transfer into memory, and another 640,000 nanoseconds out of memory. This calculation, moreover, does not include any CPU contention issues or delay because the egress port is busy, which would make this calculation even larger.
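The worst-case arithmetic above can be packaged as a small model (hypothetical Python; the parameter names are ours, the formula is the text's ((B × 8)/W) × M with a contention gap between accesses):

```python
def shared_memory_transfer_ns(packet_bytes, bus_bits=64, access_ns=80,
                              contending_ports=0):
    """One-way time to move a packet into (or out of) shared memory."""
    accesses = packet_bytes * 8 / bus_bits  # ((B * 8) / W) bus accesses
    gap_ns = contending_ports * access_ns   # other ports write in between
    return accesses * (access_ns + gap_ns)

print(shared_memory_transfer_ns(4000))                       # 40000.0 ns, no contention
print(shared_memory_transfer_ns(4000, contending_ports=15))  # 640000.0 ns, 16-port worst case
```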
There are similar disadvantages in traditional cross-bar based solutions as shown in Fig. 7, before referenced, where there is no main memory, and buffering of data occurs both at the ingress port and egress port. In the memory-based design of Figs. 5 and 6, buffer memory is shared across all ports, making for very efficient utilization of memory on the switch. In the cross-bar approach of Fig. 7, each port must provide a large amount of memory, so that the overall memory of the system is large as there is no common sharing of buffers. The cross-bar switch is only a conduit for the transfer of data from one physical port on the system to another physical port on the system. If two ports are simultaneously to transfer data to one output port, one of the two input ports must buffer the data, thereby increasing the latency and unpredictability as the data from the first input port is transferred to the output port. The advantage of a cross-bar switch over a memory-based switch, however, is the high rate of data transfer from one point to another without the inherent limitation of main memory contention on the memory-based switch.
In the traditional cross-bar switching architecture system of Fig. 7, the CPU interfaces through a common bus, with memory access, to an interface, with the various dotted and dashed lines of Fig. 8 showing the interfacing paths and the local memories, as is well known. The CPU makes a forwarding decision based on information in the data. The data must then be transmitted across the cross-bar switch fabric to the egress port. But if other traffic is being forwarded to that egress interface, then the data must be buffered in the ingress interface for as long as the amount of time it takes to transfer the entire cell/packet to the egress memory. There is:

A. Write of data from receive port #1 to local memory. The time to transfer a packet or cell is equal to ((B × 8)/W) × M, where B is equal to the number of bytes for the packet or cell, M is the access time per read or write to the memory, and W is the number of bits for a memory access. As the packet gets larger, so does the time to write it to memory.

B. Write of data from receive port #1 to local memory of egress port #2. The time to transfer a packet or cell is equal to ((B × 8)/W) × M + T, where B is equal to the number of bytes for the packet or cell, M is the access time per read or write to the memory, W is the number of bits for a memory access, and T is the transfer time of the cross-bar switch. As the packet gets larger, so does the time to transfer it across the cross-bar switch and write it to local memory.
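A corresponding model for the cross-bar case (hypothetical Python; T, the cross-bar transfer time, is given an arbitrary value here) simply adds the switch transfer term to the same local-memory write cost:

```python
def crossbar_transfer_ns(packet_bytes, bus_bits=64, access_ns=80, switch_ns=500):
    """Local-memory write time ((B * 8)/W) * M plus the cross-bar term T."""
    return (packet_bytes * 8 / bus_bits) * access_ns + switch_ns

print(crossbar_transfer_ns(4000))  # 40500.0 ns for one hop into egress memory
```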
For a packet transfer followed by a cell transfer to an egress port, the calculation is the same as for the memory-based solution of Figs. 5 and 6. The packet must be transferred to local memory at the same speeds as for the memory-based solution. The advantage that there is no contention for central memory does not alleviate the problem that a packet transfer in front of a cell transfer can cause delays that prevent the proper functioning of very fast interface speeds.
The goal is to create a switching device running at high speeds (i.e. SONET defined rates) that provides the required QoS. The device should be scalable in terms of speed and ports, and the device should allow for equal-time transfer of cells and frames from an ingress port to an egress port.
While current designs have started to come up with very high speed routers, they have not, however, been able to provide all the ATM service requirements, thus still maintaining a polarized set of networking devices, i.e. routers and ATM switches. An optimal solution is one that achieves very high speeds, provides the required QoS support, and has interfaces that merge ATM and Packet-based technologies on the same interface, Fig. 3. This will allow the current investment in either networking technology to be preserved, yet satisfy bandwidth and QoS demands.
The issues in merging interfaces on a data switch port that accepts ATM cells and treats certain ATM cells as packets and others as ATM flows, accepts only packets on other interfaces and only cells on yet another set of interfaces, are shown in later-discussed Fig. 4. These issues are threefold: a) forwarding decision at the ingress interface for packets and cells, b) switching packets and cells through the switch fabric and, c) managing egress bandwidth on packets and cells. The present invention, based on the technique of the previously cited co-pending applications, explains how to create a general data switch that merges the two technologies (i.e. ATM switching and packet switching) and solves the three issues listed above.
Interface Issues in Switch Designs

The purpose of this section is to compare and contrast ATM and Packet-based switch designs and various interfaces on either type of switch design. Specifically, it identifies problems with both devices as they pertain to forwarding packets or cells; i.e. issues with ATM switches forwarding packets, and issues with packet switches forwarding cells, Fig. 3.
Typical Design of an ATM Switch

As previously explained, defined within the ATM standard there are multiple ATM Adaptation Layers (AAL1-AAL5), each one specifying a different type of service from a wide spectrum of services; namely, Constant Bit Rate (CBR) to Unspecified Bit Rate (UBR). A Constant Bit Rate (AAL1) contract guarantees minimal cell loss with low CDV, while an Unspecified Bit Rate contract specifies no traffic parameters and no Quality of Service guarantees. For the purposes of this invention it is convenient to limit the discussion to AAL1 (CBR) and AAL5 (Fragmented Packets).

Fig. 9 illustrates cell switching with native cell interface cards, showing different modules of a generic ATM Switch with native ATM interfaces. The cells arriving from the physical layer module (PHY) are processed by a module called the Policing Function module, which validates per-VCI established contracts (services) for incoming cells, e.g. Peak Cell Rate, Sustained Cell Rate, Maximum Burst Rate. Other parameters such as Cell Delay Variation (CDV) and Cell Loss Ratio (CLR) are guarantees provided by the box based on the actual design of the cards and the switch. The contracts are set by the network manager or via ATM signaling mechanisms. Cell data from the policing function then flows, in the example of Fig. 9, to a Cross Bar-type (Fig. 7) or Memory-based Switch (Fig. 5). Cells are then forwarded to the egress port, which has some requirements of shaping traffic to avoid congestion on the remote connection. To provide egress shaping, the design will have to buffer data on the egress side. Since ATM connections are set up on a point-to-point basis, the Egress shaper module also has to translate the ATM Header. This is because the next hop has no relationship to the ingress VCI/VPI.

Native Packet Interface on ATM Switch

As mentioned in the 'Background' section, if an ATM switch is to provide a method that facilitates the routing of packets, there have to be at least two points between two hosts where packet and cell networks meet. This means that current cell switching equipment has to carry interfaces that have native packet interfaces, unless the switch is sitting deep in the core of the ATM network. It is now in order, therefore, to examine the design of such a packet interface that connects to the ATM switch.
A typical packet interface on an ATM Switch is shown in Fig. 10, elaborating on the packet interface on the cell switch. The physical interface puts incoming packets into a buffer, from which they are fed to the "Header Lookup and Forwarding Engine". The packet-based forwarding engine decides the egress port and associates a VCI number for the cells of that packet. The packet then gets segmented into cells by the Segmentation Unit. From there on, the packet is treated just as in the native Cell Switching case, which involves going through a policing function and to the Switch Buffer before entering the switch. On the egress side, if the cells exit on a cell interface, then the processing is just as explained above (in the native cell interface on ATM switch). If the cells exit on a packet interface, then the cells have to be reassembled into packets. These packets are then put into various priority queues and then emptied as in the packet switch.
Two types of packet interfaces on the ATM Switch should be examined.
AAL5 Interface on ATM Switch

A Router connected to an ATM Switch could segment packets before sending them to the ATM Switch. In that case, packets would arrive at the ATM Switch in the AAL5 format before described. If the ATM Switch were to act as both a Router and an ATM Switch, it would have to reassemble the AAL5 packet and perform a routing decision on it. Once the ATM Switch/Router makes the forwarding decision on the AAL5 packet, it would then push it through the ATM Switch after segmenting it again.

An AAL5 packet interface on an ATM Switch is shown in Fig. 11. Incoming AAL5 cells are first policed on a per-VCI basis to ensure that the sender is honoring the contract. Once the policing function is done, an Assembler will assemble the cells of a VCI into packets. These packets are then forwarded to the forwarding engine, which makes the forwarding decision based on the assembled packet and some routing algorithm. The packet then travels through the ATM Switch as mentioned in the Packet Interface on ATM Switch section, above.
Difficulties in Processing Packets on Cell Switch

Keeping the goal of the present invention in mind, i.e. to achieve strict QoS parameters such as CDV, latency and packet loss, this section will list the difficulties of attempting to design for packets through a traditional cell switch. According to Fig. 11, once the incoming AAL5 segmented packets are assembled and a forwarding decision is made, they are resegmented in the "Segmentation Unit". Across the switch, the AAL5 cells are then reassembled into packets before they are shipped on the egress wire. This segmentation and reassembly adds to the delay and to unpredictable and unmeasurable PDV (Packet Delay Variation) and cell loss. As earlier mentioned, for packets to be provided QoS, the switch would need to support a contract that includes providing measurable PDV and delay. Delay is caused by the fact that the cells have to be reassembled. Each reassembly would have to, in the best case, buffer an entire packet worth of data before calling it complete and sending it to the QoS section. For an 8000 byte packet, for example, this could result in 64 microseconds of buffering delay on a 1 Gigabit switch.
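The buffering figure quoted above follows directly from the line rate; a one-line model (hypothetical Python) reproduces it:

```python
def reassembly_delay_us(packet_bytes, line_rate_bps):
    """Best-case delay to buffer a whole packet before declaring it complete."""
    return packet_bytes * 8 / line_rate_bps * 1e6

print(reassembly_delay_us(8000, 1e9))  # 64.0 microseconds on a 1 Gigabit switch
```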
The PDV for a packet through a cell switch is even more of a concern than the additional delay. The assembly process can be processing multiple packets at the same time from various ingress ports, and this causes an unpredictable amount of PDV, essentially based on switch contention and the number of retries of sending cells from ingress to egress.
Cell loss through the switch causes packets to get reassembled incorrectly and therefore adversely affects applications that are real-time content specific. Most file transfer protocols do recover from a dropped packet (due to dropped cells), but this causes more traffic on the switch due to retransmissions.
In summary, passing packets through an ATM switch does not provide packets with the same CDV and latency characteristics as cells. It simply provides a mechanism for passing a packet path through a cell switch.
Design of Packet Switch

A traditional Packet Switch is shown in Fig. 12 with native packet interface cards. Packets are forwarded to the Forwarding Engine via the physical interface. The Forwarding Engine makes a routing decision based on some algorithm and the header of the packet. Once the egress port is decided, the packet travels to the egress via the Packet Switch, which could be designed in one of many ways (e.g. N by N busses, large central memory pool, etc.). On egress, the packets end up on different traffic priority Queues. These Queues are responsible for prioritizing traffic and bandwidth management.
Cell Interface on Packet Switch

The traditional packet switch, shown in Fig. 13 with AAL5 interfaces, provides a mechanism to allow cells to pass through the box so long as the cells are of AAL5 type. There is no practical way of creating a virtual cell switch through a traditional packet switch, and part of the present invention deals with the requirements of such an architecture.
After AAL5 cells are policed for contract agreements, they are assembled into packets by an Assembly module. The packets thus created are then processed exactly like those on native packet interfaces. On the egress side, if packets have to go out of the Switch as AAL5 cells, they are first segmented and then header translated. Finally they are shaped and sent out.
Difficulties in Processing Cells on Packet Switch

There are problems that a cell flow faces as it traverses a traditional packet switch. It is extremely difficult for a traditional data switch, such as a router, to support the QoS guarantees required of ATM. To illustrate the point, reference is made to the diagram shown in before-described Fig. 13. One of the biggest challenges for a packet switch is to support AAL1 cells. The simple reason is that the traditional packet-based header Lookup and Forwarding engines do not simultaneously recognize cells and packets; therefore, only AAL5 cells, which can be converted into packets, are supported. This is a severe restriction in the capability of the switch.
Among the features of cells are the CDV and the delay characteristics. Pushing cells through a traditional packet switch adds more delay and an unpredictable CDV. The packet switch, as is inherent in its name, implies that packets of various sizes and numbers are queued up on the switch. Packetized cells would then have no chance of maintaining any type of reasonable QoS through the switch.
Preferred Embodiment(s) of the Invention

The present invention, exemplarily illustrated in Figs. 4 and 14, and unlike all these prior systems, optimizes the networking system for transmitting both cells and frames without internally converting one into the other. Furthermore, it maintains the strict QoS parameters expected in ATM switches, such as strict CDV, latency and cell loss. This is achieved by having a common ingress forwarding engine that is context independent, a switch fabric that transfers cells and frames with similar latency, and a common egress QoS engine - packets flowing through the architecture of the invention acquiring cell QoS characteristics while the cells still maintain their QoS characteristics.
The main components of the novel switch architecture of the invention, sometimes referred to herein by the acronym for the assignee herein, "NeoN", as shown in Fig. 14, comprise the ingress part, the switch fabric and the egress part. The ingress part is comprised of differing physical interfaces that may be cell or frame. A cell interface, furthermore, may be either pure cell forwarding or a mixture of cell and frame forwarding, where a frame is comprised of a collection of cells as defined in AAL5. Another part of the ingress component is the forwarding engine, which is common to both cells and frames. The switch fabric is common to both cells and frames. The egress QoS is also common to both cells and frames. The final part of egress processing is the physical layer processing, which is dependent on the type of interface. Thus, the NeoN switch architecture of the invention describes those parts that are common to both cell and frame processing.
The key parameters required for ATM switching, as earlier explained, and that are provided even in the case of simultaneous packet switching, are predictable CDV, low Latency, low Cell Loss and bandwidth management, i.e. providing a guaranteed Peak Cell Rate (PCR). The architecture of the invention, Figs. 4 and 14, however, contains two physical interfaces, AAL5/1 and packet interface, at the ingress and egress. The difference between the two types of interface is the modules listed as "Per VC Policing Function" and "Per VC Shaping". For cell interfaces (AAL1-5), the system has to honor contracts set by the network manager as per any ATM switch and also provide some sort of shaping on a per-VCI basis at the egress. Besides those physical interface modules, the system is identical for a packet or a cell interface. The system is designed with the concept that once the data traverses the physical interface module, there should be no distinction between a packet and a cell. Fig. 14 lists the core of the architecture, which has three major blocks, namely, "Header Lookup and Forwarding Engine", "QoS", and "Switch" fabric, that handle cells and packets indiscriminately. The discussion, as it relates to this invention, lies in the design of these three modules, which will now be discussed in detail.
Switch Fabric

Other inventions, both of common assignee herewith, optimize the networking system for minimal latency, and can indeed achieve zero latency even as data rates and port densities are increased. They achieve this equally well, moreover, for either 53 byte cells or 64 byte to 64K byte packets, through extracting the control information from the packet/cell as it is being written into memory, and providing the control information to a forwarding engine which will make switching, routing and/or filtering decisions as the data is being written into memory.
Native Cells through the Switch

The cells (AAL1/5) of Fig. 14 are first policed at 2 as per the contract the network manager has installed on a per-VCI basis. This module could also assemble AAL5 cells into packets on selected VCIs. Coming out of the policing function 2 are either cells or assembled packets. Beyond this juncture of the data flow, there is no distinction between a packet and a cell until the data reaches the egress port, where data has to comply with the interface requirements. The cells are queued up in the "NeoN Data Switch" 4, and the cell header is examined for destination interface and QoS requirements. This information is passed on to the egress interface QoS module 6 via a Control Data Switch, so-labeled at 8.
The QoS for a cell-type interface will simply ensure that cell rates beyond the Peak Cell Rate are clipped. The cells are then forwarded to the "Per VCI Shaping" module 10, where the cells are forwarded to the physical interface after they are shaped as per the requirements of the next hop switch. Since the QoS module 6 does not know from the control data whether a packet or a cell is involved, it simply requests the data from the NeoN Switch into the "Buffer" 12. The control data informs the "Per VCI Shaping" block 10 to do either header translation, if it were a cell going into another VCI tunnel, and/or segmentation, if the data was a packet going out on a cell interface, and/or perform shaping as per the remote end requirements.
Native Packets through the NeoN Switch

As packets enter the interface card, the packet header is examined by a Header Lookup and Forwarding Engine module 14 while the data is sent to the NeoN data switch 4. The Ingress Forwarding Engine makes a forwarding decision about the QoS and the destination interface card based on the incoming packet header. The Forwarding Engine 14 also gathers all information regarding the data packet, like NeoN Switch address, Packet QoS, and Egress Header Translation information, and sends it across to the egress interface card. This information is carried as a control packet to the egress port through the small non-blocking control data switch 8 to the Egress QoS module 6, which will queue data as per the control packet and send it to the module listed PHY at the egress. If the packet were to egress to a cell interface, the packet will be segmented, then header translated and shaped before it leaves the interface.
Advantages of the NeoN Switch Architecture of the Invention

As seen above, cells and packets flow through the box without any distinction except at the physical interfaces, such that if cell characteristics are maintained, then packets have the same characteristics as the cells. The packets may thus have measurable and low PDV (Packet Delay Variation) and low latency, with the architecture supporting packet switching with cell characteristics and yet interfacing to existing cell interfaces. While the traditional packet switch is unable to send non-AAL5 cells as before explained, AAL5 cells also suffer an unpredictable amount of PDV and delay - this being obviated by the NeoN Switch of the invention. Packets through a traditional ATM Switch also suffer the same long delays and unpredictable CDV - again, not the case in the NeoN Switch of the invention. The modules that make this type of hybrid switching of the invention possible include the Ingress Forwarding Engine, the Egress QoS, and the Switch Fabric.
Ingress Forwarding Engine Description

The purpose of the Ingress Forwarding Engine 14, Fig. 14, is to parse the input frame/cell and, based on predefined criteria and the contents of the frame/cell, make a forwarding decision. This means that the input cell/frame is compared against items stored in memory. If a match is determined, then the contents of the memory location provide commands for actions on the cell/frame in question. The termination of the search, which is an iterative process, results in a forwarding decision. A forwarding decision is a determination of how to process the aforementioned frame/cell. Such processing may include counting statistics, dropping the frame or cell, or sending the frame or cell to a set of specified egress ports. In Fig. 15, this process is shown at a gross level. An input stream of four characters is shown: b, c, d, e. The characters have appropriate matching entries in memory, with a character input producing a pointer to the next character. The final character produces a pointer to a forwarding entry. A different stream of characters than that illustrated would have a different collection of entries in memory, producing different results.
The proposed Ingress Forwarding Engine 14 is defined to be a Parsing Micro-Engine. The Parsing Micro-Engine is divided into two parts -- an active part and a passive part. The active part is referred to as the parser, being logic that follows instructions written into the passive memory component, which is composed of two major storage sections: 1) the Parse Graph Tree (PGT), Fig. 18, and 2) the Forwarding Lookup Table (FLT), Fig. 19, and a minor storage section for statistics collection. The Parse Graph Tree is a storage area that contains all the packet header parsing information, the result of which is an offset into the Forwarding Lookup Table. The FLT contains information about the destination port, multicast information, and egress header manipulation. The design is very flexible; e.g. in a datagram, it can traverse beyond the DA and SA fields in the packet header and search into the Protocol field and TCP Port number, etc. The proposed PGT is memory that is divided into 2^n blocks with each block having 2^m elements (where m < n). Each element can be one of three types - branch element, leaf element, or skip element - and within each block, there can be any combination of element types.
While particularly useful for the purpose of the present invention, the Parsing Micro-Engine is generic from the standpoint that it examines an arbitrary collection of bits and makes decisions based on that comparison. This can be applied, for example, to any text-searching functions, searching for certain arbitrary words. In such applications, as an illustration, words such as "bomb" or "detonate" in a letter or email may be searched and, if a match is detected, the search engine may then execute predetermined functions such as signaling an alarm. In fact the same memory can even be used to search for words in different languages.
In the context of the invention, Fig. 14 illustrates having two entry points.
One entry point is used to search for text in one language, while the second entry point is used to search for text in another language. Thus the same mechanisms and the same hardware are used for two types of searches.
There are two components to the datagram header search - a software component and a hardware component. The software component creates the elements in the Parse Graph for every new route it finds on an interface. The software has to create a unique graph, starting from a Branch Element and ending on a Leaf Element, later defined, for each additional new route. The hardware walks the graph from Branch to Leaf Element, clueless about the IP header.
In fact there can be many entry points in the memory region, as illustrated in Fig. 21. The initial memory can be divided into multiple regions, each region of memory being a separate series of instructions used for different applications. In the case of Fig. 22, one of the regions is used for IP forwarding while the other region is used for ATM forwarding. At system start, the memory is initialized to point to "unknown route", meaning that no forwarding information is available. When a new entry is inserted, the structure of the Lookup Table changes, as illustrated in Fig. 23. The illustrative IP address 209.6.34.224 is shown inserted. Since this is a byte-oriented lookup engine, the first block has a pointer inserted in the 209 location. The pointer points to a block that has a new pointer value in the 6 location, and so on until all of the 209.6.34.224 address is inserted. All other values still point to unknown route.
Inserting the address in the IP portion of memory has no impact on the ATM portion of memory.
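A minimal software sketch of this byte-oriented insertion is given below, assuming 256-entry blocks whose slots all start out at the "unknown route" default, and assuming the new address shares no prefix with existing routes. The types and function names are illustrative only, not the patent's implementation; note that the link into the originating node is written last, matching the installation order described later in the text.

    #include <stdlib.h>

    /* A block of the lookup memory: 256 slots, one per byte value. */
    struct block {
        struct block *next[256];  /* non-NULL: continuation to next block    */
        int           flt[256];   /* FLT offset; 0 = default "unknown route" */
    };

    /* Insert a 4-byte address (e.g. 209.6.34.224): build the new branch
     * bottom-up, then install the link node in the originating block last,
     * so the live graph is untouched until the final write. */
    void insert_route(struct block *origin, const unsigned char addr[4],
                      int flt_offset)
    {
        struct block *b1 = calloc(1, sizeof *b1);   /* error checks omitted */
        struct block *b2 = calloc(1, sizeof *b2);
        struct block *b3 = calloc(1, sizeof *b3);
        b3->flt[addr[3]]  = flt_offset;   /* leaf element ends the parse */
        b2->next[addr[2]] = b3;           /* continuation elements        */
        b1->next[addr[1]] = b2;
        origin->next[addr[0]] = b1;       /* link node installed last     */
    }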
As mentioned earlier, there are 2^n blocks, each with 2^m elements, in the parse graph tree. The structure of each element is as shown in Fig. 17, with each element having the following fields.
1. Instruction Field: In the current design there are three instructions, resulting in a two-bit instruction field. The instruction description is as follows.
• Branch Element (00). Insofar as the Micro Engine is concerned, the branch element essentially points the Forwarding Engine to the next block address. Also, within the branch element, the user may set various fields in the 'Incremental Forwarding Info Field,' Fig. 18, and update various mutually exclusive elements of the final Forwarding Information. For example, if the micro engine was parsing an IP header, and the branch element was placed at the end of the destination field, then the user could update the egress port field of the forwarding info. For ATM switching, the user would update the egress port information at the end of parsing the VPI field.
• Leaf Element (01). This element signals the end of parsing to the micro engine. The forwarding information accumulated during the search is then forwarded to the next logical block in the design.
• Skip Element (10). This element is provided to speed up the parsing. The time it takes to parse a packet header depends on the number of block addresses the micro engine has to look up. Not every sequential field in the incoming header is used to make a decision. If the skip element were not there, then the micro engine would have to keep hopping on non-significant fields of the incoming stream, adding to parsing time. The skip element allows the micro engine to skip fields in the incoming datagram and continue the search. The skip size is described below.
2. Skip Field: This field is especially used for the skip element. It allows the parser to skip incoming datagram header fields to allow for faster searching. In an IP header, for example, if the user wanted to forward packets based on DA but count statistics based on the ToS (Type of Service) field, it would parse the entire DA and then step to the ToS field. This makes for a faster Forwarding Engine. The size of this field should be calculated to allow for the largest skip that the user would ever need for its data switching box, which could be based on the protocol, etc.
3. Incremental Forwarding Info Field: During header parsing, forwarding information is accumulated. The forwarding information may have many mutually exclusive fields. The Forwarding Engine should be flexible enough to update each of these mutually exclusive fields independently, as it traverses the incoming datagram header. During parsing of an IP packet, for example, the egress port could be decided based on the destination field, filtering could be decided on the source address, with QoS decided based on the ToS field. Another example could be for ATM parsing: the egress port could be decided based on the VPI field, and the statistics count could be decided based on the VCI. As the parsing is done, therefore, various pieces of the forwarding information are collected, and when a leaf node is reached, the resulting forwarding information is passed on to the control path. The width of the incremental forwarding information (hereafter referred to as IFI) should be equal to the number of mutually exclusive incremental pieces in the forwarding information.
4. Next Block Address Field: This field is the next block address to look up after the current one. The leaf node instruction ignores this field.
5. Statistics Offset Field: In data switches, keeping flow statistics is as crucial as switching the data itself. Without keeping flow statistics it would be difficult, at best, to manage a switch. Having this statistics offset field allows one to update statistics at various points of the parse. On an IP Router, for example, one could collect packet counts on various groups of DA, various groups of SA, all ToS values, various protocols, etc. In another example, dealing with an ATM switch, this field could allow the user to count cells on individual VPI or VCI or combinations thereof. If the designer wants to maintain 2^s counters, then the size of this field should be s.
6. FLT Offset Field: This is an offset into the Forwarding Lookup Table, Fig. 18, later discussed in more detail. The Forwarding Lookup Table has all the mutually exclusive pieces of information that are required to build the final forwarding information packet.
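For illustration only, the six fields above could be packed into a parse-graph element as in the following C sketch. The bit widths and names chosen here are assumptions for a small configuration, not dimensions given by the patent.

    #include <stdint.h>

    /* Illustrative instruction encodings from the text above. */
    enum instr { BRANCH = 0x0, LEAF = 0x1, SKIP = 0x2 };

    /* One parse-graph element, packed into a 64-bit word.
     * Widths are illustrative: a 16-bit next block address allows 2^16
     * blocks; a 12-bit statistics offset allows 2^12 counters; etc. */
    struct pg_element {
        uint64_t instruction : 2;   /* branch / leaf / skip            */
        uint64_t skip        : 6;   /* skip size (skip element)        */
        uint64_t ifi         : 12;  /* incremental forwarding info     */
        uint64_t next_block  : 16;  /* next block address (branch)     */
        uint64_t stats_off   : 12;  /* statistics counter offset       */
        uint64_t flt_off     : 16;  /* offset into the FLT (leaf)      */
    };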
Reference Hardware Design Example
The following is an example of a hardware reference design for the parser useful with practice of the present invention. The reference design parser has storage that contains the packet/cell under scrutiny. This storage element for the cell/frame header information is to be two levels in depth.
This creates a two-stage pipeline for header information into the destination lookup stage of the Ingress Forwarding Engine. This is necessary because the Ingress Forwarding Engine will not be able to perform a lookup until the entire header information has been stored, due to the flexible starting point capability. The two-stage pipeline allows the Ingress Forwarding Engine to perform a lookup on the present header information and also store the next header information in parallel. When the present header lookup is completed, the next header lookup can proceed immediately.
The storage element stores a programmable amount of the incoming bit stream.
As an example, the configuration may be 64 bytes for IP datagrams and 5 bytes for cells. For an interface that handles both cells and frames, the maximum of these two values may be used.
A DMA Transfer Done signal from each DMA channel will indicate to a state machine that it can begin snooping and storing header information from the Ingress DMA bus. A packet/cell signal will indicate that the header to be stored is either a packet header or a cell header. When header information has been completely stored from a DMA channel, a request lookup will be asserted.
For header lookups, there will be a register-based table which will indicate to the Ingress Forwarding Engine the lookup starting point in the IP Header Table. The Ingress Forwarding Engine uses the source interface number to index this table; this information allows the Ingress Forwarding Engine to start the search at any field in the IP header or fields contained in the data portion of the packet. This capability, along with the skip functions later explained, will allow the Ingress Forwarding Engine to search any fields and string them together to form complex filtering cases per interface.
A suitable hardware lookup is shown in Fig. 19, using a Parse Tree Graph lookup algorithm to determine a forwarding decision. This algorithm parses either a nibble or a byte at a time of either an IP destination address or a VPI/VCI header. This capability is programmable by software. Each lookup can have a unique tree structure which is pointed to by one of sixteen originating nodes, one per interface. The originating nodes are stored in a programmable register-based table, allowing software to build these trees anywhere in the memory structure.
A nibble or byte lookup can result in either an end node result or a branch node result. The lookup control state machine controls the lookup process by examining the status flag bits associated with each lookup. These status flag bits are the end node, skip node, and skip size. The end node flag bit indicates whether the lookup was an end node or a branch node. If it was an end node, then the lookup result is the index value into the second stage Forwarding Table Lookup memory. If it was a branch node, then the nibble or byte lookups will continue until an end node is found. Each branch node lookup result is the pointer to the next branch node.
The skip node flag bit instructs the state machine to skip a number of nibbles, indicated by the skip size, during the lookup. The bank select flag bits indicate which bank will be used in the next lookup. The lookup state machine will use these bits to determine which clock enables and mux controls to activate.
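In software terms, the state machine just described behaves roughly like the loop below - a minimal sketch assuming a flat array of element records like those sketched earlier, with nibble-at-a-time parsing and 16 elements per block. It is illustrative only, not the register-level design.

    #include <stdint.h>
    #include <stddef.h>

    enum instr { BRANCH = 0, LEAF = 1, SKIP = 2 };

    struct element {
        uint8_t  instruction;  /* BRANCH, LEAF, or SKIP          */
        uint8_t  skip;         /* nibbles to skip (SKIP element) */
        uint16_t next_block;   /* next block address (BRANCH)    */
        uint16_t flt_off;      /* FLT offset (LEAF)              */
    };

    /* Extract the i-th nibble of the header, most significant first. */
    static unsigned nibble(const uint8_t *hdr, size_t i)
    {
        return (i & 1) ? (hdr[i / 2] & 0x0F) : (hdr[i / 2] >> 4);
    }

    /* Walk the parse graph from the originating node for an interface;
     * return the FLT offset of the leaf that ends the parse. */
    uint16_t parse(const struct element *graph, uint16_t origin_block,
                   const uint8_t *hdr)
    {
        uint16_t block = origin_block;
        size_t   i = 0;
        for (;;) {
            const struct element *e = &graph[block * 16 + nibble(hdr, i)];
            switch (e->instruction) {
            case LEAF:
                return e->flt_off;       /* end node: index into the FLT  */
            case SKIP:
                i += e->skip;            /* hop over non-significant data */
                /* fall through */
            case BRANCH:
            default:
                block = e->next_block;   /* pointer to the next block     */
                i++;
                break;
            }
        }
    }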
The result of the Parser lookup is the Forwarding Table lookup, which is a bank of memory yielding the forwarding result, including the forwarding information called the Forwarding ID. In order to optimize lookup time performance, this lookup stage can be pipelined, allowing the first stage to start another lookup in parallel. The Forwarding ID field will be used in several ways. First, the MSB (Most Significant Byte) of the field is used to indicate a unicast or multicast packet at the network interface level. For multicast packets, for example, the Egress Queue Manager will need to look at this bit for queuing of multicast packets to multiple interfaces. For unicast packets, for example, six bits of the Forwarding ID can indicate the destination interface number and the remaining 16 bits will provide a Layer 2 ID. The Layer 2 ID will be used by the Egress Forwarding logic to determine what Layer 2 header needs to be prepended to the packet data. For packets, these headers will be added to the packet as it is moved from the Egress DMA FIFO (first in, first out) to the Egress Buffer Memory. For cells, the Layer 2 ID will provide the transmit device with the appropriate Channel ID.
For unicast traffic, the Destination I/F number indicates the network destination interface and the Layer 2 ID indicates what type of Layer 2 header needs to be added onto the packet data.
For multicast, the Multicast ID indicates both the type of Layer 2 header addition and which network interfaces can transmit the multicast. The Egress Queue Manager will perform a Multicast ID table lookup to determine which interfaces the packet will get transmitted on and what kind of Layer 2 header is put back on the packet data.
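As a purely illustrative decode of such a Forwarding ID, the sketch below assumes a 24-bit field whose most significant byte carries the multicast flag, with six bits of destination interface and a 16-bit Layer 2 ID below it; the exact widths and positions are assumptions consistent with the text, not the patent's layout.

    #include <stdint.h>

    struct fwd_id {
        unsigned multicast;   /* 1 = multicast, 0 = unicast           */
        unsigned dest_if;     /* destination interface (unicast case) */
        unsigned layer2_id;   /* Layer 2 header ID / Channel ID       */
    };

    static struct fwd_id decode_fwd_id(uint32_t raw)
    {
        struct fwd_id f;
        f.multicast = (raw >> 23) & 0x1;    /* flag in the MSByte        */
        f.dest_if   = (raw >> 16) & 0x3F;   /* six interface bits        */
        f.layer2_id = raw & 0xFFFF;         /* remaining 16 bits         */
        return f;
    }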
An Example of Life of a Packet Under the Forwarding Engine
It is now in order to explain examples of a simple and a complex packet through the Forwarding Engine of the invention. On power up, Fig. 19, all 2^n blocks of the parse graph are filled with leaf elements pointing to an FLT offset that will eventually forward all packets to the Control Processor on the Network Card. This is a default route for all unrecognized packets. Software is responsible for setting up the default route. The way in which the various elements are updated into this parse graph memory will be explained for the illustrated cases of a simple multicast IP packet, a packet with mask 255.255.255.0, and a complex filter packet, and for aging the simple IP packet.
Simple Multicast Packet
On power up, all the blocks in the Parse Graph Memory may be assumed to be filled with leaf elements that point to the 1st offset of the FLT, which will route the packet to the Network Processor. Let it now be assumed for this example that the ingress packet has a destination IP Address of 224.5.6.7. In this case, the hardware will look up the 224th offset in the 1st block (the first lookup block is also called the originating node) and find a leaf. The hardware will end the search, look up the default FLT offset found in the 224th location, and forward the packet to the control processor.
When the control processor forwards subsequent packets of Destination IP address 224.5.6.7, it will generate the graph shown in Fig. 21.
The software first has to create the parse graph locally. The parse graph created is listed as 1-129-2-131.
The software always looks up the first block, a.k.a. the originating node. The offset in the first block is 224, which is the first byte of the destination IP header. It finds a default route -- an indication for software to allocate a new block for all subsequent bytes of the destination IP address. Once the software hits a default route, it knows that this is a link node. From the link node onwards, the software has to allocate new blocks for every byte it wants the hardware to search for a matched destination IP address. Through an appropriate software algorithm, it finds that 129, 2, 131 are the next three available blocks to use. The software will then install a continuation element with BA of 2 in the 5th offset of block 129, a continuation element with BA of 131 in the 6th offset of block 2, and a leaf element of FLT offset 2 at the 7th offset of block 131. Once such a branch with a leaf is created, the node link is then installed. The node has to be installed last in the new leafed branch. The node in this case is a continuation element with BA of 129 at offset 224 of the 1st block.
The hardware is now ready for any subsequent packets with destination IP address 224.5.6.7, even though it knows nothing about it. Now, when the hardware sees the 224 of the destination IP address, it goes to the 224th offset of the 1st block of the parse graph and finds a continuation element with BA of 129. The hardware will then go to the 5th offset (second byte of destination IP address) of the 129th block and find another continuation element with BA of 2. The hardware will then go to the 6th offset (third byte of destination IP address) of the 2nd block and find another continuation element with BA of 131. The hardware will then go to the 7th offset (fourth byte of destination IP address) of the 131st block and find a leaf element with FLT of 2. The hardware now knows that it has completed the IP match and will forward the forwarding ID in location 2 to the subsequent hardware block, calling the end of packet parsing.
It should be noted that the hardware is simply a slave of the parse graph put in memory by software. The length of the search purely depends on the software requirements of parsing length and memory size. The adverse effects of such parsing are the size of memory, and the search time, which is directly proportional to the length of the search.
In this case, the search will result in the hardware effecting 4 lookups in the Parse Graph and 1 lookup in the FLT.
Packet with Mask 255.255.255.0
Building upon the parse graph in Fig. 20, a packet with an illustrative mask 255.255.255.0 and address of 4.6.7.x is now installed. In this case, the software will go to the 4th offset in the originating node and find a continuation element with BA of 129. The software will then go to offset 6 in block 129 and find a default FLT offset. The software then knows that this is a link node. From now on, it has to allocate more blocks in the parse graph, such as block 2. At offset 7 of block 2, it will install a leaf element with FLT 3. Then it will install the link node, consisting of writing a continuation element with BA of 2 at offset 6 of block 129.
When the hardware receives any packet with the header 4.6.7.x, it will look into the 4th offset of the originating node and find a continuation element with BA of 129, then look at the 6th offset in block 129 and find a continuation element with BA of 2, and then look at the leaf element at offset 7 with FLT of 3. This FLT value of 3 is then forwarded to the Buffer Manager and eventually the Egress Bandwidth Manager.
Packet with Mask 255.255.0.0
This subsection will build upon the parse graph in Fig. 20 and install a packet with an illustrative mask 255.255.0.0 and address of 4.8.x.y. In this case, the software will go to the 4th offset in the originating node and find a continuation element with BA of 129. The software will then go to offset 8 in block 129 and find a default FLT offset. At this time the software knows that it has to install a new FLT offset (say 4) in the 8th offset of block 129.
When the hardware receives any packet with the header 4.8.x.y, it will look into the 4th offset of the originating node and find a continuation element with BA of 129, then look at the leaf element of that block with FLT of 4, and terminate the search. In this case the hardware will do only 2 lookups.
Complex Filtered Packet
Now assume that there was a requirement to filter a packet with header 4.5.6.8.9.x.y.z.11. There are no restrictions to the above concept of parsing the packet, though the time it takes to parse the packet will increase, since the hardware will have to read and compare 9 bytes. The hardware will simply keep parsing, however, until it sees a leaf element. The x.y.z bytes are blocks which contain continuation elements pointing to the next block, with all continuation elements of x pointing to block y, all continuation elements of y pointing to block z, and all continuation elements of z pointing to the block which has entry 11 as a leaf, the rest being default. This is where the fork element comes into play and may be called up to look up the forwarding at the end of search 4.5.6.8.

Removing Simple IP Multicast Packets
The removal of packets is similar to the reverse of adding an address to the parse graph, above explained. The pseudocode for removal in this embodiment is as follows:

Walk down to end of leaf remembering each block address and offset in block.
FOR (from Leaf node to originating node)
    IF (only element in block)
        set default FLT offset at the previous NODE offset address
        free the last block
        go to previous block
    ELSE
        set default FLT offset at last leaf
        exit
    ENDIF
END FOR
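As a rough software illustration of this reverse walk, the following C sketch removes a 4-byte route from the block structure used in the earlier insertion sketch, assuming the route is actually present. The helper names and the "only element in block" test are assumptions for illustration.

    #include <stdlib.h>

    struct block {
        struct block *next[256];
        int           flt[256];   /* 0 = default "unknown route" */
    };

    /* TRUE if the block holds no continuation elements and no leaves. */
    static int block_empty(const struct block *b)
    {
        for (int i = 0; i < 256; i++)
            if (b->next[i] != NULL || b->flt[i] != 0)
                return 0;
        return 1;
    }

    void remove_route(struct block *origin, const unsigned char addr[4])
    {
        /* Walk down to the leaf, remembering each block and offset. */
        struct block *path[4] = { origin };
        for (int i = 0; i < 3; i++)
            path[i + 1] = path[i]->next[addr[i]];

        path[3]->flt[addr[3]] = 0;   /* set default FLT offset at the leaf */

        /* From the leaf node back toward the originating node. */
        for (int i = 3; i > 0; i--) {
            if (!block_empty(path[i]))
                break;                     /* shared with another route: stop */
            path[i - 1]->next[addr[i - 1]] = NULL;  /* default previous node  */
            free(path[i]);                 /* free the last block             */
        }
    }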
Egress Bandwidth Manager
Every I/O Module connects a NeoN port to one or multiple physical ports. Each I/O Module supports multiple traffic priorities injected via a single physical NeoN Port. Each traffic priority is assigned some bandwidth by a network manager, as illustrated in Fig. 14, labeled as the "QoS (Packet & Cell)". The purpose of this section is to define how bandwidth is managed on multiple traffic profiles.
NeoN Queuing Concepts
The goal of NeoN Queuing, of the invention, thus, is to be able to associate a fixed configurable bandwidth with every priority queue and also to ensure maximum line utilization.
Traditionally, bandwidth enforcement is done in systems by allocating a fixed number of buffers per priority queue. This means that the enqueuing of data on the priority queues enforces bandwidth allocation. When the buffers of a certain queue are filled, then data for that queue is dropped (by not enqueuing data on that queue), this being a rough approximation of the ideal requirement.
There are many real life analogies for understanding the concept of QoS of the present invention, e.g. cars on a highway with multiple entry ramps, or moving objects on a multi-channeled conveyor in a manufacturing operation.
For our purposes, let us examine the simple case of "cars on a highway".
Assume that 8 ramps were to merge into one lane at some point on the highway. In real life experience, everyone knows that this could create traffic jams.
But if managed correctly (i.e. with the right QoS), then the single highway lane can be utilized for maximum efficiency. One way to manage this flow is to have no control, and have it be serviced on a first come, first served basis. This means that there is no distinction between an ambulance on one ramp and someone headed to the beach on another ramp. But in the methodology of the invention, we define certain preferential characteristics for certain entry ramps. There are different mechanisms that we can create. One is to send one car from each entry ramp in a round robin fashion, i.e. each ramp is equal. This means counting cars. But if one of these "cars" turns out to be a tractor trailer with 3 trailers, then in fact equal service is not being given to all entry ramps as measured by the amount of highway occupied. In fact, if one entry ramp is all tractor trailers, then the backup on the other ramps could be very significant. So it is important to measure the size of the vehicle and its importance. The purpose of the "traffic cop" (aka QoS manager) is to manage which vehicle has the right of way, based on size, importance and perhaps lane number. The "traffic cop" can, in fact, have different instructions every other day on the lane entry characteristics, based on what the "town hall manager" aka network manager has decided. To conclude the concept of QoS understanding: QoS is a mechanism which allows certain datagrams to pass through queues in a controlled manner, so as to achieve a deterministic and desired goal, which may vary from application to application, e.g.
bandwidth utilization, precision bandwidth allocation, low latency, low delay, priority etc.
The NeoN Queuing of the invention handles the problem directly. NeoN Queuing views the buffer allocation as an orthogonal parameter to the queuing and bandwidth issue. NeoN Queuing will literally segment the physical wire into small time units called "Time Slices" (as an example, approximately 200 nanoseconds on OC48 - the time of a 64 byte packet on an OC48). Packets from the back-plane are put into the Priority Queues. Each time a packet is extracted from a queue, a timestamp is also tracked along with that queue. The time stamp indicates distance in time from a 'Current Time Counter' in Time Slice units, i.e. when the next packet should be de-queued. The 'distance in time' is a function of a) packet size information coming in from the back plane, b) the size of the Time Slice itself, and c) the bandwidth allotted for the priority queue. Once a packet is de-queued, another counter is updated which represents the Next Time To De-queue (NTTD) - such being purely a function of the size of the packet just de-queued. NTTD is one for cell-based cards, because all packets are the same size and fit in one buffer. This really proves that the NeoN Egress Bandwidth Manager is monitoring the line to determine exactly what next to send. This mechanism, therefore, is a bandwidth manager rather than just a de-queuing engine.

The NeoN Queuing of the present invention, moreover, may be thought of as a TDM scheme for allocating bandwidth for different priorities, using priority queuing for ABR (Available Bit Rate) bandwidth. Added advantages of the NeoN Queuing are that, within the TDM mechanism, bandwidth is calculated not on 'packet count' but on 'packet byte size'. This granularity is a much better replica of the actual bandwidth utilization and allows true bandwidth calculations rather than simulations/approximations. The second 'NeoN Advantage' is that the Network Manager can dynamically change the bandwidth requirement, similarly to a sliding scale on a volume control. This is feasible since the bandwidth calculations for priority queues are not at all based on buffer allocations.
In NeoN Queuing, rather, the bandwidth allocation is based on time slicing the bandwidth on the physical wire.
This type of bandwidth management is absolutely necessary when running at very high line speeds, to keep line utilization high.
Mathematics Used during Queuing
First we will develop the variables and constants being used in the ultimate mathematics.

Symbol    Description
TS        Time Slice of bandwidth on the wire used for calculations (200 nSec for OC48).
NTTS      Next Time To Send. A number in units of TS representing the distance in time to de-queue from current time.
BitTime   Time period of a single bit on the wire of the current I/O module.
Dn        Delay factor in number of TS, representing bandwidth calculations set by the Network Manager, for priority queue n.
BWn       Bandwidth of queue n in percentage, as entered or calculated by the CPU software.
Pn        Number of Priority Queues.
TBW       Total bandwidth of the wire.
NTTD      Next Time To Dequeue.
CT        Current Time in TS units.
Consider first the user interface level, to see how bandwidth is allocated amongst the various priorities. The user is normally given the job of dividing 100% of the bandwidth amongst the various priorities. The user could also be presented with breaking up the entire bandwidth in bits per second (as an example, for OC48 it would be 2.4 Gbits). In either case, some CPU software calculates a number pair, priority-Dn, from %-priority or Mbits/sec-priority. Since the CPU is doing this calculation, it can be easily changed based on the I/O module.
The Bandwidth Manager does not need to know about the I/O module type, only caring about the priority-Dn pair. Thus if a user is connected to a NeoN port that cannot handle data at full line rate, the CPU can change this value to adjust for the customer requirements.
Dn = 100 / BWn    (1)

Data (in the form of packet addresses) from the priority queues is de-queued onto the output fifo. The de-queue engine's calculation of the Next Time To Send for that queue is governed by equation (2) below. There is one such number for each queue, which gets updated every time a packet is de-queued. The calculation for NTTS is:

NTTSn = (((Packet Byte Count * BitTime) / TS) * Dn) + NTTSn-1    (2)

where BitTime is a constant that may be fed by the CPU on power-up, depending on the I/O Module, and NTTSn-1 is the previous NTTS value for that queue.
Keeping NTTSn to two decimal places would mean that we would have the ability to enforce bandwidth to the 100th of a TS time, as time approaches infinity, but with instant granularity always being TS time.
Next Time To De-queue is the time that we start the de-queue process after the current de-queue.
This is primarily based on the current time and the number of buffers in the packet just de-queued:

NTTDn = ((Packet Byte Count * BitTime) / TS) + CT    (3)

where the division is rounded up to the next whole TS.
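The C sketch below implements equations (1)-(3) directly and checks them against the numbers used later in the CBR example. The fixed-point details of the real hardware are omitted: the two-decimal-place NTTS precision is modeled with a plain double, and the factor of 8 converting bytes to bits is an assumption about the units of BitTime.

    #include <math.h>
    #include <stdio.h>

    /* Equation (1): delay factor from a queue's percentage bandwidth. */
    double delay_factor(double bw_percent) { return 100.0 / bw_percent; }

    /* Equation (2): new NTTS for queue n after de-queuing a packet.
     * bit_time and ts are in the same units (seconds here). */
    double next_time_to_send(double ntts_prev, unsigned byte_count,
                             double bit_time, double ts, double dn)
    {
        return ((byte_count * 8 * bit_time) / ts) * dn + ntts_prev;
    }

    /* Equation (3): next time to de-queue, rounded up to a whole TS. */
    unsigned long next_time_to_dequeue(unsigned byte_count, double bit_time,
                                       double ts, unsigned long ct)
    {
        return (unsigned long)ceil((byte_count * 8 * bit_time) / ts) + ct;
    }

    int main(void)
    {
        double bit_time = 402e-12;       /* OC48: 402 psec per bit */
        double ts = 160e-9;              /* 160 nSec Time Slice    */
        double dn = delay_factor(10.0);  /* 10% queue -> Dn = 10   */
        printf("Dn = %.2f\n", dn);
        printf("NTTD increment, 45 bytes: %lu TS\n",
               next_time_to_dequeue(45, bit_time, ts, 0));   /* -> 1 */
        printf("NTTD increment, 90 bytes: %lu TS\n",
               next_time_to_dequeue(90, bit_time, ts, 0));   /* -> 2 */
        return 0;
    }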
Queuing Processing
It is now in order to describe the processing needed to queue addresses from the back-plane onto the Priority Queues, Fig. 24, which depicts the overall queuing and scheduling process.
Control Data, which includes datagram addresses, from the 'NeoN Control Data Switch' is sorted into priority queues by the Queue Engine, based on the QoS information embedded in the Control Data. The Scheduling Engine operation is rendered independent of the Queue Engine; it schedules datagram addresses through use of the novel algorithms of the invention listed further below.
The Queuing Engine has the following tasks:
• Enqueue Data: Read the input fifo and queue the packet onto the appropriate queue. There are 8 priority queues, 1 Local CPU queue, and one Drop Queue.
• Watermark Calculations: Calculate when to put back pressure on the ingress, based on watermarks set for a queue.
• Drop Packets: Start dropping packets when the Priority Queues are full.

For each Priority Queue Pn, there will be a 'head pointer - pHeadn' and a 'tail pointer - pTailn'. The Input Fifo feeds the priority queues Pn with buffer addresses from the back-plane.
For OC48 rates, and assuming 64 byte packets as average size packets, the following processing will be done in about 200 nSecs. The preferred pseudocode of the invention for the En-queue Processor is as follows:

Read input Fifo.
Find priority of the packet
IF (room on queue)
    move buffer from Input Fifo to *pTailn priority queue.
    Advance pTailn.
    update statistics
    increment buffer count on queue
    IF (packet count on queue >= watermark of that queue)
        set back-pressure for that priority
        update statistics
    ENDIF
ELSE
    move buffer from Input Fifo to drop queue.
    update statistics
ENDIF
To explain the pseudocode listed above verbally: as each control packet is read from the 'NeoN Control Data Switch', it is put onto one of N queues after it is verified that physical space is available on the queue. If there is no room on the queue, the data is put on a drop queue, which allows the hardware to return addresses back to the originating port via the 'NeoN Control Data Switch'. Also, a watermark is set, per queue, to indicate to the ingress to filter out non-preferred traffic. This algorithm is simple but needs to be executed in one TS.
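A software rendering of the En-queue Processor pseudocode might look as follows in C - a minimal sketch assuming simple ring-buffer queues; the structure names and sizes are illustrative.

    #include <stdint.h>

    #define QDEPTH 1024   /* illustrative ring size */

    struct pqueue {
        uint32_t buf[QDEPTH];     /* buffer addresses from the back-plane */
        unsigned head, tail;      /* pHeadn / pTailn                      */
        unsigned count, watermark;
        unsigned back_pressure;   /* signal to ingress to filter traffic  */
    };

    struct pqueue prio[8], drop_queue;

    static int q_full(const struct pqueue *q) { return q->count == QDEPTH; }

    static void q_push(struct pqueue *q, uint32_t addr)
    {
        q->buf[q->tail] = addr;
        q->tail = (q->tail + 1) % QDEPTH;   /* Advance pTailn */
        q->count++;
    }

    /* One enqueue operation, executed once per Time Slice. */
    void enqueue(uint32_t addr, unsigned priority)
    {
        struct pqueue *q = &prio[priority];
        if (!q_full(q)) {
            q_push(q, addr);
            if (q->count >= q->watermark)
                q->back_pressure = 1;   /* back-pressure for this priority */
        } else {
            q_push(&drop_queue, addr);  /* address returned via the switch */
        }
        /* statistics updates omitted for brevity */
    }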
Scheduling Processing
This section will list the algorithm used to de-queue addresses from the Priority Queues Pn onto the output fifo.
This calculation also has to be done during one TS.
Wait here till CT >= NTTD AND no back pressure from output fifo.   // sync up
X = FALSE                                                          // some variable
FOR (all Pn, High to Low)
    IF (pHeadn != pTailn)
        IF (CT >= NTTSn)
            De-Queue (pHeadn)
            Calculate new NTTSn    // see equation (2) above.
            Calculate NTTD         // see equation (3) above.
            update statistics
            X = TRUE
        ENDIF
    ENDIF
ENDFOR
IF (X == FALSE)
    FOR (all Pn, High to Low)
        IF (pHeadn != pTailn)
            De-Queue (pHeadn)
            Calculate new NTTSn    // see equation (2) above.
            Calculate NTTD         // see equation (3) above.
            update statistics
            X = TRUE
        ENDIF
    ENDFOR
ENDIF
IF (X == FALSE)
    update statistics
ENDIF
Update CT
The function De-Queue is conceptually a simple routine, listed below:

De-Queue (Qn)
    *pOutputQTail++ = *pHeadn++
The explanation of the pseudocode listed above is that there are two FOR loops in the algorithm -- the first FOR loop enforcing the committed bandwidth to the queue, and the second FOR loop serving for bandwidth utilization, sometimes called the aggregate bandwidth FOR Loop.
Examining first the Committed FOR loop: the queues are checked from the Highest Priority Queue to the Lowest Priority Queue for an available datagram to schedule. If a queue has an available datagram, the algorithm will check to see if it is the queue's time to dequeue, by comparing its NTTSn against CT. If the NTTSn has fallen behind CT, then the queue is de-queued; otherwise, the search goes on to the next queue until all queues are checked. If data from a queue is scheduled to go out, a new NTTSn is calculated for that queue, and an NTTD is always calculated when any queue is de-queued. When a Network Manager assigns weights to the queues, the sum of all weights should not exceed 100%. Since NTTSn is based on datagram size, the output data per queue is a very accurate implementation of the bandwidth set by the manager.
Let us now examine the Aggregate FOR Loop. This loop is only executed when no queue is de-queued during the Committed FOR loop; in other words, only one de-queue operation is performed in one TS. In this FOR Loop, all queues are checked from Highest Priority to Lowest Priority for available data to dequeue. The algorithm got into this FOR Loop for one of two reasons: either there was no data in any of the queues, or the NTTSn of all queues were still ahead of CT (it was not time to send). If the algorithm entered the aggregate FOR Loop because of empty queues, then the second time around the fate will be the same. However, if the aggregate FOR Loop was entered because the NTTSn was not reached for all queues, then the aggregate loop will find the highest priority such queue and de-queue it; also, in that case, it will update NTTSn and calculate NTTD.
The algorithm has built-in credits for queues that do not have data to de-queue in their time slot, and debits for data that is de-queued in the Aggregate Loop. These credits and debits can accumulate over large periods of time.
The debit and credit accumulation time is a direct function of the size of the NTTSn field in bits; for example, a 32 bit number would yield about 6 minutes in each direction, using 160 nSec as TS (2^31 * 160 nSec). Each individual queue could be configured to lose credits and/or debits, depending on the application in which this algorithm is used. For example, if the algorithm was to be used mainly for CBR type circuits, one would want to clear the debits fairly quickly, whereas for bursty traffic they could be cleared rather slowly. The mechanism for clearing debits/credits is very simple: asynchronously setting NTTSn to CT. If NTTSn is way ahead of CT - the queue has built up a lot of debit - then setting the NTTSn to CT would mean losing all the debit. Similarly, if NTTSn had fallen behind CT - the queue has built up a lot of credit - then setting NTTSn to CT would mean losing all the credit.
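Putting the committed and aggregate passes together, a software model of the Scheduling Engine could look like the sketch below, reusing the ring-buffer idea from the enqueue sketch and per-queue NTTS state. This is an illustrative model under stated assumptions (the packet-size and dequeue hooks are stubs), not the hardware implementation.

    #include <math.h>

    #define NQUEUES 8
    #define TS      160e-9     /* Time Slice, seconds (assumed) */
    #define BITTIME 402e-12    /* OC48 bit time, seconds        */

    struct squeue {
        unsigned head, tail;   /* ring indices: empty when head == tail */
        double   ntts;         /* Next Time To Send, in TS units        */
        double   dn;           /* delay factor, equation (1)            */
    };

    struct squeue q[NQUEUES];  /* index 0 = highest priority */
    double ct;                 /* Current Time, in TS units  */
    double nttd;               /* Next Time To De-queue      */

    /* Hypothetical hooks into the buffer system. */
    static unsigned packet_bytes(unsigned i) { (void)i; return 64; }
    static void dequeue_to_output(unsigned i) { q[i].head = (q[i].head + 1) % 1024; }

    /* One scheduling decision per Time Slice. */
    void schedule_one_ts(void)
    {
        int sent = -1;
        /* Committed pass: honor per-queue bandwidth (NTTS vs. CT). */
        for (int i = 0; i < NQUEUES && sent < 0; i++)
            if (q[i].head != q[i].tail && ct >= q[i].ntts)
                sent = i;
        /* Aggregate pass: use leftover wire time, highest priority first. */
        if (sent < 0)
            for (int i = 0; i < NQUEUES && sent < 0; i++)
                if (q[i].head != q[i].tail)
                    sent = i;
        if (sent >= 0) {
            double bits = packet_bytes(sent) * 8.0;
            dequeue_to_output(sent);
            q[sent].ntts += (bits * BITTIME / TS) * q[sent].dn;  /* eq. (2) */
            nttd = ct + ceil(bits * BITTIME / TS);               /* eq. (3) */
        }
        ct += 1.0;   /* Update CT */
    }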
Example of Implementing a CBR Queue Using the Algorithm
It is now appropriate to examine how to build a CBR queue out of the algorithm listed above, again referencing Fig. 24. Let it be assumed that the output wire is running at OC48 speeds (2.4 Gbits per second) and that Queue 1 (the highest Priority Queue) has been assigned to be the CBR Queue. The weight on the CBR queue is configured by summing all the input CBR flow bandwidth requirements. For the sake of simplicity, assume there are 100 flows going through the CBR Queue, each with a bandwidth requirement of 2.4 Mbits per second. The CBR Queue bandwidth will then be 2.4 Mbits/sec times 100, i.e. 240 Mbits per second (i.e. 10%). In other words, QRATEn = the sum of the ingress flow bandwidths.
Dn = 100/10 = 10 - based on Equation 1.
NTTSn would increase by 10 every time a 45 byte datagram is dequeued - based on Equation 2.
NTTSn would increase by 20 every time a 90 byte datagram is dequeued - based on Equation 2.
NTTD would result in 1 every time a 45 byte datagram is dequeued - based on Equation 3.
NTTD would result in 2 every time a 90 byte datagram is dequeued - based on Equation 3.
This shows that the queue will be de-queued very timely, based on datagram size and the percentage of bandwidth allocated to the queue. This algorithm is independent of wire speed, making it very scalable, and can achieve very high data speeds. This algorithm also takes datagram size into account during scheduling, regardless of the datagram being a cell or a packet. So long as the Network Manager sets the weight of the queue as the sum of all ingress CBR flow bandwidth, the algorithm provides the scheduling very accurately.
Example of Implementing a UBR Queue Using the Algorithm
It is very simple to implement a UBR queue using this algorithm, UBR denoting the queue which uses the left-over bandwidth on the wire. To implement this type of queue, one of the N queues is configured with 0% bandwidth, and this queue is then de-queued when there is literally no other queue to de-queue. The NTTS will be set so far in the future that, after the algorithm de-queues one datagram, the next one is never scheduled by the committed loop.
QoS Conclusion
As has been demonstrated, the algorithm of the invention is very precise in delivering bandwidth, and its granularity is based on the size of TS, being independent of cell/packet information. It also provides all of the ATM services required, implying that not only do packets also enjoy the ATM services, but cells and packets coexist on the same interface.
Real Life Network Manager Examples
This section will now consider different Network Management bandwidth management scenarios well handled by the invention. Insofar as the NeoN Network controller is concerned, there are n egress queues (as an example, it could be 8), each queue being assigned a bandwidth. The Egress Bandwidth Manager will deliver that percentage very precisely. The Network Manager can also decide not to assign 100% of the bandwidth to all queues, in which case the left-over bandwidth will simply be distributed on a high-to-low priority basis. Besides these two levels of control, the Network Manager can also examine statistics per priority, make strategic statistical decisions on its own, and change percentage allocations.
Exemplary Case 1: Fixed Bandwidth
In this scenario, 100% of the bandwidth is divided among all queues. If all queues are full at all times, then the queues will behave exactly like Fair Weighted Queuing. The reason for this is that the Egress Bandwidth Manager will deliver the percentage of the line bandwidth as requested by the Network Manager, and since the queues are never empty, the egress bandwidth manager does not have time to execute the second FOR loop (Aggregate Loop), above discussed.
If the queues are not full all the time, however, then during the time a queue is empty some other queue may be serviced ahead of its time without a charge against its bandwidth.
As an example, if the Network Manager decided to allocate 12.5% bandwidth to every one of the eight queues, then the Network Manager has to provide to the Egress Bandwidth Manager:
Dn — a priority list of all Dn, one for each priority.
BitTime — based on the I/O Module the Egress Bandwidth Manager is running on.
For a bandwidth of 12.5%, Dn would calculate to be 8.00 (100/12.5). For OC48, BitTime would calculate to be 402 psec.
Exemplary Case 2: Mixed Bandwidth
In this example, not all of the bandwidth is divided among all of the queues.
In fact, the sum of all fixed bandwidth on the queues is not 100% of the bandwidth available. The Egress Bandwidth Manager will deliver the constant bandwidth on the queues up to the allocated amount, and then aggregate traffic amongst the priorities on the remaining bandwidth. This guarantees some percentage of a class of traffic to make it through the port and also provides prioritized traffic. For queues that are not full during their allocated time, that bandwidth will be lost to the aggregate bandwidth.
Exemplary Case 3: No Mixed Bandwidth For All Queues
In this scenario, 0% is allocated as fixed bandwidth for all queues. The queues will then behave purely like prioritized queuing. The first FOR loop (listed in the Scheduling Processing section above) will be considered a NOP.
Exemplary Case 4: Dynamic Bandwidth
In this illustration, the Network Manager may initially come up with No Mixed Bandwidth for all queues and then, as it starts to build committed bandwidth circuits, it may create fixed bandwidth queues. The sum of the bandwidth requirements of the flows at an ingress port would dictate the size of the constant bandwidth on the egress port. The granularity of the allocatable egress bandwidth is largely dependent on the depth of the floating point representation. As an example, it may be assumed that two decimal places suffice. This then implies 1/100th of one percent, which would calculate to be 240 kbits/sec for an OC48 line and 62 kbits/sec for an OC12 line.

It should be observed that the above cases are examples only, and the application of the algorithm of the invention is not limited to these cases.
Further modifications will occur to those skilled in this art, and such are considered to fall within the spirit and scope of the invention as defined in the appended claims.

Claims (17)

What is claimed is:
1. A method of simultaneously processing information contained in data cells and data packets or frames received at an egress of a data networking system, that comprises, applying both the received data cells and data packets to a common data switch; controlling the switch for cell and packet data-forwarding, using common network hardware and algorithms for forwarding, based on control information contained in the cell or packet and without transforming packets into cells; and controlling with a common bandwidth management algorithm both cell and packet data forwarding without impacting the Quality of Service (QoS) characteristics necessary for the correct operation of either cells or packets, wherein the cell and packet control information is processed in a common forwarding engine with common algorithms independent of information contained in the cell or packet, and wherein the information from the forwarding engine is passed to a network egress queue manager and thence to a network egress transmit facility and in a manner to provide minimum cell delay variation, and further wherein quality of service information is included in the information passed from the forwarding engine and managed by the queue manager for both cells and packets simultaneously and based upon the common algorithm, with queuing managing processing as each control packet is read from the switch, to put the control packet into one of a plurality of queues after it is verified that available physical space exists on the queue, and wherein, should there be no such physical space, the data is put in a drop queue and returned by the switch to an ingress of the network.
2. A method as claimed in claim 1 wherein a watermark is set for each queue to instruct each ingress to filter out non-preferred data traffic.
3. A method of simultaneously processing information contained in data cells and data packets or frames received at an egress of a data networking system, that comprises, applying both the received data cells and data packets to a common data switch; controlling the switch for cell and packet data-forwarding, using common network hardware and algorithms for forwarding, based on control information contained in the cell or packet and without transforming packets into cells; and controlling with a common bandwidth management algorithm both cell and packet data forwarding without impacting the Quality of Service (QoS) characteristics necessary for the correct operation of either cells or packets, wherein the cell and packet control information is processed in a common forwarding engine with common algorithms independent of information contained in the cell or packet, and wherein the information from the forwarding engine is passed to a network egress queue manager and thence to a network egress transmit facility and in a manner to provide minimum cell delay variation, and further wherein quality of service information is included in the information passed from the forwarding engine and managed by the queue manager for both cells and packets simultaneously and based upon the common algorithm, with queuing managing processing as each control packet is read from the switch, to put the control packet into one of a plurality of queues after it is verified that available physical space exists on the queue, and wherein bandwidth is allocated for different priorities by packet byte size and based upon time slicing the bandwidth.
4. A method as claimed in claim 3 wherein the network manager dynamically varies the bandwidth requirement.
5. A method of processing packets of information from a forwarding switch and queue managing the forwarding of packets, that comprises, as each packet is read from the switch, putting the packet into one of a plurality of queues after it is verified that available physical space exists in the queue;
placing the packet in a drop queue should there be no such physical space and returning the packet through the switch; setting a watermark for each queue to enable filtering of non-preferred information traffic; and allocating bandwidth for different priorities by packet byte size and based upon time slicing of bandwidth.
6. A system architecture apparatus for simultaneously processing information contained in data cells and data packets received at ingress of a data networking system, said apparatus having, in combination, means for applying both the received data cells and data packets from the ingress to a common data switch within the system; means for controlling the switch for cell and packet, forwarding data by a common algorithm based on control information contained in the cell or packet and without transforming packets into cells; means for controlling with a common bandwidth management algorithm both cell and packet data forwarding without impacting the Quality of Service (QoS) characteristics necessary for the correct operation of either cells or packets, wherein the cell and packet control information is processed in a common forwarding engine with common algorithms, independent of information contained in the cell or packet, and wherein means is provided for passing the information from the forwarding engine to a network egress queue manager and thence to a network egress transmit facility, and in a manner to provide minimal cell/packet delay variation.
7. Apparatus as claimed in claim 6 wherein quality of service information is included in the information passed from the forwarding engine and managed by the queue manager for both cells and packets simultaneously based upon the common algorithm.
8. Apparatus as claimed in claim 7 wherein a common parsing algorithm is also provided for similarly forwarding both cells and data packets.
9. Apparatus as claimed in claim 7 wherein the queue manager employs processing that operates as each control packet is read from the switch, to put the control packet into one of a plurality of queues after it is verified that available physical space exists on the queue.
10. Apparatus as claimed in claim 9 wherein, should there be no such physical space, means is provided for the data to be put in a drop queue and returned by the switch to the ingress of the network.
11. Apparatus as claimed in claim 10 wherein a watermark is set for each queue to instruct such ingress to filter out non-preferred data traffic.
12. Apparatus as claimed in claim 9 wherein means is provided for allocating bandwidth for different priorities by packet byte size and based upon time slicing the bandwidth.
13. Apparatus as claimed in claim 12 wherein the network manager dynamically varies the bandwidth requirement.
14. A system architecture apparatus for simultaneously processing information contained in data cells and data packets received at an ingress of a data networking system, said apparatus having, in combination, means for applying both the received data cells and data packets from the ingress to a common data switch within the system; means for controlling the switch for cell and packet, forwarding data by a common algorithm based on control information contained in the cell or packet and without transforming packets into cells; means for controlling with a common bandwidth management algorithm both cell and packet data forwarding without impacting Quality of Service (QoS) characteristics necessary for the correct operation of either cells or packets, wherein the cell and packet control information is processed in a common forwarding engine with common algorithms, independent of information contained in the cell or packet, and wherein, between the ingress and the switch, a VCI
(Virtual Channel Identifier) function or assembly is interfaced.
15. Apparatus as claimed in claim 14 wherein said assembly connects not only to the switch but also to a header lookup and forwarding engine for both the cell and packet data; with the engine connecting through a control data switch and a quality of service managing module to a buffer, also inputting from an output of the switch.
16. Apparatus as claimed in claim 15 wherein the buffer feeds a cell data VC
shaping circuit that connects with the system egress.
17. Apparatus as claimed in claim 15 wherein the cell data is of ATM
(Asynchronous Transfer Mode) fixed size units and the packet data is of arbitrary size.
CA002313771A 1997-12-30 1998-12-07 Networking systems Expired - Fee Related CA2313771C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/001,040 1997-12-30
US09/001,040 US6259699B1 (en) 1997-12-30 1997-12-30 System architecture for and method of processing packets and/or cells in a common switch
PCT/IB1998/001940 WO1999035577A2 (en) 1997-12-30 1998-12-07 Data switch for simultaneously processing data cells and data packets

Publications (2)

Publication Number Publication Date
CA2313771A1 CA2313771A1 (en) 1999-07-15
CA2313771C true CA2313771C (en) 2006-07-25

Family

ID=21694095

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002313771A Expired - Fee Related CA2313771C (en) 1997-12-30 1998-12-07 Networking systems

Country Status (9)

Country Link
US (1) US6259699B1 (en)
EP (2) EP1050181B1 (en)
JP (2) JP2002501311A (en)
CN (1) CN1197305C (en)
AU (1) AU1254699A (en)
CA (1) CA2313771C (en)
DE (1) DE69838688D1 (en)
IL (1) IL136653A0 (en)
WO (1) WO1999035577A2 (en)

Families Citing this family (277)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7937312B1 (en) 1995-04-26 2011-05-03 Ebay Inc. Facilitating electronic commerce transactions through binding offers
US7702540B1 (en) * 1995-04-26 2010-04-20 Ebay Inc. Computer-implement method and system for conducting auctions on the internet
US7647243B2 (en) 1995-11-07 2010-01-12 Ebay Inc. Electronic marketplace system and method for creation of a two-tiered pricing scheme
US6602817B1 (en) * 1998-10-23 2003-08-05 University Of Southern California Combination approach to chiral reagents or catalysts having amine or amino alcohol ligands
US7462746B2 (en) * 1996-06-28 2008-12-09 University Of Southern California Amino polyols and amino sugars
US6549519B1 (en) * 1998-01-23 2003-04-15 Alcatel Internetworking (Pe), Inc. Network switching device with pipelined search engines
US6161144A (en) 1998-01-23 2000-12-12 Alcatel Internetworking (Pe), Inc. Network switching device with concurrent key lookups
US6470021B1 (en) * 1998-01-27 2002-10-22 Alcatel Internetworking (Pe), Inc. Computer network switch with parallel access shared memory architecture
US6643285B1 (en) * 1998-02-17 2003-11-04 Nortel Networks Limited Message based packet switch based on a common, generic bus medium for transport
GB2337429B (en) * 1998-05-15 2003-10-29 Northern Telecom Ltd Telecommunications system
US6650644B1 (en) * 1998-05-20 2003-11-18 Nortel Networks Limited Method and apparatus for quality of service translation
WO2000003516A1 (en) * 1998-07-08 2000-01-20 Broadcom Corporation Network switching architecture with multiple table synchronization, and forwarding of both ip and ipx packets
JP3077677B2 (en) * 1998-07-14 2000-08-14 日本電気株式会社 Quality assurance node equipment
JP3602972B2 (en) * 1998-07-28 2004-12-15 富士通株式会社 Communication performance measuring device and its measuring method
JP3002726B1 (en) * 1998-07-31 2000-01-24 東京大学長 Variable speed digital switching system
US6580721B1 (en) * 1998-08-11 2003-06-17 Nortel Networks Limited Routing and rate control in a universal transfer mode network
US7843898B1 (en) * 1998-08-31 2010-11-30 Verizon Services Corp. Selective bandwidth connectivity through network line cards
US6393026B1 (en) * 1998-09-17 2002-05-21 Nortel Networks Limited Data packet processing system and method for a router
US6920146B1 (en) * 1998-10-05 2005-07-19 Packet Engines Incorporated Switching device with multistage queuing scheme
US6678269B1 (en) 1998-10-05 2004-01-13 Alcatel Network switching device with disparate database formats
US6631119B1 (en) * 1998-10-16 2003-10-07 Paradyne Corporation System and method for measuring the efficiency of data delivery in a communication network
US6747986B1 (en) * 1998-11-25 2004-06-08 Telefonaktiebolaget Lm Ericsson (Publ) Packet pipe architecture for access networks
US6353858B1 (en) * 1998-11-30 2002-03-05 Lucent Technologies Inc. Multiple-local area networks interconnected by a switch
US6917617B2 (en) * 1998-12-16 2005-07-12 Cisco Technology, Inc. Use of precedence bits for quality of service
US6498782B1 (en) 1999-02-03 2002-12-24 International Business Machines Corporation Communications methods and gigabit ethernet communications adapter providing quality of service and receiver connection speed differentiation
US6765911B1 (en) * 1999-02-03 2004-07-20 International Business Machines Corporation Communications adapter for implementing communications in a network and providing multiple modes of communications
US6466580B1 (en) * 1999-02-23 2002-10-15 Advanced Micro Devices, Inc. Method and apparatus for processing high and low priority frame data transmitted in a data communication system
JP4182180B2 (en) * 1999-02-24 2008-11-19 株式会社日立製作所 Network relay device and network relay method
US6683885B1 (en) * 1999-02-24 2004-01-27 Hitachi, Ltd. Network relaying apparatus and network relaying method
US6628617B1 (en) * 1999-03-03 2003-09-30 Lucent Technologies Inc. Technique for internetworking traffic on connectionless and connection-oriented networks
US7366171B2 (en) * 1999-03-17 2008-04-29 Broadcom Corporation Network switch
AU3529500A (en) * 1999-03-17 2000-10-04 Broadcom Corporation Network switch
US7643481B2 (en) * 1999-03-17 2010-01-05 Broadcom Corporation Network switch having a programmable counter
US7392279B1 (en) * 1999-03-26 2008-06-24 Cisco Technology, Inc. Network traffic shaping using time-based queues
US6757791B1 (en) * 1999-03-30 2004-06-29 Cisco Technology, Inc. Method and apparatus for reordering packet data units in storage queues for reading and writing memory
US6594279B1 (en) * 1999-04-22 2003-07-15 Nortel Networks Limited Method and apparatus for transporting IP datagrams over synchronous optical networks at guaranteed quality of service
EP2360635A3 (en) * 1999-04-30 2013-04-10 PayPal, Inc. System and method for electronically exchanging value among distributed users
US6636509B1 (en) * 1999-05-11 2003-10-21 Cisco Technology, Inc. Hardware TOS remapping based on source autonomous system identifier
US6460088B1 (en) * 1999-05-21 2002-10-01 Advanced Micro Devices, Inc. Method and apparatus for port vector determination at egress
US6633565B1 (en) * 1999-06-29 2003-10-14 3Com Corporation Apparatus for and method of flow switching in a data communications network
US6785228B1 (en) * 1999-06-30 2004-08-31 Alcatel Canada Inc. Subscriber permissions and restrictions for switched connections in a communications network
US6990103B1 (en) * 1999-07-13 2006-01-24 Alcatel Canada Inc. Method and apparatus for providing distributed communication routing
US7068661B1 (en) 1999-07-13 2006-06-27 Alcatel Canada Inc. Method and apparatus for providing control information in a system using distributed communication routing
US6985431B1 (en) * 1999-08-27 2006-01-10 International Business Machines Corporation Network switch and components and method of operation
US6868082B1 (en) * 1999-08-30 2005-03-15 International Business Machines Corporation Network processor interface for building scalable switching systems
WO2001016702A1 (en) 1999-09-01 2001-03-08 Intel Corporation Register set used in multithreaded parallel processor architecture
US6882642B1 (en) 1999-10-14 2005-04-19 Nokia, Inc. Method and apparatus for input rate regulation associated with a packet processing pipeline
US6757249B1 (en) 1999-10-14 2004-06-29 Nokia Inc. Method and apparatus for output rate regulation and control associated with a packet pipeline
US6934250B1 (en) 1999-10-14 2005-08-23 Nokia, Inc. Method and apparatus for an output packet organizer
US6856967B1 (en) 1999-10-21 2005-02-15 Mercexchange, Llc Generating and navigating streaming dynamic pricing information
AU3528600A (en) * 1999-10-21 2001-04-30 Navlet.Com, Inc. Context-sensitive switching in a computer network environment
US7389251B1 (en) 1999-10-21 2008-06-17 Mercexchange, Llc Computer-implemented method for managing dynamic pricing information
US6963572B1 (en) * 1999-10-22 2005-11-08 Alcatel Canada Inc. Method and apparatus for segmentation and reassembly of data packets in a communication switch
US7046665B1 (en) * 1999-10-26 2006-05-16 Extreme Networks, Inc. Provisional IP-aware virtual paths over networks
US6687247B1 (en) * 1999-10-27 2004-02-03 Cisco Technology, Inc. Architecture for high speed class of service enabled linecard
US6741591B1 (en) * 1999-11-03 2004-05-25 Cisco Technology, Inc. Search engine interface system and method
US6976258B1 (en) 1999-11-30 2005-12-13 Ensim Corporation Providing quality of service guarantees to virtual hosts
US20010030969A1 (en) * 1999-11-30 2001-10-18 Donaghey Robert J. Systems and methods for implementing global virtual circuits in packet-switched networks
US20020009088A1 (en) * 1999-11-30 2002-01-24 Donaghey Robert J. Systems and methods for negotiating virtual circuit paths in packet switched networks
US6463067B1 (en) * 1999-12-13 2002-10-08 Ascend Communications, Inc. Submission and response architecture for route lookup and packet classification requests
US6738819B1 (en) * 1999-12-27 2004-05-18 Nortel Networks Limited Dynamic admission control for IP networks
US6744767B1 (en) * 1999-12-30 2004-06-01 At&T Corp. Method and apparatus for provisioning and monitoring internet protocol quality of service
US6744776B1 (en) * 2000-01-28 2004-06-01 Advanced Micro Devices, Inc. Servicing priority traffic in multiport network switch
GB2358764B (en) * 2000-01-28 2004-06-30 Vegastream Ltd Causality-based memory access ordering in a multiprocessing environment
US7343421B1 (en) * 2000-02-14 2008-03-11 Digital Asset Enterprises Llc Restricting communication of selected processes to a set of specific network addresses
US6731644B1 (en) 2000-02-14 2004-05-04 Cisco Technology, Inc. Flexible DMA engine for packet header modification
US6778546B1 (en) * 2000-02-14 2004-08-17 Cisco Technology, Inc. High-speed hardware implementation of MDRR algorithm over a large number of queues
US6813243B1 (en) 2000-02-14 2004-11-02 Cisco Technology, Inc. High-speed hardware implementation of red congestion control algorithm
US6721316B1 (en) 2000-02-14 2004-04-13 Cisco Technology, Inc. Flexible engine and data structure for packet header processing
US6977930B1 (en) 2000-02-14 2005-12-20 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
ATE344226T1 (en) 2000-02-16 2006-11-15 Brigham & Womens Hospital Aspirin-triggered lipid mediators
US6850516B2 (en) * 2000-03-02 2005-02-01 Agere Systems Inc. Virtual reassembly system and method of operation thereof
US7000034B2 (en) * 2000-03-02 2006-02-14 Agere Systems Inc. Function interface system and method of processing issued functions between co-processors
US6704794B1 (en) * 2000-03-03 2004-03-09 Nokia Intelligent Edge Routers Inc. Cell reassembly for packet based networks
US6948003B1 (en) 2000-03-15 2005-09-20 Ensim Corporation Enabling a service provider to provide intranet services
US6665868B1 (en) * 2000-03-21 2003-12-16 International Business Machines Corporation Optimizing host application presentation space recognition events through matching prioritization
US6671280B1 (en) * 2000-03-29 2003-12-30 International Business Machines Corporation Network processor for multiprotocol data flows
US6751224B1 (en) * 2000-03-30 2004-06-15 Azanda Network Devices, Inc. Integrated ATM/packet segmentation-and-reassembly engine for handling both packet and ATM input data and for outputting both ATM and packet data
US6810039B1 (en) * 2000-03-30 2004-10-26 Azanda Network Devices, Inc. Processor-based architecture for facilitating integrated data transfer between both ATM and packet traffic with a packet bus or packet link, including bidirectional ATM-to-packet functionality for ATM traffic
US6785237B1 (en) * 2000-03-31 2004-08-31 Networks Associates Technology, Inc. Method and system for passive quality of service monitoring of a network
US6657962B1 (en) * 2000-04-10 2003-12-02 International Business Machines Corporation Method and system for managing congestion in a network
US7106728B1 (en) 2000-05-01 2006-09-12 Industrial Technology Research Institute Switching by multistage interconnection of concentrators
US6985937B1 (en) 2000-05-11 2006-01-10 Ensim Corporation Dynamically modifying the resources of a virtual server
US6907421B1 (en) 2000-05-16 2005-06-14 Ensim Corporation Regulating file access rates according to file type
US7222147B1 (en) 2000-05-20 2007-05-22 Ciena Corporation Processing network management data in accordance with metadata files
US7240364B1 (en) 2000-05-20 2007-07-03 Ciena Corporation Network device identity authentication
US7266595B1 (en) 2000-05-20 2007-09-04 Ciena Corporation Accessing network device data through user profiles
US7225244B2 (en) 2000-05-20 2007-05-29 Ciena Corporation Common command interface
US7143153B1 (en) 2000-11-09 2006-11-28 Ciena Corporation Internal network device dynamic health monitoring
US6880086B2 (en) 2000-05-20 2005-04-12 Ciena Corporation Signatures for facilitating hot upgrades of modular software components
US7111053B1 (en) 2000-05-20 2006-09-19 Ciena Corporation Template-driven management of telecommunications network via utilization of operations support services clients
US7062642B1 (en) * 2000-05-20 2006-06-13 Ciena Corporation Policy based provisioning of network device resources
US6876652B1 (en) 2000-05-20 2005-04-05 Ciena Corporation Network device with a distributed switch fabric timing system
US6742134B1 (en) 2000-05-20 2004-05-25 Equipe Communications Corporation Maintaining a local backup for data plane processes
US7349960B1 (en) 2000-05-20 2008-03-25 Ciena Corporation Throttling distributed statistical data retrieval in a network device
US7280529B1 (en) 2000-05-20 2007-10-09 Ciena Corporation Providing network management access through user profiles
US6332198B1 (en) 2000-05-20 2001-12-18 Equipe Communications Corporation Network device for supporting multiple redundancy schemes
US7054272B1 (en) 2000-07-11 2006-05-30 Ciena Corporation Upper layer network device including a physical layer test port
US6708291B1 (en) 2000-05-20 2004-03-16 Equipe Communications Corporation Hierarchical fault descriptors in computer systems
US6654903B1 (en) 2000-05-20 2003-11-25 Equipe Communications Corporation Vertical fault isolation in a computer system
US6868092B1 (en) 2000-05-20 2005-03-15 Ciena Corporation Network device with embedded timing synchronization
US6639910B1 (en) 2000-05-20 2003-10-28 Equipe Communications Corporation Functional separation of internal and external controls in network devices
US7051097B1 (en) 2000-05-20 2006-05-23 Ciena Corporation Embedded database for computer system management
US7020696B1 (en) 2000-05-20 2006-03-28 Ciena Corp. Distributed user management information in telecommunications networks
US6601186B1 (en) 2000-05-20 2003-07-29 Equipe Communications Corporation Independent restoration of control plane and data plane functions
US6715097B1 (en) 2000-05-20 2004-03-30 Equipe Communications Corporation Hierarchical fault management in computer systems
US7225240B1 (en) 2000-05-20 2007-05-29 Ciena Corporation Decoupling processes from hardware with logical identifiers
US6671699B1 (en) 2000-05-20 2003-12-30 Equipe Communications Corporation Shared database usage in network devices
US7039046B1 (en) 2000-05-20 2006-05-02 Ciena Corporation Network device including central and distributed switch fabric subsystems
US6760339B1 (en) 2000-05-20 2004-07-06 Equipe Communications Corporation Multi-layer network device in one telecommunications rack
US6934749B1 (en) 2000-05-20 2005-08-23 Ciena Corporation Tracking distributed data retrieval in a network device
US6658579B1 (en) 2000-05-20 2003-12-02 Equipe Communications Corporation Network device with local timing systems for automatic selection between redundant, synchronous central timing systems
US6591285B1 (en) 2000-06-16 2003-07-08 Shuo-Yen Robert Li Running-sum adder networks determined by recursive construction of multi-stage networks
DE60115154T2 (en) * 2000-06-19 2006-08-10 Broadcom Corp., Irvine Method and device for data frame forwarding in an exchange
US7286565B1 (en) * 2000-06-28 2007-10-23 Alcatel-Lucent Canada Inc. Method and apparatus for packet reassembly in a communication switch
US7143024B1 (en) 2000-07-07 2006-11-28 Ensim Corporation Associating identifiers with virtual processes
US7111163B1 (en) 2000-07-10 2006-09-19 Alterwan, Inc. Wide area network using internet with quality of service
SE519269C2 (en) * 2000-07-25 2003-02-11 Telia Ab Method and arrangement for packet management in a router
US7184440B1 (en) * 2000-07-26 2007-02-27 Alcatel Canada Inc. Multi-protocol switch and method therefore
US20020016708A1 (en) * 2000-08-02 2002-02-07 Henry Houh Method and apparatus for utilizing a network processor as part of a test system
US6909691B1 (en) * 2000-08-07 2005-06-21 Ensim Corporation Fairly partitioning resources while limiting the maximum fair share
US6724759B1 (en) * 2000-08-11 2004-04-20 Paion Company, Limited System, method and article of manufacture for transferring a packet from a port controller to a switch fabric in a switch fabric chipset system
GB0019863D0 (en) * 2000-08-11 2000-09-27 Univ Court Of The University O Assay for inhibitors of cell cycle progression
US7681018B2 (en) 2000-08-31 2010-03-16 Intel Corporation Method and apparatus for providing large register address space while maximizing cycletime performance for a multi-threaded register file set
JP3646638B2 (en) * 2000-09-06 2005-05-11 NEC Corporation Packet switching apparatus and switch control method used therefor
US8250357B2 (en) 2000-09-13 2012-08-21 Fortinet, Inc. Tunnel interface for securing traffic over a network
US7227862B2 (en) * 2000-09-20 2007-06-05 Broadcom Corporation Network switch having port blocking capability
JP4328459B2 (en) * 2000-10-27 2009-09-09 NEC Engineering, Ltd. Network service quality measurement system and method
US7133399B1 (en) 2000-10-31 2006-11-07 Chiaro Networks Ltd System and method for router central arbitration
US6894970B1 (en) * 2000-10-31 2005-05-17 Chiaro Networks, Ltd. Router switch fabric protection using forward error correction
CA2326851A1 (en) * 2000-11-24 2002-05-24 Redback Networks Systems Canada Inc. Policy change characterization method and apparatus
KR100358153B1 (en) * 2000-12-18 2002-10-25 Electronics and Telecommunications Research Institute Apparatus and method for distributed processing of QoS-supported IP packet forwarding
US6691202B2 (en) * 2000-12-22 2004-02-10 Lucent Technologies Inc. Ethernet cross point switch with reduced connections by using column control buses
US7219354B1 (en) 2000-12-22 2007-05-15 Ensim Corporation Virtualizing super-user privileges for multiple virtual processes
US7130302B2 (en) * 2000-12-28 2006-10-31 International Business Machines Corporation Self-route expandable multi-memory packet switch
US6990121B1 (en) * 2000-12-30 2006-01-24 Redback Networks, Inc. Method and apparatus for switching data of different protocols
US7035212B1 (en) * 2001-01-25 2006-04-25 Optim Networks Method and apparatus for end to end forwarding architecture
US7342942B1 (en) * 2001-02-07 2008-03-11 Cortina Systems, Inc. Multi-service segmentation and reassembly device that maintains only one reassembly context per active output port
US6901073B2 (en) * 2001-02-14 2005-05-31 Northrop Grumman Corporation Encapsulation method and apparatus for communicating fixed-length data packets through an intermediate network
US6965945B2 (en) 2001-03-07 2005-11-15 Broadcom Corporation System and method for slot based ARL table learning and concurrent table search using range address insertion blocking
US7626999B2 (en) * 2001-03-16 2009-12-01 Tellabs San Jose, Inc. Apparatus and methods for circuit emulation of a point-to-point protocol operating over a multi-packet label switching network
US20020181476A1 (en) * 2001-03-17 2002-12-05 Badamo Michael J. Network infrastructure device for data traffic to and from mobile units
US6940854B1 (en) * 2001-03-23 2005-09-06 Advanced Micro Devices, Inc. Systems and methods for determining priority based on previous priority determinations
US7263597B2 (en) 2001-04-19 2007-08-28 Ciena Corporation Network device including dedicated resources control plane
US6526046B1 (en) * 2001-04-24 2003-02-25 General Bandwidth Inc. System and method for communicating telecommunication information using asynchronous transfer mode
US7042848B2 (en) * 2001-05-04 2006-05-09 Slt Logic Llc System and method for hierarchical policing of flows and subflows of a data stream
US6904057B2 (en) * 2001-05-04 2005-06-07 Slt Logic Llc Method and apparatus for providing multi-protocol, multi-stage, real-time frame classification
US6944168B2 (en) * 2001-05-04 2005-09-13 Slt Logic Llc System and method for providing transformation of multi-protocol packets in a data stream
US6901052B2 (en) 2001-05-04 2005-05-31 Slt Logic Llc System and method for policing multiple data flows and multi-protocol data flows
US7327760B1 (en) * 2001-05-08 2008-02-05 Cortina Systems, Inc. Multi-service segmentation and reassembly device operable with either a cell-based or a packet-based switch fabric
US7099325B1 (en) 2001-05-10 2006-08-29 Advanced Micro Devices, Inc. Alternately accessed parallel lookup tables for locating information in a packet switched network
US6990102B1 (en) * 2001-05-10 2006-01-24 Advanced Micro Devices, Inc. Parallel lookup tables for locating information in a packet switched network
US7406518B2 (en) * 2001-05-18 2008-07-29 Lucent Technologies Inc. Method and system for connecting virtual circuits across an ethernet switch
US7082104B2 (en) * 2001-05-18 2006-07-25 Intel Corporation Network device switch
US7006518B2 (en) * 2001-05-25 2006-02-28 Integrated Device Technology, Inc. Method and apparatus for scheduling static and dynamic traffic through a switch fabric
US7130276B2 (en) * 2001-05-31 2006-10-31 Turin Networks Hybrid time division multiplexing and data transport
US7230948B2 (en) * 2001-06-01 2007-06-12 Telefonaktiebolaget Lm Ericsson (Publ) Bandwidth efficient Quality of Service separation of AAL2 traffic
US7609695B2 (en) * 2001-06-15 2009-10-27 Industrial Technology Research Institute Optimizing switching element for minimal latency
US7103059B2 (en) 2001-06-15 2006-09-05 Industrial Technology Research Institute Scalable 2-stage interconnections
US7181547B1 (en) 2001-06-28 2007-02-20 Fortinet, Inc. Identifying nodes in a ring network
US8001248B1 (en) 2001-07-13 2011-08-16 Cisco Technology, Inc. System and method for providing quality of service to DSL internet connections
US20030033463A1 (en) * 2001-08-10 2003-02-13 Garnett Paul J. Computer system storage
ATE352150T1 (en) 2001-08-30 2007-02-15 Tellabs Operations Inc System and method for transmitting data using a shared switching fabric
US7349403B2 (en) * 2001-09-19 2008-03-25 Bay Microsystems, Inc. Differentiated services for a network processor
US7310348B2 (en) * 2001-09-19 2007-12-18 Bay Microsystems, Inc. Network processor architecture
US7042888B2 (en) * 2001-09-24 2006-05-09 Ericsson Inc. System and method for processing packets
US7039061B2 (en) * 2001-09-25 2006-05-02 Intel Corporation Methods and apparatus for retaining packet order in systems utilizing multiple transmit queues
US7248593B2 (en) * 2001-09-25 2007-07-24 Intel Corporation Method and apparatus for minimizing spinlocks and retaining packet order in systems utilizing multiple transmit queues
US6801764B2 (en) * 2001-10-02 2004-10-05 The Boeing Company Broadband medical emergency response system
US20030074473A1 (en) * 2001-10-12 2003-04-17 Duc Pham Scalable network gateway processor architecture
US7283538B2 (en) * 2001-10-12 2007-10-16 Vormetric, Inc. Load balanced scalable network gateway processor architecture
US7310345B2 (en) 2001-11-01 2007-12-18 International Business Machines Corporation Empty indicators for weighted fair queues
US7280474B2 (en) * 2001-11-01 2007-10-09 International Business Machines Corporation Weighted fair queue having adjustable scaling factor
US7317683B2 (en) 2001-11-01 2008-01-08 International Business Machines Corporation Weighted fair queue serving plural output ports
US7187684B2 (en) * 2001-11-01 2007-03-06 International Business Machines Corporation Weighted fair queue having extended effective range
US7103051B2 (en) * 2001-11-01 2006-09-05 International Business Machines Corporation QoS scheduler and method for implementing quality of service with aging time stamps
US20030105830A1 (en) * 2001-12-03 2003-06-05 Duc Pham Scalable network media access controller and methods
US7023856B1 (en) 2001-12-11 2006-04-04 Riverstone Networks, Inc. Method and system for providing differentiated service on a per virtual circuit basis within a packet-based switch/router
US7030159B2 (en) * 2001-12-18 2006-04-18 The Brigham And Women's Hospital, Inc. Approach to anti-microbial host defense with molecular shields with EPA and DHA analogs
KR100428773B1 (en) 2002-01-21 2004-04-28 Samsung Electronics Co., Ltd. Router system and method utilizing forwarding-engine duplication
US6927294B1 (en) 2002-03-08 2005-08-09 University Of Southern California Nitrogen-containing heterocycles
US7280541B2 (en) * 2002-03-15 2007-10-09 Broadcom Corporation Packet filtering based on conditional expression table
US20030174725A1 (en) * 2002-03-15 2003-09-18 Broadcom Corporation IP multicast packet replication process and apparatus therefore
US7477612B2 (en) * 2002-03-15 2009-01-13 Broadcom Corporation Topology discovery process and mechanism for a network of managed devices
US7274698B2 (en) * 2002-03-15 2007-09-25 Broadcom Corporation Multilevel parser for conditional flow detection in a network device
US7257124B2 (en) * 2002-03-20 2007-08-14 International Business Machines Corporation Method and apparatus for improving the fairness of new attaches to a weighted fair queue in a quality of service (QoS) scheduler
US7680043B2 (en) * 2002-03-20 2010-03-16 International Business Machines Corporation Network processor having fast flow queue disable process
EP1528909A4 (en) 2002-04-01 2006-05-24 Univ Southern California Trihydroxy polyunsaturated eicosanoids
US8481772B2 (en) 2002-04-01 2013-07-09 University Of Southern California Trihydroxy polyunsaturated eicosanoid derivatives
US6946542B2 (en) * 2002-04-01 2005-09-20 University Of Southern California Amino amides, peptides and peptidomimetics
US7902257B2 (en) 2002-04-01 2011-03-08 University Of Southern California Trihydroxy polyunsaturated eicosanoid
US7582785B2 (en) * 2002-04-01 2009-09-01 University Of Southern California Trihydroxy polyunsaturated eicosanoid derivatives
WO2003090018A2 (en) * 2002-04-14 2003-10-30 Bay Microsystems, Inc. Network processor architecture
US8010751B2 (en) * 2002-04-14 2011-08-30 Bay Microsystems Data forwarding engine
US7376125B1 (en) 2002-06-04 2008-05-20 Fortinet, Inc. Service processing switch
US7239635B2 (en) * 2002-06-27 2007-07-03 International Business Machines Corporation Method and apparatus for implementing alterations on multiple concurrent frames
US6678828B1 (en) * 2002-07-22 2004-01-13 Vormetric, Inc. Secure network file access control system
US7334124B2 (en) * 2002-07-22 2008-02-19 Vormetric, Inc. Logical access block processing protocol for transparent secure file storage
US6931530B2 (en) 2002-07-22 2005-08-16 Vormetric, Inc. Secure network file access controller implementing access control and auditing
US7372864B1 (en) * 2002-08-01 2008-05-13 Applied Micro Circuits Corporation Reassembly of data fragments in fixed size buffers
US7759395B2 (en) 2002-08-12 2010-07-20 The Brigham And Women's Hospital, Inc. Use of docosatrienes, resolvins and their stable analogs in the treatment of airway diseases and asthma
AU2003258194B2 (en) * 2002-08-12 2009-11-12 The Brigham And Women's Hospital, Inc. Resolvins: biotemplates for therapeutic interventions
US7272149B2 (en) 2002-08-19 2007-09-18 World Wide Packets, Inc. Bandwidth allocation systems and methods
US7272150B2 (en) * 2002-08-19 2007-09-18 World Wide Packets, Inc. System and method for shaping traffic from a plurality of data streams using hierarchical queuing
US7277389B2 (en) * 2002-08-29 2007-10-02 World Wide Packets, Inc. Systems and methods for grouping of bandwidth allocations
US7224691B1 (en) 2002-09-12 2007-05-29 Juniper Networks, Inc. Flow control systems and methods for multi-level buffering schemes
US7143288B2 (en) 2002-10-16 2006-11-28 Vormetric, Inc. Secure file system server architecture and methods
US8051211B2 (en) 2002-10-29 2011-11-01 Cisco Technology, Inc. Multi-bridge LAN aggregation
US7269180B2 (en) * 2002-11-04 2007-09-11 World Wide Packets, Inc. System and method for prioritizing and queuing traffic
US7330468B1 (en) * 2002-11-18 2008-02-12 At&T Corp. Scalable, reconfigurable routers
US7269348B1 (en) 2002-11-18 2007-09-11 At&T Corp. Router having dual propagation paths for packets
US7626986B1 (en) 2002-11-18 2009-12-01 At&T Corp. Method for operating a router having multiple processing paths
US7200114B1 (en) * 2002-11-18 2007-04-03 At&T Corp. Method for reconfiguring a router
US7266120B2 (en) 2002-11-18 2007-09-04 Fortinet, Inc. System and method for hardware accelerated packet multicast in a virtual routing system
US7474672B2 (en) * 2003-02-11 2009-01-06 International Business Machines Corporation Frame alteration logic for network processors
WO2004078143A2 (en) * 2003-03-05 2004-09-16 The Brigham And Women's Hospital Inc. Methods for identification and uses of anti-inflammatory receptors for eicosapentaenoic acid analogs
US7272496B2 (en) * 2003-06-12 2007-09-18 Temic Automotive Of North America, Inc. Vehicle network and method of communicating data packets in a vehicle network
US7324536B1 (en) * 2003-06-26 2008-01-29 Nortel Networks Limited Queue scheduling with priority and weight sharing
US20050018693A1 (en) * 2003-06-27 2005-01-27 Broadcom Corporation Fast filtering processor for a highly integrated network device
KR100557138B1 (en) * 2003-07-16 2006-03-03 Samsung Electronics Co., Ltd. Video data transmission method in an optical network
US7720095B2 (en) 2003-08-27 2010-05-18 Fortinet, Inc. Heterogeneous media packet bridging
US7362763B2 (en) * 2003-09-04 2008-04-22 Samsung Electronics Co., Ltd. Apparatus and method for classifying traffic in a distributed architecture router
US7787471B2 (en) * 2003-11-10 2010-08-31 Broadcom Corporation Field processor for a network device
US7672302B2 (en) * 2003-11-21 2010-03-02 Samsung Electronics Co., Ltd. Router using switching-before-routing packet processing and method of operation
US7558890B1 (en) 2003-12-19 2009-07-07 Applied Micro Circuits Corporation Instruction set for programmable queuing
US7283524B2 (en) * 2004-01-23 2007-10-16 Metro Packet Systems Inc. Method of sending a packet through a node
EP1755537A4 (en) * 2004-04-14 2009-12-09 Univ Boston Methods and compositions for preventing or treating periodontal diseases
US20050251608A1 (en) * 2004-05-10 2005-11-10 Fehr Walton L Vehicle network with interrupted shared access bus
US8170019B2 (en) * 2004-11-30 2012-05-01 Broadcom Corporation CPU transmission of unmodified packets
US8000324B2 (en) * 2004-11-30 2011-08-16 Broadcom Corporation Pipeline architecture of a network device
US7583588B2 (en) * 2004-11-30 2009-09-01 Broadcom Corporation System and method for maintaining a layer 2 modification buffer
WO2006101549A2 (en) 2004-12-03 2006-09-28 Whitecell Software, Inc. Secure system for allowing the execution of authorized computer program code
US7948896B2 (en) * 2005-02-18 2011-05-24 Broadcom Corporation Weighted-fair-queuing relative bandwidth sharing
US20060187917A1 (en) * 2005-02-18 2006-08-24 Broadcom Corporation Pre-learning of values with later activation in a network device
US8228932B2 (en) * 2005-02-18 2012-07-24 Broadcom Corporation Layout architecture for expandable network device
US8457131B2 (en) * 2005-02-18 2013-06-04 Broadcom Corporation Dynamic table sharing of memory space within a network device
US7630306B2 (en) * 2005-02-18 2009-12-08 Broadcom Corporation Dynamic sharing of a transaction queue
US7522622B2 (en) * 2005-02-18 2009-04-21 Broadcom Corporation Dynamic color threshold in a queue
US20060187919A1 (en) * 2005-02-18 2006-08-24 Broadcom Corporation Two stage parser for a network
US7606231B2 (en) 2005-02-18 2009-10-20 Broadcom Corporation Pipeline architecture for a network device
US8331380B2 (en) * 2005-02-18 2012-12-11 Broadcom Corporation Bookkeeping memory use in a search engine of a network device
US7254768B2 (en) * 2005-02-18 2007-08-07 Broadcom Corporation Memory command unit throttle and error recovery
US7529191B2 (en) * 2005-02-18 2009-05-05 Broadcom Corporation Programmable metering behavior based on table lookup
US7577096B2 (en) * 2005-02-18 2009-08-18 Broadcom Corporation Timestamp metering and rollover protection in a network device
US7802148B2 (en) 2005-02-23 2010-09-21 Broadcom Corporation Self-correcting memory system
WO2006113553A2 (en) * 2005-04-15 2006-10-26 New Jersey Institute Of Technology Dynamic bandwidth allocation and service differentiation for broadband passive optical networks
US7965708B2 (en) 2005-06-07 2011-06-21 Cisco Technology, Inc. Method and apparatus for using meta-packets in a packet processing system
US7366817B2 (en) * 2005-06-29 2008-04-29 Intel Corporation Frame order processing apparatus, systems, and methods
US7797463B2 (en) * 2005-06-30 2010-09-14 Intel Corporation Hardware assisted receive channel frame handling via data offset comparison in SAS SSP wide port applications
WO2007041440A2 (en) * 2005-10-03 2007-04-12 The Brigham And Women's Hospital, Inc. Anti-inflammatory actions of neuroprotectin d1/protectin d1 and its natural stereoisomers
WO2007061783A1 (en) * 2005-11-18 2007-05-31 Trustees Of Boston University Treatment and prevention of bone loss using resolvins
US7869411B2 (en) * 2005-11-21 2011-01-11 Broadcom Corporation Compact packet operation device and method
US8279877B2 (en) * 2005-11-22 2012-10-02 Freescale Semiconductor, Inc. Method for processing ATM cells and a device having ATM cell processing capabilities
CN100450095C (en) * 2006-02-18 2009-01-07 Huawei Technologies Co., Ltd. System and method for providing QoS service to a virtual private line
US8437739B2 (en) * 2007-08-20 2013-05-07 Qualcomm Incorporated Method and apparatus for generating a cryptosync
JP4389983B2 (en) * 2007-08-28 2009-12-24 Oki Electric Industry Co., Ltd. Interconnect apparatus, interface board, and traffic processing method
US7898985B1 (en) * 2008-04-23 2011-03-01 Juniper Networks, Inc. Composite next hops for forwarding data in a network switching device
US8948084B2 (en) * 2008-05-15 2015-02-03 Telsima Corporation Systems and methods for data path control in a wireless network
EP2277330A4 (en) * 2008-05-15 2013-10-09 Harris Stratex Networks Operat Systems and methods for distributed data routing in a wireless network
US9071498B2 (en) * 2008-05-15 2015-06-30 Telsima Corporation Systems and methods for fractional routing redundancy
US8331369B2 (en) 2008-07-10 2012-12-11 At&T Intellectual Property I, L.P. Methods and apparatus to distribute network IP traffic
US8014317B1 (en) 2008-08-21 2011-09-06 Juniper Networks, Inc. Next hop chaining for forwarding data in a network switching device
US8159944B2 (en) * 2008-12-24 2012-04-17 At&T Intellectual Property I, L.P. Time based queuing
WO2010088298A1 (en) 2009-01-28 2010-08-05 Headwater Partners I Llc Adaptive ambient services
US8743877B2 (en) * 2009-12-21 2014-06-03 Steven L. Pope Header processing engine
US8638792B2 (en) 2010-01-22 2014-01-28 Synopsys, Inc. Packet switch based logic replication
US8397195B2 (en) * 2010-01-22 2013-03-12 Synopsys, Inc. Method and system for packet switch based logic replication
US8971345B1 (en) * 2010-03-22 2015-03-03 Riverbed Technology, Inc. Method and apparatus for scheduling a heterogeneous communication flow
US8699484B2 (en) 2010-05-24 2014-04-15 At&T Intellectual Property I, L.P. Methods and apparatus to route packets in a network
US9491085B2 (en) 2010-05-24 2016-11-08 At&T Intellectual Property I, L.P. Methods and apparatus to route control packets based on address partitioning
WO2012050968A1 (en) 2010-09-29 2012-04-19 Aviat Networks, Inc. Systems and methods for distributed data routing in a wireless network
EP2712131B1 (en) * 2011-05-16 2015-07-22 Huawei Technologies Co., Ltd. Method and network device for transmitting data stream
JP5978792B2 (en) * 2012-06-12 2016-08-24 Fujitsu Limited Transmission apparatus and transmission method
CN103809579B (en) * 2012-11-08 2018-01-02 Xiamen Yaxon Network Co., Ltd. Method for extracting the status information of each vehicle by the center
US10606785B2 (en) 2018-05-04 2020-03-31 Intel Corporation Flex bus protocol negotiation and enabling sequence
US11349704B2 (en) 2020-06-17 2022-05-31 Credo Technology Group Limited Physical layer interface with redundant data paths
US11646959B2 (en) 2020-07-20 2023-05-09 Credo Technology Group Limited Active ethernet cable with broadcasting and multiplexing for data path redundancy

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5144619A (en) * 1991-01-11 1992-09-01 Northern Telecom Limited Common memory switch for routing data signals comprising ATM and STM cells
EP0676878A1 (en) * 1994-04-07 1995-10-11 International Business Machines Corporation Efficient point to point and multi point routing mechanism for programmable packet switching nodes in high speed data transmission networks
ES2137296T3 (en) * 1994-09-28 1999-12-16 Siemens Ag ATM communication system for statistical cell multiplexing
US5623492A (en) * 1995-03-24 1997-04-22 U S West Technologies, Inc. Methods and systems for managing bandwidth resources in a fast packet switching network
DE69635880T2 (en) * 1995-09-18 2006-10-05 Kabushiki Kaisha Toshiba, Kawasaki System and method for packet transmission suitable for a large number of input ports
EP0814583A2 (en) * 1996-06-20 1997-12-29 International Business Machines Corporation Method and system for minimizing the connection set up time in high speed packet switching networks
US5802052A (en) * 1996-06-26 1998-09-01 Level One Communication, Inc. Scalable high performance switch element for a shared memory packet or ATM cell switch fabric
US5918074A (en) * 1997-07-25 1999-06-29 Neonet Llc System architecture for and method of dual path data processing and management of packets and/or cells and the like

Also Published As

Publication number Publication date
DE69838688D1 (en) 2007-12-20
EP1050181B1 (en) 2007-11-07
IL136653A0 (en) 2001-06-14
JP2002501311A (en) 2002-01-15
CN1286009A (en) 2001-02-28
JP2006262517A (en) 2006-09-28
WO1999035577A2 (en) 1999-07-15
EP1860835A2 (en) 2007-11-28
CA2313771A1 (en) 1999-07-15
CN1197305C (en) 2005-04-13
US6259699B1 (en) 2001-07-10
AU1254699A (en) 1999-07-26
WO1999035577A3 (en) 1999-10-21
EP1050181A2 (en) 2000-11-08

Similar Documents

Publication Publication Date Title
CA2313771C (en) Networking systems
CA2301823C (en) A quality of service facility in a device for performing ip forwarding and atm switching
AU765396B2 (en) Allocating buffers for data transmission in a network communication device
EP0766425B1 (en) A communication service quality control system
US7023856B1 (en) Method and system for providing differentiated service on a per virtual circuit basis within a packet-based switch/router
EP1324552A2 (en) Method and system for mediating traffic between an asynchronous transfer mode (ATM) network and an adjacent network
JP2003501912A (en) Fair discard system
US6618356B1 (en) Method for policing data traffic, a data traffic policer realizing such a method and a telecommunication network including such a policer
EP1037496B1 (en) Computationally-efficient traffic shaper
WO1999003234A1 (en) ABR server
US6952420B1 (en) System and method for polling devices in a network system
US6219351B1 (en) Implementation of buffering in a packet-switched telecommunications network
Shiomoto et al. Scalable multi-QoS IP+ATM switch router architecture
Gerla et al. Interconnecting LANs and MANs to ATM
Cisco Traffic Management
JP3848962B2 (en) Packet switch and cell transfer control method
JP2005244290A (en) Shaping device capable of minimizing the delay of preferential packets
Iliadis Performance of TCP traffic and ATM feedback congestion control mechanisms
JP3849635B2 (en) Packet transfer device
Basu et al. A simulation study of IPv6 to ATM flow-mapping techniques
Baldi et al. A Comparison of ATM Stream Merging Techniques
Chen, Sever Institute of Technology, Department of Electrical Engineering
Shimojo et al. A 622 Mbps ATM switch access LSI with multicast capable per-VC queueing architecture
van Luinen Lossless statistical data service over Asynchronous Transfer Mode

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed