US20030063609A1 - Hardware copy assist for data communication switch - Google Patents

Hardware copy assist for data communication switch

Info

Publication number
US20030063609A1
US20030063609A1 (application US10/292,735)
Authority
US
United States
Prior art keywords
packet
data
queue
switch
table entry
Prior art date
Legal status
Abandoned
Application number
US10/292,735
Inventor
Bruce Bergenfeld
Current Assignee
Nokia of America Corp
Original Assignee
Alcatel Internetworking Inc
Application filed by Alcatel Internetworking Inc

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/1886 — Arrangements for providing special services to substations for broadcast or conference, e.g. multicast, with traffic restrictions for efficiency improvement, e.g. involving subnets or subdomains
    • H04L 49/351 — Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • H04L 49/201 — Multicast operation; Broadcast operation
    • H04L 49/3018 — Input queuing

  • FIGS. 6A and 6B illustrate how the preferred watermark check operates to prevent a premature overwrite of an exemplary packet A at the head of an exemplary switch queue 610. In FIG. 6A, packet A and a width of data from packet B are pending in switch queue 610, and a copy of packet A at the head of the queue is in the process of being delivered to packet assembly 140. A watermark check must be passed before additional widths of packet B (pending in ingress queue 100) may be delivered to switch queue 610. In FIG. 6B, the differential between the write address and the home mark is equal to the watermark, and the watermark check is failed.

Abstract

A hardware copy assist for a data communication switch copies packets in a number required to meet multicasting needs. Packets are read from a switch queue and a home mark is retained for the packet at the head of the queue to facilitate multiple reads and to prevent premature overwrites. Copying decisions may be made incidental to the retrieval of outbound headers from a linked list of table entries to minimize overhead.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of U.S. patent application Ser. No. 09/126,916, filed on Jul. 30, 1998, entitled “HARDWARE COPY ASSIST FOR DATA COMMUNICATION SWITCH.”[0001]
  • BACKGROUND OF THE INVENTION
  • The present invention relates to data communication switching and, more particularly, to methods and devices for assisting the copying of packets for multicasting. [0002]
  • Data communication switches receive packets on ingress ports, format them for the “next hop”, and transmit them on egress ports en route to their ultimate destinations. When more than one ultimate destination is indicated, i.e., the packet requires multicasting, the switch must generally make multiple copies of the packet or a portion thereof and prepend each of the copies with a different outbound header. Conventional switches have relied heavily on software-driven central processing units (CPUs) to accomplish the required copying. Such CPU reliance has introduced intervening steps into the switching process which have caused latency and imposed additional buffering requirements. Overall switching performance has suffered as a result. Therefore, there is a general need for methods and devices for more efficiently processing packets requiring multicasting in data communication switches, and a more particular need for a hardware-based solution to the task of multicast copying. [0003]
  • SUMMARY OF THE INVENTION
  • In its most basic feature, the present invention provides a hardware copy assist for facilitating data communication switch multicasting. [0004]
  • In one aspect of the invention, packets are copied in hardware in a quantity required to meet multicasting needs. This inventive aspect is achieved by storing packets in a switch queue and retaining a home mark to which a read address is reset when additional copying is indicated. Inbound packets are stored in the switch queue pending resolution of forwarding requirements. A home mark is always set to the first-written address of the packet at the head of the queue. If additional copying of the packet is indicated, the read address is reset to the home mark after the most recent copy of the packet is delivered. If additional copying is not indicated, the home mark is advanced to the first-written address of the next packet for copying from the switch queue after the most recent copy is delivered. [0005]
  • In another aspect of the invention, the home mark is used in a watermark check to guarantee that the packet at the head of the queue is not overwritten before the required number of copies has been made. This inventive aspect is achieved by using the differential between the write address and the home mark (rather than the current read address) as the benchmark of current queue fullness in a watermark check wherein the decision is made whether to grant queuing clearance to the next inbound data. By relying on the write address/home mark differential (rather than the write address/read address differential) in the watermark check, the addresses in which the packet at the head of the queue is stored are placed off-limits to the next inbound packet until it is certain that additional copying of the packet at the head of the queue will not be required. [0006]
  • In a preferred embodiment of the invention, the hardware copy assist is implemented with minimal switching overhead by making copying decisions incidental to the retrieval of outbound headers. Outbound headers are preferably retrieved by indexing a header table wherein all outbound headers for the same packet are stored as a linked list of entries. A check is made of each entry as the linked list is “walked-down” to determine if there is another entry in the linked list, as indicated by the presence of a valid “next entry” index. If there is a valid “next entry” index, the read address is reset to the home mark after the most recent copy of the packet is delivered. If there is not a valid “next entry” index, however, the home mark is advanced to the read address after the most recent copy of the packet is delivered. [0007]
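The walk-down of the linked header list described above can be sketched in software terms. This is a behavioral model only: the table contents, the index names, and the use of `None` as an invalid “next entry” index are invented for illustration, while the patent describes a hardware mechanism.

```python
NO_NEXT = None  # models an invalid "next entry" index (illustrative)

# Hypothetical header table: each entry holds data for one outbound header
# plus a "next entry" index pointing to another entry for the same packet.
header_table = {
    "A1": {"header": "hdr-A1", "next": "A2"},
    "A2": {"header": "hdr-A2", "next": NO_NEXT},
}

def walk_headers(table, first_index):
    """Walk down the linked list, yielding one outbound header per required copy."""
    index = first_index
    while index is not NO_NEXT:
        entry = table[index]
        yield entry["header"]
        index = entry["next"]  # a valid index means another copy is required

headers = list(walk_headers(header_table, "A1"))
# one outbound header per required copy of the packet
```

The number of headers yielded is exactly the number of copies the copy assist must deliver, which is why the “next entry” check can double as the copying decision.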
  • These and other aspects of the present invention may be better understood by reference to the following detailed description taken in conjunction with the accompanying drawings which are briefly described below. Of course, the actual scope of the invention is defined by the appended claims.[0008]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a data communication switching architecture in which the present invention may be implemented; [0009]
  • FIG. 2 is a more detailed block diagram of the queue control unit of FIG. 1 including its interfaces to the ingress queue, switch queue and header table; [0010]
  • FIG. 3 is a flow diagram describing a read policing methodology performed by the queue control unit of FIG. 1; [0011]
  • FIG. 4 is a flow diagram describing a write policing methodology performed by the queue control unit of FIG. 1; [0012]
  • FIG. 5 is a diagram illustrating the processing of an exemplary packet within the switching architecture of FIG. 1; and [0013]
  • FIGS. 6A and 6B are diagrams illustrating how a watermark check within the switching architecture of FIG. 1 is operative to prevent a premature overwrite of an exemplary packet. [0014]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • In FIG. 1, a switching architecture in which the present invention may be implemented is shown. In the basic switching operation, inbound packets arrive at ingress queue 100, are formatted for the “next hop” by prepending appropriate outbound headers, and are delivered as outbound packets to egress queue 150. More particularly, identifiers in the headers of inbound packets are transmitted to switching logic 120 for a switching decision. If forwarding is indicated, switching logic 120 transmits the appropriate forwarding index to header table 130 to retrieve information for encoding in outbound headers for the packet. In this regard, linked lists of entries are constructed in header table 130 for forwarding multicast packets to an appropriate array of destinations. In addition to storing information for encoding in a particular outbound header, therefore, each entry may include a valid “next entry” index which identifies the index of another table entry having information for encoding in another outbound header for the same packet. Packet assembly 140 receives outbound headers from header queue 170 and combines outbound headers and copies of packet data separately received from switch queue 110 “on the fly” into outbound packets which may be transferred on egress queue 150 to the appropriate “next hops”. One possible configuration of such an “on the fly” packet assembly is described in application Ser. No. 09/097,898 entitled PACKET ASSEMBLY HARDWARE FOR DATA COMMUNICATION SWITCH, owned by the assignee hereof. Identifiers transmitted to switching logic 120 for a switching decision may include Open System Interconnection (OSI) Layer Two (Bridging), Layer Three (Network) and Layer Four (Transport) addresses and identifiers, by way of example. Switching logic 120 may make the switching decision by performing associative comparisons of such identifiers with known identifiers stored in a memory within switching logic 120. Such a memory may be a content addressable memory (CAM) or may be a random access memory (RAM). One possible RAM-based implementation of switching logic 120 is described in application Ser. No. 08/964,597 entitled CUSTOM CIRCUITRY FOR ADAPTIVE HARDWARE ROUTING ENGINE, owned by the assignee hereof. [0015]
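The associative comparison performed by switching logic 120 can be modeled, very loosely, as a keyed lookup that returns a forwarding index on a match. The identifiers and indices below are invented for illustration; a real CAM performs this match in parallel hardware rather than in software.

```python
# Hypothetical table of known identifiers, keyed by (OSI layer, identifier).
# Values are forwarding indices into the header table.
known_identifiers = {
    ("L2", "00:11:22:33:44:55"): "A1",  # bridging (Layer Two) address
    ("L3", "10.0.0.7"): "B1",           # network (Layer Three) address
}

def switching_decision(layer, identifier):
    """Return a forwarding index if the identifier is known, else None (no forwarding)."""
    return known_identifiers.get((layer, identifier))
```

A miss (returning `None` here) corresponds to the case where forwarding is not indicated and no header table lookup occurs.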
  • Data in inbound packets which will be included in any counterpart outbound packet are retained in switch queue 110 pending the results of switching decisions. Data in inbound packets which will not be included in any counterpart outbound packet may also be stored in switch queue 110 and “skipped” upon reading the packet from switch queue 110 to packet assembly 140. Alternatively, such packet data may be dropped at ingress queue 100. For simplicity, however, the data for a particular packet which are retained in switch queue 110 will be referred to herein as a “packet” whether the entire inbound packet or only selected portions thereof are actually retained. Queue control unit 160 manipulates the switch queue read address, in a manner hereinafter described, to ensure that the number of copies of each packet required to meet multicasting needs is delivered to packet assembly 140. Unit 160 also regulates access to switch queue 110 to prevent packets from being overwritten before the required number of copies is delivered. Packets are preferably transferred in and out of switch queue 110 on a bus which, when active, transfers a constant-bit “width” of data on each clock cycle. Each packet may span one or more widths. In addition to having bits of packet data, a “width” may include control bits sufficient to convey whether the width is the first or last width of a packet. [0016]
  • Referring now to FIG. 2, queue control unit 160 is illustrated in greater detail. In a preferred embodiment, unit 160 includes queue flow control logic 210, write address counter 220, read address counter 230 and home mark register 240. Logic 210 polices data flows in and out of switch queue 110 to ensure that the appropriate number of copies of each packet is delivered to packet assembly 140 and that packets are not prematurely overwritten. To this end, logic 210 has a line on header queue 170 for receiving the current “next entry” index for the packet at the head of switch queue 110 from an entry retrieved from header table 130. Write address counter 220 holds the current write address for switch queue 110 and is incremented with each new width of data received from ingress queue 100. Read address counter 230 holds the current read address for switch queue 110. The value stored in the read address counter is incremented with each new width of data transmitted to packet assembly 140 and is reset under certain circumstances hereinafter explained. Home mark register 240 retains the address of the first width of the packet at the head of switch queue 110, hereinafter referred to as the home mark. The value stored in the home mark register is advanced under certain circumstances hereinafter explained. [0017]
  • The read policing methodology implemented with the assistance of logic 210 is described with greater particularity in the flow diagram of FIG. 3. When a packet is pending in switch queue 110 (Step 310), read address counter 230 is consulted for the current read address and the first width of the packet at the head of the queue is read from switch queue 110 to packet assembly 140 (Step 320). Read address counter 230 is incremented (Step 330) and the control bits associated with the width just read are consulted to determine if the width is the last width of the packet (Step 340). If the width is not the last width, Step 320 is repeated. If the width is the last width, however, a check is made to determine if the packet must be retained for additional copying (Step 350). In this regard, queue flow control logic 210 reviews the current “next entry” index for the packet retrieved from header queue 170. If the “next entry” index is valid, it is known that the packet will have to be retained for additional copying to meet multicasting needs and the multicast flag is set. Otherwise, if the entry does not have a valid “next entry” index, it is known that additional copies of the packet are not required and the multicast flag is not set. If the multicast flag is set, the read address is reset to the home mark (i.e., the first address of the current packet) by updating read address counter 230, and Step 320 is repeated. If the multicast flag is not set, however, the home mark is advanced to the read address (Step 370) (i.e., the first address of the next pending packet, if any) by updating home mark register 240, and the algorithm is exited. It will be appreciated that through the above policing scheme, copies of packets are delivered to packet assembly 140 in the number required to meet multicasting needs without the need for software intervention. Moreover, reliance on the dual-purpose “next entry” index in header table 130 as the determinant of the need for additional copying allows this advantageous result to be achieved with minimal additional overhead. [0018]
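The read policing flow of FIG. 3 can be sketched behaviorally, under the assumption that the switch queue is modeled as a list of (data, last-width) pairs and that the validity of the “next entry” index for each retrieved header entry is supplied in advance. All names here are illustrative, not the patent's signals.

```python
def read_police(switch_queue, home_mark, next_entry_valid_per_copy):
    """Deliver copies of the packet at home_mark; return (copies, new home mark)."""
    copies = []
    read_addr = home_mark
    for next_valid in next_entry_valid_per_copy:
        widths = []
        while True:
            data, is_last = switch_queue[read_addr]  # Step 320: read one width
            read_addr += 1                           # Step 330: increment read address
            widths.append(data)
            if is_last:                              # Step 340: last width of packet?
                break
        copies.append(widths)
        if next_valid:                # Step 350: valid "next entry" => multicast flag set
            read_addr = home_mark     # reset read address to the home mark
        else:
            home_mark = read_addr     # Step 370: advance home mark to next packet
    return copies, home_mark

# Packet A spans three widths, packet B one width; packet A needs two copies.
queue = [("A0", False), ("A1", False), ("A2", True), ("B0", True)]
copies, new_home = read_police(queue, 0, [True, False])
```

After the second (final) copy, the home mark advances past packet A, freeing its addresses for reuse.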
[0019] Write policing is done to avoid prematurely overwriting the packet at the head of switch queue 110. In this regard, because it is not known, at the time a decision whether to write into switch queue 110 must be made, whether the packet at the head of switch queue 110 will have to be retained for additional copying, the home mark rather than the read address is advantageously used in the queue fullness calculation. The preferred write policing methodology implemented with the assistance of logic 210 is described in FIG. 4. When a width of an inbound packet is pending in ingress queue 100 (Step 410), a watermark check is performed before releasing the width to switch queue 110. In the watermark check, the difference between the write address and the home mark (a measure of queue fullness) is compared against a configured watermark (Step 420). If the differential is less than the watermark, it is known that there is ample room in switch queue 110 to receive the inbound width without overwriting the packet at the head of switch queue 110. Therefore, the inbound width is written to switch queue 110 (Step 430). If, on the other hand, the differential is not less than the watermark, it is known that there may not be ample room in switch queue 110 to receive the inbound width without risking a premature overwrite of the packet at the head of switch queue 110. Therefore, logic 210 asserts stall line 212 and the inbound width is not delivered to switch queue 110. Watermark checks are performed regularly to reveal changes in the availability of switch queue 110 resulting from advances in the home mark. The lower limit on the configured value of the watermark is defined by the maximum allowable packet size in the switching architecture, such that a packet of any size may be queued in its entirety under a condition of maximum available capacity (i.e., when switch queue 110 is empty). The upper limit on the configured value of the watermark is defined by the capacity of switch queue 110.
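The write-policing check described above can be sketched in software as follows. This is an illustrative model only, not the patented hardware: the queue size, watermark value, and function names are assumptions introduced here, and the circular-queue wrap-around is modeled with a modulo.

```python
# Illustrative sketch of the watermark (write-policing) check. QUEUE_SIZE
# and the concrete watermark are assumed values for the example.
QUEUE_SIZE = 64  # capacity of the switch queue, in widths (assumed)

def may_write(write_addr: int, home_mark: int, watermark: int) -> bool:
    """Return True if an inbound width may be written without risking a
    premature overwrite of the packet at the head of the switch queue.

    Fullness is measured from the home mark (the first-written address of
    the head packet) rather than the read address, because the head packet
    may still need to be re-read for additional copies.
    """
    fullness = (write_addr - home_mark) % QUEUE_SIZE
    return fullness < watermark

# Empty queue: write address and home mark coincide, so the check passes.
assert may_write(0, 0, watermark=48)
# Fullness equal to the watermark: the check fails and the width stalls.
assert not may_write(48, 0, watermark=48)
# Wrapped addresses: fullness is (10 - 60) mod 64 = 14, so the check passes.
assert may_write(10, 60, watermark=48)
```

Measuring fullness from the home mark is the design choice that makes multicast re-reads safe: the stall persists until the home mark advances, i.e. until no further copies of the head packet are needed.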
[0020] Processing of an exemplary packet A at the head of switch queue 110 is illustrated in FIG. 5, which may be read in conjunction with FIG. 1. Identifiers from the exemplary packet are sent to switching logic 120 for a switching decision. Switching logic 120 returns forwarding index A1. Header table 130 is consulted at index A1 and reveals header data A1′ for encoding in an outbound header. Header A1″ is constructed in header queue 170 and delivered to packet assembly 140. Separately, packet A has advanced to the head of switch queue 110, where the home mark is set to the first-written address for packet A. Packet A is delivered to packet assembly 140 in a series of widths by incrementing the read address. In packet assembly 140, header A1″ is prepended to packet A to form an outbound packet for transfer to egress queue 150. Because the “next entry” field in the entry retrieved at index A1 has a valid “next entry” index A2, it is known that packet A must be retained for additional copying. Therefore, the multicast flag is set and the read address is reset to the home mark. “Next entry” index A2 is looked up in header table 130 and reveals header data A2′ for encoding in another outbound header for prepending to packet data A. Header A2″ is constructed in header queue 170 and delivered to packet assembly 140. Separately, another copy of packet data A is delivered to packet assembly 140 using the read address to deliver successive widths of packet A. In packet assembly 140, header A2″ is prepended to packet A to form another outbound packet for transfer to egress queue 150. Because the “next entry” field in the entry retrieved at index A2 does not have a valid “next entry” index, it is known that packet A no longer needs to be retained for additional copying. Therefore, the multicast flag is not set and the home mark is advanced to the read address. Processing then begins on packet B in similar fashion.
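The packet-A walkthrough above amounts to following a chain of “next entry” indices through the header table, emitting one outbound copy per entry. The following is a minimal software sketch of that loop; the table layout, the `NO_NEXT` sentinel, and the function name are assumptions for illustration, and the hardware's read-address reset is modeled by simply reusing the payload.

```python
# Sentinel marking an invalid "next entry" index (an assumed encoding).
NO_NEXT = None

# Hypothetical header table: forwarding index -> (header data, next index).
header_table = {
    "A1": ("hdr-A1", "A2"),    # first copy; chains to entry A2
    "A2": ("hdr-A2", NO_NEXT), # last copy; chain ends here
}

def emit_copies(first_index, packet_data):
    """Return one (header, payload) outbound packet per chained table entry.

    In hardware, each additional copy is produced by resetting the read
    address to the home mark and re-reading the packet from the switch
    queue; here the payload is simply reused for each copy.
    """
    out = []
    index = first_index
    while index is not NO_NEXT:
        header, next_index = header_table[index]
        out.append((header, packet_data))  # prepend header to packet data
        index = next_index                 # valid index => another copy
    return out

copies = emit_copies("A1", "payload-A")
assert copies == [("hdr-A1", "payload-A"), ("hdr-A2", "payload-A")]
```

A unicast packet is simply the degenerate case where the first entry's “next entry” field is invalid, so exactly one copy is emitted.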
[0021] Finally, FIGS. 6A and 6B illustrate how the preferred watermark check operates to prevent premature overwrite of an exemplary packet A at the head of an exemplary switch queue 610. First, consider FIG. 6A, wherein packet A and a width of data from packet B are pending in switch queue 610 and a copy of packet A at the head of the queue is in the process of being delivered to packet assembly 140. A watermark check must be passed before additional widths of packet B (pending in ingress queue 100) may be delivered to switch queue 610. As illustrated in FIG. 6A, in the watermark check, the differential between the write address and the home mark is equal to the watermark and the watermark check is failed. (Note that if the differential between the write address and the read address were used as the basis for comparison, the watermark check would be passed and packet A would be subject to a risk of premature overwrite.) The additional width is therefore not queued. Subsequently, referring to FIG. 6B, once it is known that no additional copies of packet A will have to be made, the home mark is advanced to the first-written width of packet B and the watermark check is again performed. This time, the differential between the write address and the home mark is less than the watermark and the watermark check is passed. The additional width is therefore written to switch queue 610.
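The FIG. 6A/6B scenario can be reduced to a short numeric example. The concrete addresses and watermark below are invented for illustration (the figures do not fix specific values); the point is only the contrast between measuring fullness from the home mark versus the read address.

```python
# Assumed addresses: packet A occupies widths 0..5; one width of packet B
# is already queued; a copy of A is being re-read, so the read address has
# advanced past the home mark.
WATERMARK = 7
write_addr, home_mark, read_addr = 7, 0, 4

# FIG. 6A: fullness measured from the home mark equals the watermark,
# so the check fails and packet B's next width is not queued.
assert (write_addr - home_mark) >= WATERMARK
# Had the read address been used instead, the check would (wrongly) pass,
# risking overwrite of packet A while it may still need re-copying.
assert (write_addr - read_addr) < WATERMARK

# FIG. 6B: no more copies of A are needed, so the home mark advances to
# the first-written width of packet B and the same check now passes.
home_mark = 6
assert (write_addr - home_mark) < WATERMARK
```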
[0022] It will be appreciated by those of ordinary skill in the art that the invention can be embodied in other specific forms without departing from the spirit or essential character hereof. The present invention is therefore considered in all respects illustrative and not restrictive. The scope of the invention is indicated by the appended claims, and all changes that come within the meaning and range of equivalents thereof are intended to be embraced therein.

Claims (17)

1. In a data communication switch, a method for generating a required number of copies of a packet, comprising:
(a) writing a packet into a plurality of addresses in a switch queue;
(b) reading the packet from the plurality of addresses;
(c) determining if additional copying is indicated from information in a table entry having header data for combining with the packet;
(d) if additional copying is indicated, resetting the read address to the first address in which the packet is written and repeating steps (b) and (c).
2. The method for generating required copies of a packet according to claim 1, wherein the determining step comprises:
(i) consulting the table entry; and
(ii) learning if the information in the consulted table entry has a valid index to another table entry having header data for combining with the packet.
3. The method of claim 1, wherein if additional copying is not indicated, advancing the read address to a first-written address of a next packet for copying from the switch queue.
4. In a data communication switch, a method for generating a required number of copies of a packet, comprising:
(a) writing a packet into a plurality of addresses of a switch queue;
(b) setting a home mark to the first address in which the packet is written;
(c) reading the packet from the plurality of addresses;
(d) determining if additional copying is indicated from information in a table entry having header data for combining with the packet;
(e) if additional copying is indicated, resetting the read address to the home mark and repeating steps (c) and (d).
5. The method of claim 4 further comprising:
if additional copying is not indicated, advancing the home mark to a first-written address of a next packet for copying from the switch queue.
6. A method for processing an inbound packet in a data communication switch, comprising:
(a) receiving a packet in an ingress queue, the received packet having packet identifiers and packet data;
(b) transmitting the packet identifiers to switching logic for a forwarding decision;
(c) writing the packet data into a plurality of addresses in a switch queue;
(d) consulting a table entry at an index returned from the switching logic in response to the packet identifiers;
(e) reading the packet data from the plurality of addresses;
(f) determining if additional copying is indicated from information in the consulted table entry, wherein the information in the consulted table entry includes header data for combining with the packet data and additional copying is indicated if the information includes a valid index to another table entry having header data for combining with the packet data; and
(g) if additional copying is indicated, resetting the read address to the first address in which the packet data is written and repeating steps (e) and (f).
7. The method of claim 6, wherein if additional copying is not indicated, advancing the read address to a first-written address of a next packet for copying from the switch queue.
8. In a data communication switch, a method for generating a required number of copies of a packet, comprising:
storing a packet in a switch queue;
obtaining a copy of the packet from the switch queue;
retrieving a first header data for the packet from a first entry of a header table;
combining the copy of the packet with the first header data;
determining if the first entry of the header table includes a valid index to a second table entry storing a second header data for the packet; and
if the entry of the header table includes a valid index to a second table entry:
obtaining a second copy of the packet; and
combining the second copy of the packet with the second header data.
9. The method of claim 8, wherein the obtaining of the second copy of the packet comprises resetting a read address to a first address in which the packet is written in the switch queue.
10. The method of claim 8 further comprising setting a mark to a first address in which the packet is written in the switch queue, wherein the obtaining a second copy of the packet comprises resetting a read address to the mark.
11. The method of claim 10, wherein if the first entry of the header table does not include a valid index to a second table entry, advancing the mark to a first-written address of a next packet for copying from the switch queue.
12. A data communication switch comprising:
a switch queue configured to store a packet;
a header table including a first table entry storing first header data and an index to a second table entry storing second header data;
a queue control unit causing retrieval of a first copy of the packet, the queue control unit further causing retrieval of a second copy of the packet if the first table entry includes a valid index to the second table entry storing second header data; and
a packet assembly unit receiving the first copy of the packet and the second copy of the packet and combining the first copy of the packet with the first header data and the second copy of the packet with the second header data.
13. The switch of claim 12, wherein the queue control unit obtains the second copy of the packet by resetting a read address to a first address in which the packet is written in the switch queue.
14. The switch of claim 12, wherein the queue control unit further sets a mark to a first address in which the packet is written in the switch queue and obtains the second copy of the packet by resetting a read address to the mark.
15. The switch of claim 14, wherein if the first table entry does not include a valid index to the second table entry, the queue control unit advances the mark to a first-written address of a next packet for copying from the switch queue.
16. A data communication switch comprising:
means for receiving a packet having packet identifiers and packet data;
means for making a forwarding decision based on the packet identifiers;
means for writing the packet data into a plurality of addresses;
means for consulting a table entry based on an index returned by the means for making a forwarding decision;
means for reading the packet data from the plurality of addresses;
means for determining if additional copying is indicated from information in the consulted table entry, wherein the information in the consulted table entry includes header data for combining with the packet data and additional copying is indicated if the information includes a valid index to another table entry having header data for combining with the packet data; and
if additional copying is indicated, means for resetting the read address to the first address in which the packet data is written for reading the packet data again from the plurality of addresses.
17. The switch of claim 16, further comprising:
if additional copying is not indicated, means for advancing the read address to a first-written address of a next packet for copying the next packet.
US10/292,735 1998-07-30 2002-11-12 Hardware copy assist for data communication switch Abandoned US20030063609A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/292,735 US20030063609A1 (en) 1998-07-30 2002-11-12 Hardware copy assist for data communication switch

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/126,916 US6504842B1 (en) 1998-07-30 1998-07-30 Hardware copy assist for data communication switch
US10/292,735 US20030063609A1 (en) 1998-07-30 2002-11-12 Hardware copy assist for data communication switch

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/126,916 Continuation US6504842B1 (en) 1998-07-30 1998-07-30 Hardware copy assist for data communication switch

Publications (1)

Publication Number Publication Date
US20030063609A1 true US20030063609A1 (en) 2003-04-03

Family

ID=22427364

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/126,916 Expired - Lifetime US6504842B1 (en) 1998-07-30 1998-07-30 Hardware copy assist for data communication switch
US10/292,735 Abandoned US20030063609A1 (en) 1998-07-30 2002-11-12 Hardware copy assist for data communication switch

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/126,916 Expired - Lifetime US6504842B1 (en) 1998-07-30 1998-07-30 Hardware copy assist for data communication switch

Country Status (1)

Country Link
US (2) US6504842B1 (en)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7292529B1 (en) * 2002-07-31 2007-11-06 Juniper Networks, Inc. Memory load balancing for single stream multicast
US7594996B2 (en) 2004-01-23 2009-09-29 Aquatech, Llc Petroleum recovery and cleaning system and process
US7376774B1 (en) 2004-08-27 2008-05-20 Xilinx, Inc. Network media access controller embedded in a programmable logic device—host interface control generator
US7143218B1 (en) 2004-08-27 2006-11-28 Xilinx, Inc. Network media access controller embedded in a programmable logic device-address filter
US7580372B2 (en) * 2005-12-15 2009-08-25 Alcatel Lucent System and method for implementing multiple spanning tree protocol automatic 802.1Q trunking
WO2008120327A1 (en) * 2007-03-28 2008-10-09 Fujitsu Limited Message transfer program, message transfer method, and message transfer system
US8442045B2 (en) * 2010-03-16 2013-05-14 Force10 Networks, Inc. Multicast packet forwarding using multiple stacked chassis
US8654680B2 (en) 2010-03-16 2014-02-18 Force10 Networks, Inc. Packet forwarding using multiple stacked chassis
US11323372B2 (en) * 2020-04-21 2022-05-03 Mellanox Technologies Ltd. Flexible steering
US11425230B2 (en) 2021-01-28 2022-08-23 Mellanox Technologies, Ltd. Efficient parsing tuned to prevalent packet types
US11711453B2 (en) 2021-10-24 2023-07-25 Mellanox Technologies, Ltd. Template-based packet parsing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5903564A (en) * 1997-08-28 1999-05-11 Ascend Communications, Inc. Efficient multicast mapping in a network switch
US6052373A (en) * 1996-10-07 2000-04-18 Lau; Peter S. Y. Fault tolerant multicast ATM switch fabric, scalable speed and port expansion configurations
US6101187A (en) * 1996-12-20 2000-08-08 International Business Machines Corporation Method and system for multicasting cells in an ATM protocol adapter
US6185206B1 (en) * 1997-12-19 2001-02-06 Nortel Networks Limited ATM switch which counts multicast cell copies and uses a second memory for a decremented cell count value
US6216167B1 (en) * 1997-10-31 2001-04-10 Nortel Networks Limited Efficient path based forwarding and multicast forwarding
US6219352B1 (en) * 1997-11-24 2001-04-17 Cabletron Systems, Inc. Queue management with support for multicasts in an asynchronous transfer mode (ATM) switch
US6272134B1 (en) * 1997-11-20 2001-08-07 International Business Machines Corporation Multicast frame support in hardware routing assist


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100040062A1 (en) * 2003-08-27 2010-02-18 Bbn Technologies Corp Systems and methods for forwarding data units in a communications network
US8103792B2 (en) * 2003-08-27 2012-01-24 Raytheon Bbn Technologies Corp. Systems and methods for forwarding data units in a communications network
US20060013210A1 (en) * 2004-06-18 2006-01-19 Bordogna Mark A Method and apparatus for per-service fault protection and restoration in a packet network
US10122645B2 (en) 2012-12-07 2018-11-06 Cisco Technology, Inc. Output queue latency behavior for input queue based device
US9628406B2 (en) 2013-03-13 2017-04-18 Cisco Technology, Inc. Intra switch transport protocol
US20140269302A1 (en) * 2013-03-14 2014-09-18 Cisco Technology, Inc. Intra Switch Transport Protocol
US9860185B2 (en) * 2013-03-14 2018-01-02 Cisco Technology, Inc. Intra switch transport protocol
WO2016053692A1 (en) * 2014-09-30 2016-04-07 Level 3 Communications, Llc Allocating capacity of a network connection to data steams based on type
US9912709B2 (en) 2014-09-30 2018-03-06 Level 3 Communications, Llc Allocating capacity of a network connection to data streams based on type
US10277647B2 (en) 2014-09-30 2019-04-30 Level 3 Communications, Llc Allocating capacity of a network connection to data streams based on type
US10581942B2 (en) 2014-09-30 2020-03-03 Level 3 Communications, Llc Allocating capacity of a network connection to data streams based on type

Also Published As

Publication number Publication date
US6504842B1 (en) 2003-01-07


Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION