US6999464B2 - Method of scalable non-blocking shared memory output-buffered switching of variable length data packets from pluralities of ports at full line rate, and apparatus therefor - Google Patents


Info

Publication number
US6999464B2
US6999464B2 · US09/941,144 · US94114401A
Authority
US
United States
Prior art keywords
data
memory
output
queues
cells
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US09/941,144
Other versions
US20030043828A1 (en
Inventor
Xiaolin Wang
Satish Soman
Subhasis Pal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xylon LLC
Original Assignee
Axiowave Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Axiowave Networks Inc filed Critical Axiowave Networks Inc
Priority to US09/941,144 priority Critical patent/US6999464B2/en
Assigned to AXIOWAVE NETWORKS, INC. reassignment AXIOWAVE NETWORKS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PAL, SUBHASIS, SOMAN, SATISH, WANG, XIAOLIN
Priority to CNA02817058XA priority patent/CN1550091A/en
Priority to PCT/IB2002/002751 priority patent/WO2003024033A1/en
Priority to IL16064402A priority patent/IL160644A0/en
Priority to JP2003527955A priority patent/JP2005503071A/en
Priority to KR10-2004-7003059A priority patent/KR20040044519A/en
Priority to CA002459001A priority patent/CA2459001A1/en
Priority to EP02743529A priority patent/EP1423942A1/en
Publication of US20030043828A1 publication Critical patent/US20030043828A1/en
Publication of US6999464B2 publication Critical patent/US6999464B2/en
Application granted granted Critical
Assigned to WEST LANE DATA LLC reassignment WEST LANE DATA LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AXIOWAVE NETWORKS, INC.
Assigned to XYLON LLC reassignment XYLON LLC MERGER (SEE DOCUMENT FOR DETAILS). Assignors: WEST LANE DATA LLC
Adjusted expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/10 Packet switching elements characterised by the switching fabric construction
    • H04L49/104 Asynchronous transfer mode [ATM] switching fabrics
    • H04L49/105 ATM switching elements
    • H04L49/108 ATM switching elements using shared central buffer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5603 Access techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681 Buffer or queue management


Abstract

A novel scalable-port non-blocking shared-memory output-buffered variable length queued data switching method and apparatus wherein successive data in each of a plurality of queues of data traffic is distributed to corresponding cells of each of successive memory channels in striped fashion across a shared memory space.

Description

FIELD
The present invention relates to communication data switching between pluralities of input and output ports, and, more particularly, to problems and limitations of present-day generally input-buffering system architectures and the like for the switching of variable-length data packets—limitations in the available number of ports for current data switching “speeds” and “feeds”; limitations with current data transmission delays, and in current available quality of service, including multiplexing jitter, interruptions, and in bandwidth, latency guarantees for particular data transmission services, and in obviating deleterious head-of-the-line blocking and non-scalability of architecture.
The usual “feed” today is 8 to 12 ports, but this can go up as time goes by. The “speed” today is, say, OC192 (which is 10 gigabits per second), but it can also go to OC768, which is 40 gigabits per second, and then beyond.
BACKGROUND
Prevalent products in the industry today can only support 8 to 12 OC192 ports, and they suffer from the other limitations mentioned above.
To endeavor to meet some of the quality of service requirements concurrently with data “speed” and “feed” requirements, the prior art has most commonly taken the before-described input buffering approach, wherein the input data is locally buffered on an input port that has no “knowledge” of what input data may also be present at other input ports and contending for the same output port destination. The input port merely blindly makes the request of the input buffered switch to direct its data to the particular output port; and this prior architecture thus has had to live with its classic problems of potential head-of-the-line (HOL) blocking and inability to guarantee delay and jitter in quality of service. The input-buffered systems, accordingly, have to put up with sometimes even unrealistic periods of time before data can make its way to the switch for enabling transmission to destination output ports.
The particular output-buffered approach of the invention, on the other hand, uses a central shared memory architecture comprised of a plurality of similar successive data memory channels defining a memory space, with fixed limited times of data distribution from the input ports successively into the successive memory cells of the successive memory channels, and in striped fashion across the memory space. This enables non-blocking shared memory output-buffered data switching, with the data stored across the memory channels uniformly. By so limiting the time of storing data from an input port in each successive memory channel, the problem is admirably solved of guaranteeing that data is written into memory in a non-blocking fashion across the memory space with bounded delay.
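The striping pattern described above can be sketched as a small model: successive cells are assigned to successive memory channels, wrapping around the channel set, so every channel receives about the same number of cells. This is an illustrative Python sketch; the channel count, function names, and cell labels are not taken from the patent.

```python
from collections import Counter

N_CHANNELS = 4  # "n" memory channels (illustrative size)

def stripe(cells, start_channel=0, n_channels=N_CHANNELS):
    """Assign each successive cell to the next memory channel in sequence,
    wrapping around so the data is striped uniformly across the memory space."""
    placements = []
    ch = start_channel
    for cell in cells:
        placements.append((cell, ch))
        ch = (ch + 1) % n_channels  # advance to the next channel
    return placements

# Even a burst destined entirely for one queue spreads evenly:
placements = stripe([f"cell{i}" for i in range(8)])
per_channel = Counter(ch for _, ch in placements)
print(dict(per_channel))  # every channel receives exactly 2 of the 8 cells
```

Because the assignment is purely positional, no channel can be overloaded relative to the others, which is what makes the write path non-blocking with bounded delay.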
SUMMARY OF INVENTION
From one of its important viewpoints, accordingly, the invention embraces a method of receiving and outputting a plurality m of queues of data traffic streams to be switched from data traffic line card input ports to output ports, that comprises, providing a plurality n of similar successive data memory channels each having a number of memory cells defining a shared memory space assigned to the m queues; providing buffering for m memory cells, in front of each memory channel to receive and buffer data switched thereto from line card traffic streams, and providing sufficient buffering to absorb a burst from up to n line cards; and distributing successive data in each of the queues during fixed limited times only to corresponding successive cells of each of the successive memory channels and in striped fashion across the memory space, thereby providing non-blocking shared memory output-buffered data switching.
Preferred and best mode embodiments and architectural design features are hereinafter more fully detailed.
DRAWINGS
The invention will now be described in connection with the accompanying drawings, FIG. 1 of which is a combined generalized block and circuit diagram of a preferred architecture for practicing the data write-path method of the invention; and
FIG. 2 is a similar diagram of read-out from the shared memory channel system of FIG. 1.
PREFERRED EMBODIMENT(S) OF THE INVENTION
Referring to FIG. 1, an illustrative preferred memory architecture for practicing the invention is shown having, for the write path, a plurality n of similar successive data memory channels or banks (say, 256 megabytes times n storage channels), labeled Memory Channel 0 through Memory Channel n-1, for storing and outputting m queues of variable-length data traffic streams Queue 0 through Queue m-1 from respective data traffic line cards Line Card 0 through Line Card n-1 at input ports I, with, say, 10 gigabits/sec of bandwidth, and stored in the memory channels. Each of the n data memory channels is provided with a buffer having m memory cells, with the memory channels defining a shared memory space assigned to the m queues. The buffers are shown connected in front of each memory channel and are illustrated as in the form of first-in-first-out buffers FIFO 0, FIFO 1, . . . FIFO n-1 to receive and buffer data switched thereto at SW from the line cards. In accordance with the invention, the buffers are designed to provide sufficient buffering to absorb a burst of data from up to n line cards; i.e., big enough to store data for m cells and to absorb a burst of, for example, OC192 traffic of variable-length data packets from the line cards at the input ports I. [Example: 64 OC192 or 16 OC768 ports.] The maximum depth of each FIFO at the front of each memory channel is thus made equal to the number m of queues in the system.
Further in accordance with the invention, the data of the variable-length queues is applied or distributed only for fixed limited time(s) to corresponding successive cells of each of the successive memory channels so as to distribute these time-bounded inputs in striped fashion across the memory space of the channels. Within each period, every memory channel or bank receives data in about the same number of data cells, though arrival time is traffic dependent; and this, whether there is a data burst or the data is distributed equally throughout the period.
Two exemplary (and extreme condition) traffic scenarios may be considered. In the first, all traffic streams from the line cards may be destined to one queue. Since the cell addresses are assigned continually, all the memory channels will absorb a data burst. There will be no accumulation in any FIFO, provided the aggregation of bandwidth to memory is made to match the input bandwidth.
In a second extreme scenario, all the cells may happen to end on the same memory channel. The FIFO at the front of that memory channel will absorb the burst; and the next burst to come along, will move to the next memory channel.
This demonstrates that with the proper sizing of the FIFOs to absorb any data burst at the front of each memory channel, the burst problem is well solved and with a bounded latency. As above explained, moreover, the depth of the FIFOs is set at about the number of queues supported by the system, and the aggregated bandwidth between the FIFOs in the memory channels is adjusted, as indicated previously, at least to match the input bandwidth.
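The worst-case scenario above, in which the current cell of every queue lands on the same memory channel, can be modeled to show why a per-channel FIFO of depth m is sufficient. The sizes and names below are illustrative, not taken from the patent.

```python
M_QUEUES, N_CHANNELS = 8, 4  # "m" queues, "n" channels (illustrative sizes)

# One FIFO per memory channel; the sizing rule above sets the maximum
# depth of each FIFO equal to the number of queues m.
fifos = [[] for _ in range(N_CHANNELS)]

# Second extreme scenario: the current cell of every queue happens to
# end on the same memory channel in a single burst.
target = 2
for q in range(M_QUEUES):
    fifos[target].append(("queue", q))

# A burst can contain at most one current cell per queue, so occupancy
# is bounded by m and the FIFO absorbs it; striping then moves the next
# burst on to the following channel.
print(max(len(f) for f in fifos))  # 8, i.e. exactly m, never more
```

This is the sense in which the latency is bounded: no FIFO ever needs to hold more than m cells, and the aggregated FIFO-to-memory bandwidth at least matches the input bandwidth, so the backlog drains within a fixed period.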
Through the invention, accordingly, not only is non-blocking shared memory output-buffered data switched, but the bandwidth can be assigned and guaranteed to designated users. If a predetermined assigned depth is exceeded by a user, such excess is stored in available unoccupied shared memory and may be additionally charged for, to that user.
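The per-user guarantee described above can be sketched as a simple accounting model: each user has an assigned depth, and cells beyond it spill into unoccupied shared memory where they can be metered for additional charging. The class and attribute names here are hypothetical, chosen only to illustrate the scheme.

```python
class QueueAccount:
    """Illustrative per-user accounting for guaranteed vs. overflow storage."""

    def __init__(self, guaranteed_cells):
        self.guaranteed = guaranteed_cells  # predetermined assigned depth
        self.used = 0                       # cells within the guarantee
        self.overflow = 0                   # excess held in unoccupied shared memory

    def store_cell(self):
        if self.used < self.guaranteed:
            self.used += 1
        else:
            self.overflow += 1  # chargeable excess beyond the assigned depth

acct = QueueAccount(guaranteed_cells=3)
for _ in range(5):
    acct.store_cell()
print(acct.used, acct.overflow)  # 3 cells guaranteed, 2 in shared overflow
```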
FIG. 2 illustrates the read path architecture of the invention for use with the write path system of FIG. 1, providing for every line card a corresponding FIFO that is able to draw from the shared memory, at the full bandwidth of the shared memory, in a TDM-type fashion. In the read operation, it is important that the bandwidths are completely balanced, with each line card having equal access to the shared memory system, wherein each line card gets its fixed limited time slot to read out the required amount of data to satisfy its bandwidth needs. As an example, Line Card 0 and FIFO 0 of FIG. 2 read from the shared memory at the full bandwidth of the shared memory going up to FIFO 0; Line Card 1 and its corresponding FIFO 1 will get their share of the full bandwidth from the shared memory; and so on, each line card getting its required share of the shared memory bank data.
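The fixed time-slot read access can be sketched as a round-robin TDM schedule in which each line card is granted the full shared-memory bandwidth for its slot, so all cards receive equal shares. The function name and the card/slot counts are illustrative assumptions.

```python
from collections import Counter

def tdm_read_schedule(n_line_cards, n_slots):
    """Round-robin TDM grant: slot k is assigned to line card k mod n,
    giving every card a fixed, equal share of shared-memory read bandwidth."""
    return [slot % n_line_cards for slot in range(n_slots)]

schedule = tdm_read_schedule(4, 12)
counts = Counter(schedule)
print(dict(counts))  # each of the 4 line cards is granted exactly 3 slots
```

Because the grant depends only on the slot index, no line card can be starved or blocked by another, mirroring the balanced read-bandwidth property claimed for the read path.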
Further modifications will occur to those skilled in this art, and such are considered to fall within the spirit and scope of the invention as defined in the appended claims.

Claims (24)

1. A method of receiving and outputting a plurality m of queues of data traffic streams to be switched from data traffic line card input ports to output ports, that comprises, providing a plurality n of similar successive data memory channels each having a number of memory cells defining a shared memory space assigned to the m queues; providing buffering for m memory cells in front of each memory channel to receive and buffer data switched thereto from line card traffic streams, and providing sufficient buffering to absorb a burst from up to n line cards; and distributing successive data in each of the queues during fixed limited times only to corresponding successive cells of each of the successive memory channels and in striped fashion across the memory space, thereby providing non-blocking shared memory output-buffered data switching.
2. The method of claim 1 wherein, in read mode, each line card draws data from storage in the shared memory through a corresponding buffer and in a fixed limited time slot to read out the required amount of data to satisfy its bandwidth needs.
3. The method of claim 1 wherein the buffering is provided by FIFO buffers each sized to store m cells of data.
4. The method of claim 3 wherein the aggregation of bandwidth to memory is adjusted for matching the data input bandwidth.
5. The method of claim 4 wherein the cell addresses are assigned continually such that the memory channels absorb said burst.
6. The method of claim 5 wherein in the event that all traffic streams from the line card ports are directed to one queue, accumulation of data is prevented in any FIFO by said matching.
7. The method of claim 5 wherein, in the event that all cells storing different queues happen to end on the same memory channel, the occurrence of a burst is absorbed on the FIFO at the front end of that channel.
8. The method of claim 7 wherein a subsequent burst is directed to the next successive memory channel of the memory space.
9. The method of claim 3 wherein the depth of each FIFO is adjusted to about the number m of queues.
10. The method of claim 2 wherein each buffer is a FIFO buffer sized for m cells of data.
11. The method of claim 3 wherein the number of input and output ports is scalable.
12. The method of claim 3 wherein 256 megabytes×n memory channels are employed.
13. A scalable-port, non-blocking, shared-memory output-buffered variable-length queued data switch wherein a data write path is provided having, in combination, a plurality of data line card input ports connected to a switch for switching m queues of data to a shared memory space assigned to the queues and comprising a plurality n of similar successive data memory channels, each having memory cells; a plurality n of buffers each fed data by the switch and each gated to feed a corresponding memory channel but only for fixed limited times; each of the buffers being provided with sufficient buffering to absorb a burst from up to n line cards; and means for distributing the successively gated data in each of the queues to corresponding successive cells of each of the successive memory channels in striped fashion across the memory space, thereby to provide non-blocking, shared-memory output-buffered data switching.
14. The shared memory output-buffered switch of claim 13 wherein a read path is provided for each line card to draw data from storage in the shared memory through a corresponding buffer and in a fixed limited time slot to read out the required amount of data to satisfy its bandwidth needs.
15. The output-buffered switch of claim 13 wherein the buffering is provided by FIFO buffers each sized to store m cells of data.
16. The output-buffered switch of claim 15 wherein the aggregation of bandwidth to memory is adjusted for matching the data input bandwidth.
17. The output-buffered switch of claim 16 wherein means is provided for continually assigning the cell addresses such that the memory channels absorb said burst.
18. The output-buffered switch of claim 17 wherein, in the event that all traffic streams from the line card ports are directed to one queue, means is provided for preventing accumulation of data in any FIFO.
19. The output-buffered switch of claim 17 wherein, in the event that all cells storing different queues happen to end on the same memory channel, the occurrence of a burst is absorbed on the FIFO at the front end of that channel.
20. The output-buffered switch of claim 19 wherein means is provided for directing a subsequent burst to the next successive memory channel.
21. The output-buffered switch of claim 15 wherein the depth of each FIFO is adjusted to about the number m of queues.
22. The shared memory output-buffered switch system of claim 14 wherein each buffer is a FIFO buffer sized for m cells of data.
23. The shared memory output-buffered switch system of claim 22 wherein the line card drawing from shared memory is effected in a TDM type fashion.
24. The method of claim 2 wherein the line card drawing from shared memory is effected in a TDM type fashion.
US09/941,144 2001-08-28 2001-08-28 Method of scalable non-blocking shared memory output-buffered switching of variable length data packets from pluralities of ports at full line rate, and apparatus therefor Expired - Lifetime US6999464B2 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US09/941,144 US6999464B2 (en) 2001-08-28 2001-08-28 Method of scalable non-blocking shared memory output-buffered switching of variable length data packets from pluralities of ports at full line rate, and apparatus therefor
CA002459001A CA2459001A1 (en) 2001-08-28 2002-07-04 Shared memory data switching
PCT/IB2002/002751 WO2003024033A1 (en) 2001-08-28 2002-07-04 Shared memory data switching
IL16064402A IL160644A0 (en) 2001-08-28 2002-07-04 Shared memory data switching
JP2003527955A JP2005503071A (en) 2001-08-28 2002-07-04 Shared memory data exchange
KR10-2004-7003059A KR20040044519A (en) 2001-08-28 2002-07-04 Shared memory data switching
CNA02817058XA CN1550091A (en) 2001-08-28 2002-07-04 Shared memory data switching
EP02743529A EP1423942A1 (en) 2001-08-28 2002-07-04 Shared memory data switching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/941,144 US6999464B2 (en) 2001-08-28 2001-08-28 Method of scalable non-blocking shared memory output-buffered switching of variable length data packets from pluralities of ports at full line rate, and apparatus therefor

Publications (2)

Publication Number Publication Date
US20030043828A1 US20030043828A1 (en) 2003-03-06
US6999464B2 true US6999464B2 (en) 2006-02-14

Family

ID=25475994

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/941,144 Expired - Lifetime US6999464B2 (en) 2001-08-28 2001-08-28 Method of scalable non-blocking shared memory output-buffered switching of variable length data packets from pluralities of ports at full line rate, and apparatus therefor

Country Status (8)

Country Link
US (1) US6999464B2 (en)
EP (1) EP1423942A1 (en)
JP (1) JP2005503071A (en)
KR (1) KR20040044519A (en)
CN (1) CN1550091A (en)
CA (1) CA2459001A1 (en)
IL (1) IL160644A0 (en)
WO (1) WO2003024033A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012512215A (en) * 2008-12-17 2012-05-31 ロレアル Cosmetic method and composition for controlling skin browning induced by UV irradiation

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080028157A1 (en) * 2003-01-13 2008-01-31 Steinmetz Joseph H Global shared memory switch
WO2005101763A1 (en) * 2004-04-12 2005-10-27 Integrated Device Technology, Inc. Method and apparatus for forwarding bursty data
US7940662B2 (en) * 2004-04-12 2011-05-10 Integrated Device Technology, Inc. Method and apparatus for forwarding bursty data
US20060039284A1 (en) * 2004-04-12 2006-02-23 Integrated Device Technology, Inc. Method and apparatus for processing a complete burst of data
US7983295B2 (en) * 2005-10-28 2011-07-19 Broadcom Corporation Optimizing packet queues for channel bonding over a plurality of downstream channels of a communications management system
US8644140B2 (en) 2009-09-09 2014-02-04 Mellanox Technologies Ltd. Data switch with shared port buffers
CN101741720B (en) * 2009-11-06 2013-01-16 中兴通讯股份有限公司 Device and method for multichannel cell time slot multiplexing
US8699491B2 (en) 2011-07-25 2014-04-15 Mellanox Technologies Ltd. Network element with shared buffers
US9130885B1 (en) 2012-09-11 2015-09-08 Mellanox Technologies Ltd. End-to-end cache for network elements
US9582440B2 (en) 2013-02-10 2017-02-28 Mellanox Technologies Ltd. Credit based low-latency arbitration with data transfer
US8989011B2 (en) 2013-03-14 2015-03-24 Mellanox Technologies Ltd. Communication over multiple virtual lanes using a shared buffer
US8966164B1 (en) * 2013-09-27 2015-02-24 Avalanche Technology, Inc. Storage processor managing NVME logically addressed solid state disk array
WO2014146302A1 (en) * 2013-03-22 2014-09-25 华为技术有限公司 Uplink data transmission method and apparatus
US9641465B1 (en) 2013-08-22 2017-05-02 Mellanox Technologies, Ltd Packet switch with reduced latency
US9548960B2 (en) 2013-10-06 2017-01-17 Mellanox Technologies Ltd. Simplified packet routing
US9325641B2 (en) * 2014-03-13 2016-04-26 Mellanox Technologies Ltd. Buffering schemes for communication over long haul links
US9584429B2 (en) 2014-07-21 2017-02-28 Mellanox Technologies Ltd. Credit based flow control for long-haul links
JP2016100674A (en) * 2014-11-19 2016-05-30 富士通株式会社 Transmission device
CN109062661B (en) * 2018-07-10 2021-10-26 中国电子科技集团公司第三十八研究所 Multi-channel arbitration circuit of online simulation debugger and scheduling method thereof
US10951549B2 (en) 2019-03-07 2021-03-16 Mellanox Technologies Tlv Ltd. Reusing switch ports for external buffer network
CN111078150A (en) * 2019-12-18 2020-04-28 成都定为电子技术有限公司 High-speed storage equipment and uninterrupted capacity expansion method
US11558316B2 (en) 2021-02-15 2023-01-17 Mellanox Technologies, Ltd. Zero-copy buffering of traffic of long-haul links

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4644529A (en) * 1985-08-02 1987-02-17 Gte Laboratories Incorporated High-speed switching processor for a burst-switching communications system
US4748618A (en) * 1986-05-21 1988-05-31 Bell Communications Research, Inc. Telecommunications interface
US6622232B2 (en) * 2001-05-18 2003-09-16 Intel Corporation Apparatus and method for performing non-aligned memory accesses
US6621828B1 (en) * 1999-12-01 2003-09-16 Cisco Technology, Inc. Fused switch core and method for a telecommunications node
US6822960B1 (en) * 1999-12-01 2004-11-23 Cisco Technology, Inc. Asynchronous transfer mode (ATM) switch and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69021213T2 (en) * 1990-12-20 1996-02-29 Ibm Modular buffer storage for a packet switched network.
DE69841486D1 (en) * 1997-05-31 2010-03-25 Texas Instruments Inc Improved packet switching


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012512215A (en) * 2008-12-17 2012-05-31 ロレアル Cosmetic method and composition for controlling skin browning induced by UV irradiation

Also Published As

Publication number Publication date
JP2005503071A (en) 2005-01-27
CA2459001A1 (en) 2003-03-20
IL160644A0 (en) 2004-07-25
US20030043828A1 (en) 2003-03-06
KR20040044519A (en) 2004-05-28
EP1423942A1 (en) 2004-06-02
WO2003024033A1 (en) 2003-03-20
CN1550091A (en) 2004-11-24

Similar Documents

Publication Publication Date Title
US6999464B2 (en) Method of scalable non-blocking shared memory output-buffered switching of variable length data packets from pluralities of ports at full line rate, and apparatus therefor
AU637250B2 (en) Traffic shaping method and circuit
US5577035A (en) Apparatus and method of processing bandwidth requirements in an ATM switch
US5202885A (en) Atm exchange with copying capability
JP2927550B2 (en) Packet switch
US6351466B1 (en) Switching systems and methods of operation of switching systems
US5774453A (en) Input/output buffer type ATM switch
US6611527B1 (en) Packet switching apparatus with a common buffer
US20040151197A1 (en) Priority queue architecture for supporting per flow queuing and multiple ports
US6212165B1 (en) Apparatus for and method of allocating a shared resource among multiple ports
US6487171B1 (en) Crossbar switching matrix with broadcast buffering
US6167041A (en) Switch with flexible link list manager for handling ATM and STM traffic
JP3269273B2 (en) Cell switching device and cell switching system
US20030179759A1 (en) Method and apparatus for switching data using parallel switching elements
US6061358A (en) Data communication system utilizing a scalable, non-blocking, high bandwidth central memory controller and method
US10693811B2 (en) Age class based arbitration
US20070140232A1 (en) Self-steering Clos switch
JP2002198993A (en) Packet switch
EP0537743B1 (en) Switching method for a common memory based switching field and the switching field
AU2002345294A1 (en) Shared memory data switching
CN115473862B (en) Method and system for avoiding blocking of multicast packet queue head of switching chip
US20040215869A1 (en) Method and system for scaling memory bandwidth in a data network
US7143185B1 (en) Method and apparatus for accessing external memories
KR0151917B1 (en) Priority control apparatus in restricted common memory atm switching system
US20050052920A1 (en) Time slot memory management

Legal Events

Date Code Title Description
AS Assignment

Owner name: AXIOWAVE NETWORKS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, XIAOLIN;SOMAN, SATISH;PAL, SUBHASIS;REEL/FRAME:012572/0506

Effective date: 20010822

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: WEST LANE DATA LLC, NEVADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AXIOWAVE NETWORKS, INC.;REEL/FRAME:021731/0627

Effective date: 20080827

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

REFU Refund

Free format text: REFUND - SURCHARGE, PETITION TO ACCEPT PYMT AFTER EXP, UNINTENTIONAL (ORIGINAL EVENT CODE: R2551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: XYLON LLC, NEVADA

Free format text: MERGER;ASSIGNOR:WEST LANE DATA LLC;REEL/FRAME:036641/0101

Effective date: 20150813

FPAY Fee payment

Year of fee payment: 12