US20050105538A1 - Switching system with distributed switching fabric - Google Patents

Switching system with distributed switching fabric

Info

Publication number
US20050105538A1
Authority
US
United States
Prior art keywords
switch
elements
ingress
egress
protocol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/965,444
Inventor
Ananda Perera
Edwin Hoffman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Raptor Networks Technology Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US10/965,444 priority Critical patent/US20050105538A1/en
Assigned to RAPTOR NETWORKS TECHNOLOGY, INC. reassignment RAPTOR NETWORKS TECHNOLOGY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HOFFMAN, EDWIN, PERERA, ANANDA
Publication of US20050105538A1 publication Critical patent/US20050105538A1/en
Priority to US11/248,707 priority patent/US20060029071A1/en
Priority to US11/248,711 priority patent/US20060029057A1/en
Priority to US11/248,639 priority patent/US20060029055A1/en
Priority to US11/248,708 priority patent/US20060029072A1/en
Priority to US11/248,111 priority patent/US20060039369A1/en
Priority to US11/248,710 priority patent/US20060029056A1/en
Assigned to BRIDGE BANK N.A., AGILITY CAPITAL, LLC reassignment BRIDGE BANK N.A. SECURITY AGREEMENT Assignors: RAPTOR NETWORKS TECHNOLOGY, INC.
Assigned to AGILITY CAPITAL, LLC, BRIDGE BANK N.A. reassignment AGILITY CAPITAL, LLC SECURITY AGREEMENT Assignors: RAPTOR NETWORKS TECHNOLOGY INC.
Assigned to RAPTOR NETWORKS TECHNOLOGY, INC. reassignment RAPTOR NETWORKS TECHNOLOGY, INC. SECURITY AGREEMENT Assignors: AGILITY CAPITAL, LLC, BRIDGE BANK N.A.
Assigned to RAPTOR NETWORKS TECHNOLOGY INC., RAPTOR NETWORKS TECHNOLOGY, INC. reassignment RAPTOR NETWORKS TECHNOLOGY INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: AGILITY CAPITAL, LLC, BRIDGE BANK N.A.
Priority to US11/610,281 priority patent/US7352745B2/en
Assigned to CASTLERIGG MASTER INVESTMENTS LTD., AS COLLATERAL AGENT reassignment CASTLERIGG MASTER INVESTMENTS LTD., AS COLLATERAL AGENT GRANT OF SECURITY INTEREST Assignors: RAPTOR NETWORKS TECHNOLOGY, INC.

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/351: Switches specially adapted for specific applications, for local area network [LAN], e.g. Ethernet switches
    • H04L 49/3009: Peripheral units, e.g. input or output ports; header conversion, routing tables or routing tags
    • H04L 49/552: Prevention, detection or correction of errors by ensuring the integrity of packets received through redundant connections
    • H04L 49/102: Packet switching elements characterised by the switching fabric construction using shared medium, e.g. bus or ring

Abstract

A switch encapsulates incoming information using a header, and removes the header upon egress. The header is used both by distributed ingress nodes and within a distributed core to facilitate switching. The ingress and egress elements preferably support Ethernet or another protocol providing connectionless media with a stateful connection. Preferred switches include management protocols for discovering which elements are connected, for constructing appropriate connection tables, for designating a master element, and for resolving failures and off-line conditions among the switches. Secure data protocol (SDP), port to port (PTP) protocol, and active/active protection service (AAPS) are all preferably implemented. Systems and methods contemplated herein can advantageously use Strict Ring Topology (SRT), and configure the topology automatically. Components of a distributed switching fabric can be geographically separated by at least one kilometer, and in some cases by over 150 kilometers.

Description

  • This application claims priority to provisional application No. 60/511,145 filed Oct. 14, 2003; provisional application No. 60/511,144 filed Oct. 14, 2003; provisional application No. 60/511,143, filed Oct. 14, 2003; provisional application No. 60/511,142 filed Oct. 14, 2003; provisional application No. 60/511,141 filed Oct. 14, 2003; provisional application No. 60/511,140 filed Oct. 14, 2003; provisional application No. 60/511,139 filed Oct. 14, 2003; provisional application No. 60/511,138 filed Oct. 14, 2003; provisional application No. 60/511,021 filed Sep. 14, 2003; and provisional application No. 60/563,262 filed Apr. 16, 2004, all of which are incorporated herein by reference in their entirety.
  • FIELD OF THE INVENTION
  • The field of the invention is network switches.
  • BACKGROUND
  • Modern computer networks typically communicate using discrete packets or frames of data according to predefined protocols. There are multiple such standards, including the ubiquitous TCP and IP standards. For all but the simplest local topologies, networks employ intermediate nodes between the end-devices. Bridges, switches, and/or routers, are all examples of intermediate nodes.
  • As used herein, a network switch is any intermediate device that forwards packets between end-devices and/or other intermediate devices. Switches operate at the data link layer (layer 2) and sometimes the network layer (layer 3) of the OSI Reference Model, and therefore typically support any packet protocol. A switch has a plurality of input and output ports. Although a typical switch has only 8, 16, or other relatively small number of ports, it is known to connect switches together to provide large numbers of inputs and outputs. Prior art FIG. 1 shows a typical arrangement of switch modules into a large switch that provides 128 inputs and 128 outputs.
  • One problem with simple embodiments of the prior art design of FIG. 1 is that failure of any given switch destroys integrity of the entire switching system. One solution is to provide entire redundant backup systems (external redundancy), so that a spare system can quickly replace functionality of a defective system. That solution, however, is overly expensive because an entire backup must be deployed for each working system. The solution is also problematic in that the redundant system must be engaged upon failure of substantially any component within the working system. Another solution is to provide redundant modules within the system, and to deploy those modules intelligently (internal redundancy). But that solution is problematic because all the components are situated locally to one another. A fire, earthquake or other catastrophe will still terminally disrupt the functionality of the entire system.
  • U.S. Pat. No. 6,256,546 to Beshai (March 2002) describes a protocol that uses an adaptive packet header to simplify packet routing and increase transfer speed among switch modules. Beshai's system is advantageous because it is not limited to a fixed cell length, such as the 53 byte length of an Asynchronous Transfer Mode (ATM) system, and because it reportedly has better quality of service and higher throughput than an Internetworking Protocol (IP) switched network. The Beshai patent is incorporated herein by reference, along with all other extrinsic material discussed herein. Prior art FIG. 1A depicts a system according to Beshai's '546 patent. There, pluralities of edge modules (ingress modules 110A-D and egress modules 130A-D) are interconnected by a passive core 120. Each of the ingress modules 110A-D accepts data packets in multiple formats, adds a standardized header that indicates a destination for the packet, and switches the packets to the appropriate egress modules 130A-D through the passive core 120. At the egress modules 130A-D, the header is removed from the packet, and the packet is transferred to a sink in its native format. The solid lines of 112A-112D depict unencapsulated information arriving at circuit ports, ATM ports, frame relay ports, IP ports, and UTM ports. Similarly, the solid lines of 132A-D depict unencapsulated information exiting to the various ports in the native format of the information. The dotted lines of core 120 and facing portions of the ingress 110A-D and egress 130A-D modules depict information that is contained in UTM-headed packets. The entire system 100 operates as a single distributed switch, in which all switching is done at the edge (ingress and egress modules).
  • Despite numerous potential advantages, Beshai's solution in the '546 patent has significant drawbacks. First, although the system is described as a multi-service switch (with circuit ports, ATM ports, frame relay ports, IP ports, and UTM ports), there is no contemplation of using the switch as an Ethernet switch. Ethernet offers significant advantages over other protocols, including connectionless stateful communication. A second drawback is that the optical core is contemplated to be entirely passive. The routes need to be set up and torn down before packets are switched across the core. As such, Beshai does not propose a distributed switching fabric; he only discloses a distributed edge fabric with optical cross-connected cores. A third, related disadvantage is that Beshai's concept only supports a single channel from one module to another. All of those deficiencies reduce functionality.
  • Beshai publication no. 2001/0006522 (July 2001) resolves one of the deficiencies of the '546 patent, namely the single channel limitation between modules. In the '522 application, Beshai teaches a switching system having packet-switching edge modules and channel switching core modules. As shown in prior art FIG. 1B, traffic entering the system through ports 162A is sorted at each edge module 160A-D, and switched to various core elements 180A-C via paths 170. The core elements switch the traffic to other destination edge modules 180A-C, for delivery to final destinations. Beshai contemplates that the core elements can use channel switching to minimize the potential wasted time in a pure TDM (time division multiplexing) system, and that the entire system can use time counter co-ordination to realize harmonious reconfiguration of edge modules and core modules.
  • Leaving aside the switching mechanisms between and within the core elements, the channel switching core of the '522 application provides nothing more than virtual channels between edge devices. It does not switch individual packets of data. Thus, even though the '522 application incorporates by reference Beshai's Ser. No. 09/244,824 application regarding a High-Capacity Packet Switch (issued as U.S. Pat. No. 6,721,271 in April 2004), the '522 application still fails to teach, suggest, or motivate one of ordinary skill to provide a fully distributed network (edge and core) that acts as a single switch.
  • What is still needed is a switching system in which the switching takes place both at the distributed edge nodes and within a distributed core, and where the entire system acts as a single switch.
  • SUMMARY OF THE INVENTION
  • The present invention provides apparatus, systems, and methods in which the switching takes place both at the distributed edge nodes and within a distributed core, and where the entire system acts as a single switch through encapsulation of information using a special header that is added by the system upon ingress, and removed by the system upon egress.
  • The routing header includes at least a destination element address, and preferably also includes a destination port address and a source element address. Where the system is configured to address clusters of elements, the header also preferably includes a destination cluster address and a source cluster address.
  • The ingress and egress elements preferably support Ethernet or another protocol providing connectionless media with a stateful connection. At least some of the ingress and egress elements preferably have at least 8 input ports and 8 output ports, and communicate at a speed of at least one, and more preferably at least ten, gigabits per second (Gbps).
  • Preferred switches include management protocols for discovering which elements are connected, for constructing appropriate connection tables, for designating a master element, and for resolving failures and off-line conditions among the switches. Secure data protocol (SDP), port to port (PTP) protocol, and active/active protection service (AAPS) are all preferably implemented.
  • Systems and methods contemplated herein can advantageously use a Strict Ring Topology (SRT), and configure the topology automatically. Other topologies can alternatively or additionally be employed. Components of a distributed switching fabric can be geographically separated by at least one kilometer, and in some cases by over 150 kilometers.
  • Various objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of preferred embodiments of the invention, along with the accompanying drawings in which like numerals represent like components.
  • BRIEF DESCRIPTION OF THE DRAWING
  • FIG. 1A is a schematic of a prior art arrangement of switch modules that cooperate to act as a single switch.
  • FIG. 1B is a schematic of a prior art arrangement of switch modules connected by an active core, but where the modules operate independently of one another.
  • FIG. 2 is a schematic of a true distributed fabric switching system, in which edge elements add or remove headers, and the core actively switches packets according to the headers.
  • FIG. 3 is a schematic of a routing header.
  • FIG. 4 shows a high level design of a preferred combination ingress/egress element.
  • FIG. 5 shows a high level design of a preferred core element.
  • FIG. 6 is a schematic of a Raptor™ 1010 switch.
  • FIG. 7 is a schematic of a Raptor™ 1808 switch.
  • FIG. 8 is a schematic of an exemplary distributed switching system according to preferred aspects of the present invention.
  • FIG. 9 is a schematic of a super fabric implementation of a distributed switching fabric.
  • DETAILED DESCRIPTION
  • In FIG. 2 a switching system 200 generally includes ingress elements 210A-C, egress elements 230A-C, core switching elements 220A-C and connector elements 240A-C. The ingress elements encapsulate incoming packets with a routing header (see FIG. 3), and perform initial switching. The encapsulated packets then enter the core elements for further switching. The intermediate elements facilitate communication between core elements. The egress elements remove the header, and deliver the packets to a sink or final destination.
  • Those skilled in the art will appreciate that the switching (encapsulation) header must, at a bare minimum, include a destination element address. In preferred embodiments the header also includes a destination port ID and, where elements are clustered, an optional destination cluster ID. Also optional are fields for source cluster, source element, and source port IDs. As used herein, an “ID” is something that is the same as, or can be resolved into, an address. In FIG. 3 a preferred switching header 300 generally includes a Destination Cluster ID 310, a Destination Element ID 320, a Destination Port ID 330, a Source Cluster ID 340 and a Source Element ID 350. In this particular example, each of the fields has a length of at least 1 byte and up to 2 bytes. Those skilled in the art should also appreciate that the term “header” is used here in a broad sense to mean any additional routing data that is included in a package that encapsulates other information. The header need not be located at the head end of the frame or packet.
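  • The following is an illustrative sketch, added here for exposition and not part of the original disclosure, of how the five-field switching header 300 could be packed and parsed in software. Fixed 2-byte, big-endian fields and the name SwitchingHeader are assumptions, and, as noted above, a real implementation need not place these fields at the head of the frame.

```python
import struct
from dataclasses import dataclass

# Hypothetical layout: five 2-byte fields, big-endian. The text only requires
# each field to be 1 to 2 bytes, so the exact widths here are an assumption.
_HEADER_FMT = ">HHHHH"
HEADER_LEN = struct.calcsize(_HEADER_FMT)  # 10 bytes


@dataclass
class SwitchingHeader:
    dst_cluster: int   # Destination Cluster ID 310
    dst_element: int   # Destination Element ID 320
    dst_port: int      # Destination Port ID 330
    src_cluster: int   # Source Cluster ID 340
    src_element: int   # Source Element ID 350

    def encapsulate(self, frame: bytes) -> bytes:
        """Prepend the routing header to a native frame (ingress side)."""
        packed = struct.pack(_HEADER_FMT, self.dst_cluster, self.dst_element,
                             self.dst_port, self.src_cluster, self.src_element)
        return packed + frame

    @classmethod
    def decapsulate(cls, packet: bytes) -> tuple["SwitchingHeader", bytes]:
        """Strip and parse the routing header (egress side), returning the native frame."""
        fields = struct.unpack(_HEADER_FMT, packet[:HEADER_LEN])
        return cls(*fields), packet[HEADER_LEN:]
```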
  • Ingress 210A-C and egress 230A-C elements are shown in FIG. 2 as distinct elements. In fact, they are similar in construction, and they may be implemented as a single device. Such elements can have any suitable number of ports, and can operate using any suitable logic. Currently preferred chips to implement the design are Broadcom's™ BCM5690, BCM5670, and BCM5464S chips, according to the detailed schematics included in one or more of the priority provisional applications.
  • FIG. 4 shows a high level design of a preferred combination ingress/egress element 400, which can be utilized for any of the ingress 210A-C and egress 230A-C elements. Ingress/Egress element 400 generally includes a logical switching frame 410, Ethernet ingress/egress ports 420A-L, encapsulated packet I/O port 430, layer 2 table(s) 440, layer 3 table(s) 450, and access control table(s) 460.
  • Ingress/egress elements are the only elements that are typically assigned element IDs. When packets arrive at an ingress/egress port 420, it is assumed that all ISO layer 2 fault parameters are satisfied and the packet is correct. The destination MAC address is searched in the layer 2 MAC table 440, where the destination element ID and destination port ID are already stored. Once matched, the element and port IDs are placed into the switching header, along with the destination cluster ID, and source element ID. The resulting frame is then sent out to the core element.
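  • As a purely illustrative sketch of the ingress behavior just described (the table structure and function names are assumptions, not the patented implementation), the layer 2 lookup and encapsulation can be pictured as follows:

```python
import struct

# Hypothetical layer 2 MAC table corresponding to table 440 above:
# destination MAC -> (destination element ID, destination port ID).
l2_mac_table: dict[bytes, tuple[int, int]] = {}


def ingress_encapsulate(frame: bytes, local_cluster: int, local_element: int) -> bytes:
    """Look up the destination MAC and prepend the switching header (ingress side)."""
    dst_mac = bytes(frame[0:6])                    # Ethernet destination address
    dst_element, dst_port = l2_mac_table[dst_mac]  # assumes the address was already learned
    header = struct.pack(">HHHHH",
                         local_cluster,            # destination cluster ID (assumed local here)
                         dst_element, dst_port,
                         local_cluster, local_element)
    return header + frame                          # forwarded on toward the core element
```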
  • When an encapsulated frame arrives, the ID is checked to make sure the packet is targeted to the particular element at which it arrived. If there is a discrepancy, the frame is checked to determine whether it is a multicast or broadcast frame. If it is a multicast frame, the internal switching header is stripped and the resulting packet is copied to all interested parties (registered IGMP “Internet Group Management Protocol” joiners). If it is a broadcast frame, the RAST header is stripped, and the resulting packet is copied to all ports except the incoming port over which the frame arrived. If the frame is a unicast frame, the element ID is stripped off, and the packet is cut through to the corresponding physical port.
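  • A minimal sketch of that egress-side decision follows; the callback names and the header layout are assumptions carried over from the earlier sketches.

```python
import struct

HEADER_LEN = 10  # five 2-byte fields, matching the earlier header sketch (an assumption)


def egress_deliver(packet: bytes, local_element: int, incoming_port: int,
                   all_ports: list[int], igmp_joiners: list[int],
                   is_multicast, is_broadcast, send) -> None:
    """Strip the internal switching header and deliver the native frame.
    `is_multicast`, `is_broadcast`, and `send(port, frame)` are assumed callbacks."""
    _, dst_element, dst_port, _, _ = struct.unpack(">HHHHH", packet[:HEADER_LEN])
    frame = packet[HEADER_LEN:]

    if dst_element == local_element:
        send(dst_port, frame)                # unicast: cut through to the physical port
    elif is_multicast(frame):
        for port in igmp_joiners:            # copy to registered IGMP joiners
            send(port, frame)
    elif is_broadcast(frame):
        for port in all_ports:
            if port != incoming_port:        # all ports except the arrival port
                send(port, frame)
```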
  • Although ingress/egress elements could be single port, in preferred embodiments they would typically have multiple ports, including at least one encapsulated packet port, and at least one standards based port (such as Gigabit Ethernet). Currently preferred ingress/egress elements include 1 Gigabit Ethernet multi-port modules, and 10 Gigabit Ethernet single port modules. In other aspects of preferred embodiments, an ingress/egress element may be included in the same physical device with a core element. In that case the device comprises a hybrid core-ingress/egress device. See FIGS. 6 and 7.
  • FIG. 5 shows a high level design of a preferred core element 500, which can be utilized for any of the core switching elements 220A-C. Core element 500 generally includes a logical switching frame 510, a plurality of ingress and/or egress ports 520A-H, one or more unicast tables 530, one or more multicast tables 540.
  • When an encapsulated frame arrives at an ingress side of any port in the core element, the header is read for the destination ID. The ID is used to cut through the frame to the specific egress side port for which the ID has been registered. The unicast table contains a list of all registered element IDs that are known to the core element. Elements become registered during the MDP (Management Discovery Protocol) phase of startup. The multicast table contains element IDs that are registered during the “discovery phase” of a multicast protocol's joining sequence. This is where the multicast protocol evidences an interested party, and uses these IDs to decide which ports take part in the hardware copy of the frames. If the element ID is not known to this core element, or the frame is designated a broadcast frame, the frame floods all egress ports.
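  • By way of illustration only (the table names and shapes are assumptions), the core-element forwarding rule described above reduces to a lookup against the unicast and multicast tables, with flooding as the fallback:

```python
import struct

unicast_table: dict[int, int] = {}           # element ID -> egress port registered via MDP
multicast_table: dict[int, list[int]] = {}   # element ID -> ports joined via the multicast protocol


def core_switch(packet: bytes, ingress_port: int, all_ports: list[int], send) -> None:
    """Cut a frame through to the port registered for its destination element ID,
    or flood it when the ID is unknown or the frame is a broadcast."""
    dst_element = struct.unpack(">H", packet[2:4])[0]     # Destination Element ID field
    if dst_element in unicast_table:
        send(unicast_table[dst_element], packet)          # hardware cut-through
    elif dst_element in multicast_table:
        for port in multicast_table[dst_element]:         # hardware copy to joined ports
            send(port, packet)
    else:
        for port in all_ports:                            # unknown ID or broadcast: flood
            if port != ingress_port:
                send(port, packet)
```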
  • Connector elements 240A-C (depicted in FIG. 2 as RAST™, for Raptor Adaptive Switch Technology™ Header) are low level devices that allow the core elements to communicate with other core elements over cables or fibers. They assist in enforcing protocols, but have no switching functions. Examples of such elements are XAUI over copper connectors and XAUI/XGMII over fiber connectors using MSA XFP.
  • FIG. 6 is a schematic of a preferred commercial embodiment of a hybrid core-ingress device, designated as a Raptor™ 1010 switch. The switch 600 generally includes two 10 GBase ingress elements 610A-B, two ingress elements other than 10 GBase 615A-B, a core element 620, and intermediate connector elements 630A-D. The system is capable of providing 12.5 Gbps throughput.
  • FIG. 7 is a schematic of a preferred commercial embodiment of a hybrid core-ingress device, designated as a Raptor™ 1808 switch. The switch 700 could include eight 10 GBase ingress elements 710A-D, a core element 720, or eight intermediate connector elements 730A-D, or any combination of elements up to a total of eight.
  • In FIG. 8 a switching system 800 includes two of the Raptor™ 1010 switches 600A-B and four of the Raptor™ 1808 switches 700A-D, as well as connecting optical or other lines 810. The lines preferably comprise a 10 GB or greater backplane. In this embodiment the links between the 1010 switches can be 10-40 km at present, and possibly greater lengths in the future. The links between the core switches can be over 40 km.
  • Ethernet
  • A major advantage of the inventive subject matter is that it implements switching of Ethernet packets using a distributed switching fabric. Contemplated embodiments are not strictly limited to Ethernet, however. It is contemplated, for example, that an ingress element can convert SONET to Ethernet, encapsulate and route the packets as described above, and then convert back from Ethernet to SONET.
  • Topology
  • Switching systems contemplated herein can use any suitable topology. Interestingly, the distributed switch fabric contemplated herein can even support a mixture of ring, mesh, star and bus topologies, with looping controlled via Spanning Tree Avoidance algorithms.
  • The presently preferred topology, however, is a Strict Ring Topology (SRT), in which there is only one physical or logical link between elements. To implement SRT, each source element address is checked upon ingress via any physical or logical link into a core element. If the source element address is the one that is directly connected to the core element, the data stream will be blocked. If the source element address is not the one that is directly connected to this core element, the packet will be forwarded using the normal rules. A break in the ring can be handled in any of several known ways, including reversion to a straight bus topology, which would cause an element table update to all elements.
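  • For exposition only, the SRT admission rule above can be stated as a single predicate; the bookkeeping structure holding the directly connected element IDs is an assumption.

```python
def srt_admit(src_element: int, directly_connected_elements: set[int]) -> bool:
    """Strict Ring Topology rule: block a data stream whose source element address
    belongs to an element directly connected to this core element; otherwise
    forward it under the normal rules."""
    return src_element not in directly_connected_elements
```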
  • Management of the topology is preferably accomplished using element messages, which can advantageously be created and promulgated by an element manager unit (EMU). An EMU would typically manage multiple types of elements, including ingress/egress elements and core switching elements.
  • Management Discovery Protocol
  • In order for a distributed switch fabric to operate, all individual elements need to discover the contributing elements of the fabric. The process is referred to herein as Management Discovery Protocol (MDP). MDP discovers fabric elements that contain individual management units, and decides which element becomes the master unit and which become the backup units. Usually, MDP needs to be re-started in every element after power stabilizes, the individual management units have booted, and port connectivity is established. The sequence of a preferred MDP operation is as follows:
  • Each element transmits an initial MDP establish message containing its MAC address and a user assigned priority number (0 is used if no priority is set). Each element also listens for incoming MDP messages containing such information. As each element receives the MDP messages, one of two decisions is made. If the received MAC address is lower than the MAC address assigned to the receiving element, the message is forwarded to all active links with the original MAC address, the link number it was received on, and the MAC address of the system that is forwarding the message. If a priority is set, the lowest priority (greater than 0) is deemed to be the lowest MAC address and processed as such. If, on the other hand, the received MAC address is higher than the MAC address assigned to the receiving element, then the message is not forwarded. If a priority is set that is higher than the received priority, the same process is carried out.
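  • The election decision just described can be sketched as follows; the exact tie-breaking between priority and MAC address shown here is an assumption consistent with, but not dictated by, the text above.

```python
def election_key(mac: int, priority: int) -> tuple[int, int, int]:
    """Lower keys win: a set priority (greater than 0) outranks an unset one,
    then the lowest priority wins, then the lowest MAC address breaks ties."""
    return (0 if priority > 0 else 1, priority if priority > 0 else 0, mac)


def should_forward(received_mac: int, received_prio: int,
                   local_mac: int, local_prio: int) -> bool:
    """Re-flood a received MDP establish message only when its sender currently
    outranks the local element; otherwise suppress it, as described above."""
    return election_key(received_mac, received_prio) < election_key(local_mac, local_prio)
```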
  • Eventually the system identifies the MAC address of the master unit, and creates a connection matrix based on the MAC addresses of the elements discovered, the active port numbers, and the MAC addresses of each of the elements, as well as each of their ports. This matrix is distributed to all elements, and forms the base of the distributed switch fabric. The matrix can be any reasonable size, including the presently preferred support for a total of 1024 elements.
  • As each new element joins an established cluster, it issues an MDP initialization message, which is answered with a stored copy of the adjacency table. The new element inserts its own information into the table, and issues an update element message to the master, which in turn will check the changes and issue an element update message to all elements.
  • Heart Beat Protocol
  • Heart Beat Protocol enables the detection of a failed element. If an element fails or is removed from the matrix, a Heart Beat Protocol (HBP) can be used to signal that a particular link to an element is not in service. Whatever system is running the HBP sends an element update message to the master, which then reformats the table, and issues an element update message to all elements.
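  • A simple watchdog sketch of HBP follows; the interval, miss limit, and message format are all assumptions, since the text leaves them unspecified.

```python
HEARTBEAT_INTERVAL = 1.0     # seconds between heartbeats (assumed value)
HEARTBEAT_MISS_LIMIT = 3     # missed beats before a link is declared out of service

last_heartbeat: dict[int, float] = {}   # link ID -> time the last heartbeat was seen


def check_links(now: float, notify_master) -> None:
    """When a link stops answering, report it so the master can reformat the element
    table and issue an element update message to all elements."""
    for link, seen in last_heartbeat.items():
        if now - seen > HEARTBEAT_INTERVAL * HEARTBEAT_MISS_LIMIT:
            notify_master({"type": "element_update", "link": link, "status": "out_of_service"})
```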
  • It is also possible that various pieces of hardware will send an interrupt or trap to the manager, which will trigger an element update message before HBP can discover the failure. Failures likely to be detected early on by hardware include: loss of signal on optical interfaces; loss of connectivity on copper interfaces; and hardware failure of interface chips. A user selected interface disable command or shutdown command can also be used to trigger an element update message.
  • Traffic Load
  • Traffic Load factors can be calculated in any suitable manner. In currently preferred systems and methods, traffic load is calculated by local management units and periodically communicated in element load messages to the master. It is contemplated that such information can be used to load balance multiple physical or logical links between elements.
  • Security
  • Element messages are preferably sent using a secure data protocol (SDP), which performs an ACK/NAK function on all messages to ensure their delivery. SDP is preferably operated as a layer 2 secure data protocol that also includes the ability to encrypt element messages between elements.
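  • A stop-and-wait sketch of the ACK/NAK behavior is shown below; the retry count, timeout, and reply encoding are assumptions, and the layer 2 encryption mentioned above is omitted.

```python
MAX_RETRIES = 3


def sdp_send(message: bytes, transmit, wait_for_reply) -> bool:
    """Send an element message and retransmit until acknowledged.
    `transmit(frame)` puts the message on the wire; `wait_for_reply(timeout)`
    returns b"ACK", b"NAK", or None on timeout (assumed callbacks)."""
    for _ in range(MAX_RETRIES):
        transmit(message)
        reply = wait_for_reply(timeout=0.5)
        if reply == b"ACK":
            return True          # delivery confirmed
        # NAK or timeout: fall through and retransmit
    return False                 # delivery could not be ensured
```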
  • As discussed elsewhere herein, element messages and SDP can also be used to communicate other data between elements, and thereby support desired management features. Among other things, element messages can be used to support Port To Port Protocol (PTPP), which allows a soft permanent virtual connection to exist between element/port pairs. As currently contemplated, PTPP is simply an element-to-element message that sets default encapsulation to a specific element address/port address for source and destination. PTPP is thus similar to Multiprotocol Label Switching (MPLS) in that it creates a substitute virtual circuit. But unlike MPLS, if a failure occurs, it is the “local” element that automatically re-routes data around the problem. Implemented in this manner, PTPP allows for extremely convenient routing around failures, provided that another link is available at both the originating (ingress) side and the terminating (egress) side, and there is no other blockage in the intervening links (security/Access Control List (ACL)/Quality of Service (QoS), etc.).
  • It is also possible to provide a lossless failover system that will not lose a single packet of data in case of a link failure. Such a system can be implemented using Active/Active Protection Service (AAPS), in which the same data is sent in a parallel fashion. The method is analogous to multicasting in that the hardware copies data from the master link to the secondary link. Ideally, the receiving end of the AAPS will only forward the first copy of any data received (correctly) to the end node.
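  • The receive-side AAPS rule can be sketched as a simple duplicate filter; per-packet sequence numbers are an assumed mechanism for recognizing the second copy, since the text does not specify one.

```python
forwarded_seqs: set[int] = set()


def aaps_receive(seq: int, frame: bytes, crc_ok: bool, deliver) -> None:
    """Forward only the first correctly received copy of each frame to the end node."""
    if not crc_ok:
        return                      # corrupted copy: wait for the copy on the other link
    if seq in forwarded_seqs:
        return                      # the copy from the other link was already forwarded
    forwarded_seqs.add(seq)
    deliver(frame)
```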
  • Super Fabric
  • Large numbers of elements can advantageously be mapped together in logical clusters, and addressed by including destination and source cluster IDs in the switching headers. In one sense, cluster enabled elements are simply normal elements, but with one or more links that are capable of adding/subtracting cluster address numbers. A system that utilizes clusters in this manner is referred to herein as a super fabric. Super fabrics can be designed to any reasonable size, including especially a current version of super fabric that allows up to 255 clusters of 1024 elements to be connected in a “single” switch system.
  • As currently contemplated, the management unit operating in super fabric mode retains details about all clusters, but does not retain MAC address data. Inter-cluster communication is via dynamic Virtual LAN (VLAN) tunnels, which are created when a cluster level ACL detects a matched sequence that has been predefined. Currently contemplated matches include any of: (a) a MAC address or MAC address pairs; (b) VLAN ID pairs; (c) an IP subnet or subnet pair; (d) TCP/UDP protocol numbers or pairs, ranges, etc.; (e) protocol number(s); and (f) a layer 2-7 match of specific data. The management unit can also keep a list of recent broadcasts, and perform a matching operation on broadcasts received. Forwarding of previously sent broadcasts can thereby be prevented, so that after a learning period only new broadcasts will be forwarded to other links.
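  • The broadcast-suppression behavior described above can be pictured with a small cache; hashing the frame is an assumed way of recognizing repeats, and a real management unit would also age entries out of the cache.

```python
import hashlib

recent_broadcasts: set[bytes] = set()   # digests of broadcasts already forwarded


def should_forward_broadcast(frame: bytes) -> bool:
    """Forward a broadcast only if it has not been seen during the learning period."""
    digest = hashlib.sha1(frame).digest()
    if digest in recent_broadcasts:
        return False                 # previously sent broadcast: suppress
    recent_broadcasts.add(digest)    # remember it for later matching
    return True
```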
  • Although clusters are managed by a management unit, they can continue to operate upon failure of the master. If the master management unit fails, a new master is selected and the cluster continues to operate. In preferred embodiments, any switch unit can be the master unit. In cases where only the previous management unit has failed, the ingress/egress elements and core element are manageable by the new master over an inband connection.
  • Inter-cluster communication is preferably via a strict PTPP based matrix of link addresses. When a link exists between elements that received encapsulated packets, MDP discovers this link, HBP checks the link for health, and SDP allows communication between management elements to keep the cluster informed of any changes. If all of the above is properly implemented, a cluster of switch elements can act as a single logical Gigabit Ethernet or 10 Gigabit Ethernet LAN switch, with all standards based switch functions available over the entire logical switch.
  • The above-described clustering is advantageous in several ways.
      • Link Aggregation IEEE 802.3ad can operate across the entire cluster. This allows other vendors' systems that use IEEE 802.3ad to aggregate traffic over multiple hardware platforms, and provides greater levels of redundancy than heretofore possible.
      • Virtual LANs (VLANs) 802.1Q can operate over the entire cluster without the need for VLAN trunks or VLAN tagging on inter-switch links. Still further, port mirroring (a de facto standard) is readily implemented, providing mirroring of any port in a cluster to any other port in the cluster.
      • Pause frames received on any ingress/egress port can be reflected over the cluster to all ports contributing to the traffic flow on that port, and pause frames can be issued on those contributing ports to avoid bottlenecks.
      • ISO Layer 3 (IP routing) operates over the entire cluster as though it was a single routed hop, even though the cluster may be geographically separated by 160 Km or more.
      • ISO Layer 4 ACLs can be assigned to any switch element in the cluster just as they would be in any standard layer 2/3/4 switch, and a single ACL may be applied to the entire cluster in a single command.
      • IEEE 802.1X operates over the entire cluster, which would not be the case if a standard set of switching systems were connected.
  • In FIG. 9, a super fabric implementation 900 of a distributed switching fabric generally includes four 20 Gbps pipes 910A-D, each of which is connected to a corresponding cluster 920A-D that includes a control element 922A-D that understands the cluster messaging structure. Within each cluster there are numerous ingress/egress elements 400 coupled together. In this particular embodiment, each of the control elements 922A-D has two 10 Gbps pipes that connect the ingress/egress elements 400 for intra-cluster communication. There are also inter-cluster pipes 930A-D, which in this instance also communicate at 10 Gbps.
  • Thus, specific embodiments and applications of distributed switching fabric switches have been disclosed. It should be apparent, however, to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced.

Claims (34)

1. A network switch for routing packets of information, comprising:
a plurality of ingress switching elements, each with a plurality of input and output ports;
hardware that adds a routing header to the packets entering the input ports of the plurality of ingress elements;
a plurality of egress switching elements, each with a plurality of input and output ports;
a backbone that provides active switching among the plurality of ingress and egress elements; and
wherein each of the switching elements is adapted to use the routing header to pass the packets through the backbone from one of the ingress switching elements to at least one of the egress switching elements to which the one ingress element is not otherwise directly connected.
2. The switch of claim 1 wherein each of the pluralities of ingress and egress elements supports a protocol providing connectionless media with a stateful connection.
3. The switch of claim 2 wherein the stateful connection comprises Ethernet.
4. The switch of claim 1 wherein each of the pluralities of ingress and egress elements has at least 8 input ports and 8 output ports.
5. The switch of claim 1 wherein at least one of the pluralities of ingress elements has a capacity of at least one gigabit/sec.
6. The switch of claim 1 wherein at least one of the pluralities of ingress elements has a capacity of at least ten gigabit/sec.
7. The switch of claim 1, having hardware that executes a management discovery protocol that determines from time to time which of the plurality of ingress and egress elements becomes a master element.
8. The switch of claim 1, having hardware that executes a management discovery protocol that determines which element becomes a new master on the failure of the existing master using a heart beat protocol.
9. The switch of claim 7, wherein each of the plurality of ingress and egress elements executes the management discovery protocol.
10. The switch of claim 1, further comprising first hardware that encapsulates packets of information with a routing header that is used to route the packets within the switch, and second hardware that removes the routing header.
11. The switch of claim 10, wherein the routing header includes a source element address, destination element address, and a destination port address.
12. The switch of claim 10, wherein at least some of the pluralities of egress elements are logically coupled in a cluster and the routing header includes a destination cluster address.
13. The switch of claim 10, wherein the plurality of ingress and egress elements are coupled using a Strict Ring Topology (SRT).
14. The switch of claim 1, further comprising an element manager unit that sends element messages to at least some of the plurality of ingress and egress elements.
15. The switch of claim 14, wherein at least one of the element messages implements a secure data protocol (SDP) that performs ACK/NAK function on the messages.
16. The switch of claim 14, wherein at least one of the element messages implements a port to port (PTP) protocol.
17. The switch of claim 14, wherein at least one of the element messages implements an active/active protection service (AAPS).
18. The switch of claim 14, further comprising hardware that maintains a list of recent element messages.
19. A switching system comprising first and second clusters of switches according to claim 1.
20. The switching system according to claim 19, wherein inter-cluster communication is implemented via dynamic VLAN tunnels.
21. The switching system according to claim 19, wherein inter-cluster communication is implemented via a PTP protocol based matrix of link addresses.
22. The switching system according to claim 19, wherein the first and second clusters are separated by at least 1 kilometer.
23. The switching system according to claim 19, further comprising a third cluster, and where the first, second and third clusters execute an automatically configuring topology.
24. A method of routing Ethernet packets, comprising:
providing a plurality of switch ingress modules that encapsulate the Ethernet packets into frames having intra-system routing headers;
providing a plurality of egress switch modules that remove the headers from the packets;
providing a core that actively routes the frames between ingress and egress modules, and among core elements;
wherein at least one of the ingress and one of the egress modules are distanced from one another by at least 1 km, and at least two of the core elements are distanced from one another by a distance of at least 1 km.
25. The system of claim 24, further comprising the core connecting at least 16 ports among the plurality of ingress and egress modules.
26. The system of claim 24, further comprising providing an optical carrier as a component of the core.
27. The system of claim 24, further comprising operating a link aggregation across the entire system that conforms to an IEEE standard.
28. The system of claim 24, further comprising operating a virtual LAN (VLAN) across the entire system without a need for a VLAN trunk or VLAN tagging.
29. The system of claim 24, further comprising providing a protocol that at least temporarily prevents data from being sent to a selected one of the ports.
30. The system of claim 24, further comprising implementing layer 3 (IP) routing among the plurality of modules.
31. The system of claim 24, further comprising providing the plurality of modules with port based network access control capability that conforms with an IETF standard.
32. The system of claim 24, further comprising providing the plurality of modules with detection of layer 2 (Ethernet) to layer 7 (Application) data, and decisions on dynamic routing or handling of said data.
33. The system of claim 24, further comprising sending management messages to at least some of the plurality of modules, wherein at least one of the messages implements a secure data protocol (SDP) that uses a positive acknowledgement function on the messages, at least another one of the element messages implements a port to port (PTP) protocol, and at least another one of the element messages implements an Ethernet frame based active/active protection service (AAPS).
34. The system of claim 24, further comprising logically coupling at least some of the plurality of modules into first and second clusters, and separating the clusters by at least 1 kilometer.
US10/965,444 2003-10-14 2004-10-12 Switching system with distributed switching fabric Abandoned US20050105538A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US10/965,444 US20050105538A1 (en) 2003-10-14 2004-10-12 Switching system with distributed switching fabric
US11/248,707 US20060029071A1 (en) 2003-10-14 2005-10-11 Multiplexing system that supports a geographically distributed subnet
US11/248,711 US20060029057A1 (en) 2003-10-14 2005-10-11 Time division multiplexing system
US11/248,639 US20060029055A1 (en) 2003-10-14 2005-10-11 VLAN fabric network
US11/248,708 US20060029072A1 (en) 2003-10-14 2005-10-11 Switching system for virtual LANs
US11/248,111 US20060039369A1 (en) 2003-10-14 2005-10-11 Multiplexing system having an automatically configured topology
US11/248,710 US20060029056A1 (en) 2003-10-14 2005-10-11 Virtual machine task management system
US11/610,281 US7352745B2 (en) 2003-10-14 2006-12-13 Switching system with distributed switching fabric

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
US51114203P 2003-10-14 2003-10-14
US51114103P 2003-10-14 2003-10-14
US51114403P 2003-10-14 2003-10-14
US51114303P 2003-10-14 2003-10-14
US51114503P 2003-10-14 2003-10-14
US51102103P 2003-10-14 2003-10-14
US51114003P 2003-10-14 2003-10-14
US51113803P 2003-10-14 2003-10-14
US51113903P 2003-10-14 2003-10-14
US56326204P 2004-04-16 2004-04-16
US10/965,444 US20050105538A1 (en) 2003-10-14 2004-10-12 Switching system with distributed switching fabric

Related Child Applications (7)

Application Number Title Priority Date Filing Date
US11/248,708 Division US20060029072A1 (en) 2003-10-14 2005-10-11 Switching system for virtual LANs
US11/248,711 Division US20060029057A1 (en) 2003-10-14 2005-10-11 Time division multiplexing system
US11/248,707 Division US20060029071A1 (en) 2003-10-14 2005-10-11 Multiplexing system that supports a geographically distributed subnet
US11/248,710 Division US20060029056A1 (en) 2003-10-14 2005-10-11 Virtual machine task management system
US11/248,111 Division US20060039369A1 (en) 2003-10-14 2005-10-11 Multiplexing system having an automatically configured topology
US11/248,639 Division US20060029055A1 (en) 2003-10-14 2005-10-11 VLAN fabric network
US11/610,281 Continuation US7352745B2 (en) 2003-10-14 2006-12-13 Switching system with distributed switching fabric

Publications (1)

Publication Number Publication Date
US20050105538A1 true US20050105538A1 (en) 2005-05-19

Family

ID=34468581

Family Applications (8)

Application Number Title Priority Date Filing Date
US10/965,444 Abandoned US20050105538A1 (en) 2003-10-14 2004-10-12 Switching system with distributed switching fabric
US11/248,639 Abandoned US20060029055A1 (en) 2003-10-14 2005-10-11 VLAN fabric network
US11/248,710 Abandoned US20060029056A1 (en) 2003-10-14 2005-10-11 Virtual machine task management system
US11/248,708 Abandoned US20060029072A1 (en) 2003-10-14 2005-10-11 Switching system for virtual LANs
US11/248,711 Abandoned US20060029057A1 (en) 2003-10-14 2005-10-11 Time division multiplexing system
US11/248,111 Abandoned US20060039369A1 (en) 2003-10-14 2005-10-11 Multiplexing system having an automatically configured topology
US11/248,707 Abandoned US20060029071A1 (en) 2003-10-14 2005-10-11 Multiplexing system that supports a geographically distributed subnet
US11/610,281 Active US7352745B2 (en) 2003-10-14 2006-12-13 Switching system with distributed switching fabric

Family Applications After (7)

Application Number Title Priority Date Filing Date
US11/248,639 Abandoned US20060029055A1 (en) 2003-10-14 2005-10-11 VLAN fabric network
US11/248,710 Abandoned US20060029056A1 (en) 2003-10-14 2005-10-11 Virtual machine task management system
US11/248,708 Abandoned US20060029072A1 (en) 2003-10-14 2005-10-11 Switching system for virtual LANs
US11/248,711 Abandoned US20060029057A1 (en) 2003-10-14 2005-10-11 Time division multiplexing system
US11/248,111 Abandoned US20060039369A1 (en) 2003-10-14 2005-10-11 Multiplexing system having an automatically configured topology
US11/248,707 Abandoned US20060029071A1 (en) 2003-10-14 2005-10-11 Multiplexing system that supports a geographically distributed subnet
US11/610,281 Active US7352745B2 (en) 2003-10-14 2006-12-13 Switching system with distributed switching fabric

Country Status (4)

Country Link
US (8) US20050105538A1 (en)
EP (1) EP1673683A4 (en)
JP (1) JP2007507990A (en)
WO (1) WO2005038599A2 (en)

Families Citing this family (228)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8526490B2 (en) * 2002-12-10 2013-09-03 Ol2, Inc. System and method for video compression using feedback including data related to the successful receipt of video content
US10201760B2 (en) 2002-12-10 2019-02-12 Sony Interactive Entertainment America Llc System and method for compressing video based on detected intraframe motion
US20090118019A1 (en) * 2002-12-10 2009-05-07 Onlive, Inc. System for streaming databases serving real-time applications used through streaming interactive video
US20100166056A1 (en) * 2002-12-10 2010-07-01 Steve Perlman System and method for encoding video using a selected tile and tile rotation pattern
US8949922B2 (en) * 2002-12-10 2015-02-03 Ol2, Inc. System for collaborative conferencing using streaming interactive video
US9061207B2 (en) * 2002-12-10 2015-06-23 Sony Computer Entertainment America Llc Temporary decoder apparatus and method
US8366552B2 (en) * 2002-12-10 2013-02-05 Ol2, Inc. System and method for multi-stream video compression
US9138644B2 (en) * 2002-12-10 2015-09-22 Sony Computer Entertainment America Llc System and method for accelerated machine switching
US9077991B2 (en) * 2002-12-10 2015-07-07 Sony Computer Entertainment America Llc System and method for utilizing forward error correction with video compression
US9314691B2 (en) * 2002-12-10 2016-04-19 Sony Computer Entertainment America Llc System and method for compressing video frames or portions thereof based on feedback information from a client device
US8979655B2 (en) 2002-12-10 2015-03-17 Ol2, Inc. System and method for securely hosting applications
US8549574B2 (en) 2002-12-10 2013-10-01 Ol2, Inc. Method of combining linear content and interactive content compressed together as streaming interactive video
US7450592B2 (en) 2003-11-12 2008-11-11 At&T Intellectual Property I, L.P. Layer 2/layer 3 interworking via internal virtual UNI
US7460537B2 (en) * 2004-01-29 2008-12-02 Brocade Communications Systems, Inc. Supplementary header for multifabric and high port count switch support in a fibre channel network
US8892706B1 (en) 2010-06-21 2014-11-18 Vmware, Inc. Private ethernet overlay networks over a shared ethernet in a virtual environment
US8619771B2 (en) 2009-09-30 2013-12-31 Vmware, Inc. Private allocated networks over shared communications infrastructure
US8924524B2 (en) 2009-07-27 2014-12-30 Vmware, Inc. Automated network configuration of virtual machines in a virtual lab data environment
DE102006044856B4 (en) * 2006-09-22 2010-08-12 Siemens Ag Method for switching data packets with a route coding in a network
US7684410B2 (en) * 2006-10-31 2010-03-23 Hewlett-Packard Development Company, L.P. VLAN aware trunks
US7599314B2 (en) 2007-12-14 2009-10-06 Raptor Networks Technology, Inc. Surface-space managed network fabric
US8396053B2 (en) * 2008-04-24 2013-03-12 International Business Machines Corporation Method and apparatus for VLAN-based selective path routing
GB2459838B (en) * 2008-05-01 2010-10-06 Gnodal Ltd An ethernet bridge and a method of data delivery across a network
JP5262291B2 (en) * 2008-05-22 2013-08-14 富士通株式会社 Network connection device and aggregation / distribution device
US8195774B2 (en) 2008-05-23 2012-06-05 Vmware, Inc. Distributed virtual switch for virtualized computer systems
US9847953B2 (en) 2008-09-11 2017-12-19 Juniper Networks, Inc. Methods and apparatus related to virtualization of data center resources
US11271871B2 (en) 2008-09-11 2022-03-08 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US8340088B2 (en) * 2008-09-11 2012-12-25 Juniper Networks, Inc. Methods and apparatus related to a low cost data center architecture
US8755396B2 (en) * 2008-09-11 2014-06-17 Juniper Networks, Inc. Methods and apparatus related to flow control within a data center switch fabric
US8730954B2 (en) * 2008-09-11 2014-05-20 Juniper Networks, Inc. Methods and apparatus related to any-to-any connectivity within a data center
US20100061367A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to lossless operation within a data center
US8265071B2 (en) 2008-09-11 2012-09-11 Juniper Networks, Inc. Methods and apparatus related to a flexible data center security architecture
US8335213B2 (en) * 2008-09-11 2012-12-18 Juniper Networks, Inc. Methods and apparatus related to low latency within a data center
JP5251457B2 (en) 2008-11-27 2013-07-31 富士通株式会社 Data transmission device
US8780911B2 (en) * 2009-10-08 2014-07-15 Force10 Networks, Inc. Link aggregation based on port and protocol combination
US8687629B1 (en) * 2009-11-18 2014-04-01 Juniper Networks, Inc. Fabric virtualization for packet and circuit switching
US9240923B2 (en) 2010-03-23 2016-01-19 Juniper Networks, Inc. Methods and apparatus for automatically provisioning resources within a distributed control plane of a switch
US9813252B2 (en) 2010-03-23 2017-11-07 Juniper Networks, Inc. Multicasting within a distributed control plane of a switch
US9716672B2 (en) 2010-05-28 2017-07-25 Brocade Communications Systems, Inc. Distributed configuration management for virtual cluster switching
US9769016B2 (en) 2010-06-07 2017-09-19 Brocade Communications Systems, Inc. Advanced link tracking for virtual cluster switching
US8867552B2 (en) 2010-05-03 2014-10-21 Brocade Communications Systems, Inc. Virtual cluster switching
US9270486B2 (en) 2010-06-07 2016-02-23 Brocade Communications Systems, Inc. Name services for virtual cluster switching
CN101854303B (en) * 2010-05-27 2013-07-24 北京星网锐捷网络技术有限公司 Network topology linker
US9628293B2 (en) 2010-06-08 2017-04-18 Brocade Communications Systems, Inc. Network layer multicasting in trill networks
US9806906B2 (en) 2010-06-08 2017-10-31 Brocade Communications Systems, Inc. Flooding packets on a per-virtual-network basis
US9608833B2 (en) 2010-06-08 2017-03-28 Brocade Communications Systems, Inc. Supporting multiple multicast trees in trill networks
US9246703B2 (en) * 2010-06-08 2016-01-26 Brocade Communications Systems, Inc. Remote port mirroring
US8839238B2 (en) * 2010-06-11 2014-09-16 International Business Machines Corporation Dynamic virtual machine shutdown without service interruptions
US8830823B2 (en) * 2010-07-06 2014-09-09 Nicira, Inc. Distributed control platform for large-scale production networks
US9680750B2 (en) 2010-07-06 2017-06-13 Nicira, Inc. Use of tunnels to hide network addresses
US9807031B2 (en) 2010-07-16 2017-10-31 Brocade Communications Systems, Inc. System and method for network configuration
US8873389B1 (en) * 2010-08-09 2014-10-28 Chelsio Communications, Inc. Method for flow control in a packet switched network
US20130188647A1 (en) * 2010-10-29 2013-07-25 Russ W. Herrell Computer system fabric switch having a blind route
US9282060B2 (en) 2010-12-15 2016-03-08 Juniper Networks, Inc. Methods and apparatus for dynamic resource management within a distributed control plane of a switch
US9276953B2 (en) 2011-05-13 2016-03-01 International Business Machines Corporation Method and apparatus to detect and block unauthorized MAC address by virtual machine aware network switches
US8670450B2 (en) 2011-05-13 2014-03-11 International Business Machines Corporation Efficient software-based private VLAN solution for distributed virtual switches
US8837499B2 (en) 2011-05-14 2014-09-16 International Business Machines Corporation Distributed fabric protocol (DFP) switching network architecture
US20120287785A1 (en) 2011-05-14 2012-11-15 International Business Machines Corporation Data traffic handling in a distributed fabric protocol (dfp) switching network architecture
US8635614B2 (en) 2011-05-14 2014-01-21 International Business Machines Corporation Method for providing location independent dynamic port mirroring on distributed virtual switches
US8588224B2 (en) 2011-05-14 2013-11-19 International Business Machines Corporation Priority based flow control in a distributed fabric protocol (DFP) switching network architecture
US20120291034A1 (en) 2011-05-14 2012-11-15 International Business Machines Corporation Techniques for executing threads in a computing environment
US9154327B1 (en) 2011-05-27 2015-10-06 Cisco Technology, Inc. User-configured on-demand virtual layer-2 network for infrastructure-as-a-service (IaaS) on a hybrid cloud network
US9497073B2 (en) 2011-06-17 2016-11-15 International Business Machines Corporation Distributed link aggregation group (LAG) for a layer 2 fabric
US9736085B2 (en) 2011-08-29 2017-08-15 Brocade Communications Systems, Inc. End-to end lossless Ethernet in Ethernet fabric
US20130064066A1 (en) 2011-09-12 2013-03-14 International Business Machines Corporation Updating a switch software image in a distributed fabric protocol (dfp) switching network
US8767529B2 (en) 2011-09-12 2014-07-01 International Business Machines Corporation High availability distributed fabric protocol (DFP) switching network architecture
US8750129B2 (en) 2011-10-06 2014-06-10 International Business Machines Corporation Credit-based network congestion management
US9065745B2 (en) 2011-10-06 2015-06-23 International Business Machines Corporation Network traffic distribution
US11095549B2 (en) 2011-10-21 2021-08-17 Nant Holdings Ip, Llc Non-overlapping secured topologies in a distributed network fabric
US9699117B2 (en) 2011-11-08 2017-07-04 Brocade Communications Systems, Inc. Integrated fibre channel support in an ethernet fabric switch
US9450870B2 (en) 2011-11-10 2016-09-20 Brocade Communications Systems, Inc. System and method for flow management in software-defined networks
US8995272B2 (en) 2012-01-26 2015-03-31 Brocade Communication Systems, Inc. Link aggregation in software-defined networks
US8660129B1 (en) 2012-02-02 2014-02-25 Cisco Technology, Inc. Fully distributed routing over a user-configured on-demand virtual network for infrastructure-as-a-service (IaaS) on hybrid cloud networks
US9306832B2 (en) 2012-02-27 2016-04-05 Ravello Systems Ltd. Virtualized network for virtualized guests as an independent overlay over a physical network
US9742693B2 (en) 2012-02-27 2017-08-22 Brocade Communications Systems, Inc. Dynamic service insertion in a fabric switch
US9154416B2 (en) 2012-03-22 2015-10-06 Brocade Communications Systems, Inc. Overlay tunnel in a fabric switch
US9374301B2 (en) 2012-05-18 2016-06-21 Brocade Communications Systems, Inc. Network feedback in software-defined networks
US10277464B2 (en) 2012-05-22 2019-04-30 Arris Enterprises Llc Client auto-configuration in a multi-switch link aggregation
US10454760B2 (en) 2012-05-23 2019-10-22 Avago Technologies International Sales Pte. Limited Layer-3 overlay gateways
US9602430B2 (en) 2012-08-21 2017-03-21 Brocade Communications Systems, Inc. Global VLANs for fabric switches
US9401872B2 (en) 2012-11-16 2016-07-26 Brocade Communications Systems, Inc. Virtual link aggregations across multiple fabric switches
US9350680B2 (en) 2013-01-11 2016-05-24 Brocade Communications Systems, Inc. Protection switching over a virtual link aggregation
US9413691B2 (en) 2013-01-11 2016-08-09 Brocade Communications Systems, Inc. MAC address synchronization in a fabric switch
US9548926B2 (en) 2013-01-11 2017-01-17 Brocade Communications Systems, Inc. Multicast traffic load balancing over virtual link aggregation
US9565113B2 (en) 2013-01-15 2017-02-07 Brocade Communications Systems, Inc. Adaptive link aggregation and virtual link aggregation
US9565099B2 (en) 2013-03-01 2017-02-07 Brocade Communications Systems, Inc. Spanning tree in fabric switches
US9577955B2 (en) * 2013-03-12 2017-02-21 Forrest Lawrence Pierson Indefinitely expandable high-capacity data switch
US9401818B2 (en) 2013-03-15 2016-07-26 Brocade Communications Systems, Inc. Scalable gateways for a fabric switch
US9699001B2 (en) 2013-06-10 2017-07-04 Brocade Communications Systems, Inc. Scalable and segregated network virtualization
US9565028B2 (en) 2013-06-10 2017-02-07 Brocade Communications Systems, Inc. Ingress switch multicast distribution in a fabric switch
US9571386B2 (en) 2013-07-08 2017-02-14 Nicira, Inc. Hybrid packet processing
US9755963B2 (en) 2013-07-09 2017-09-05 Nicira, Inc. Using headerspace analysis to identify flow entry reachability
US9344349B2 (en) 2013-07-12 2016-05-17 Nicira, Inc. Tracing network packets by a cluster of network controllers
US9407580B2 (en) 2013-07-12 2016-08-02 Nicira, Inc. Maintaining data stored with a packet
US9282019B2 (en) 2013-07-12 2016-03-08 Nicira, Inc. Tracing logical network packets through physical network
US9887960B2 (en) 2013-08-14 2018-02-06 Nicira, Inc. Providing services for logical networks
US9952885B2 (en) 2013-08-14 2018-04-24 Nicira, Inc. Generation of configuration files for a DHCP module executing within a virtualized container
US9577845B2 (en) 2013-09-04 2017-02-21 Nicira, Inc. Multiple active L3 gateways for logical networks
US9503371B2 (en) 2013-09-04 2016-11-22 Nicira, Inc. High availability L3 gateways for logical networks
US9674087B2 (en) 2013-09-15 2017-06-06 Nicira, Inc. Performing a multi-stage lookup to classify packets
US9602398B2 (en) 2013-09-15 2017-03-21 Nicira, Inc. Dynamically generating flows with wildcard fields
US9977685B2 (en) 2013-10-13 2018-05-22 Nicira, Inc. Configuration of logical router
US10063458B2 (en) 2013-10-13 2018-08-28 Nicira, Inc. Asymmetric connection with external networks
US9264330B2 (en) 2013-10-13 2016-02-16 Nicira, Inc. Tracing host-originated logical network packets
US9912612B2 (en) 2013-10-28 2018-03-06 Brocade Communications Systems LLC Extended ethernet fabric switches
US10158538B2 (en) 2013-12-09 2018-12-18 Nicira, Inc. Reporting elephant flows to a network controller
US9967199B2 (en) 2013-12-09 2018-05-08 Nicira, Inc. Inspecting operations of a machine to detect elephant flows
US9996467B2 (en) 2013-12-13 2018-06-12 Nicira, Inc. Dynamically adjusting the number of flows allowed in a flow table cache
US9569368B2 (en) 2013-12-13 2017-02-14 Nicira, Inc. Installing and managing flows in a flow table cache
US9548873B2 (en) 2014-02-10 2017-01-17 Brocade Communications Systems, Inc. Virtual extensible LAN tunnel keepalives
US9419889B2 (en) 2014-03-07 2016-08-16 Nicira, Inc. Method and system for discovering a path of network traffic
US9384033B2 (en) 2014-03-11 2016-07-05 Vmware, Inc. Large receive offload for virtual machines
US9742682B2 (en) 2014-03-11 2017-08-22 Vmware, Inc. Large receive offload for virtual machines
US9755981B2 (en) 2014-03-11 2017-09-05 Vmware, Inc. Snooping forwarded packets by a virtual machine
US9419855B2 (en) 2014-03-14 2016-08-16 Nicira, Inc. Static routes for logical routers
US9313129B2 (en) 2014-03-14 2016-04-12 Nicira, Inc. Logical router processing by network controller
US9225597B2 (en) 2014-03-14 2015-12-29 Nicira, Inc. Managed gateways peering with external router to attract ingress packets
US9590901B2 (en) 2014-03-14 2017-03-07 Nicira, Inc. Route advertisement by managed gateways
US10581758B2 (en) 2014-03-19 2020-03-03 Avago Technologies International Sales Pte. Limited Distributed hot standby links for vLAG
US10476698B2 (en) 2014-03-20 2019-11-12 Avago Technologies International Sales Pte. Limited Redundent virtual link aggregation group
US9503321B2 (en) 2014-03-21 2016-11-22 Nicira, Inc. Dynamic routing for logical routers
US9647883B2 (en) 2014-03-21 2017-05-09 Nicria, Inc. Multiple levels of logical routers
US9413644B2 (en) 2014-03-27 2016-08-09 Nicira, Inc. Ingress ECMP in virtual distributed routing environment
US9893988B2 (en) 2014-03-27 2018-02-13 Nicira, Inc. Address resolution using multiple designated instances of a logical router
US9419874B2 (en) 2014-03-27 2016-08-16 Nicira, Inc. Packet tracing in a software-defined networking environment
US9729679B2 (en) 2014-03-31 2017-08-08 Nicira, Inc. Using different TCP/IP stacks for different tenants on a multi-tenant host
US9985896B2 (en) 2014-03-31 2018-05-29 Nicira, Inc. Caching of service decisions
US9940180B2 (en) 2014-03-31 2018-04-10 Nicira, Inc. Using loopback interfaces of multiple TCP/IP stacks for communication between processes
US10193806B2 (en) 2014-03-31 2019-01-29 Nicira, Inc. Performing a finishing operation to improve the quality of a resulting hash
US9385954B2 (en) 2014-03-31 2016-07-05 Nicira, Inc. Hashing techniques for use in a network environment
US10091125B2 (en) 2014-03-31 2018-10-02 Nicira, Inc. Using different TCP/IP stacks with separately allocated resources
US9832112B2 (en) 2014-03-31 2017-11-28 Nicira, Inc. Using different TCP/IP stacks for different hypervisor services
US9667528B2 (en) 2014-03-31 2017-05-30 Vmware, Inc. Fast lookup and update of current hop limit
US9680798B2 (en) 2014-04-11 2017-06-13 Nant Holdings Ip, Llc Fabric-based anonymity management, systems and methods
US10063473B2 (en) 2014-04-30 2018-08-28 Brocade Communications Systems LLC Method and system for facilitating switch virtualization in a network of interconnected switches
FR3020613B1 (en) * 2014-05-05 2016-04-29 Peugeot Citroen Automobiles Sa DOOR THRESHOLDING ELEMENT OF A MOTOR VEHICLE
US9444754B1 (en) 2014-05-13 2016-09-13 Chelsio Communications, Inc. Method for congestion control in a network interface card
US9800471B2 (en) 2014-05-13 2017-10-24 Brocade Communications Systems, Inc. Network extension groups of global VLANs in a fabric switch
US10491467B2 (en) 2014-05-23 2019-11-26 Nant Holdings Ip, Llc Fabric-based virtual air gap provisioning, systems and methods
US9774707B2 (en) 2014-06-04 2017-09-26 Nicira, Inc. Efficient packet classification for dynamic containers
US10110712B2 (en) 2014-06-04 2018-10-23 Nicira, Inc. Efficient packet classification for dynamic containers
US9553803B2 (en) 2014-06-30 2017-01-24 Nicira, Inc. Periodical generation of network measurement data
US9577927B2 (en) 2014-06-30 2017-02-21 Nicira, Inc. Encoding control plane information in transport protocol source port field and applications thereof in network virtualization
US9419897B2 (en) 2014-06-30 2016-08-16 Nicira, Inc. Methods and systems for providing multi-tenancy support for Single Root I/O Virtualization
US9379956B2 (en) 2014-06-30 2016-06-28 Nicira, Inc. Identifying a network topology between two endpoints
US9742881B2 (en) 2014-06-30 2017-08-22 Nicira, Inc. Network virtualization using just-in-time distributed capability for classification encoding
US9692698B2 (en) 2014-06-30 2017-06-27 Nicira, Inc. Methods and systems to offload overlay network packet encapsulation to hardware
US10616108B2 (en) 2014-07-29 2020-04-07 Avago Technologies International Sales Pte. Limited Scalable MAC address virtualization
US9544219B2 (en) 2014-07-31 2017-01-10 Brocade Communications Systems, Inc. Global VLAN services
US9807007B2 (en) 2014-08-11 2017-10-31 Brocade Communications Systems, Inc. Progressive MAC address learning
US10020960B2 (en) 2014-09-30 2018-07-10 Nicira, Inc. Virtual distributed bridging
US9768980B2 (en) 2014-09-30 2017-09-19 Nicira, Inc. Virtual distributed bridging
US10250443B2 (en) 2014-09-30 2019-04-02 Nicira, Inc. Using physical location to modify behavior of a distributed virtual network element
US10511458B2 (en) 2014-09-30 2019-12-17 Nicira, Inc. Virtual distributed bridging
US11178051B2 (en) 2014-09-30 2021-11-16 Vmware, Inc. Packet key parser for flow-based forwarding elements
US9524173B2 (en) 2014-10-09 2016-12-20 Brocade Communications Systems, Inc. Fast reboot for a switch
US10469342B2 (en) 2014-10-10 2019-11-05 Nicira, Inc. Logical network traffic analysis
US9699029B2 (en) 2014-10-10 2017-07-04 Brocade Communications Systems, Inc. Distributed configuration management in a switch group
US9628407B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Multiple software versions in a switch group
US9626255B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Online restoration of a switch snapshot
US10003552B2 (en) 2015-01-05 2018-06-19 Brocade Communications Systems, Llc. Distributed bidirectional forwarding detection protocol (D-BFD) for cluster of interconnected switches
US9942097B2 (en) 2015-01-05 2018-04-10 Brocade Communications Systems LLC Power management in a network of interconnected switches
US10129180B2 (en) 2015-01-30 2018-11-13 Nicira, Inc. Transit logical switch within logical router
US10038592B2 (en) 2015-03-17 2018-07-31 Brocade Communications Systems LLC Identifier assignment to a new switch in a switch group
US9807005B2 (en) 2015-03-17 2017-10-31 Brocade Communications Systems, Inc. Multi-fabric manager
US10044676B2 (en) 2015-04-03 2018-08-07 Nicira, Inc. Using headerspace analysis to identify unneeded distributed firewall rules
US10038628B2 (en) 2015-04-04 2018-07-31 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US10579406B2 (en) 2015-04-08 2020-03-03 Avago Technologies International Sales Pte. Limited Dynamic orchestration of overlay tunnels
US10361952B2 (en) 2015-06-30 2019-07-23 Nicira, Inc. Intermediate logical interfaces in a virtual distributed router environment
US10171430B2 (en) 2015-07-27 2019-01-01 Forrest L. Pierson Making a secure connection over insecure lines more secure
US10230629B2 (en) 2015-08-11 2019-03-12 Nicira, Inc. Static route configuration for logical router
US10057157B2 (en) 2015-08-31 2018-08-21 Nicira, Inc. Automatically advertising NAT routes between logical routers
US10171303B2 (en) 2015-09-16 2019-01-01 Avago Technologies International Sales Pte. Limited IP-based interconnection of switches with a logical chassis
US10095535B2 (en) 2015-10-31 2018-10-09 Nicira, Inc. Static route types for logical routers
US11675587B2 (en) 2015-12-03 2023-06-13 Forrest L. Pierson Enhanced protection of processors from a buffer overflow attack
US10564969B2 (en) 2015-12-03 2020-02-18 Forrest L. Pierson Enhanced protection of processors from a buffer overflow attack
US9912614B2 (en) 2015-12-07 2018-03-06 Brocade Communications Systems LLC Interconnection of switches based on hierarchical overlay tunneling
US10333849B2 (en) 2016-04-28 2019-06-25 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US10484515B2 (en) 2016-04-29 2019-11-19 Nicira, Inc. Implementing logical metadata proxy servers in logical networks
US10841273B2 (en) 2016-04-29 2020-11-17 Nicira, Inc. Implementing logical DHCP servers in logical networks
US10091161B2 (en) 2016-04-30 2018-10-02 Nicira, Inc. Assignment of router ID for logical routers
US10560320B2 (en) 2016-06-29 2020-02-11 Nicira, Inc. Ranking of gateways in cluster
US10153973B2 (en) 2016-06-29 2018-12-11 Nicira, Inc. Installation of routing tables for logical router in route server mode
US10454758B2 (en) 2016-08-31 2019-10-22 Nicira, Inc. Edge node cluster network redundancy and fast convergence using an underlay anycast VTEP IP
CN107809387B (en) * 2016-09-08 2020-11-06 华为技术有限公司 Message transmission method, device and network system
CA3038147A1 (en) 2016-09-26 2018-03-29 Nant Holdings Ip, Llc Virtual circuits in cloud networks
US10341236B2 (en) 2016-09-30 2019-07-02 Nicira, Inc. Anycast edge service gateways
US10237090B2 (en) 2016-10-28 2019-03-19 Avago Technologies International Sales Pte. Limited Rule-based network identifier mapping
US10212071B2 (en) 2016-12-21 2019-02-19 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US10237123B2 (en) 2016-12-21 2019-03-19 Nicira, Inc. Dynamic recovery from a split-brain failure in edge nodes
US10742746B2 (en) 2016-12-21 2020-08-11 Nicira, Inc. Bypassing a load balancer in a return path of network traffic
US10616045B2 (en) 2016-12-22 2020-04-07 Nicira, Inc. Migration of centralized routing components of logical router
US10805239B2 (en) 2017-03-07 2020-10-13 Nicira, Inc. Visualization of path between logical network endpoints
US10587479B2 (en) 2017-04-02 2020-03-10 Nicira, Inc. GUI for analysis of logical network modifications
US10313926B2 (en) 2017-05-31 2019-06-04 Nicira, Inc. Large receive offload (LRO) processing in virtualized computing environments
US10681000B2 (en) 2017-06-30 2020-06-09 Nicira, Inc. Assignment of unique physical network addresses for logical network addresses
US10637800B2 (en) 2017-06-30 2020-04-28 Nicira, Inc Replacement of logical network addresses with physical network addresses
US10608887B2 (en) 2017-10-06 2020-03-31 Nicira, Inc. Using packet tracing tool to automatically execute packet capture operations
US10374827B2 (en) 2017-11-14 2019-08-06 Nicira, Inc. Identifier that maps to different networks at different datacenters
US10511459B2 (en) 2017-11-14 2019-12-17 Nicira, Inc. Selection of managed forwarding element for bridge spanning multiple datacenters
WO2020019315A1 (en) * 2018-07-27 2020-01-30 浙江天猫技术有限公司 Computational operation scheduling method employing graphic data, system, computer readable medium, and apparatus
US10931560B2 (en) 2018-11-23 2021-02-23 Vmware, Inc. Using route type to determine routing protocol behavior
US10797998B2 (en) 2018-12-05 2020-10-06 Vmware, Inc. Route server for distributed routers using hierarchical routing protocol
US10938788B2 (en) 2018-12-12 2021-03-02 Vmware, Inc. Static routes for policy-based VPN
US11198062B2 (en) 2019-07-18 2021-12-14 Nani Holdings IP, LLC Latency management in an event driven gaming network
US11159343B2 (en) 2019-08-30 2021-10-26 Vmware, Inc. Configuring traffic optimization using distributed edge services
US11283699B2 (en) 2020-01-17 2022-03-22 Vmware, Inc. Practical overlay network latency measurement in datacenter
US11606294B2 (en) 2020-07-16 2023-03-14 Vmware, Inc. Host computer configured to facilitate distributed SNAT service
US11616755B2 (en) 2020-07-16 2023-03-28 Vmware, Inc. Facilitating distributed SNAT service
US11611613B2 (en) 2020-07-24 2023-03-21 Vmware, Inc. Policy-based forwarding to a load balancer of a load balancing cluster
US11902050B2 (en) 2020-07-28 2024-02-13 VMware LLC Method for providing distributed gateway service at host computer
US11451413B2 (en) 2020-07-28 2022-09-20 Vmware, Inc. Method for advertising availability of distributed gateway service and machines at host computer
US11196628B1 (en) 2020-07-29 2021-12-07 Vmware, Inc. Monitoring container clusters
US11558426B2 (en) 2020-07-29 2023-01-17 Vmware, Inc. Connection tracking for container cluster
US11570090B2 (en) 2020-07-29 2023-01-31 Vmware, Inc. Flow tracing operation in container cluster
US11875172B2 (en) 2020-09-28 2024-01-16 VMware LLC Bare metal computer for booting copies of VM images on multiple computing devices using a smart NIC
US11636053B2 (en) 2020-09-28 2023-04-25 Vmware, Inc. Emulating a local storage by accessing an external storage through a shared port of a NIC
US11792134B2 (en) 2020-09-28 2023-10-17 Vmware, Inc. Configuring PNIC to perform flow processing offload using virtual port identifiers
US20220100432A1 (en) 2020-09-28 2022-03-31 Vmware, Inc. Distributed storage services supported by a nic
US11593278B2 (en) 2020-09-28 2023-02-28 Vmware, Inc. Using machine executing on a NIC to access a third party storage not supported by a NIC or host
US11736436B2 (en) 2020-12-31 2023-08-22 Vmware, Inc. Identifying routes with indirect addressing in a datacenter
US11336533B1 (en) 2021-01-08 2022-05-17 Vmware, Inc. Network visualization of correlations between logical elements and associated physical elements
US11687210B2 (en) 2021-07-05 2023-06-27 Vmware, Inc. Criteria-based expansion of group nodes in a network topology visualization
US11711278B2 (en) 2021-07-24 2023-07-25 Vmware, Inc. Visualization of flow trace operation across multiple sites
US11677645B2 (en) 2021-09-17 2023-06-13 Vmware, Inc. Traffic monitoring
US11863376B2 (en) 2021-12-22 2024-01-02 Vmware, Inc. Smart NIC leader election
US11928367B2 (en) 2022-06-21 2024-03-12 VMware LLC Logical memory addressing for network devices
US11899594B2 (en) 2022-06-21 2024-02-13 VMware LLC Maintenance of data message classification cache on smart NIC
US11928062B2 (en) 2022-06-21 2024-03-12 VMware LLC Accelerating data message classification with smart NICs

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6151324A (en) * 1996-06-03 2000-11-21 Cabletron Systems, Inc. Aggregation of mac data flows through pre-established path between ingress and egress switch to reduce number of number connections
US6195351B1 (en) * 1998-01-28 2001-02-27 3Com Corporation Logical switch set
US6208644B1 (en) * 1998-03-12 2001-03-27 I-Cube, Inc. Network switch providing dynamic load balancing
US6721271B1 (en) * 1999-02-04 2004-04-13 Nortel Networks Limited Rate-controlled multi-class high-capacity packet switch
CA2293920A1 (en) * 1999-12-31 2001-06-30 Nortel Networks Corporation Global distributed switch
DE60111457T2 (en) * 2000-06-19 2006-05-11 Broadcom Corp., Irvine Switching arrangement with redundant ways
GB0019341D0 (en) * 2000-08-08 2000-09-27 Easics Nv System-on-chip solutions
DE10055476A1 (en) * 2000-11-09 2002-05-29 Siemens Ag Optical switching matrix
US6697368B2 (en) * 2000-11-17 2004-02-24 Foundry Networks, Inc. High-performance network switch
GB0102743D0 (en) * 2001-02-03 2001-03-21 Power X Ltd A data switch and a method for controlling the data switch
US7599620B2 (en) * 2001-06-01 2009-10-06 Nortel Networks Limited Communications network for a metropolitan area
US7167648B2 (en) * 2001-10-24 2007-01-23 Innovative Fiber Optic Solutions, Llc System and method for an ethernet optical area network
US7145904B2 (en) * 2002-01-03 2006-12-05 Integrated Device Technology, Inc. Switch queue predictive protocol (SQPP) based packet switching technique
US7486894B2 (en) * 2002-06-25 2009-02-03 Finisar Corporation Transceiver module and integrated circuit with dual eye openers
US7206366B2 (en) * 2002-08-07 2007-04-17 Broadcom Corporation System and method for programmably adjusting gain and frequency response in a 10-GigaBit ethernet/fibre channel system
KR100460672B1 (en) * 2002-12-10 2004-12-09 한국전자통신연구원 Line interface apparatus for 10 gigabit ethernet and control method thereof
WO2005019970A2 (en) * 2003-07-03 2005-03-03 Ubi Systems, Inc. Communication system and method for an optical local area network

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5689644A (en) * 1996-03-25 1997-11-18 I-Cube, Inc. Network switch with arbitration sytem
US6563837B2 (en) * 1998-02-10 2003-05-13 Enterasys Networks, Inc. Method and apparatus for providing work-conserving properties in a non-blocking switch with limited speedup independent of switch size
US6356546B1 (en) * 1998-08-11 2002-03-12 Nortel Networks Limited Universal transfer method and network with distributed switch
US6363416B1 (en) * 1998-08-28 2002-03-26 3Com Corporation System and method for automatic election of a representative node within a communications network with built-in redundancy
US20010033552A1 (en) * 2000-02-24 2001-10-25 Barrack Craig I. Credit-based pacing scheme for heterogeneous speed frame forwarding
US20020012344A1 (en) * 2000-06-06 2002-01-31 Johnson Ian David Switching system
US6954463B1 (en) * 2000-12-11 2005-10-11 Cisco Technology, Inc. Distributed packet processing architecture for network access servers
US20020167950A1 (en) * 2001-01-12 2002-11-14 Zarlink Semiconductor V.N. Inc. Fast data path protocol for network switching
US20020176131A1 (en) * 2001-02-28 2002-11-28 Walters David H. Protection switching for an optical network, and methods and apparatus therefor
US20020131414A1 (en) * 2001-03-15 2002-09-19 Hadzic Iiija Metropolitan area ethernet networks
US20030067925A1 (en) * 2001-10-05 2003-04-10 Samsung Electronics Co., Ltd. Routing coordination protocol for a massively parallel router architecture
US20030118058A1 (en) * 2001-12-26 2003-06-26 Chan Kim Variable length packet switching system
US20050163115A1 (en) * 2003-09-18 2005-07-28 Sitaram Dontu Distributed forwarding in virtual network devices

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11467883B2 (en) 2004-03-13 2022-10-11 Iii Holdings 12, Llc Co-allocating a reservation spanning different compute resources types
US11652706B2 (en) 2004-06-18 2023-05-16 Iii Holdings 12, Llc System and method for providing dynamic provisioning within a compute environment
US11630704B2 (en) 2004-08-20 2023-04-18 Iii Holdings 12, Llc System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information
US11709709B2 (en) 2004-11-08 2023-07-25 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11494235B2 (en) 2004-11-08 2022-11-08 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11537435B2 (en) 2004-11-08 2022-12-27 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11886915B2 (en) 2004-11-08 2024-01-30 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11861404B2 (en) 2004-11-08 2024-01-02 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11656907B2 (en) 2004-11-08 2023-05-23 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11537434B2 (en) 2004-11-08 2022-12-27 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11762694B2 (en) 2004-11-08 2023-09-19 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11658916B2 (en) 2005-03-16 2023-05-23 Iii Holdings 12, Llc Simple integration of an on-demand compute environment
US11522811B2 (en) 2005-04-07 2022-12-06 Iii Holdings 12, Llc On-demand access to compute resources
US11831564B2 (en) 2005-04-07 2023-11-28 Iii Holdings 12, Llc On-demand access to compute resources
US11533274B2 (en) 2005-04-07 2022-12-20 Iii Holdings 12, Llc On-demand access to compute resources
US11765101B2 (en) 2005-04-07 2023-09-19 Iii Holdings 12, Llc On-demand access to compute resources
US11496415B2 (en) 2005-04-07 2022-11-08 Iii Holdings 12, Llc On-demand access to compute resources
US20070160068A1 (en) * 2006-01-12 2007-07-12 Ciena Corporation Methods and systems for managing digital cross-connect matrices using virtual connection points
US8509113B2 (en) * 2006-01-12 2013-08-13 Ciena Corporation Methods and systems for managing digital cross-connect matrices using virtual connection points
US11650857B2 (en) 2006-03-16 2023-05-16 Iii Holdings 12, Llc System and method for managing a hybrid computer environment
WO2008085350A1 (en) * 2006-12-29 2008-07-17 Lucent Technologies Inc. Enabling virtual private local area network services
US20080159301A1 (en) * 2006-12-29 2008-07-03 De Heer Arjan Arie Enabling virtual private local area network services
US11522952B2 (en) 2007-09-24 2022-12-06 The Research Foundation For The State University Of New York Automatic clustering for self-organizing grids
US8532086B1 (en) 2008-11-14 2013-09-10 Intel Corporation Method and system for multi level switch configuration
US7983194B1 (en) * 2008-11-14 2011-07-19 Qlogic, Corporation Method and system for multi level switch configuration
US9465771B2 (en) 2009-09-24 2016-10-11 Iii Holdings 2, Llc Server on a chip and node cards comprising one or more of same
US9008079B2 (en) 2009-10-30 2015-04-14 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US9749326B2 (en) 2009-10-30 2017-08-29 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US9262225B2 (en) 2009-10-30 2016-02-16 Iii Holdings 2, Llc Remote memory access functionality in a cluster of data processing nodes
US9311269B2 (en) 2009-10-30 2016-04-12 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US9405584B2 (en) 2009-10-30 2016-08-02 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric with addressing and unicast routing
US9454403B2 (en) 2009-10-30 2016-09-27 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US9075655B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric with broadcast or multicast addressing
US9479463B2 (en) 2009-10-30 2016-10-25 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US9509552B2 (en) 2009-10-30 2016-11-29 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US20120207165A1 (en) * 2009-10-30 2012-08-16 Calxeda, Inc. System and Method for Using a Multi-Protocol Fabric Module Across a Distributed Server Interconnect Fabric
US9054990B2 (en) 2009-10-30 2015-06-09 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US9680770B2 (en) 2009-10-30 2017-06-13 Iii Holdings 2, Llc System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US9077654B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US11526304B2 (en) 2009-10-30 2022-12-13 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9866477B2 (en) 2009-10-30 2018-01-09 Iii Holdings 2, Llc System and method for high-performance, low-power data center interconnect fabric
US9876735B2 (en) 2009-10-30 2018-01-23 Iii Holdings 2, Llc Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US9929976B2 (en) 2009-10-30 2018-03-27 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US10140245B2 (en) 2009-10-30 2018-11-27 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9977763B2 (en) 2009-10-30 2018-05-22 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US8599863B2 (en) * 2009-10-30 2013-12-03 Calxeda, Inc. System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US10050970B2 (en) 2009-10-30 2018-08-14 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US10135731B2 (en) 2009-10-30 2018-11-20 Iii Holdings 2, Llc Remote memory access functionality in a cluster of data processing nodes
US9800499B2 (en) 2010-07-19 2017-10-24 Cray Uk Limited Ethernet switch and method for routing Ethernet data packets
GB2482118B (en) * 2010-07-19 2017-03-01 Cray Uk Ltd Ethernet switch with link aggregation group facility
GB2482118A (en) * 2010-07-19 2012-01-25 Gnodal Ltd Ethernet switch with link aggregation group facility
US8964601B2 (en) 2011-10-07 2015-02-24 International Business Machines Corporation Network switching domains with a virtualized control plane
US9585281B2 (en) 2011-10-28 2017-02-28 Iii Holdings 2, Llc System and method for flexible storage and networking provisioning in large scalable processor installations
US10021806B2 (en) 2011-10-28 2018-07-10 Iii Holdings 2, Llc System and method for flexible storage and networking provisioning in large scalable processor installations
US9069929B2 (en) 2011-10-31 2015-06-30 Iii Holdings 2, Llc Arbitrating usage of serial port in node card of scalable and modular servers
US9965442B2 (en) 2011-10-31 2018-05-08 Iii Holdings 2, Llc Node card management in a modular and large scalable server system
US9792249B2 (en) 2011-10-31 2017-10-17 Iii Holdings 2, Llc Node card utilizing a same connector to communicate pluralities of signals
US9092594B2 (en) 2011-10-31 2015-07-28 Iii Holdings 2, Llc Node card management in a modular and large scalable server system
US20130201873A1 (en) * 2012-02-02 2013-08-08 International Business Machines Corporation Distributed fabric management protocol
US20130201875A1 (en) * 2012-02-02 2013-08-08 International Business Machines Corporation Distributed fabric management protocol
US9088477B2 (en) * 2012-02-02 2015-07-21 International Business Machines Corporation Distributed fabric management protocol
CN104094555A (en) * 2012-02-02 2014-10-08 国际商业机器公司 Distributed fabric management protocol
US9071508B2 (en) * 2012-02-02 2015-06-30 International Business Machines Corporation Distributed fabric management protocol
US8665889B2 (en) 2012-03-01 2014-03-04 Ciena Corporation Unidirectional asymmetric traffic pattern systems and methods in switch matrices
US9059911B2 (en) 2012-03-07 2015-06-16 International Business Machines Corporation Diagnostics in a distributed fabric system
US9077624B2 (en) 2012-03-07 2015-07-07 International Business Machines Corporation Diagnostics in a distributed fabric system
US9054989B2 (en) 2012-03-07 2015-06-09 International Business Machines Corporation Management of a distributed fabric system
US9077651B2 (en) 2012-03-07 2015-07-07 International Business Machines Corporation Management of a distributed fabric system
US9137141B2 (en) 2012-06-12 2015-09-15 International Business Machines Corporation Synchronization of load-balancing switches
US9253076B2 (en) 2012-06-12 2016-02-02 International Business Machines Corporation Synchronization of load-balancing switches
US9648102B1 (en) 2012-12-27 2017-05-09 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes

Also Published As

Publication number Publication date
WO2005038599A2 (en) 2005-04-28
US20060029057A1 (en) 2006-02-09
US20060029071A1 (en) 2006-02-09
JP2007507990A (en) 2007-03-29
US20060029056A1 (en) 2006-02-09
US20060029055A1 (en) 2006-02-09
US7352745B2 (en) 2008-04-01
EP1673683A2 (en) 2006-06-28
US20060039369A1 (en) 2006-02-23
US20060029072A1 (en) 2006-02-09
WO2005038599A3 (en) 2006-09-08
EP1673683A4 (en) 2010-06-02
US20070071014A1 (en) 2007-03-29

Similar Documents

Publication Publication Date Title
US7352745B2 (en) Switching system with distributed switching fabric
US9628375B2 (en) N-node link aggregation group (LAG) systems that can support various topologies
US9660939B2 (en) Protection switching over a virtual link aggregation
US9485194B2 (en) Virtual link aggregation of network traffic in an aggregation switch
US7751329B2 (en) Providing an abstraction layer in a cluster switch that includes plural switches
US8854975B2 (en) Scaling OAM for point-to-point trunking
US8320282B2 (en) Automatic control node selection in ring networks
EP3474498A1 (en) Hash-based multi-homing
KR20130100217A (en) Differential forwarding in address-based carrier networks
WO2007129699A1 (en) Communication system, node, terminal, communication method, and program
JPWO2005048540A1 (en) Communication system and communication method
US8228823B2 (en) Avoiding high-speed network partitions in favor of low-speed links
Cisco Concepts
James Applying ADM and OpenFlow to Build High Availability Networks
WO2006131019A1 (en) A method and site for achieving link aggregation between the interconnected resilient packet ring
Lian Design and analysis of optical metropolitan networks
IL195263A (en) Mac address learning in a distributed bridge

Legal Events

Date Code Title Description
AS Assignment

Owner name: RAPTOR NETWORKS TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PERERA, ANANDA;HOFFMAN, EDWIN;REEL/FRAME:016324/0471

Effective date: 20050225

AS Assignment

Owner name: BRIDGE BANK N.A., CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY INC.;REEL/FRAME:017553/0786

Effective date: 20060427

Owner name: AGILITY CAPITAL, LLC, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY, INC.;REEL/FRAME:017553/0965

Effective date: 20060427

Owner name: BRIDGE BANK N.A., CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY, INC.;REEL/FRAME:017553/0965

Effective date: 20060427

Owner name: AGILITY CAPITAL, LLC, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY INC.;REEL/FRAME:017553/0786

Effective date: 20060427

AS Assignment

Owner name: RAPTOR NETWORKS TECHNOLOGY, INC., CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNORS:BRIDGE BANK N.A.;AGILITY CAPITAL, LLC;REEL/FRAME:018044/0115

Effective date: 20060727

AS Assignment

Owner name: RAPTOR NETWORKS TECHNOLOGY, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNORS:AGILITY CAPITAL, LLC;BRIDGE BANK N.A.;REEL/FRAME:018286/0474

Effective date: 20060921

Owner name: RAPTOR NETWORKS TECHNOLOGY INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNORS:AGILITY CAPITAL, LLC;BRIDGE BANK N.A.;REEL/FRAME:018286/0474

Effective date: 20060921

AS Assignment

Owner name: CASTLERIGG MASTER INVESTMENTS LTD., AS COLLATERAL AGENT

Free format text: GRANT OF SECURITY INTEREST;ASSIGNOR:RAPTOR NETWORKS TECHNOLOGY, INC.;REEL/FRAME:021325/0788

Effective date: 20080728

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION