US20040024906A1 - Load balancing in a network comprising communication paths having different bandwidths - Google Patents
- Publication number
- US20040024906A1 (application US10/208,969)
- Authority
- US
- United States
- Prior art keywords
- trunk
- ports
- cost
- bandwidth
- paths
- Prior art date
- Legal status: Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/12—Shortest path evaluation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/24—Multipath
- H04L45/245—Link aggregation, e.g. trunking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/12—Avoiding congestion; Recovering from congestion
- H04L47/125—Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/50—Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate
Definitions
- The present invention generally relates to computer networks. More particularly, the invention relates to electronic switches through which communications pass from one point in a network to another. Still more particularly, the invention relates to load balancing in a switch-based fabric.
- The ability of computers to communicate with one another has led to the creation of networks ranging from small networks comprising two or three computers to vast networks comprising hundreds or even thousands of computers.
- Networks can be set up to provide a wide assortment of capabilities. For example, networks of computers may permit each computer to share a centralized mass storage device or printer. Further, networks enable electronic mail and numerous other types of services.
- Generally, a network's infrastructure comprises switches, routers, hubs and the like to coordinate the effective and efficient transfer of data and commands from one point on the network to another.
- Networks often comprise a “fabric” of interconnected switches which are devices that route data packets from a source port to a destination port.
- FIG. 1 exemplifies a switch fabric having five switches 20, 22, 24, 26, and 28.
- The switches 20-28 are interconnected by links 30-38, as shown.
- One or more devices can be connected to any of the switches.
- Four devices 40 are connected to switch 20 and two devices 42 connect to switch 24.
- The devices may be any desirable types of devices, such as servers and storage devices.
- A device 40 connected to switch 20 may need to send a data packet to a device 42 connected to switch 24.
- The packet can be routed from switch 20 to switch 24 via one of two paths in the exemplary architecture of FIG. 1.
- One path comprises switches 20-22-24 and the other comprises switches 20-28-26-24.
- In many networks, the path that will be used between pairs of switches is determined a priori, during system initialization or when the fabric configuration changes, such as the addition or removal of a switch.
- Various path selection algorithms have been suggested and used.
- One such conventional path selection algorithm is often referred to as the “shortest path” algorithm. According to this algorithm, the shortest path is selected to be the path for routing packets between switches. The shortest path takes into account the bandwidth of the various links interconnecting the switches.
- A “cost” value is assigned to each link.
- The numbers shown in parentheses adjacent each link represent the cost of each link.
- The cost values are generally arbitrary in magnitude, but correlate with a system criterion such as the inverse of the bandwidth of the links. That is, higher bandwidth links have lower costs.
- Links 30 and 32 may have the same bandwidth (e.g., 1 gigabit per second (“gbps”)) and may be assigned a cost value of 1000.
- Links 34, 36, and 38 may have twice the bandwidth of links 30 and 32 (2 gbps) and accordingly may be assigned cost values of 500 (one-half the cost of links 30 and 32).
- The shortest path represents the path with the lowest total cost.
- The total cost of a path is the sum of the costs associated with the links comprising the path.
- In the example, the 20-22-24 path has a total cost of 2000, while the total cost of the 20-28-26-24 path is 1500.
- As the lowest cost path, the 20-28-26-24 path will be selected for routing data packets between devices 40 and 42.
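The lowest cost selection described above can be sketched in code. The following Python fragment is illustrative only and is not part of the patent; the switch numbers and costs follow FIG. 1, and the assignment of each link cost to a particular switch pair is inferred from the path totals given in the text.

```python
import heapq

def lowest_cost_path(graph, src, dst):
    """Dijkstra's algorithm over per-link costs; returns (total_cost, path)."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph[node]:
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None

# 1 gbps links (cost 1000) along 20-22-24; 2 gbps links (cost 500)
# along 20-28-26-24
graph = {
    20: [(22, 1000), (28, 500)],
    22: [(20, 1000), (24, 1000)],
    24: [(22, 1000), (26, 500)],
    26: [(24, 500), (28, 500)],
    28: [(20, 500), (26, 500)],
}
cost, path = lowest_cost_path(graph, 20, 24)
```

Running this for switches 20 and 24 yields a total cost of 1500 along 20-28-26-24, matching the selection described above.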
- The cost values may be related to system criteria other than bandwidth; examples include the delay in crossing a link and a monetary cost an entity might charge for use of the inter-switch link (ISL).
- FIG. 2 repeats the fabric architecture of FIG. 1, but also includes a trunk 48 interconnecting switches 20 and 22.
- In general, a trunk comprises a logical collection of two or more links; trunk 48 comprises four links.
- Although the trunk is actually four separate parallel links, it appears as one logical “pipe” for data to flow between switches.
- Hardware (not specifically shown) in switches 20 and 22 selects one of the various links comprising the pipe when sending a data packet across the trunk, thereby relieving the higher level logic in the switch of controlling the operation of the trunk.
- Because the links comprising the trunk can be used simultaneously, the trunk effectively has a bandwidth that is greater than the bandwidth of any of the links comprising it.
- If the bandwidth of each link comprising trunk 48 is the same as the bandwidth of separate link 30 (i.e., 1 gbps),
- the effective bandwidth of trunk 48 with four 1 gbps links is 4 gbps.
- The system might then assign trunk 48 a cost of one-fourth the cost of link 30, which would be a cost of 250.
- The lowest cost path from devices 40 to devices 42 would then be the 1250-cost path comprising switches 20-22-24 and including trunk 48 between switches 20 and 22.
- Because traffic from devices 40 will be routed from switch 20 to switch 22 through trunk 48 to the exclusion of link 30, trunk 48 will carry all of the traffic while parallel link 30 carries none and is thus underutilized. This is acceptable if the bandwidth of the data does not exceed the capacity of trunk 48. If the data does exceed the bandwidth of the trunk, then performance is reduced despite link 30 being available to carry traffic, but not being used in that regard.
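The effect described above, in which the cheaper trunk monopolizes the traffic, can be seen by totaling the path costs. This Python sketch is illustrative only; the grouping of links 34, 36 and 38 into the 20-28-26-24 path is an assumption consistent with the totals in the text.

```python
# trunk 48 costed at one-fourth of link 30 (250), per the FIG. 2 example
link_cost = {"trunk 48": 250, "link 30": 1000, "link 32": 1000,
             "link 34": 500, "link 36": 500, "link 38": 500}

paths = {
    "20-22-24 via trunk 48": ["trunk 48", "link 32"],
    "20-22-24 via link 30": ["link 30", "link 32"],
    "20-28-26-24": ["link 34", "link 36", "link 38"],
}
costs = {name: sum(link_cost[hop] for hop in hops) for name, hops in paths.items()}
best = min(costs, key=costs.get)  # the trunk path wins outright, so the
                                  # trunk carries all traffic and link 30 none
```

The trunk path totals 1250 against 2000 and 1500 for the alternatives, so the selection algorithm never routes anything over link 30.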
- Devices 40 and 42 connect to ports 46 and 44, respectively, contained in their associated switches 20 and 24.
- The speed of the links connecting devices 40, 42 to their ports 46, 44 may vary from device to device; some devices, for example, may connect via a 1 gbps link, while others connect via 2 gbps links.
- The conventional shortest path selection algorithm typically does not take into account the speed of the peripheral links connecting the peripheral devices to the switches when assigning the source ports (e.g., ports 46) to the paths determined to be shortest, thereby possibly resulting in less than optimal path assignments.
- A solution preferably would efficiently merge trunk implementations into a network fabric based on a shortest path selection algorithm and take into account peripheral link speeds.
- The preferred embodiments address these problems in a network comprising a plurality of interconnected switches. At least one pair of switches is interconnected by a trunk formed from a plurality of individual links. Rather than assigning a cost value to the trunk commensurate with its higher bandwidth (relative to an individual link), a cost value is assigned to the trunk that is equal to the cost of one of the trunk's individual links.
- Thus, each trunk has a cost value, for purposes of path calculation, that is the same as the cost of individual (i.e., non-trunked) links in the network.
- As such, a trunk is considered the same as an individual link when the shortest path calculations are made.
- When multiple paths are computed as having the lowest cost, the system balances traffic between such lowest cost paths in a way that favors trunks without totally excluding non-trunk links.
- More specifically, each switch comprises a plurality of source ports and destination ports, and each destination port is connected to a communication link. Two or more source or destination ports may be logically combined to form a trunk.
- A CPU in each switch balances its source ports among the destination ports that are part of the lowest cost paths.
- This load balancing technique is based on the bandwidth allocation of the destination ports. The bandwidth allocation is the percentage of the bandwidth of a destination port that would be used if the source ports currently assigned to that destination port were providing traffic at their full rated bandwidth.
- By assigning the same cost value to a trunk as to an individual link, trunks are not used to the total exclusion of other, slower links.
- The preferred load balancing technique nonetheless takes into account the higher bandwidth capabilities of trunks when balancing the source ports across the various destination ports.
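The bandwidth allocation just defined reduces to a simple ratio. A minimal sketch follows; the function name is mine, not the patent's.

```python
def bandwidth_allocation(dest_bw_gbps, assigned_source_bws_gbps):
    """Percentage of a destination port's bandwidth that would be used if
    every source port assigned to it drove its full rated bandwidth."""
    return 100.0 * sum(assigned_source_bws_gbps) / dest_bw_gbps

# a single 2 gbps source port on a 4 gbps trunk port accounts for 50%;
# two 1 gbps source ports on a 2 gbps link port account for 100%
```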
- FIG. 1 shows a conventional switch fabric in which a shortest path selection algorithm is implemented;
- FIG. 2 shows a similar fabric that also includes a trunk comprising more than one link;
- FIG. 3 shows a preferred embodiment of the invention in which each trunk is assigned a cost value equal to the cost of an individual link forming the trunk; and
- FIG. 4 depicts an exemplary logical sequence that is followed to assign source ports to destination ports in a switch in accordance with the preferred embodiment of the invention.
- In accordance with the preferred embodiment of the invention, an improved load balancing technique is implemented in a network comprising trunks formed from individual communication links.
- Although cost values are generally inversely proportional to link bandwidth, higher bandwidth trunks preferably are not assigned lower cost values. Instead, the trunks are assigned a cost value that is the cost of the individual links comprising the trunk.
- Because trunks do not have lower cost values, trunks advantageously are not assigned source ports to the total exclusion of other ports, as is the case with conventional techniques. Further, source ports are distributed across the various lowest cost paths in a way that favors the higher bandwidth trunks, but not necessarily to the exclusion of other links.
- FIG. 3 shows a switch fabric comprising four switches 50, 52, 54 and 56 that permits devices D1-D10 to communicate with each other.
- Devices D1-D6 couple to switch 50,
- and devices D7-D10 couple to switch 54.
- The devices D1-D10 can be any desired devices, including servers, storage devices, etc.
- Switch 50 couples to switch 52 via a trunk 51 and an individual link 60.
- Similarly, switch 50 couples to switch 56 via a trunk 58 and link 66.
- Switches 52 and 56 couple to switch 54 via links 62 and 64 and trunks 63 and 65, as shown. All of the links shown in FIG. 3, including the links comprising the trunks, are assigned the same cost, as discussed below.
- Switch 50 is shown as having a number of ports P1-P10 and may have additional ports (e.g., 16 total ports) as desired. As shown, devices D1-D6 are assigned to ports P1-P6.
- Ports P7 and P10 are assigned to trunks 51 and 58, respectively, while ports P8 and P9 are assigned to links 60 and 66.
- Ports P1-P6 are referred to herein as “source” ports and ports P7-P10 as “destination” ports, although in general each port is bi-directional. It should also be understood that each link in a trunk is connected to a separate destination port and that such destination ports are aggregated together to form the logical trunk. Thus, ports P7 and P10 are each actually two separate ports aggregated together.
- The values next to the links connecting devices D1-D6 to ports P1-P6 represent the bandwidth in units of gigabits per second (gbps).
- For load balancing purposes, trunks 51 and 58 are considered to have the same cost as the individual links comprising the trunks.
- In the example of FIG. 3, all of the individual links have a cost of 500, which is indicated in parentheses adjacent each link.
- Rather than reducing the cost of the higher bandwidth trunks in proportion to the increase in the trunks' effective bandwidth, the cost of each trunk is considered to be the same as that of the individual links comprising it (i.e., 500). That is, the various paths from switch 50 to switch 54 all have the same cost.
- Those paths include: (1) switch 50-trunk 51-switch 52-link 62 (or trunk 63)-switch 54; (2) switch 50-link 60-switch 52-link 62 (or trunk 63)-switch 54; (3) switch 50-trunk 58-switch 56-link 64 (or trunk 65)-switch 54; and (4) switch 50-link 66-switch 56-link 64 (or trunk 65)-switch 54.
- By assigning each trunk the same cost as an individual link, the path selection criterion of the preferred embodiment avoids using a trunk to the exclusion of a sister link, as explained previously.
- Each switch 50-56 includes two processes 57 and 59. These processes are implemented as firmware stored in memory coupled to a CPU 61 and executed by the CPU.
- Process 57 comprises a switch interconnect database exchange process. This process propagates connection information to all adjacent switches in accordance with any suitable, known technique. For example, switch 50 propagates its connection information to switches 52 and 56, while switch 52 propagates its connection information to switches 50 and 54.
- The connection information for a switch comprises a database having a plurality of entries. Each entry includes, for each of the switch's ports, the identity of the adjacent switch connected to that port, the identity of the adjacent switch's port, and the cost of the link formed between them. Other information may be included as desired.
- The switch interconnect database exchange process 57 propagates this database to adjacent switches which, in turn, continue the propagation of the information. Eventually, all switches in the fabric have a complete and identical interconnection database.
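One way to model a single interconnect-database entry as described above is sketched below. The field names and the example values are my own; the patent does not specify a layout.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InterconnectEntry:
    local_port: int     # the switch's own port
    remote_switch: int  # identity of the adjacent switch on that port
    remote_port: int    # identity of the adjacent switch's port
    link_cost: int      # cost of the link formed between the two ports

# hypothetical entry for switch 50: its port P7 reaching switch 52
# over a link of cost 500 (the remote port number is invented)
entry = InterconnectEntry(local_port=7, remote_switch=52,
                          remote_port=3, link_cost=500)
```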
- Process 59 comprises a load balancing process which uses the interconnection database to compute the cost of the various paths through the network, determine the lowest cost paths, and balance the loads across multiple lowest cost paths, as described below.
- The following explanation illustrates the preferred embodiment by balancing load between devices D1-D6 and devices D7-D10 and, more specifically, by balancing loads between switch 50's source ports P1-P6 and the switch's destination ports P7-P10.
- Refer to FIGS. 3 and 4 for the following discussion.
- FIG. 4 lists the four destination ports P7-P10 for switch 50 along with their associated bandwidths in parentheses.
- Steps 70-76 depict the sequential assignment of source ports P1-P6 to destination ports P7-P10 in accordance with a preferred embodiment of the invention. Initially, before any assignments are made, none of the bandwidth of the links and trunks associated with the destination ports (i.e., trunks 51, 58 and links 60, 66) is allocated. This fact is reflected at step 70, in which 0% of the bandwidth associated with each of the destination ports is allocated. Ports P7 and P10, which connect to trunks 51 and 58, have 4 gbps of bandwidth, while ports P8 and P9, which connect to links 60 and 66, have 2 gbps of bandwidth.
- The source ports P1-P6 can be assigned in any desired order to the destination ports. As discussed below, the source ports are assigned in numerical order in FIG. 4, starting with port P1 and progressing through port P6. In accordance with the preferred embodiment of the invention, source port assignments preferably are made in a manner that keeps the bandwidth allocation of the destination ports as low as possible and evenly distributes the source ports across the various destination ports.
- Port P1 is connected to device D1 over a 2 gbps link. If the 2 gbps source port P1 were assigned to either of the 2 gbps destination ports P8 or P9, that destination port's bandwidth allocation would increase to 100%, assuming that the full 2 gbps bandwidth of the source port was being used. This assumption, that the full rated bandwidth of a port is being used, is made throughout the path assignment technique described herein. In an attempt to keep the bandwidth allocation numbers as low as possible, as noted above, the 2 gbps source port P1 preferably is assigned to one of the 4 gbps destination ports, P7 or P10.
- Step 71 reflects this assignment, with source port P1 listed in the column beneath destination port P7.
- The value 50% in parentheses next to source port P1 in step 71 shows that the bandwidth allocation for destination port P7 has risen to 50%.
- Source port P2 couples via a 1 gbps link to device D2.
- The 1 gbps source port P2 represents a 25% bandwidth allocation with regard to the 4 gbps destination ports P7 and P10, and a 50% allocation with regard to the 2 gbps destination ports P8 and P9.
- Because destination port P7 already has 50% of its bandwidth accounted for by virtue of source port P1, assigning source port P2 to destination port P7 would result in an allocation of 75% for port P7.
- Accordingly, source port P2 preferably is assigned to destination port P10.
- This assignment results in the allocations of ports P7-P10 being 50%, 0%, 0% and 25%, respectively, as shown by steps 70-72.
- Source port P3 is a 2 gbps port.
- That port preferably also is assigned to the 4 gbps destination port P10, as shown in step 73.
- The allocation of port P10 thus becomes 75%, which results from the 1 gbps port P2 (25%) and the 2 gbps port P3 (50%).
- Assignment of source port P3 to destination port P7, P8 or P9 would instead have resulted in an allocation of 100% for that port.
- Source port P4 is a 1 gbps port. Assignment of port P4 to port P7 or P10 would result in allocations of 75% and 100%, respectively, while assignment to either of the 2 gbps ports P8 or P9 would result in only a 50% allocation. Ports P8 and P9 have the same bandwidth, so the tie is broken in favor of the smaller port number, and port P8 is selected to accommodate source port P4, as shown in step 74.
- Source port P5 is a 2 gbps port. Assignment of that port to destination ports P8, P9 or P10 would result in bandwidth allocations of 150% (an over-allocation condition), 100% and 125% (also an over-allocation condition), respectively, while assignment to destination port P7, as shown at step 75, causes port P7 to be 100% allocated. Assigning source port P5 to port P7 or to port P9 would thus result in the same allocation. When there is such a tie between two destination ports, the tie is broken by selecting the destination port that has the highest bandwidth; if multiple tied ports have the same bandwidth, the tie is broken by selecting the destination port with the smaller port number. Port P7 (4 gbps) therefore is selected over port P9 (2 gbps). Finally, as depicted at step 76, source port P6 (which is a 2 gbps port) is assigned to destination port P9, resulting in a 100% allocation of port P9, because any other destination port assignment would result in an allocation greater than 100%.
- At this point, the six source ports P1-P6 have been assigned to the four destination ports as shown. As can be seen, two source ports have been assigned to each of the destination ports P7 and P10 that are coupled to the higher bandwidth trunks 51 and 58, while only one source port is assigned to each of the other, lower bandwidth destination ports P8 and P9. As such, an efficient allocation of source ports to destination ports is achieved without any destination port being over-allocated (i.e., having a bandwidth allocation in excess of 100%). Further, the bandwidths of the source devices themselves have been taken into account when making the destination port assignments.
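The assignment sequence of steps 70-76 can be reproduced with a short greedy routine. The following Python sketch is an illustration, not the patent's implementation; it assigns each source port to the destination port with the lowest resulting allocation, breaking ties by higher destination bandwidth and then by lower port number, as described above.

```python
def assign_source_ports(sources, dests):
    """Greedy assignment of source ports to destination ports. Each source
    goes to the destination whose resulting bandwidth allocation is lowest,
    assuming every source drives its full rated bandwidth; ties go to the
    higher-bandwidth destination, then to the lower port number."""
    assigned = {d: [] for d in dests}   # destination -> assigned sources
    used = {d: 0.0 for d in dests}      # destination -> gbps spoken for
    for src, src_bw in sources:         # taken in numerical port order
        best = min(dests, key=lambda d: ((used[d] + src_bw) / dests[d],
                                         -dests[d], int(d[1:])))
        assigned[best].append(src)
        used[best] += src_bw
    return assigned, {d: used[d] / dests[d] for d in dests}

# the FIG. 3 / FIG. 4 example: source ports P1-P6, destination ports P7-P10
sources = [("P1", 2), ("P2", 1), ("P3", 2), ("P4", 1), ("P5", 2), ("P6", 2)]
dests = {"P7": 4, "P8": 2, "P9": 2, "P10": 4}
assigned, alloc = assign_source_ports(sources, dests)
```

Applied to the example, this reproduces the final assignment of steps 70-76: P1 and P5 on port P7, P4 on P8, P6 on P9, and P2 and P3 on P10, with final allocations of 100%, 50%, 100% and 75%.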
- Thus, load balancing is based on: (1) the bandwidth associated with the links connected to the destination ports forming part of the lowest cost paths; (2) the bandwidth of trunks formed from the destination ports (if a trunk is so formed); (3) the bandwidth allocation of the destination ports; and (4) the bandwidth associated with the source ports.
- The preferred embodiment of the invention is not limited to networks that have trunks.
- In a fabric that also contains 1 gbps links, each such 1 gbps link would ordinarily have a cost of 1000 and thus would not be used to route traffic.
- Instead, a cost of 500 could be assigned to the 1 gbps links, and the process described above would cause twice as many ports (P1-P6) to be assigned to the 2 gbps links 60 and 66.
Description
- 1. Field of the Invention
- The present invention generally relates to computer networks. More particularly, the invention relates to electronic switches through which communications pass from one point in a network to another. Still more particularly, the invention relates to load balancing in a switch-based fabric.
- 2. Background Information
- Initially, computers were most typically used in a standalone manner. It is now commonplace for computers and other types of computer-related and electronic devices to communicate with each other over a network. The ability for computers to communicate with one another has led to the creation of networks ranging from small networks comprising two or three computers to vast networks comprising hundreds or even thousands of computers. Networks can be set up to provide a wide assortment of capabilities. For example, networks of computers may permit each computer to share a centralized mass storage device or printer. Further, networks enable electronic mail and numerous other types of services. Generally, a network's infrastructure comprises switches, routers, hubs and the like to coordinate the effective and efficient transfer of data and commands from one point on the network to another.
- Networks often comprise a “fabric” of interconnected switches which are devices that route data packets from a source port to a destination port. FIG. 1 exemplifies a switch fabric having five switches 20, 22, 24, 26, and 28. The switches 20-28 are interconnected by links 30-38, as shown. One or more devices can be connected to any of the switches. In FIG. 1, four devices 40 are connected to switch 20 and two devices 42 connect to switch 24. The devices may be any desirable types of devices such as servers and storage devices.
- A device 40 connected to switch 20 may need to send a data packet to a device 42 connected to switch 24. The packet can be routed from switch 20 to switch 24 via one of two paths in the exemplary architecture of FIG. 1. One path comprises switches 20-22-24 and the other path comprises switches 20-28-26-24. In many networks, the path that will be used between pairs of switches is determined a priori during system initialization or when the fabric configuration changes, such as the addition or removal of a switch. Various path selection algorithms have been suggested and used. One such conventional path selection algorithm is often referred to as the “shortest path” algorithm. According to this algorithm, the shortest path is selected to be the path for routing packets between switches. The shortest path takes into account the bandwidth of the various links interconnecting the switches.
- Referring still to FIG. 1, a “cost” value is assigned to each link. The numbers shown in parentheses adjacent each link represent the cost of each link. The cost values are generally arbitrary in magnitude, but correlate with a system criterion such as the inverse of the bandwidth of the links. That is, higher bandwidth links have lower costs. For example, links 30 and 32 may have the same bandwidth (e.g., 1 gigabit per second (“gbps”)) and may be assigned a cost value of 1000. Links 34, 36, and 38 may have twice the bandwidth of links 30 and 32 (2 gbps) and accordingly may be assigned cost values of 500 (one-half of the cost of links 30 and 32). The shortest path represents the path with the lowest total cost. The total cost of the path is the sum of the costs associated with the links comprising the path. In the example of FIG. 1, the 20-22-24 path has a total cost of 2000, while the total cost of the 20-28-26-24 path is 1500. As the lowest cost path, the 20-28-26-24 path will be selected to be the path used for routing data packets between devices 40 and 42. The cost values may be related to other system criteria besides bandwidth; examples include the delay in crossing a link and a monetary cost an entity might charge for use of the inter-switch link (ISL).
- A complication to the lowest cost path selection algorithm is introduced when “trunks” are implemented in a switching fabric. For additional information regarding trunking, please refer to U.S. patent application Ser. No. 09/872,412, filed Jun. 1, 2001, entitled “Link Trunking and Measuring Link Latency in Fibre Channel Fabric,” by David C. Banks, Kreg A. Martin, Shunjia Yu, Jieming Zhu and Kevan K. Kwong, incorporated herein by reference. For example, FIG. 2 repeats the fabric architecture of FIG. 1, but also includes a trunk 48 interconnecting switches 20 and 22. In general, a trunk comprises a logical collection of two or more links; trunk 48 comprises four links. Although the trunk is actually four separate parallel links, the trunk appears as one logical “pipe” for data to flow between switches. Hardware (not specifically shown) in switches 20 and 22 selects one of the various links comprising the pipe when sending a data packet across the trunk, thereby relieving the higher level logic in the switch of controlling the operation of the trunk.
- Because the links comprising the trunk can be used simultaneously to send data packets, the trunk effectively has a bandwidth that is greater than the bandwidth of the links comprising the trunk. In FIG. 2, if the bandwidth of each link comprising trunk 48 is the same as the bandwidth of separate link 30 (i.e., 1 gbps), the effective bandwidth of trunk 48 with four 1 gbps links is 4 gbps. In the context of each path having a cost associated with it, the system might assign trunk 48 a cost of one-fourth the cost of link 30, which would be a cost of 250. Then, the lowest cost path from devices 40 to devices 42 would be the 1250 cost path comprising switches 20-22-24 and including trunk 48 between switches 20 and 22.
- In some situations, this will be a satisfactory implementation. However, it may be less than satisfactory in other situations. Because traffic from devices 40 will be routed from switch 20 to switch 22 through trunk 48 to the exclusion of link 30, trunk 48 will carry all of the traffic and parallel link 30 will carry no traffic and thus be underutilized. This is acceptable if the bandwidth of the data does not exceed the capacity of trunk 48. If the data does exceed the bandwidth of the trunk, then performance is reduced despite link 30 being available to carry traffic, but not being used in that regard.
- Referring still to FIG. 2, devices 40 and 42 connect to ports 46 and 44, respectively, contained in their associated switches 20 and 24. In general, the speed of the links connecting devices 40, 42 to their ports 46, 44 may vary from device to device. Some devices, for example, may connect via a 1 gbps link, while other devices connect via 2 gbps links. Unfortunately, the conventional shortest path selection algorithm typically does not take into account the speed of the peripheral links connecting the peripheral devices to the switches when assigning the source ports (e.g., ports 46) to the paths determined to be shortest, thereby possibly resulting in less than optimal path assignments.
- Accordingly, a solution to these problems is needed. Such a solution preferably would efficiently merge trunk implementations into a network fabric based on a shortest path selection algorithm and take into account peripheral link speeds.
- The preferred embodiments of the present invention solve the problems noted above by a network comprising a plurality of interconnected switches. At least one pair of switches is interconnected by a trunk formed from a plurality of individual links. Rather than assigning a cost value to the trunk commensurate with its higher bandwidth (relative to an individual link), a cost value is assigned to the trunk that is equal to the cost of one of the trunk's individual links. Thus, in general, each trunk has a cost value for purposes of path calculation that is the same as the cost of individual (i.e., non-trunked) links in the network. As such, a trunk is considered the same as an individual link when the shortest path calculations are made. In further accordance with the preferred embodiment, when multiple paths are computed as having the lowest cost, the system balances load traffic between such lowest cost paths in a way that favors trunks without totally excluding non-trunk links.
- More specifically, each switch comprises a plurality of source ports and destination ports and each destination port is connected to a communication link. Two or more source or destination ports may be logically combined together to form a trunk. A CPU in each switch balances its source ports among the destination ports that are part of the lowest cost paths. This load balancing technique is based on bandwidth allocation of the destination ports. The bandwidth allocation is determined to be the percentage of the bandwidth of the destination port that would be used if the source ports currently assigned to the destination port were providing traffic at their full rated bandwidth. Thus, by assigning the same cost value to a trunk as an individual link, trunks are not used to the total exclusion of other slower links as noted above. However, the preferred load balancing technique described herein still takes into account the higher bandwidth capabilities of trunks when balancing the source ports across the various destination ports.
- These and other aspects and benefits of the preferred embodiments of the present invention will become apparent upon analyzing the drawings, detailed description and claims, which follow.
- For a detailed description of the preferred embodiments of the invention, reference will now be made to the accompanying drawings in which:
- FIG. 1 shows a conventional switch fabric in which a shortest path selection algorithm is implemented;
- FIG. 2 shows a similar fabric that also includes a trunk comprising more than one link;
- FIG. 3 shows a preferred embodiment of the invention in which each trunk is assigned a cost value equal to the cost of an individual link forming the trunk; and
- FIG. 4 depicts an exemplary logical sequence that is followed to assign source ports to destination ports in a switch in accordance with the preferred embodiment of the invention.
- Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer and computer-related companies may refer to a component and sub-components by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to...”. Also, the term “couple” or “couples” is intended to mean either a direct or indirect physical connection. Thus, if a first device couples to a second device, that connection may be through a direct physical connection, or through an indirect physical connection via other devices and connections.
- To the extent that any term is not specially defined in this specification, the intent is that the term is to be given its plain and ordinary meaning.
- In accordance with the preferred embodiment of the invention, an improved load balancing technique is implemented in a network comprising trunks formed from individual communication links. Although cost values are generally inversely proportional to link bandwidth, higher bandwidth trunks preferably are not assigned lower cost values. Instead, the trunks are assigned a cost value that is the cost of the individual links comprising the trunk. Because trunks do not have lower cost values, trunks advantageously are not assigned source ports to the total exclusion of other ports as noted above with regard to conventional techniques. Further, source ports are distributed across the various lowest cost paths in a way that favors the higher bandwidth trunks, but not necessarily to the exclusion of other links. Although numerous embodiments of the invention are possible, the following discussion provides one such suitable embodiment.
- Referring now to FIG. 3, a switch fabric is shown comprising four switches 50, 52, 54 and 56. Switch 50 couples to switch 52 via a trunk 51 and an individual link 60. Similarly, switch 50 couples to switch 56 via a trunk 58 and a link 66. Switches 52 and 56 couple to switch 54 via links 62 and 64 and trunks 63 and 65, as shown. All of the links shown in FIG. 3, including the individual links comprising trunks 51, 58, 63 and 65, are 2 gbps links; each two-link trunk thus has an effective bandwidth of 4 gbps. Switch 50 is shown as having a number of ports P1-P10 and may have additional ports (e.g., 16 total ports) as desired. As shown, devices D1-D6 are assigned to ports P1-P6. Ports P7 and P10 are assigned to trunks 51 and 58, and ports P8 and P9 are assigned to links 60 and 66.
- In accordance with the preferred embodiment of the invention, for load balancing purposes trunks 51 and 58, as well as trunks 63 and 65, are considered to have the same cost as the individual links comprising the trunks. In the example of FIG. 3, all of the individual links have a cost of 500, which is indicated in parentheses adjacent each link. Rather than reducing the cost of the higher bandwidth trunks in proportion to the increase in the trunks' effective bandwidth, the cost of each trunk is considered to be the same as that of the individual links comprising the trunk (i.e., 500). That is, the various paths from switch 50 to switch 54 all have the same cost. Those paths include: (1) switch 50-trunk 51-switch 52-link 62 (or trunk 63)-switch 54; (2) switch 50-link 60-switch 52-link 62 (or trunk 63)-switch 54; (3) switch 50-trunk 58-switch 56-link 64 (or trunk 65)-switch 54; and (4) switch 50-link 66-switch 56-link 64 (or trunk 65)-switch 54. By assigning each trunk the same cost as an individual link, multiple lowest cost paths exist between switches 50 and 54, across which traffic can be balanced.
- Referring still to FIG. 3, each switch 50-56 includes two
processes 57 and 59, which preferably are stored on the switch and executed by a CPU 61. Process 57 comprises a switch interconnect database exchange process. This process propagates connection information to all adjacent switches in accordance with any suitable, known technique. For example, switch 50 propagates its connection information to switches 52 and 56, and switch 52 propagates its connection information to switches 50 and 54. As each switch accumulates connection information into an interconnection database, its database exchange process 57 propagates this database to adjacent switches which, in turn, continue the propagation of the information. Eventually, all switches in the fabric have a complete and identical interconnection database.
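As a rough illustration of how such an exchange converges, consider the flooding model below. The patent leaves the propagation mechanism to "any suitable, known technique," so this dictionary-based sketch is only an assumed minimal model using the FIG. 3 switch numbers:

```python
# Adjacency of the FIG. 3 fabric: switches 50, 52, 54 and 56.
NEIGHBORS = {50: (52, 56), 52: (50, 54), 54: (52, 56), 56: (50, 54)}

# Each switch initially knows only its own connectivity.
databases = {sw: {sw: adj} for sw, adj in NEIGHBORS.items()}

changed = True
while changed:                          # keep flooding until nothing new is learned
    changed = False
    for sw, adj in NEIGHBORS.items():
        for nbr in adj:                 # push this switch's database to each neighbor
            for entry, links in databases[sw].items():
                if entry not in databases[nbr]:
                    databases[nbr][entry] = links
                    changed = True

# Every switch ends up with a complete and identical interconnection database.
assert all(db == databases[50] for db in databases.values())
```

Once the loop terminates, each switch can compute path costs over the whole fabric from its local copy, which is exactly what process 59 relies on.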
Process 59 comprises a load balancing process which uses the interconnection database information to compute the cost of the various paths through the network, determine the lowest cost paths, and balance the loads across multiple lowest cost paths as described below. The following explanation illustrates the preferred embodiment in balancing load between devices D1-D6 and devices D7 and D8 and, more specifically, in balancing loads between switch 50's source ports P1-P6 and the switch's destination ports P7-P10. Reference should be made to FIGS. 3 and 4 for the following discussion. - FIG. 4 lists the four destination ports P7-P10 for
switch 50 along with their associated bandwidths in parentheses. Steps 70-76 depict the sequential assignment of source ports P1-P6 to destination ports P7-P10 in accordance with a preferred embodiment of the invention. Initially, before any assignments are made, none of the bandwidth of the links and trunks assigned to the destination ports (i.e., trunks 51, 58 and links 60, 66) is allocated. This fact is reflected at 70, in which 0% of the bandwidth associated with each of the destination ports is allocated. Ports P7 and P10, which connect to trunks 51 and 58, each have an effective bandwidth of 4 gbps, while ports P8 and P9, which connect to links 60 and 66, each have a bandwidth of 2 gbps. - The source ports P1-P6 can be assigned in any desired order to the destination ports. As discussed below, the source ports are assigned in numerical order in FIG. 4, starting with port P1 and progressing through port P6. In accordance with the preferred embodiment of the invention, source port assignments preferably are made in a manner that keeps the bandwidth allocation of the ports as low as possible and in a way that evenly distributes the loads, or source ports, across the various destination ports.
- Referring to FIGS. 3 and 4, port P1 is connected to device D1 over a 2 gbps link. If the 2 gbps source port P1 were assigned to either of the 2 gbps destination ports P8 or P9, the destination port's bandwidth allocation would increase to 100%, assuming that, in fact, the full 2 gbps bandwidth of the source port was being used. This assumption, that the full rated bandwidth of a port is being used, is made throughout the path assignment technique described herein. In an attempt to keep the bandwidth allocation numbers as low as possible, as noted above, the 2 gbps source port P1 preferably is assigned to one of the 4 gbps destination ports P7 or P10. With a 2 gbps source port assigned to it, a 4 gbps destination port's bandwidth allocation becomes 50%, which of course is lower than the 100% allocation that would have resulted if port P8 or P9 were used. Because there are two 4 gbps destination ports P7 and P10 available for the assignment of source port P1, either destination port can be used. In accordance with the preferred embodiment of the invention, the assignment is made to the lowest numbered port (i.e., port P7). Step 71 reflects this assignment with source port P1 listed in the column beneath destination port P7. The value 50% in parentheses next to source port P1 in step 71 shows that the bandwidth allocation for destination port P7 has risen to 50%. - A similar logic is applied to the assignment of the remaining source ports P2-P6. Source port P2 couples via a 1 gbps link to device D2. Again, examining the various destination ports P7-P10, the 1 gbps source port P2 represents a 25% bandwidth allocation with regard to the 4 gbps destination ports P7 and P10 and a 50% allocation with regard to the 2 gbps destination ports P8 and P9. Because destination port P7 already has 50% of its bandwidth accounted for by virtue of source port P1, assigning source port P2 to destination port P7 would result in an allocation of 75% with regard to port P7. In an attempt to keep the bandwidth allocations low and evenly distributed across the various destination ports, source port P2 preferably is assigned to destination port P10. This assignment results in the allocation of ports P7-P10 being 50%, 0%, 0% and 25%, respectively, as shown by steps 70-72.
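The selection step just described for source port P2 reduces to a small calculation. The helper below is not from the patent; it is an assumed formulation of "resulting allocation" as (already-assigned bandwidth + source bandwidth) divided by the destination port's bandwidth:

```python
def resulting_allocation(assigned_gbps, src_gbps, dest_gbps):
    """Percent of a destination port's bandwidth in use if the source is added."""
    return 100 * (assigned_gbps + src_gbps) / dest_gbps

# State after step 71: P7 carries source port P1 (2 gbps); P8-P10 are empty.
assigned = {"P7": 2, "P8": 0, "P9": 0, "P10": 0}
dest_bw = {"P7": 4, "P8": 2, "P9": 2, "P10": 4}   # gbps

# Candidate allocations for the 1 gbps source port P2:
candidates = {d: resulting_allocation(assigned[d], 1, dest_bw[d]) for d in dest_bw}
assert candidates == {"P7": 75.0, "P8": 50.0, "P9": 50.0, "P10": 25.0}
assert min(candidates, key=candidates.get) == "P10"   # so P2 goes to port P10
```

The same calculation, repeated for each remaining source port against the running totals, drives the rest of the FIG. 4 sequence.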
- Considering now source port P3 which is a 2 gbps port, that port preferably also is assigned to the 4 gbps destination port P10 as shown in step 73. As such, the allocation of port P10 will be 75% which results from the 1 gbps port P2 (25%) and the 2 gbps port P3 (50%). Assignment of source port P3 to destination ports P7, P8 or P9 would result in an allocation of 100% for those ports.
- Source port P4 is a 1 gbps port. Assignment of port P4 to port P7 or P10 would result in allocations of 75% and 100% for those destination ports, while assignment to either of the 2 gbps ports P8 or P9 would result in only a 50% allocation. Being the smaller port number, port P8 is selected to accommodate source port P4 as shown in step 74. - Source port P5 is another 2 gbps port. Assignment of that port to destination ports P8-P10 would result in bandwidth allocations of 150% (an over-allocation condition), 100% and 125% (also an over-allocation condition), respectively. However, assignment to destination port P7, as shown at step 75, causes port P7 to be 100% allocated. Although the assignment of source port P5 to port P7 or P9 would result in the same allocation, port P7 has the higher bandwidth and therefore is selected. That is, if there is a tie between two destination ports (as is the case here), the tie is broken by selecting the destination port that has the highest bandwidth; if multiple tied ports have the same bandwidth, the tie is broken by selecting the destination port with the smaller port number. Finally, as depicted at step 76, source port P6 (which is a 2 gbps port) is assigned to destination port P9, resulting in a 100% allocation of port P9, because any other destination port assignment would result in allocations greater than 100%. - Referring still to FIG. 4, the six source ports P1-P6 have been assigned to the four destination ports as shown. As can be seen, two source ports have been assigned to each of the destination ports P7 and P10 that are coupled to the
higher bandwidth trunks 51 and 58, while only one source port has been assigned to each of the lower bandwidth destination ports P8 and P9, which couple to links 60 and 66. The source ports thus are distributed in a way that favors the higher bandwidth trunks without excluding the other links. - Should the allocation of each of the destination ports reach 100%, adding an additional source port will result in an over-allocation condition. In that case, the additional source port is assigned using the above techniques so as to minimize the over-allocation. Thus, in the preceding example, the next source port would be assigned to destination port P7. This process continues until all of the source ports have been assigned.
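The whole FIG. 4 sequence can be reproduced by a short greedy sketch. This is an interpretation of the rules above, not code from the patent: assign each source port to the destination whose resulting allocation is lowest, breaking ties first by highest destination bandwidth and then by lowest port number. Because it always minimizes the resulting allocation, the same loop also minimizes over-allocation once every destination is full.

```python
DEST_BW = {"P7": 4, "P8": 2, "P9": 2, "P10": 4}   # gbps; P7/P10 are the 4 gbps trunk ports
SRC_BW = {"P1": 2, "P2": 1, "P3": 2, "P4": 1, "P5": 2, "P6": 2}

def assign(sources, dests):
    load = dict.fromkeys(dests, 0)        # gbps already assigned per destination
    result = {}
    for src, bw in sources.items():       # sources taken in numerical order
        best = min(dests, key=lambda d: ((load[d] + bw) / dests[d],  # lowest allocation
                                         -dests[d],                  # then highest bandwidth
                                         int(d[1:])))                # then lowest port number
        load[best] += bw
        result[src] = best
    return result

# Reproduces steps 70-76 of FIG. 4:
assert assign(SRC_BW, DEST_BW) == {"P1": "P7", "P2": "P10", "P3": "P10",
                                   "P4": "P8", "P5": "P7", "P6": "P9"}
```

With all four destinations fully allocated, a further 2 gbps source would tie ports P7 and P10 at 150%, and the bandwidth/port-number tie-break again picks P7, matching the over-allocation behavior described above.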
- It should be understood that the preferred embodiment of the invention is not limited to networks that have trunks. For example, with reference to FIG. 3, if the two trunks 51 and 58 were instead individual 4 gbps links, the techniques described above still could be used to balance the load across the various lowest cost paths. - The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Claims (18)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/208,969 US7383353B2 (en) | 2002-07-31 | 2002-07-31 | Load balancing in a network comprising communication paths having different bandwidths |
Publications (2)
Publication Number | Publication Date |
---|---|
US20040024906A1 true US20040024906A1 (en) | 2004-02-05 |
US7383353B2 US7383353B2 (en) | 2008-06-03 |
Family
ID=31186921
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/208,969 Active 2025-06-03 US7383353B2 (en) | 2002-07-31 | 2002-07-31 | Load balancing in a network comprising communication paths having different bandwidths |
Country Status (1)
Country | Link |
---|---|
US (1) | US7383353B2 (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030063594A1 (en) * | 2001-08-13 | 2003-04-03 | Via Technologies, Inc. | Load balance device and method for packet switching |
US20040071134A1 (en) * | 2002-06-28 | 2004-04-15 | Brocade Communications Systems, Inc. | Apparatus and method for routing traffic in multi-link switch |
US20050041659A1 (en) * | 2001-06-13 | 2005-02-24 | Paul Harry V. | Method and apparatus for rendering a cell-based switch useful for frame based protocols |
US20050047334A1 (en) * | 2001-06-13 | 2005-03-03 | Paul Harry V. | Fibre channel switch |
US20050088969A1 (en) * | 2001-12-19 | 2005-04-28 | Scott Carlsen | Port congestion notification in a switch |
DE102004007977A1 (en) * | 2004-02-18 | 2005-06-02 | Siemens Ag | Communications system has message controller formed by network coupling elements themselves that determines communications path per message between sending, receiving network coupling elements taking into account uniform network loading |
US20050281196A1 (en) * | 2004-06-21 | 2005-12-22 | Tornetta Anthony G | Rule based routing in a switch |
US20050281282A1 (en) * | 2004-06-21 | 2005-12-22 | Gonzalez Henry J | Internal messaging within a switch |
US20060013135A1 (en) * | 2004-06-21 | 2006-01-19 | Schmidt Steven G | Flow control in a switch |
US20060067331A1 (en) * | 2004-09-27 | 2006-03-30 | Kodialam Muralidharan S | Method for routing traffic using traffic weighting factors |
US20060168109A1 (en) * | 2004-11-12 | 2006-07-27 | Brocade Communications Systems, Inc. | Methods, devices and systems with improved zone merge operation by operating on a switch basis |
US7383353B2 (en) * | 2002-07-31 | 2008-06-03 | Brocade Communications Systems, Inc. | Load balancing in a network comprising communication paths having different bandwidths |
US20080205408A1 (en) * | 2007-02-28 | 2008-08-28 | Hewlett-Packard Development Company, L.P. | Transmitting a packet from a distributed trunk switch |
US20080205387A1 (en) * | 2007-02-28 | 2008-08-28 | Hewlett-Packard Development Company, L.P. | Transmitting a packet from a distributed trunk switch |
US7443799B2 (en) | 2003-10-31 | 2008-10-28 | Brocade Communication Systems, Inc. | Load balancing in core-edge configurations |
US20090028559A1 (en) * | 2007-07-26 | 2009-01-29 | At&T Knowledge Ventures, Lp | Method and System for Designing a Network |
US7619974B2 (en) | 2003-10-31 | 2009-11-17 | Brocade Communication Systems, Inc. | Frame traffic balancing across trunk groups |
US20100293277A1 (en) * | 2009-05-12 | 2010-11-18 | Rooks Kelsyn D S | Multi-source broadband aggregation router |
US20100296437A1 (en) * | 2009-05-20 | 2010-11-25 | Stelle William T | Dynamic multi-point access network |
US20140211623A1 (en) * | 2013-01-29 | 2014-07-31 | Hewlett-Packard Development Company, L.P. | Balancing Load Distributions of Loopback Ports |
US20160218970A1 (en) * | 2015-01-26 | 2016-07-28 | International Business Machines Corporation | Method to designate and implement new routing options for high priority data flows |
US20170013053A1 (en) * | 2011-03-31 | 2017-01-12 | Amazon Technologies, Inc. | Random next iteration for data update management |
US20170063666A1 (en) * | 2015-08-27 | 2017-03-02 | Facebook, Inc. | Routing with flow over shared risk link groups |
US10616319B2 (en) * | 2018-02-06 | 2020-04-07 | Vmware, Inc. | Methods and apparatus to allocate temporary protocol ports to control network load balancing |
US20210176162A1 (en) * | 2015-07-29 | 2021-06-10 | At&T Intellectual Property I, L.P. | Methods and apparatus to reflect routes from a remotely located virtual route reflector |
US11102142B2 (en) | 2018-01-24 | 2021-08-24 | Vmware, Inc. | Methods and apparatus to perform dynamic load balancing for a multi-fabric environment in network-based computing |
US11190440B2 (en) | 2018-01-19 | 2021-11-30 | Vmware, Inc. | Methods and apparatus to configure and manage network resources for use in network-based computing |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8891509B2 (en) | 2004-07-06 | 2014-11-18 | Hewlett-Packard Development Company, L.P. | Proxy networking device for a router |
JP4349349B2 (en) * | 2005-08-30 | 2009-10-21 | ソニー株式会社 | Data transmission / reception system, transmission apparatus, reception apparatus, and data transmission / reception method |
US8964746B2 (en) | 2008-02-15 | 2015-02-24 | Hewlett-Packard Development Company, L.P. | Transmitting a packet from a distributed trunk switch |
US9588839B1 (en) * | 2015-07-21 | 2017-03-07 | Qlogic Corporation | Methods and systems for using shared logic at network devices |
US20200076718A1 (en) * | 2018-08-31 | 2020-03-05 | Nokia Solutions And Networks Oy | High bandwidth using multiple physical ports |
US20220393967A1 (en) * | 2021-06-07 | 2022-12-08 | Vmware, Inc. | Load balancing of vpn traffic over multiple uplinks |
US11863514B2 (en) | 2022-01-14 | 2024-01-02 | Vmware, Inc. | Performance improvement of IPsec traffic using SA-groups and mixed-mode SAs |
US11956213B2 (en) | 2022-05-18 | 2024-04-09 | VMware LLC | Using firewall policies to map data messages to secure tunnels |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6262974B1 (en) * | 1996-07-23 | 2001-07-17 | International Business Machines Corporation | Method and system for non disruptively assigning link bandwidth to a user in a high speed digital network |
US6363077B1 (en) * | 1998-02-13 | 2002-03-26 | Broadcom Corporation | Load balancing in link aggregation and trunking |
US20020075540A1 (en) * | 2000-12-19 | 2002-06-20 | Munter Ernst A. | Modular high capacity network |
US6690671B1 (en) * | 1998-08-17 | 2004-02-10 | Marconi Communications, Inc. | Load balanced UBR routing in ATM networks |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7383353B2 (en) * | 2002-07-31 | 2008-06-03 | Brocade Communications Systems, Inc. | Load balancing in a network comprising communication paths having different bandwidths |
Cited By (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7042842B2 (en) | 2001-06-13 | 2006-05-09 | Computer Network Technology Corporation | Fiber channel switch |
US20050041659A1 (en) * | 2001-06-13 | 2005-02-24 | Paul Harry V. | Method and apparatus for rendering a cell-based switch useful for frame based protocols |
US20050047334A1 (en) * | 2001-06-13 | 2005-03-03 | Paul Harry V. | Fibre channel switch |
US20030063594A1 (en) * | 2001-08-13 | 2003-04-03 | Via Technologies, Inc. | Load balance device and method for packet switching |
US20100265821A1 (en) * | 2001-12-19 | 2010-10-21 | Mcdata Services Corporation | Deferred Queuing in a Buffered Switch |
US20050088970A1 (en) * | 2001-12-19 | 2005-04-28 | Schmidt Steven G. | Deferred queuing in a buffered switch |
US8379658B2 (en) | 2001-12-19 | 2013-02-19 | Brocade Communications Systems, Inc. | Deferred queuing in a buffered switch |
US20050088969A1 (en) * | 2001-12-19 | 2005-04-28 | Scott Carlsen | Port congestion notification in a switch |
US7773622B2 (en) | 2001-12-19 | 2010-08-10 | Mcdata Services Corporation | Deferred queuing in a buffered switch |
US8798043B2 (en) | 2002-06-28 | 2014-08-05 | Brocade Communications Systems, Inc. | Apparatus and method for routing traffic in multi-link switch |
US20040071134A1 (en) * | 2002-06-28 | 2004-04-15 | Brocade Communications Systems, Inc. | Apparatus and method for routing traffic in multi-link switch |
US7383353B2 (en) * | 2002-07-31 | 2008-06-03 | Brocade Communications Systems, Inc. | Load balancing in a network comprising communication paths having different bandwidths |
US7948895B2 (en) | 2003-10-31 | 2011-05-24 | Brocade Communications Systems, Inc. | Frame traffic balancing across trunk groups |
US7619974B2 (en) | 2003-10-31 | 2009-11-17 | Brocade Communication Systems, Inc. | Frame traffic balancing across trunk groups |
US20100091780A1 (en) * | 2003-10-31 | 2010-04-15 | Brocade Communication Systems, Inc. | Frame traffic balancing across trunk groups |
US7443799B2 (en) | 2003-10-31 | 2008-10-28 | Brocade Communication Systems, Inc. | Load balancing in core-edge configurations |
DE102004007977A1 (en) * | 2004-02-18 | 2005-06-02 | Siemens Ag | Communications system has message controller formed by network coupling elements themselves that determines communications path per message between sending, receiving network coupling elements taking into account uniform network loading |
US20050281282A1 (en) * | 2004-06-21 | 2005-12-22 | Gonzalez Henry J | Internal messaging within a switch |
US20060013135A1 (en) * | 2004-06-21 | 2006-01-19 | Schmidt Steven G | Flow control in a switch |
US7623519B2 (en) | 2004-06-21 | 2009-11-24 | Brocade Communication Systems, Inc. | Rule based routing in a switch |
US20050281196A1 (en) * | 2004-06-21 | 2005-12-22 | Tornetta Anthony G | Rule based routing in a switch |
US9160649B2 (en) * | 2004-09-27 | 2015-10-13 | Alcatel Lucent | Method for routing traffic using traffic weighting factors |
US20060067331A1 (en) * | 2004-09-27 | 2006-03-30 | Kodialam Muralidharan S | Method for routing traffic using traffic weighting factors |
US8700799B2 (en) | 2004-11-12 | 2014-04-15 | Brocade Communications Systems, Inc. | Methods, devices and systems with improved zone merge operation by operating on a switch basis |
US20060168109A1 (en) * | 2004-11-12 | 2006-07-27 | Brocade Communications Systems, Inc. | Methods, devices and systems with improved zone merge operation by operating on a switch basis |
US9344356B2 (en) | 2007-02-28 | 2016-05-17 | Hewlett Packard Enterprise Development Lp | Transmitting a packet from a distributed trunk switch |
US8213430B2 (en) * | 2007-02-28 | 2012-07-03 | Hewlett-Packard Development Company, L.P. | Transmitting a packet from a distributed trunk switch |
US20120250691A1 (en) * | 2007-02-28 | 2012-10-04 | Shaun Wakumoto | Transmitting a packet from a distributed trunk switch |
US8385344B2 (en) * | 2007-02-28 | 2013-02-26 | Hewlett-Packard Development Company, L.P. | Transmitting a packet from a distributed trunk switch |
US20080205408A1 (en) * | 2007-02-28 | 2008-08-28 | Hewlett-Packard Development Company, L.P. | Transmitting a packet from a distributed trunk switch |
US20080205387A1 (en) * | 2007-02-28 | 2008-08-28 | Hewlett-Packard Development Company, L.P. | Transmitting a packet from a distributed trunk switch |
US20090028559A1 (en) * | 2007-07-26 | 2009-01-29 | At&T Knowledge Ventures, Lp | Method and System for Designing a Network |
US20100293277A1 (en) * | 2009-05-12 | 2010-11-18 | Rooks Kelsyn D S | Multi-source broadband aggregation router |
US8719414B2 (en) * | 2009-05-12 | 2014-05-06 | Centurylink Intellectual Property Llc | Multi-source broadband aggregation router |
US8982798B2 (en) | 2009-05-20 | 2015-03-17 | Centurylink Intellectual Property Llc | Dynamic multi-point access network |
US8665783B2 (en) | 2009-05-20 | 2014-03-04 | Centurylink Intellectual Property Llc | Dynamic multi-point access network |
US20100296437A1 (en) * | 2009-05-20 | 2010-11-25 | Stelle William T | Dynamic multi-point access network |
US10148744B2 (en) * | 2011-03-31 | 2018-12-04 | Amazon Technologies, Inc. | Random next iteration for data update management |
US20170013053A1 (en) * | 2011-03-31 | 2017-01-12 | Amazon Technologies, Inc. | Random next iteration for data update management |
US9007924B2 (en) * | 2013-01-29 | 2015-04-14 | Hewlett-Packard Development Company, L.P. | Balancing load distributions of loopback ports |
US20140211623A1 (en) * | 2013-01-29 | 2014-07-31 | Hewlett-Packard Development Company, L.P. | Balancing Load Distributions of Loopback Ports |
US20160218970A1 (en) * | 2015-01-26 | 2016-07-28 | International Business Machines Corporation | Method to designate and implement new routing options for high priority data flows |
US10084859B2 (en) * | 2015-01-26 | 2018-09-25 | International Business Machines Corporation | Method to designate and implement new routing options for high priority data flows |
US20210176162A1 (en) * | 2015-07-29 | 2021-06-10 | At&T Intellectual Property I, L.P. | Methods and apparatus to reflect routes from a remotely located virtual route reflector |
US20170063666A1 (en) * | 2015-08-27 | 2017-03-02 | Facebook, Inc. | Routing with flow over shared risk link groups |
US10003522B2 (en) * | 2015-08-27 | 2018-06-19 | Facebook, Inc. | Routing with flow over shared risk link groups |
US11190440B2 (en) | 2018-01-19 | 2021-11-30 | Vmware, Inc. | Methods and apparatus to configure and manage network resources for use in network-based computing |
US11895016B2 (en) | 2018-01-19 | 2024-02-06 | VMware LLC | Methods and apparatus to configure and manage network resources for use in network-based computing |
US11102142B2 (en) | 2018-01-24 | 2021-08-24 | Vmware, Inc. | Methods and apparatus to perform dynamic load balancing for a multi-fabric environment in network-based computing |
US10616319B2 (en) * | 2018-02-06 | 2020-04-07 | Vmware, Inc. | Methods and apparatus to allocate temporary protocol ports to control network load balancing |
Also Published As
Publication number | Publication date |
---|---|
US7383353B2 (en) | 2008-06-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BROCADE COMMUNICATIONS SYSTEMS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VALDEVIT, EZIO;ABRAHAM, VINEET;REEL/FRAME:013450/0778;SIGNING DATES FROM 20021015 TO 20021016 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A. AS ADMINISTRATIVE AGENT, CAL Free format text: SECURITY AGREEMENT;ASSIGNORS:BROCADE COMMUNICATIONS SYSTEMS, INC.;FOUNDRY NETWORKS, INC.;INRANGE TECHNOLOGIES CORPORATION;AND OTHERS;REEL/FRAME:022012/0204 Effective date: 20081218 Owner name: BANK OF AMERICA, N.A. AS ADMINISTRATIVE AGENT,CALI Free format text: SECURITY AGREEMENT;ASSIGNORS:BROCADE COMMUNICATIONS SYSTEMS, INC.;FOUNDRY NETWORKS, INC.;INRANGE TECHNOLOGIES CORPORATION;AND OTHERS;REEL/FRAME:022012/0204 Effective date: 20081218 |
|
AS | Assignment |
Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATE Free format text: SECURITY AGREEMENT;ASSIGNORS:BROCADE COMMUNICATIONS SYSTEMS, INC.;FOUNDRY NETWORKS, LLC;INRANGE TECHNOLOGIES CORPORATION;AND OTHERS;REEL/FRAME:023814/0587 Effective date: 20100120 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: FOUNDRY NETWORKS, LLC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:034792/0540 Effective date: 20140114 Owner name: BROCADE COMMUNICATIONS SYSTEMS, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:034792/0540 Effective date: 20140114 Owner name: INRANGE TECHNOLOGIES CORPORATION, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:034792/0540 Effective date: 20140114 |
|
AS | Assignment |
Owner name: BROCADE COMMUNICATIONS SYSTEMS, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT;REEL/FRAME:034804/0793 Effective date: 20150114 Owner name: FOUNDRY NETWORKS, LLC, CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATERAL AGENT;REEL/FRAME:034804/0793 Effective date: 20150114 |
|
REMI | Maintenance fee reminder mailed | ||
FPAY | Fee payment |
Year of fee payment: 8 |
|
SULP | Surcharge for late payment |
Year of fee payment: 7 |
|
AS | Assignment |
Owner name: BROCADE COMMUNICATIONS SYSTEMS LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:BROCADE COMMUNICATIONS SYSTEMS, INC.;REEL/FRAME:044891/0536 Effective date: 20171128 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED, SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROCADE COMMUNICATIONS SYSTEMS LLC;REEL/FRAME:047270/0247 Effective date: 20180905 Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROCADE COMMUNICATIONS SYSTEMS LLC;REEL/FRAME:047270/0247 Effective date: 20180905 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |