|Publication number||US20070025253 A1|
|Publication type||Application|
|Application number||US 11/208,690|
|Publication date||Feb. 1, 2007|
|Filing date||Aug. 22, 2005|
|Priority date||Aug. 1, 2005|
|Also published as||US7872965|
|Inventors||Mark Enstone, Michael McGee, Darda Chang, Christopher Hughes|
|Original assignee||Enstone Mark R, Mcgee Michael S, Darda Chang, Hughes Christopher L|
|Patent citations (6), Cited by (31), Classifications (10), Legal events (3)|
This application claims the benefit of U.S. Provisional Application No. 60/704677, filed Aug. 1, 2005.
Computers and other devices are commonly interconnected to facilitate communication among one another using any one of a number of available standard network architectures and any one of several corresponding and compatible network protocols. Packet switched network protocols are commonly employed with a number of architectures such as the Ethernet® standard. One of the most basic and widely implemented network types is the local area network (LAN). In its simplest form, a LAN is a number of devices (e.g. computers, printers and other specialized peripherals) connected to one another over a common broadcast domain using some form of signal transmission medium such as coaxial cable. Multiple LANs may be coupled together as two or more sub-networks of a more complex network via routers or equivalent devices, each of the LANs having a distinct broadcast domain.
Computers and other devices employ network resources as a requisite interface with which to communicate over a network such as a LAN. These network resources are sometimes referred to as network adapters or network interface cards (NICs). An adapter or NIC typically has at least one port through which a physical link may be provided between the processing resources of its network device and the transmission medium of a network. Data generated for transmission by the processing resources of one network device is first formatted (as packets in the case of packet switched networks) in accordance with its resident protocol layer (a software process typically executing in conjunction with the device's OS (operating system)). These packets are then framed and transmitted through the device's network resources, over the transmission media to the network resources of a second network device similarly coupled to the network. The data received by an adapter port of the second device is passed to and then deformatted by the protocol layer resident in the O/S of the second network device. The deformatted data is presented to the processing resources of the second device. The adapters or NICs are commercially available and are designed to support one or more variations of standard network architectures and known topologies, including Ethernet as described above.
In an Ethernet environment, each network device and its links to the network are identified by the other devices on the network using a protocol address (e.g. Internet Protocol (IP)) and a media access control (MAC) address in accordance with layer 3 and layer 2 of the OSI networking model respectively. The protocol address is associated with a virtual interface established by software between a device's adapter hardware and the protocol layer executed by its OS. The MAC address is uniquely associated with the adapter hardware itself and is typically hard-programmed into each device at the time of manufacture. Provision is often made such that this pre-assigned MAC address can be overwritten through software command during initialization of the device. Devices coupled to a common broadcast domain of an Ethernet network identify each other by the MAC address. Devices coupled to disparate broadcast domains communicate using their IP addresses over a device such as a router that couples the two domains.
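This layer-2/layer-3 distinction can be illustrated with a short sketch: a sender delivers directly to a peer's MAC address when the destination shares its broadcast domain, and otherwise hands the frame to the router. All addresses and the ARP table contents below are invented for the example.

```python
from ipaddress import IPv4Address, IPv4Interface

def next_hop_mac(src_if: str, dst_ip: str, arp_table: dict, router_mac: str) -> str:
    """Layer-2 destination for an outgoing packet: the target's own MAC if it
    shares the sender's broadcast domain, else the MAC of the router."""
    if IPv4Address(dst_ip) in IPv4Interface(src_if).network:
        return arp_table[dst_ip]        # same subnet: resolved via ARP
    return router_mac                   # different subnet: hand to the router

arp = {"10.0.0.7": "00:11:22:33:44:55"}
print(next_hop_mac("10.0.0.5/24", "10.0.0.7", arp, "00:aa:bb:cc:dd:ee"))
print(next_hop_mac("10.0.0.5/24", "10.1.0.9", arp, "00:aa:bb:cc:dd:ee"))
```

The first call resolves to the peer's own MAC; the second, addressed off-subnet, resolves to the router's MAC.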
Thus, a network device such as a server can be directly coupled to one or more physical networks or sub-networks through one or more distinct adapter ports coupled to each of the one or more networks or sub-networks. Each adapter port and its associated protocol interface are identified by a unique MAC address and IP address respectively. In the alternative, a single adapter port may be coupled to a special switch that can be programmed to provide connections to devices belonging to one or more logical sub-networks called virtual LANs (VLANs). The VLANs are essentially superimposed or overlaid on the same physical network to create multiple logical networks sharing the same physical broadcast domain. A virtual interface to the device's protocol layer is created for each of the VLANs and thus each VLAN virtual interface is assigned its own protocol address. The single adapter port, however, is still known to the devices comprising the various VLANs by a single MAC address.
To improve the reliability of a network, redundant links have been established with the same network through multiple adapter ports in the event that one of the links fails. Redundant links can also provide an opportunity to increase throughput of the connection through aggregation of the throughput through the redundant links. Redundant links to the same network can be established through multiple adapter ports coupled to a network switch for example. This is sometimes referred to as multi-homing. While providing some of the benefits of redundant links, implementation of multi-homing to achieve redundancy is difficult for reasons known to those of skill in the art.
Redundant links can also be accomplished by teaming two or more adapter ports together to appear as a single virtual link. Adapter teams are typically made up of two or more adapter ports logically coupled in parallel using a teaming driver. The teaming driver is a software routine executed by the OS that presents a common virtual interface to its protocol layer for the entire team of resources rather than individual interfaces for each adapter port as previously discussed. A single protocol address is assigned to this common virtual interface. Also, a single team MAC address is assigned to the team from the set of MAC addresses assigned to each of the adapter ports of the team. Thus, other devices on the network see the team of adapter ports as a single virtual adapter port.
The throughput of the individual port members of the team can be aggregated for data transmitted from and received by the network device employing the team, depending upon the nature of the team configured. Throughput aggregation is commonly optimized using one of a number of known load-balancing algorithms, executed by the teaming driver, to distribute frames between the teamed NIC ports. The use of aggregated teamed adapter ports also inherently provides fault tolerance because the failure of one of the aggregated links does not eliminate the entire link. The aggregation of network interface resources through teaming is particularly beneficial in applications such as servers, as the demand for increased throughput and reliability of a network connection to a server is typically high.
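The load-balancing policy itself is left open here; one common family of algorithms hashes each frame's destination so that a given conversation consistently uses the same member port, which preserves frame ordering per peer. The following is a minimal sketch only (port names and the hash are illustrative, not the teaming driver's actual algorithm):

```python
def pick_port(team_ports, dst_mac: str) -> str:
    """Toy transmit load-balancing policy: hash the destination MAC so each
    conversation consistently uses one member port, skipping failed members."""
    healthy = [p for p in team_ports if p["up"]]
    if not healthy:
        raise RuntimeError("all team members failed")
    idx = sum(int(octet, 16) for octet in dst_mac.split(":")) % len(healthy)
    return healthy[idx]["name"]

team = [{"name": "nic1", "up": True},
        {"name": "nic2", "up": True},
        {"name": "nic3", "up": False}]          # failed member is skipped
print(pick_port(team, "00:00:00:00:00:01"))     # → nic2
print(pick_port(team, "00:00:00:00:00:02"))     # → nic1
```

Because the failed member drops out of the healthy list, the same routine also exhibits the inherent fault tolerance described above: traffic redistributes over the remaining links rather than failing outright.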
Teams of network resources can be of various types providing different benefits. Network fault tolerant (NFT) teams commonly employ two or more network adapter or NIC ports redundantly coupled to the same network through a switch. One port is configured to be “active” and is designated as the “primary” adapter port. Each of the remaining members of the team is placed in a “standby” or “inactive” mode and is designated as a “secondary” member of the team. The primary adapter port is assigned a team MAC address from the set of MAC addresses associated with each of the team members. The secondary members are each assigned one of the remaining MAC addresses of the set. A NIC port in standby mode remains largely idle (it is typically only active to the limited extent necessary to respond to system test inquiries to indicate that it is still operational) until activated in a failover process. Failure detection and failover processes are typically executed by the teaming driver. Failover replaces the failed primary adapter port with one of the secondary team members, rendering the failed adapter port idle and secondary while activating one of the secondary adapters and designating it as the new primary for the team. In this way, interruption of a network connection to a critical server may be avoided notwithstanding the existence of a failed network adapter card or port.
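The NFT mode transitions described above can be modeled in a few lines. This is a sketch of the described behavior, not actual teaming-driver code; names, MAC values, and the failover trigger are placeholders.

```python
class NFTTeam:
    """Toy model of a network fault tolerant (NFT) team: one 'active' primary
    holds the team MAC address; secondaries idle in 'standby' until failover."""
    def __init__(self, ports):
        self.ports = ports                      # [{'name': ..., 'mac': ...}, ...]
        self.team_mac = ports[0]["mac"]         # team MAC taken from the members' set
        ports[0]["mode"] = "active"
        for p in ports[1:]:
            p["mode"] = "standby"

    def primary(self):
        return next(p for p in self.ports if p["mode"] == "active")

    def failover(self):
        """Swap MACs so the new primary answers to the team MAC address."""
        old = self.primary()
        new = next((p for p in self.ports if p["mode"] == "standby"), None)
        if new is None:
            raise RuntimeError("no standby member to fail over to")
        old["mac"], new["mac"] = new["mac"], old["mac"]
        old["mode"], new["mode"] = "failed", "active"

team = NFTTeam([{"name": "p1", "mac": "A"}, {"name": "p2", "mac": "B"}])
team.failover()
print(team.primary()["name"], team.primary()["mac"])   # p2 now carries team MAC "A"
```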
Transmit load-balanced (TLB) teams typically aggregate and load-balance data transmitted from two or more active members of the team to other devices over the network in accordance with some load-balancing policy executed by the teaming driver. Several types of load-balancing algorithms may be employed with the teaming driver typically executing the algorithm. As with the NFT teams described above, only one of the active team members is designated as the primary for the team. Because the primary is the only member of the team that has been assigned the team MAC address, and this single MAC address is the one by which all devices on the network communicate with the team, it necessarily handles all of the data received by the team from the network. As a result, no aggregation of the receive traffic is available. TLB teams are particularly useful in applications where the transmit traffic is significantly greater than the traffic received by the team. One such application is a database server that provides data to a large number of clients in response to a relatively smaller amount of request traffic generated by those clients.
Switch-assisted load-balanced (SLB) teams are able to aggregate both transmit and receive data over all active team members. This is accomplished through a special switch interposed between the team and the network that has the intelligence to create a single virtual port for all of the physical ports coupling the team adapters and the switch. In this case, no adapter is designated as the primary and each team adapter is assigned the same team MAC address. The switch recognizes all packets it receives containing the team MAC address as being destined for the virtual port. The switch routes each such packet to one of the port members of the virtual port based on a load-balancing algorithm executed by the switch. The transmit data is typically load-balanced by the teaming driver in the manner used for TLB teams. SLB teams also provide fault tolerance by default, as team members that cease to function as a result of a fault will be inactivated and only the aggregated throughput of the team will be reduced as a result.
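The receive side of such a port-trunking switch can be sketched as below, assuming a simple flow hash over the source IP; real trunking algorithms (e.g. EtherChannel's) differ in detail, and all names here are illustrative.

```python
import zlib

def slb_switch_port(dst_mac: str, src_ip: str, trunk: list, team_mac: str):
    """Sketch of a port-trunking switch's receive path: frames addressed to
    the team MAC are spread across the trunk's physical ports by a flow hash
    (same flow -> same port); other MACs use normal forwarding (not modeled)."""
    if dst_mac != team_mac:
        return None                                   # ordinary unicast forwarding
    return trunk[zlib.crc32(src_ip.encode()) % len(trunk)]

trunk = ["port1", "port2"]
chosen = slb_switch_port("MAC_E", "10.0.0.7", trunk, "MAC_E")
print(chosen in trunk)                                # True: one trunk member chosen
```

Using `zlib.crc32` rather than Python's built-in `hash` keeps the choice deterministic across runs, mirroring the requirement that a flow stay pinned to one physical port.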
Certain network configurations are designed to achieve redundancy of connections between a system and the network using multiple coupling devices such as switches. Switch redundant configurations coupled to a server employing redundant links using a TLB or NFT team can result in members of the adapter team being coupled to the network through a different one of the redundant switches (and thus through separate paths of the network). To ensure that all team members are coupled to the same broadcast domain (i.e. same layer-2 network or subnet), these switch-redundant configurations require that all of the redundant devices (and therefore the team members) ultimately be interconnected in some way—either directly or by way of uplinks to a common third device (e.g. a backbone or core switch).
For a detailed description of embodiments of the invention, reference will now be made to the accompanying drawings in which:
Certain terms are used throughout the following description and in the claims to refer to particular features, apparatus, procedures, processes and actions resulting therefrom. For example, the term network resources is used to generally denote network interface hardware such as network interface cards (NICs) and other forms of network adapters known to those of skill in the art. Moreover, the term NIC or network adapter may refer to one piece of hardware having one port or several ports. While effort will be made to differentiate between NICs and NIC ports, reference to a plurality of NICs may be intended as a plurality of interface cards or as a single interface card having a plurality of NIC ports. Those skilled in the art may refer to an apparatus, procedure, process, result or a feature thereof by different names. This document does not intend to distinguish between components, procedures or results that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . .”
The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted as, or otherwise be used for limiting the scope of the disclosure, including the claims, unless otherwise expressly specified herein. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any particular embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment. For example, while the various embodiments may employ one type of network architecture and/or topology, those of skill in the art will recognize that the invention(s) disclosed herein may be readily applied to all other compatible network architectures and topologies as known to those of skill in the art.
Heretofore, load-balancing of data received by a team of network resources has employed a switch that implements one of a number of port-trunking algorithms that were originally developed for load balancing traffic transmitted between switches. These switches treat their ports as a single virtual trunk by routing received data to any one of their ports in accordance with the load-balancing algorithm. This SLB team of resources is therefore treated by the switch as if the team is coupled to it over a single virtual port trunk. Data received by the switch and destined for the team can be distributed to any member of the team by way of any of the output ports making up the trunk to which the members are coupled. One of the limitations of this technique is that all team members must be coupled to the same SLB (i.e. port-trunking capable) switch and thus, the same virtual port trunk.
Because splitting the resources of an SLB team between different switches is not permitted using conventional port-trunking techniques, users have been forced to choose between the benefits of network redundancy and settling for a TLB or NFT team, or they have had to forego switch redundancy to achieve receive traffic aggregation and load-balancing. Embodiments of the invention as described below permit users to realize the benefits of redundant connections to a network (e.g. eliminating single points of failure), as well as to achieve greater receive throughput through receive aggregation and load-balancing of resources coupled to each of the redundant connections.
The CPU 104 can be any one of several types of microprocessors and can include supporting external circuitry typically used in industry standard servers, computers and peripherals. The types of microprocessors may include the 80486, Pentium®, Pentium II®, etc., all microprocessors from Intel Corp., or other similar types of microprocessors such as the K6® microprocessor by Advanced Micro Devices. Pentium® is a registered trademark of Intel Corporation and K6® is a registered trademark of Advanced Micro Devices, Inc. Those of skill in the art will recognize that processors other than Intel compatible processors can also be employed. The external circuitry can include one or more external caches (e.g. a level two (L2) cache or the like (not shown)). The memory system 106 may include a memory controller or the like and may be implemented with one or more memory boards (not shown) plugged into compatible memory slots on the motherboard, although any memory configuration is contemplated. The CPU 104 may also be a plurality of such processors operating in parallel.
Other components, devices and circuitry may also be included in the computer system 100 that are not particularly relevant to embodiments of the present invention and are therefore not shown for purposes of simplicity. Such other components, devices and circuitry are typically coupled to the motherboard and bus system 102. The other components, devices and circuitry may include an integrated system peripheral (ISP), an interrupt controller such as an advanced programmable interrupt controller (APIC) or the like, bus arbiter(s), one or more system ROMs (read only memory) comprising one or more ROM modules, a keyboard controller, a real time clock (RTC) and timers, communication ports, non-volatile static random access memory (NVSRAM), a direct memory access (DMA) system, diagnostics ports, command/status registers, battery-backed CMOS memory, etc.
The computer system 100 may further include one or more output devices, such as speakers 109 coupled to the motherboard and bus system 102 via an appropriate sound card 108, and monitor or display 112 coupled to the motherboard and bus system 102 via an appropriate video card 110. One or more input devices may also be provided such as a mouse 114 and keyboard 116, each coupled to the motherboard and bus system 102 via appropriate controllers (not shown) as is known to those skilled in the art. Other input and output devices may also be included, such as one or more disk drives including floppy and hard disk drives, one or more CD-ROMs, as well as other types of input devices including a microphone, joystick, pointing device, etc. The input and output devices enable interaction with a user of the computer system 100 for purposes of configuration, as further described below. It will be appreciated that different combinations of such input/output and peripheral devices may be used in various combinations and forms depending upon the nature of the computer system.
The motherboard and bus system 102 is typically implemented with one or more expansion slots 120, individually labeled S1, S2, S3, S4 and so on, where each of the slots 120 is operable to receive compatible adapter or controller cards configured for the particular slot and bus type. Typical devices configured as adapter cards include network interface cards (NICs), disk controllers such as a SCSI (Small Computer System Interface) disk controller, video controllers, sound cards, etc. The computer system 100 may include one or more of several different types of buses and slots known to those of skill in the art, such as PCI, ISA, EISA, MCA, etc. In an embodiment illustrated in
As described more fully below, each of the NICs 122 enables the computer system to communicate through at least one port with other devices on a network to which the NIC ports are coupled. The computer system 100 may be coupled to at least as many networks as there are NICs (or NIC ports) 122. When multiple NICs or NIC ports 122 are coupled to the same network as a team, each provides a separate and redundant link to that same network for purposes of load balancing and/or fault tolerance. Additionally, two or more of the NICs (or NIC ports) 122 may be split between distinct paths or segments of a network that ultimately connect to a core switch.
A more detailed discussion regarding a teaming mechanism that may be used to implement an embodiment of the invention is now presented with reference to
The computer system 100 of
An embodiment of configuration application 303 provides a graphical user interface (GUI) through which users may program configuration information regarding the initial teaming of the NICs. Additionally, the configuration application 303 receives current configuration information from the teaming driver 310 that can be displayed to the user using the first GUI on display 112, including the status of the resources for its team (e.g. “failed,” “standby” and/or “active”). Techniques for graphically displaying teaming configurations and resource status are disclosed in detail in U.S. Pat. No. 6,229,538 entitled “Port-Centric Graphic Representations of Network Controllers,” which is incorporated herein in its entirety by this reference. Application 303 provides commands by which the resources can be allocated to teams and reconfigured. A user can interact with the configuration program 303 through the GUIs via one or more input devices, such as the mouse 114 and the keyboard 116 and one or more output devices, such as the display 112. It will be appreciated that the GUI can be used remotely to access configuration program 303, such as over a local network or the Internet for example.
A hierarchy of layers within the O/S 301, each performing a distinct function and passing information between one another, enables communication with an operating system of another network device over the network. For example, four such layers have been added to Windows 2000: the Miniport I/F Layer 312, the Protocol I/F Layer 314, the Intermediate Driver Layer 310 and the Network Driver Interface Specification (NDIS) (not shown). The Protocol I/F Layer 314 is responsible for protocol addresses and for translating protocol addresses to MAC addresses. It also provides an interface between the protocol stacks 302, 304 and 306 and the NDIS layer. The drivers for controlling each of the network adapter or NIC ports reside at the Miniport I/F Layer 312 and are typically written and provided by the vendor of the network adapter hardware. The NDIS layer is provided by Microsoft, along with its O/S, to handle communications between the Miniport Driver Layer 312 and the Protocol I/F Layer 314.
To accomplish teaming of a plurality of network adapters, an instance of an intermediate driver residing at the Intermediate Driver Layer 310 is interposed between the Miniport Driver Layer 312 and the NDIS. The Intermediate Driver Layer 310 is not really a driver per se because it does not actually control any hardware. Rather, the intermediate driver causes the miniport drivers for each of the NIC ports to be teamed to function seamlessly as one virtual driver 320 that interfaces with the NDIS layer. For each team of NIC adapter ports, there will be a separate instance of the intermediate driver at the Intermediate Driver Layer 310, each instance being used to tie together those NIC drivers that correspond to the NIC ports belonging to that team. Each instance of a teaming driver presents a single virtual interface to each instance of a protocol (302, 304 and or 306) being executed by the O/S 301. That virtual interface is assigned one IP address. If the server is configured with VLANs (e.g. VLANs A 504 and B 506), virtual interfaces for each VLAN are presented to the protocol layer, with each VLAN having been assigned its own unique protocol address.
The intermediate driver 310 also presents a single protocol interface to each of the NIC drivers D1-D4 and the corresponding NIC ports 402, 404, 406 and 408 of NICs N1 460, N2 462, N3 464, and N4 466. Because each instance of the intermediate driver 310 can be used to combine two or more NIC drivers into a team, a user may configure multiple teams of any combination of the ports of those NICs currently installed on the computer system 100. By binding together two or more drivers corresponding to two or more ports of physical NICs, data can be transmitted and received through one of the two or more ports (in the case of an NFT team), or transmitted through all of the two or more ports and received through one (in the case of a TLB team), with the protocol stacks interacting with what appears to be only one logical device.
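The binding of several miniport drivers beneath one virtual interface can be sketched as follows. This is a toy model, not NDIS code; round-robin transmission stands in for whatever load-balancing policy the real teaming driver applies.

```python
class MiniportDriver:
    """Stand-in for a vendor NIC driver controlling one physical port."""
    def __init__(self, name: str):
        self.name = name
        self.sent = []

    def send(self, frame: str):
        self.sent.append(frame)

class TeamingDriver:
    """Toy intermediate driver: binds several miniport drivers together and
    presents a single virtual interface (one IP, one team MAC) to the protocol
    stack, distributing transmits round-robin across the bound ports."""
    def __init__(self, drivers, team_mac: str):
        self.drivers = drivers
        self.team_mac = team_mac
        self._next = 0

    def send(self, frame: str):                 # the protocol stack sees one device
        driver = self.drivers[self._next % len(self.drivers)]
        self._next += 1
        driver.send(frame)

d1, d2 = MiniportDriver("nic1"), MiniportDriver("nic2")
virtual = TeamingDriver([d1, d2], team_mac="00:AA:BB:CC:DD:EE")
for i in range(4):
    virtual.send(f"frame{i}")
print(len(d1.sent), len(d2.sent))               # 2 2 — frames split across members
```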
As previously discussed, a fault tolerant team is typically employed where the throughput of a single NIC port is sufficient but fault tolerance is important. As an example, the NIC ports 402, 404, 406 and 408, providing redundant links L1 through L4 to a network, can be configured as a network fault tolerance (NFT) team. For an NFT team, one of the NIC ports (e.g. port 402 of N1 460) is initially assigned as the primary and NIC N1 460 is placed in the “active” mode. This assignment can be accomplished by default (e.g. the teaming driver 310 simply chooses the team member located in the lowest numbered slot as the primary member and assigns it the team MAC address) or manually through the GUI and configuration application 303. For the NFT team, ports 404, 406 and 408 are designated as “secondary” and their respective NICs N2 462, N3 464 and N4 466 are placed in a “standby” mode.
The primary team member transmits and receives all packets on behalf of the team. If the active link (i.e. L1) fails or is disabled for any reason, the computer system 100 (the teaming driver 310 specifically) can detect this failure and switch to one of the secondary team members by rendering it the active (and primary) member of the team while placing the failed member into a failed mode until it is repaired. This process is sometimes referred to as “failover” and involves reassigning the team MAC address to the NIC port that is to be the new primary. Communication between computer system 100 and devices in a network to which the team is coupled is thereby maintained without any significant interruption. Those of skill in the art will recognize that an NFT team can have any number of redundant links, and that one link of the team will be active while all of the others are in standby.
The network resources NICs N1 460, N2 462, N3 464, and N4 466 of
As can be seen from
It should be noted that the example of
As previously discussed, switch-assisted load balancing (SLB) teams can provide not only load balancing of transmitted data, but also load-balancing of data received by the team. To implement this team type, a switch that is operative to perform port-trunking can be employed to load-balance the data received by the switch for the team. There are numerous port trunking algorithms known to those of skill in the art, including Cisco's EtherChannel and Hewlett-Packard's ProCurve for example.
Previously, an SLB team was limited to the non-redundant topology of
In an embodiment, system 100 receives ARP requests broadcast by the Clients A 452, B 454, C 456 and D 458 that specify the team IP address for the system 100 (in the example of
In either case, the data received over each of the two switches is receive load-balanced between the NICs of the group to which it is coupled. Moreover, the teaming driver 310 can load-balance the connections established among the groups in accordance with some predetermined algorithm. The teaming driver 310 is able to decide in real time whether or not to intercept ARP responses destined for the Clients A 452, B 454, C 456 and D 458 and to direct communication to one of the groups not assigned the team MAC address. Additionally, the teaming driver 310 is still able to transmit load-balance data transmitted from the system 100 to the network and Clients A 452, B 454, C 456 and D 458 over the members of each group that can be used for standard TLB teams.
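The ARP-intercept steering described above can be sketched as follows, with `MAC_E` standing for the team MAC of the primary group and `MAC_F` for a second group's unique MAC, matching the lettering used in this example. Round-robin stands in for whatever predetermined load-balancing algorithm the teaming driver applies.

```python
class ArpIntercept:
    """Sketch of the ARP-intercept idea: the teaming driver rewrites the MAC
    address carried in outgoing ARP replies so each client is steered to one
    switch-attached group. Group MAC names are illustrative placeholders."""
    def __init__(self, group_macs):
        self.group_macs = group_macs          # [team MAC, group-2 MAC, ...]
        self.assigned = {}                    # client -> group MAC, sticky
        self._rr = 0

    def arp_reply_mac(self, client_ip: str) -> str:
        if client_ip not in self.assigned:    # steer new clients round-robin
            self.assigned[client_ip] = self.group_macs[self._rr % len(self.group_macs)]
            self._rr += 1
        return self.assigned[client_ip]       # sticky: same client, same group

lb = ArpIntercept(["MAC_E", "MAC_F"])
print([lb.arp_reply_mac(ip) for ip in ("c1", "c2", "c3", "c1")])
# ['MAC_E', 'MAC_F', 'MAC_E', 'MAC_E'] — alternating groups, sticky per client
```

Keeping each client pinned to one group MAC until its ARP entry expires mirrors the connection-level balancing described here: receive traffic from a given client always arrives at one group, where the port-trunking switch then load-balances it across that group's members.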
It will be appreciated by those of skill in the art that when the system generates an ARP request, it broadcasts its team MAC address as part of the process. As a result of this broadcast operation, the Clients A 452, B 454, C 456 and D 458 will update their ARP tables accordingly to reflect only the team MAC address=E. This is true even if the client was previously communicating with a group assigned to a non-team MAC address. The ARP entries in an ARP table, however, expire within a predetermined time and upon expiration, a client must re-ARP for a MAC address to maintain a connection to the system 100. This process is designed into most networks to ensure that the connections are refreshed periodically and the time to expiration is typically programmable. As a result, once an ARP broadcast by the system 100 has caused all connections to be established such that they communicate over the group to which the team MAC address has been assigned, the teaming driver is able to eventually balance them back out over the groups as the ARP table entries expire and are renewed. It will be appreciated that the time for this re-balancing to occur can be minimized by programming a minimum entry expiration time.
Those of skill in the art will appreciate that the ARP intercept technique of the present invention permits two or more distinct SLB teams to be created within a single team of NIC resources where this was not possible before. Although only two switches and thus two groups have been illustrated in
In an embodiment, a user can manually configure the team into groups and identify which of the teams should be assigned which MAC addresses through a user interface (e.g. graphical user interface (GUI)) and configuration program (303,
Should one of the NICs fail in one of the groups of FIGS. 6A-B, the team continues to function as described for a standard SLB team except for the loss of receive throughput within the group (of course, the overall team will lose some transmit throughput as well). The teaming driver 310 can compensate for the loss of receive throughput in a particular group by biasing its processing of ARP responses to route more receive data through a group having greater aggregated receive throughput. This is also true should changing network conditions favor a group coupled to a more optimal path to the core network. The teaming driver can be programmed to manually or automatically assign more NICs to the group coupled to the optimal path. An optimal path detection technique is disclosed in U.S. application Ser. No. 11/048,520 entitled “Automated Selection of an Optimal Path between a Core Switch and Teamed Network Resources of a Computer System,” which is incorporated herein in its entirety by this reference.
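The biasing described here amounts to weighting each group by its healthy aggregate throughput when steering new connections. A deterministic sketch follows; link speeds and the group layout are invented for the example, and the weighted round-robin is only one possible policy.

```python
def pick_group(groups, seq: int) -> str:
    """Choose a group for the next connection in proportion to its healthy
    aggregate throughput (deterministic weighted round-robin sketch)."""
    weights = [sum(nic["mbps"] for nic in g["nics"] if nic["up"]) for g in groups]
    total = sum(weights)
    point = seq % total
    for g, w in zip(groups, weights):
        if point < w:
            return g["name"]
        point -= w

groups = [
    # g1 has lost one of its two 1-Gb NICs; g2 is intact
    {"name": "g1", "nics": [{"mbps": 1000, "up": True}, {"mbps": 1000, "up": False}]},
    {"name": "g2", "nics": [{"mbps": 1000, "up": True}, {"mbps": 1000, "up": True}]},
]
picks = [pick_group(groups, s) for s in range(3000)]
print(picks.count("g1"), picks.count("g2"))   # 1000 2000 — intact group favored 2:1
```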
Techniques for detecting and recovering from split segment conditions such as that illustrated in
It should be noted that the monitoring processes mentioned above which look for particular frames to be received by particular NICs of the team do not work for the conventional SLB topology of
Embodiments of the invention enable network users to combine the benefits of receive load-balancing while achieving the benefits of redundant network topologies. Through a system's teaming configuration program interface, users can assign network resources (manually or automatically) of a system such as a server to two or more groups. Each group includes at least one of the resources and is coupled to a different one of multiple network devices (e.g. switch) to provide redundant links between the system and the network. The groups of one or more resources are configured as distinct SLB teams, although groups of one resource do not require their switch be port-trunking enabled.
A primary group is assigned the team MAC address, and the remaining groups are each assigned their own unique MAC addresses. Each resource in the group is programmed to transmit and receive using its group MAC address. Switches to which groups having two or more resources are coupled are programmed for port trunking. The teaming driver intercepts none, some or all of the ARP responses generated by the system's protocol stack and inserts the MAC addresses of other groups in accordance with a predetermined load-balancing algorithm. In this way, each group becomes an independent SLB team within the team as a whole, and receive load-balancing can be implemented for each of the redundant switches coupling the team as a whole to the network.
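These assignment rules can be summarized in a toy partitioning routine. The single-letter MACs are illustrative placeholders in the style of the example above, and the grouping scheme shown is only one possible configuration.

```python
def configure_groups(nic_macs, group_sizes):
    """Partition a team's NIC MACs into switch-attached groups: the first
    (primary) group takes the team MAC; every other group takes its first
    member's MAC as its own unique group MAC. Every NIC in a group is
    programmed with the group MAC, and multi-NIC groups require their
    switch to be port-trunking capable."""
    groups, i = [], 0
    for size in group_sizes:
        members = nic_macs[i:i + size]
        i += size
        groups.append({
            "mac": members[0],            # group MAC (team MAC for first group)
            "members": members,           # NICs all transmit/receive on group MAC
            "port_trunked": size > 1,     # single-NIC groups need no trunking
        })
    return groups

groups = configure_groups(["E", "F", "G", "H"], [2, 2])
print([g["mac"] for g in groups])         # ['E', 'G'] — team MAC plus one group MAC
```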
It should be noted that while FIGS. 3, 4A-B, and 5A-B illustrate topologies configurable by previous incarnations of the teaming driver 310 and configuration program 303, the teaming driver 310 and configuration program 303 embodying features of the present invention are, for purposes of illustration, considered incorporated within the embodiments illustrated by those FIGS., as they remain capable of configuring those topologies as well as the topologies illustrated in FIGS. 6A-B and 7-8.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US6229538 *||Sep 11, 1998||May 8, 2001||Compaq Computer Corporation||Port-centric graphic representations of network controllers|
|US6590861 *||Mar 18, 1999||Jul 8, 2003||3Com Corporation||Combining virtual local area networks and load balancing with fault tolerance in a high performance protocol|
|US7145866 *||Jun 1, 2001||Dec 5, 2006||Emc Corporation||Virtual network devices|
|US20020018489 *||Jun 11, 2001||Feb 14, 2002||Broadcom Corporation||Gigabit switch supporting improved layer 3 switching|
|US20030140124 *||Aug 27, 2002||Jul 24, 2003||Alacritech, Inc.||TCP offload device that load balances and fails-over between aggregated ports having different MAC addresses|
|US20070002826 *||Jun 29, 2005||Jan 4, 2007||Bennett Matthew J||System implementing shared interface for network link aggregation and system management|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7693044||Dec 15, 2005||Apr 6, 2010||Nvidia Corporation||Single logical network interface for advanced load balancing and fail-over functionality|
|US7743129 *||May 1, 2006||Jun 22, 2010||International Business Machines Corporation||Methods and arrangements to detect a failure in a communication network|
|US7756012||May 18, 2007||Jul 13, 2010||Nvidia Corporation||Intelligent failover in a load-balanced network environment|
|US7760619||May 18, 2007||Jul 20, 2010||Nvidia Corporation||Intelligent failover in a load-balanced networking environment|
|US7765290 *||May 30, 2008||Jul 27, 2010||International Business Machines Corporation||Methods and arrangements to detect a failure in a communication network|
|US7792018 *||May 18, 2007||Sep 7, 2010||Nvidia Corporation||Intelligent load balancing and failover of network traffic|
|US7840706 *||Nov 30, 2007||Nov 23, 2010||Nvidia Corporation||Wake-on-LAN design in a load balanced environment|
|US7870417 *||Apr 20, 2007||Jan 11, 2011||International Business Machines Corporation||Apparatus, system, and method for adapter card failover|
|US7907506 *||Feb 11, 2009||Mar 15, 2011||Huawei Technologies Co., Ltd.||Method, system and device for xDSL crosstalk cancellation|
|US7921327||Jun 18, 2008||Apr 5, 2011||Dell Products L.P.||System and method for recovery from uncorrectable bus errors in a teamed NIC configuration|
|US7995465||May 18, 2007||Aug 9, 2011||Nvidia Corporation||Intelligent load balancing and failover of network traffic|
|US8134928||Dec 15, 2005||Mar 13, 2012||Nvidia Corporation||Technique for identifying a failed network interface card within a team of network interface cards|
|US8169894 *||Sep 16, 2010||May 1, 2012||Microsoft Corporation||Fault-tolerant communications in routed networks|
|US8195736 *||Aug 2, 2007||Jun 5, 2012||Opnet Technologies, Inc.||Mapping virtual internet protocol addresses|
|US8300647||May 18, 2007||Oct 30, 2012||Nvidia Corporation||Intelligent load balancing and failover of network traffic|
|US8321617 *||May 18, 2011||Nov 27, 2012||Hitachi, Ltd.||Method and apparatus of server I/O migration management|
|US8369208 *||Sep 16, 2010||Feb 5, 2013||Microsoft Corporation||Fault-tolerant communications in routed networks|
|US8392751||Apr 1, 2011||Mar 5, 2013||Dell Products L.P.||System and method for recovery from uncorrectable bus errors in a teamed NIC configuration|
|US8432788||May 18, 2007||Apr 30, 2013||Nvidia Corporation||Intelligent failback in a load-balanced networking environment|
|US8549124 *||Jan 20, 2010||Oct 1, 2013||International Business Machines Corporation||Network management discovery tool|
|US8572288 *||Dec 15, 2005||Oct 29, 2013||Nvidia Corporation||Single logical network interface for advanced load balancing and fail-over functionality|
|US8958325||Jul 2, 2012||Feb 17, 2015||Microsoft Corporation||Fault-tolerant communications in routed networks|
|US8972547 *||Oct 18, 2007||Mar 3, 2015||International Business Machines Corporation||Method and apparatus for dynamically configuring virtual internet protocol addresses|
|US9009304||Jun 4, 2012||Apr 14, 2015||Riverbed Technology, Inc.||Mapping virtual internet protocol addresses|
|US9037750 *||Oct 18, 2007||May 19, 2015||Qualcomm Incorporated||Methods and apparatus for data exchange in peer to peer communications|
|US20080040573 *||Aug 2, 2007||Feb 14, 2008||Malloy Patrick J||Mapping virtual internet protocol addresses|
|US20110004783 *||Jan 6, 2011||Microsoft Corporation||Fault-tolerant communications in routed networks|
|US20120272092 *||Jul 2, 2012||Oct 25, 2012||Microsoft Corporation||Fault-tolerant communications in routed networks|
|US20120297091 *||Nov 22, 2012||Hitachi, Ltd.||Method and apparatus of server i/o migration management|
|US20140344478 *||May 15, 2013||Nov 20, 2014||Dell Products L.P.||Network interface connection teaming system|
|CN102469152A *||Nov 17, 2010||May 23, 2012||英业达股份有限公司||Network load adjusting method|
|U.S. Classification||370/235, 370/389|
|International Classification||H04J1/16, H04L12/56|
|Cooperative Classification||H04L47/125, H04L47/10, H04L47/122|
|European Classification||H04L47/12A, H04L47/10, H04L47/12B|
|Aug 22, 2005||AS||Assignment|
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ENSTONE, MARK R;MCGEE, MICHAEL SEAN;CHANG, DARDA;AND OTHERS;REEL/FRAME:016915/0240
Effective date: 20050817
|Apr 19, 2011||CC||Certificate of correction|
|Jun 25, 2014||FPAY||Fee payment|
Year of fee payment: 4