US20130262604A1 - Method and system for matching and repairing network configuration

Method and system for matching and repairing network configuration

Info

Publication number
US20130262604A1
US20130262604A1 (U.S. application Ser. No. 13/897,918)
Authority
US
United States
Prior art keywords
network element
network
configuration
communication
reconfiguration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/897,918
Inventor
Uri Elzur
Hemal Shah
Patricia Thaler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broadcom Corp
Priority to US13/897,918
Assigned to BROADCOM CORPORATION (assignors: THALER, PATRICIA; ELZUR, URI; SHAH, HEMAL)
Publication of US20130262604A1
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT (patent security agreement; assignor: BROADCOM CORPORATION)
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. (assignor: BROADCOM CORPORATION)
Assigned to BROADCOM CORPORATION (termination and release of security interest in patents; assignor: BANK OF AMERICA, N.A., AS COLLATERAL AGENT)
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L41/082 Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L41/0816 Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability

Definitions

  • Certain embodiments of the invention relate to networking. More specifically, certain embodiments of the invention relate to a method and system for network configuration.
  • IT management may require performing remote management operations of remote systems to perform inventory, monitoring, control, and/or to determine whether remote systems are up-to-date.
  • For example, management devices and/or consoles may perform such operations as discovering and/or navigating management resources in a network, manipulating and/or administrating management resources, requesting and/or controlling subscribing and/or unsubscribing operations, and executing specific management methods and/or procedures.
  • Management devices and/or consoles may communicate with devices in a network to ensure availability of remote systems, to monitor/control remote systems, to validate that systems may be up-to-date, and/or to perform any security patch updates that may be necessary.
  • As networks become increasingly large and complex, network management also becomes increasingly complex.
  • A system and/or method is provided for matching and repairing network configuration, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
  • FIG. 1 is a diagram illustrating an exemplary network in which multiple devices and/or portions of devices are configured via a single logical point of management (LPM), in accordance with an embodiment of the invention.
  • FIG. 2A is a diagram illustrating exemplary devices managed via a logical point of management (LPM), in accordance with an embodiment of the invention.
  • FIG. 2B is a diagram illustrating exemplary devices managed via a logical point of management (LPM) integrated into a top-of-rack switch, in accordance with an embodiment of the invention.
  • FIG. 3 illustrates configuration of devices along a network path, in accordance with an embodiment of the invention.
  • FIG. 4 is a flow chart illustrating exemplary steps for network configuration, in accordance with an embodiment of the invention.
  • In various embodiments of the invention, one or more circuits and/or processors may be operable to determine a configuration of one or more parameters in a plurality of devices along a network path, and detect whether any of the one or more parameters is configured such that communication between the plurality of devices is disabled and/or suboptimal.
  • Parameters that are configured such that they are compatible with each other may be said to be “matched,” whereas parameters which may be configured such that they are incompatible with each other may be said to be “mismatched.”
  • In instances that one or more parameters are incompatibly or sub-optimally configured, the one or more circuits and/or processors may communicate a notification of the incompatibility to a network management entity and/or generate one or more messages to reconfigure the one or more parameters in one or more of the plurality of devices.
  • Reconfiguring one or more parameters such that they are matched and/or optimally configured may be said to be “repairing” network configuration.
  • The determining, detecting, matching, and repairing may be performed automatically in response to a change in one or more of the parameters in one or more of the plurality of devices.
  • The determining, detecting, matching, and repairing may be performed automatically in response to a reconfiguration of the network path.
  • The notification may comprise a recommended configuration of the one or more parameters in one or more of the plurality of devices.
  • The one or more circuits and/or processors may enable access to one or more application programming interfaces of one or more of the plurality of devices.
  • The one or more circuits and/or processors may be operable to translate between the one or more application programming interfaces and one or more network interfaces.
  • The one or more circuits and/or processors may be operable to manage configuration of the one or more parameters in a first portion of the plurality of devices.
  • The one or more circuits and/or processors may be operable to communicate with one or more other circuits and/or processors that manage configuration of the one or more parameters in a second portion of the plurality of devices.
  • The one or more circuits and/or processors may be operable to automatically perform the determining and/or detecting in response to a communication from the one or more other circuits and/or processors. Incompatibilities between parameters in the first portion of the plurality of devices and parameters in the second portion of the plurality of network devices may be reconciled via communications between the one or more circuits and/or processors and the one or more other circuits and/or processors.
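  • By way of illustration only (this sketch is not part of the patent text), the “matching” described above can be pictured as comparing corresponding parameters across every device along a path; the device names and parameter names below are assumptions chosen for the example:

```python
# Illustrative sketch: find "mismatched" parameters along a network path.
def find_mismatches(path_configs):
    """path_configs: list of (device_name, {parameter: value}) pairs,
    ordered along the path. Returns parameters whose values are not
    identical across all devices, with the per-device values."""
    mismatches = {}
    all_params = set().union(*(cfg.keys() for _, cfg in path_configs))
    for param in sorted(all_params):
        values = {name: cfg.get(param) for name, cfg in path_configs}
        if len(set(values.values())) > 1:
            mismatches[param] = values
    return mismatches

path = [
    ("server-102-11", {"vlan_id": 10, "pause": True}),
    ("switch-104-1",  {"vlan_id": 10, "pause": False}),
    ("server-102-1M", {"vlan_id": 10, "pause": True}),
]
print(find_mismatches(path))  # {'pause': {...}} -> candidate for "repair"
```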
  • FIG. 1 is a diagram illustrating an exemplary network in which multiple devices and/or portions of devices are configured via a single logical point of management (LPM), in accordance with an embodiment of the invention.
  • Referring to FIG. 1 , there is shown a network 101 comprising subnetwork 120 , a management console 150 , logical points of management (LPMs) 108 1 - 108 M , and racks or cabinets 100 1 - 100 M , where ‘M’ is an integer greater than or equal to 1.
  • The network 101 may comprise a plurality of link layer technologies such as Ethernet, Fibre Channel, and InfiniBand.
  • In an exemplary embodiment of the invention, the network 101 may utilize one or more data center bridging (DCB) techniques and/or protocols such as Congestion Notification (CN), Priority Flow Control (PFC), and Enhanced Transmission Selection (ETS).
  • The racks 100 1 - 100 M may comprise rack mount networking systems that may house, for example, computing devices such as servers and computers, network devices such as switches, and/or other devices such as power supplies.
  • The term “computing” device is utilized herein to describe a device, such as a server, which is conventionally managed by a first management entity and/or utilizing a first set of management protocols and/or tools.
  • Similarly, the term “network” device is utilized herein to describe a device, such as an Ethernet switch, which is conventionally managed by a second management entity and/or utilizing a second set of management protocols and/or tools.
  • In practice, however, the line between a “network” device and a “computing” device is becoming increasingly blurry.
  • For example, a virtual switch residing in a server may, depending on the circumstances, be referred to by either or both terms.
  • Similarly, other devices, such as the UPS 110 X or displays (not shown), may comprise various functionalities which may result in them being considered as computing devices and/or network devices.
  • Furthermore, aspects of the invention may enable simplifying and/or unifying management of various devices, particularly in instances when multiple management entities need, or desire, to manage various devices and/or portions thereof.
  • Each rack 100 m may comprise ‘N’ servers 102 m1 - 102 mN , where ‘N’ is an integer greater than or equal to 1 and ‘m’ is an integer between 1 and ‘M.’
  • The servers 102 m1 - 102 mN of rack 100 m may each comprise suitable logic, circuitry, interfaces, and/or code that may be operable to provide services to clients such as PCs, mobile devices, and other servers.
  • Each of the servers 102 m1 - 102 mN may be operable to, for example, run one or more applications that process input from the clients and/or output information to the clients.
  • Each of the servers 102 m1 - 102 mN may interface with the network via a corresponding one of the network interface circuits (NICs) 106 m1 - 106 mN .
  • The NICs 106 m1 - 106 mN of the rack 100 m may each comprise suitable logic, circuitry, interfaces, and/or code that may be operable to interface the corresponding servers 102 m1 - 102 mN to the switch 104 m .
  • Each rack 100 m may also comprise a switch 104 m .
  • Each of the switches 104 1 - 104 m may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to forward packets between corresponding NICs 106 1 - 106 N , other ones of the switches 104 1 - 104 m , and other networks and/or storage area networks, such as the subnetwork 120 .
  • Each logical point of management (LPM) 108 m may comprise suitable logic, circuitry, interfaces, and/or code that may enable managing configuration of one or more devices with which it is associated.
  • Each LPM 108 m may comprise, for example, one or more processors, memory devices, and bus controllers.
  • Each LPM 108 m may comprise, for example, dedicated, shared, and/or general purpose hardware and/or may comprise firmware and/or software executed by dedicated, shared, and/or general purpose hardware.
  • Each LPM 108 m , or portions thereof, may reside in various places.
  • For example, portions of each LPM 108 m may reside in the switch 104 m , in one or more of the servers 102 m1 - 102 mN , in one or more of the NICs 106 m1 - 106 mN , on the rack itself, or a combination thereof.
  • In various embodiments of the invention, each LPM 108 m may be similar to or the same as a central management unit (CMU) described in U.S. patent application Ser. No. 12/619,221, which is incorporated herein by reference in its entirety.
  • In the exemplary embodiment depicted in FIG. 1 , each rack 100 m may comprise a LPM 108 m , such that all the devices of each rack 100 m may be associated with, and managed via, a corresponding LPM 108 m .
  • As a result, the rack servers 102 m1 - 102 mN , the NICs 106 m1 - 106 mN , the switch 104 m , and the UPS 110 m may be managed via the LPM 108 m .
  • Additionally, devices external to the rack 100 m , such as devices (not shown) residing in the subnetwork 120 , may be associated with, and thus managed via, a LPM 108 m .
  • Each LPM 108 m may enable management of associated devices by, for example, exposing one or more application programming interfaces (APIs) of the associated devices to one or more network management entities such as the console 150 .
  • In this regard, each LPM 108 m may be operable to translate between one or more APIs and one or more network interfaces.
  • For instance, each LPM 108 m may be operable to receive a command or request over a network interface and may de-packetize, convert, reformat, and/or otherwise process the command or request to generate a corresponding command or request on one or more APIs to one or more devices.
  • In the exemplary embodiment depicted in FIG. 1 , each LPM 108 m may expose an internal API of the rack 100 m to the one or more management consoles 150 .
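  • As a rough, hypothetical sketch of the translation role described above (the JSON message format and the DeviceApi class are assumptions, not interfaces defined by the patent), an LPM-like process might de-packetize a management command received over a network interface and re-issue it as a call on a device-local API:

```python
import json

class DeviceApi:
    """Stand-in for a managed device's local programming interface."""
    def __init__(self):
        self.params = {"vlan_id": 1}
    def get_parameter(self, name):
        return self.params[name]
    def set_parameter(self, name, value):
        self.params[name] = value

def handle_network_command(payload: bytes, api: DeviceApi) -> bytes:
    request = json.loads(payload)            # "de-packetize" the command
    if request["op"] == "get":
        result = api.get_parameter(request["param"])
    elif request["op"] == "set":
        api.set_parameter(request["param"], request["value"])
        result = "ok"
    else:
        result = "unsupported"
    return json.dumps({"result": result}).encode()  # packetize the reply

print(handle_network_command(b'{"op": "get", "param": "vlan_id"}', DeviceApi()))
```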
  • In operation, each LPM 108 m may collect configuration information from its associated devices, make the information available among the associated devices, and make the information available to the management console 150 . Similarly, each LPM 108 m may receive configuration information from the management console 150 and/or other LPMs 108 m and distribute or make available the received configuration information among the devices associated with the LPM 108 m . Such exchanged configuration information may be utilized to ensure that devices are compatibly and/or optimally configured to enable and/or optimize communications in the network 101 . In this regard, parameters in various devices of the network 101 may be configured based on information communicated to, from, and/or between the LPMs 108 1 - 108 M .
  • For example, one or more parameters in a first device may be compatible with one or more corresponding parameters in a second device when the parameters in the first device have the same values as the corresponding parameters in the second device.
  • As another example, one or more parameters in a first device may be compatible with one or more corresponding parameters in a second device when the parameters in the first device are configured to have values associated with transmission and the corresponding parameters in the second device are configured to have values associated with reception.
  • Whether parameters are compatible may be characterized by, for example, whether messages that utilize or depend on the parameters can be successfully communicated.
  • Similarly, whether parameters are optimal may be characterized by, for example, a data rate and/or error rate that may be achieved in communications that utilize or depend on those parameters.
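  • The two notions of compatibility described above (equal values, or complementary values such as transmit/receive) might be captured, purely for illustration, by a predicate such as the following; the parameter names are assumptions:

```python
# Illustrative compatibility rules for two kinds of parameters.
EQUALITY_PARAMS = {"vlan_id", "data_rate"}          # must be equal
COMPLEMENTARY_PAIRS = {("tx", "rx"), ("rx", "tx")}  # must be complementary

def compatible(param, value_a, value_b):
    if param in EQUALITY_PARAMS:
        return value_a == value_b
    if param == "direction":
        return (value_a, value_b) in COMPLEMENTARY_PAIRS
    return True  # unknown parameters: no constraint assumed here

assert compatible("vlan_id", 10, 10)
assert not compatible("vlan_id", 10, 20)
assert compatible("direction", "tx", "rx")   # transmitter <-> receiver
```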
  • An exchange of configuration information to validate a configuration of, and/or reconfigure, one or more devices may occur upon, for example, one or more of: a change in network topology (e.g., addition, removal, power up, or power down of a device), a change in one or more parameters, a request from a management console 150 , and reception of configuration information at an LPM 108 m from another one of the LPMs 108 1 - 108 M or from a management entity.
  • In this regard, various events may automatically trigger a collection of configuration information, an exchange of configuration information, and/or a reconfiguration of one or more devices.
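  • A minimal, hypothetical sketch of such event-driven triggering (the event names and the validate() callback are assumptions) might look like:

```python
# Trigger events drawn from the list above; the names are illustrative.
TRIGGER_EVENTS = {
    "device_added", "device_removed", "device_power_up", "device_power_down",
    "parameter_changed", "console_request", "peer_lpm_update",
}

def on_event(event, validate):
    """Re-validate configuration when (and only when) a trigger arrives."""
    if event in TRIGGER_EVENTS:
        validate()
        return True
    return False

on_event("parameter_changed", validate=lambda: print("re-validating path..."))
```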
  • The exchanged information may ensure that link partners and devices along a network path are compatibly configured to enable and/or optimize communication along the respective link(s) and path(s).
  • For example, one or more parameters in the server 102 11 and/or NIC 106 11 may need to match, or be compatible with, a corresponding one or more parameters in the switch 104 1 to enable and/or optimize communication between the switch 104 1 and the server 102 11 . That is, if parameters in the server 102 11 and/or NIC 106 11 and the switch 104 1 are incompatibly configured, then communication over the corresponding link 112 11 may fail or be sub-optimal.
  • In this example, the server 102 11 , the NIC 106 11 , and the switch 104 1 may all be associated with a single LPM 108 1 ; however, multiple LPMs may interact to achieve the same results.
  • To elaborate, the LPM 108 1 and the LPM 108 M may exchange information with each other and/or with one or more management consoles 150 to ensure link partners and/or devices along a network path are compatibly and/or optimally configured.
  • For example, the server 102 11 may need or desire to communicate with the server 102 1M .
  • As a result, one or more parameters in the server 102 11 or NIC 106 11 , one or more parameters in the server 102 1M or NIC 106 1M , and one or more corresponding parameters in any switches, routers, or other intermediary devices between the server 102 11 and the server 102 1M may need to be compatibly configured to enable and/or optimize communication along the path.
  • Accordingly, the LPM 108 1 , the LPM 108 M , and any LPM(s) associated with any intermediary device(s) may exchange configuration information, with each other and/or with one or more management consoles, to ensure compatible and/or optimal configuration along the network path between the server 102 11 and the server 102 1M .
  • In instances that such an exchange of configuration information results in a detection that one or more parameters are incompatibly configured, one or more of the LPMs 108 1 - 108 M may generate a notification to another one or more of the LPMs 108 1 - 108 M and/or to one or more management entities, such as the console 150 .
  • The notification may comprise, for example, the current configuration, possible compatible configurations, and/or a recommended configuration.
  • In this manner, the current configuration and/or a recommended configuration may be presented to, for example, management software and/or to a network administrator who may view the information and make management decisions based on the notification.
  • In instances that there are multiple management entities, the management entities may negotiate with each other to determine how to configure the devices.
  • The management entities may then reconfigure one or more parameters to eliminate the incompatibility.
  • The configuration by the management entities may be performed utilizing standard management protocols and/or tools and/or via the LPMs 108 1 - 108 M .
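  • Purely as an illustration of what such a notification might carry, a JSON-shaped message could bundle the current, possible, and recommended configurations; the field names and encoding are assumptions, not a format defined by the patent:

```python
import json

notification = {
    "type": "config_mismatch",
    "link": "switch-104-1 <-> nic-106-11",
    "parameter": "pause",
    "current": {"switch-104-1": False, "nic-106-11": True},
    "possible": [{"both": True}, {"both": False}],
    "recommended": {"both": True},
}
print(json.dumps(notification, indent=2))  # e.g., sent to console 150
```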
  • Alternatively and/or additionally, the corresponding ones of the LPMs 108 1 - 108 M may automatically reconfigure the parameters into a best-possible configuration.
  • In this regard, one LPM 108 may act as a master while others may act as slaves. Which device is master and which is slave may be determined on a per-parameter, per-link, per-connection, per-network, per-function, and/or any other basis.
  • Configuration compatibility may be determined per link and/or end-to-end for various parameters.
  • Exemplary parameters may comprise: virtual local area networking (VLAN) identifiers and/or other VLAN parameters; teaming parameters; MAC addresses; parameters regarding data rates supported and/or utilized; parameters regarding whether communications are to be simplex, duplex, and/or unidirectional; an indication of whether or not IEEE 802.3 PAUSE is to be enabled; energy efficient networking (EEN) or energy efficient Ethernet (EEE) parameters, and parameters regarding supported and/or expected formatting of packets.
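  • For illustration only, the exemplary parameters above might be modeled as a simple record; the field names, types, and values are assumptions rather than anything prescribed by the patent:

```python
from dataclasses import dataclass

@dataclass
class LinkParameters:
    vlan_ids: tuple        # VLAN identifiers
    mac_address: str
    data_rate_mbps: int    # supported/utilized data rate
    duplex: str            # "simplex", "duplex", or "unidirectional"
    pause_enabled: bool    # IEEE 802.3 PAUSE on/off
    eee_enabled: bool      # energy efficient Ethernet
    frame_format: str      # supported/expected packet formatting

nic_params = LinkParameters((10, 20), "00:10:18:aa:bb:cc", 10000,
                            "duplex", True, False, "ethernet-II")
print(nic_params)
```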
  • In this manner, aspects of the invention may enable ensuring that support of lossless behavior is compatibly configured among link partners and end-to-end.
  • For example, IEEE 802.3 PAUSE may be enabled on the link between the NIC 106 XY and the switch 104 X .
  • In the exemplary embodiment depicted in FIG. 1 , the network 101 comprises an LPM 108 for each of the racks 100 1 - 100 M .
  • However, the network 101 and the racks 100 1 - 100 M are shown for illustration purposes only; the invention is not limited with regard to network topology, the particular devices within a network, the particular devices within a rack, and/or the particular devices managed by an associated LPM 108 .
  • FIG. 2A is a diagram illustrating exemplary devices managed via a logical point of management (LPM), in accordance with an embodiment of the invention.
  • Referring to FIG. 2A , there is shown a server rack 200 comprising a variety of components including a LPM 224 , a top-of-rack (TOR) switch 202 , blade switches 204 1 - 204 K , a UPS 110 , and servers 206 1 - 206 P , where K and P are integers greater than or equal to 1.
  • Each of the servers 206 1 - 206 P may comprise a variety of components such as a network interface circuit (NIC) 208 , a baseboard management controller (BMC) 210 , storage 212 , memory 213 , a processor 214 , a hypervisor 216 , a virtual switch (vSwitch) 222 , and virtual machines and/or operating systems (VMs/OSs) 218 1 - 218 Q , where Q is an integer greater than or equal to 1.
  • The LPM 224 may be substantially similar to the LPMs 108 m described with respect to FIG. 1 .
  • The LPM 224 may be implemented via any combination of dedicated and/or shared logic, circuitry, interfaces, and/or code that resides anywhere on or within the rack 200 .
  • In the exemplary embodiment depicted in FIG. 2A , the LPM 224 is implemented in dedicated logic, circuitry, interfaces, and/or code residing on the rack 200 separately from the servers 206 , the blade switches 204 , and the TOR switch 202 .
  • The NIC 208 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to transmit and receive data in adherence with one or more networking standards.
  • For example, the NIC 208 may implement physical layer functions, data link layer functions, and, in some instances, functions associated with OSI layer 3 and higher OSI layers.
  • Similarly, the NIC 208 may be operable to implement network interface layer functions, Internet layer functions, and, in some instances, transport layer functions and application layer functions.
  • The NIC 208 may, for example, communicate in adherence with one or more Ethernet standards defined in IEEE 802.3.
  • The NIC 208 may be enabled to utilize virtualization such that it may present itself as multiple network adapters.
  • The BMC 210 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to monitor various conditions on the corresponding server 206 , and control various functions and/or hardware components of the corresponding server 206 based on the monitored conditions. For example, the BMC 210 may receive data from one or more sensors and determine that the server 206 needs to be reset based on the sensor data. As another example, the BMC 210 may monitor temperature of the server 206 and adjust fan speed accordingly.
  • The BMC 210 may also comprise suitable logic, circuitry, interfaces, and/or code that may be operable to communicate with a management entity via an internal data bus and/or a network link. For example, a management entity may request a status report from the BMC 210 , and, in response, the BMC 210 may gather information from various sensors and communicate the information to the management entity.
  • The storage 212 may comprise, for example, a hard drive or solid state memory.
  • The storage 212 may store, for example, data that may be read, written, and/or executed locally or remotely via the NIC 208 .
  • The processor 214 and the memory 213 may comprise suitable logic, circuitry, interfaces and/or code that may enable processing data and/or controlling operations of the server 206 .
  • The memory 213 may comprise, for example, SRAM, DRAM, and/or non-volatile memory that stores data and/or instructions.
  • The processor 214 , utilizing the memory 213 , may be operable to execute code to effectuate operation of the server 206 .
  • For example, the processor 214 may execute code stored in the memory 213 to realize the hypervisor 216 and the VMs/OSs 218 .
  • A configuration of any one or more of the TOR switch 202 , the blade switches 204 1 - 204 K , and any one or more components of any one or more of the servers 206 1 - 206 P may be managed via the LPM 224 .
  • In this regard, the LPM 224 may be operable to determine a configuration of one or more parameters in any one or more components of the rack 200 and may be operable to detect whether any such parameters are incompatibly and/or sub-optimally configured. For instance, if one or more parameters in the blade switch 204 1 is configured incompatibly with one or more parameters in the NIC 208 of the server 206 1 , then traffic to and/or from the server 206 1 may be impossible or sub-optimal.
  • Similarly, if a remote client (not shown) requests a datastream from the VM/OS 218 1 of the server 206 1 but one or more parameters in the vSwitch 222 of the server 206 1 is incompatibly or sub-optimally configured with respect to one or more corresponding parameters in the NIC 208 of the server 206 1 , the blade switch 204 1 , and/or the remote client, then the datastream may not be communicated or may be communicated sub-optimally.
  • Accordingly, messages may be exchanged between the LPM 224 and an LPM associated with the remote client to automatically reconfigure the parameter(s) and/or notify one or more management entities, based on network and/or device policies or settings.
  • Reconciliation of one or more incompatible parameters may depend, for example, on an order of precedence, or hierarchy, of LPMs and/or management entities. For example, in resolving conflicts in parameter configuration, the decisions of certain management entities may control over the decisions of other management entities and/or decisions of certain LPMs may control over the decisions of other LPMs.
  • The LPM 224 may determine and/or configure parameters of various components of the rack 200 via one or more data busses, such as a PCI-X bus. Additionally or alternatively, the LPM 224 may determine and/or configure parameters of various components of the rack 200 via one or more network connections, such as an Ethernet connection. For example, in an exemplary embodiment of the invention, the LPM 224 may manage various components of the server 206 1 utilizing Ethernet packets communicated via the TOR switch 202 and the blade switch 204 1 . In such an embodiment, a portion of the LPM 224 may comprise an agent, for example, a virtual machine or software agent, running on the server 206 1 such that the agent may be operable to receive, process, implement, generate, and/or transmit configuration messages.
  • The agent may be operable to collect configuration information from various components of the server, generate one or more network messages, for example, an Ethernet frame, comprising the configuration information, and communicate the message(s) to another portion of the LPM 224 and/or to a management entity, such as the console 150 described with respect to FIG. 1 .
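  • A hypothetical sketch of this agent behavior (the component list and the JSON payload are assumptions, not the patent's message format) might collect local configuration and serialize it into a single payload to be carried in, for example, an Ethernet frame:

```python
import json

def collect_server_config():
    """Stand-in for querying local components (NIC, vSwitch, BMC, ...)."""
    return {
        "nic":     {"vlan_id": 10, "pause": True},
        "vswitch": {"vlan_id": 10},
        "bmc":     {"firmware": "1.2.3"},
    }

def build_report(server_id):
    """Serialize the collected configuration into one message payload."""
    return json.dumps({"server": server_id,
                       "config": collect_server_config()}).encode()

payload = build_report("server-206-1")  # e.g., payload of an Ethernet frame
print(payload)
```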
  • FIG. 2B is a diagram illustrating exemplary devices managed via a logical point of management (LPM) integrated into a top-of-rack switch, in accordance with an embodiment of the invention.
  • The rack 250 of FIG. 2B may be substantially similar to the rack 200 described with respect to FIG. 2A .
  • In the rack 250 , however, a portion 224 a of the LPM resides in the TOR switch 252 and a portion 224 b resides in each server 206 .
  • The portion 224 a may communicate over one or more network links, for example, Ethernet links, with one or more management entities such as the console 150 ( FIG. 1 ), and may communicate with each of the servers 206 1 - 206 P via one or more network links internal to the rack 250 .
  • The portion 224 b may comprise, for example, a software agent.
  • Each portion 224 b on each of the servers 206 1 - 206 P may receive packets from the portion 224 a and generate corresponding signals and/or commands on the corresponding server 206 to implement and/or respond to requests and/or commands contained in the received packets.
  • For example, a management entity may send a request for configuration information to the LPM portion 224 a .
  • The portion 224 a may send corresponding requests to the agents 224 b .
  • Each agent may collect the configuration information for its respective server 206 and report such information to the portion 224 a .
  • The portion 224 a may then aggregate the configuration information and report it to the management entity.
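  • The request/aggregate/report sequence above might be sketched, hypothetically, as follows; the per-server callables stand in for the request/reply exchanges between the portion 224 a and the agents 224 b over the rack-internal links:

```python
import json

def aggregate(agents):
    """agents: mapping of server name -> callable returning that server's
    configuration. Merges all replies into one report."""
    report = {server: query_agent() for server, query_agent in agents.items()}
    return json.dumps({"rack": "250", "servers": report})

agents = {
    "206-1": lambda: {"vlan_id": 10},
    "206-2": lambda: {"vlan_id": 20},
}
print(aggregate(agents))  # single aggregated reply to the management entity
```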
  • As another example, a portion 224 b may detect a change in one or more parameters in the server 206 on which it resides. The portion 224 b may report this change to the portion 224 a , which may determine whether the new configuration is incompatible with one or more parameters on any of the other servers 206 and/or with one or more devices external to the server 206 . In some instances, if there is an incompatibility internal to the rack 250 , the portion 224 b may automatically reconfigure one or more of the parameters to reconcile the incompatibility.
  • In other instances, the portion 224 b may send a notification of the incompatibility and/or sub-optimal configuration to a management entity.
  • The notification may comprise a possible and/or recommended reconfiguration to optimize the configuration and/or eliminate the incompatibility.
  • In instances that the incompatibility involves a device external to the rack 250 , the portion 224 b may communicate with an LPM associated with the external device to optimize the configuration and/or reconcile the incompatibility.
  • FIG. 3 illustrates configuration of devices along a network path, in accordance with an embodiment of the invention.
  • Referring to FIG. 3 , there is shown a network path comprising devices 302 , 304 , and 306 .
  • Each of the devices 302 , 304 , and 306 comprises parameters 318 and 320 .
  • The management entities 332 and 334 may be the same as the console 150 described with respect to FIG. 1 .
  • In an exemplary embodiment of the invention, the device 306 may comprise a server, the device 304 may comprise a switch, and the device 302 may comprise a personal computer.
  • In operation, communication may be desired between the devices 302 and 306 , and enabling and/or optimizing such communication may require that the parameters 318 and 320 be compatibly configured in each of the devices 302 , 304 , and 306 .
  • For example, each of the parameters 318 and 320 may need to be set to ‘A.’
  • Accordingly, configuration information may be communicated among the LPMs 312 a , 312 b , and 312 c and/or the management entities 332 and 334 .
  • For example, the LPM 312 a may collect the current configuration of the parameters 318 a and 320 a , the LPM 312 b may collect the current configuration of the parameters 318 b and 320 b , and the LPM 312 c may collect the current configuration of the parameters 318 c and 320 c .
  • The LPMs 312 a , 312 b , and 312 c may then exchange the collected configuration information.
  • One or more of the LPMs 312 a , 312 b , and 312 c and/or the management entities 332 and 334 may inspect the information to detect any incompatibilities or sub-optimal configurations. In the exemplary embodiment of the invention, it may be detected that the parameters 320 a , 320 b , and 320 c are incompatibly configured.
  • Upon detecting the incompatibility, the LPMs 312 a , 312 b , and 312 c may, for example, automatically determine a compatible configuration to enable and/or optimize the desired communication (e.g., of a media stream).
  • In this regard, the LPMs 312 a , 312 b , and 312 c may negotiate, and/or one of the LPMs 312 a , 312 b , and 312 c may take precedence over the other ones of the LPMs 312 a , 312 b , and 312 c , to reconcile the incompatibility and resolve any conflicts.
  • Additionally or alternatively, the management entities 332 and 334 may negotiate to resolve the conflict, or, if one of the management entities 332 and 334 takes precedence over the other, then that management entity may decide how to resolve the incompatibility.
  • For example, the LPM 312 b may have priority (for example, it may be a “master”) and may decide that the optimum configuration is to reconfigure the parameters 320 a and 320 b to a value of ‘A.’ Accordingly, the LPM 312 b may reconfigure 320 b to ‘A,’ send a command to the LPM 312 a to reconfigure 320 a to a value of ‘A,’ and then notify the LPM 312 c and/or the management entity 332 of the reconfiguration. The LPM 312 c may then notify the management entity 334 .
  • As another example, the management entity 332 may decide that the optimum configuration is to reconfigure the parameters 320 a and 320 b to a value of ‘A.’ Accordingly, the management entity 332 may communicate such a decision to the LPM 312 b . In response, the LPM 312 b may reconfigure 320 b to ‘A,’ send a command to the LPM 312 a to reconfigure 320 a to a value of ‘A,’ and then notify the LPM 312 c of the reconfiguration. The LPM 312 c may then notify the management entity 334 . In this manner, a single management entity may need only submit configuration parameters once, to the LPM 312 b , and configuration of the entire network path may occur as a result of that single message.
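  • The FIG. 3 scenario above can be replayed, purely as an illustration, in a few lines of code; the class and the message verbs are assumptions:

```python
class Lpm:
    """Toy model of an LPM holding one parameter (320)."""
    def __init__(self, name, param_320):
        self.name, self.param_320 = name, param_320
    def reconfigure(self, value):
        print(f"{self.name}: parameter 320 {self.param_320!r} -> {value!r}")
        self.param_320 = value
    def notify(self, target, value):
        print(f"{self.name}: notify {target} of reconfiguration to {value!r}")

lpm_a, lpm_b, lpm_c = Lpm("312a", "B"), Lpm("312b", "C"), Lpm("312c", "A")

lpm_b.reconfigure("A")   # the master repairs its own device first
lpm_a.reconfigure("A")   # then commands LPM 312a to match
lpm_b.notify("LPM 312c and/or management entity 332", "A")
lpm_c.notify("management entity 334", "A")
```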
  • Although FIG. 3 depicts an LPM associated with each of the devices 302 , 304 , and 306 , the invention is not so limited.
  • For example, each of the devices 302 , 304 , and 306 may represent a plurality of devices.
  • Similarly, the LPMs 312 a , 312 b , and 312 c may each correspond to portions, logical and/or physical, of a single LPM.
  • That is, an LPM 312 may be distributed and/or virtualized across multiple devices.
  • FIG. 4 is a flow chart illustrating exemplary steps for network configuration, in accordance with an embodiment of the invention.
  • The exemplary steps may begin with step 402 when configuration, or validation of configuration, is triggered for link partners and/or devices along a network path.
  • Exemplary triggers may comprise, for example, a change in one or more parameters, a command or request from a management entity, a reconfiguration of a network such as a device added, removed, powered up, or powered down, and a request to, or attempt to, establish a network path, such as an attempt to set up a path utilizing Audio Video Bridging protocols such as IEEE 802.1Qat, IEEE 802.1Qav, IEEE 802.1AP, or similar or related protocols.
  • Subsequent to the trigger, the exemplary steps may advance to step 404 .
  • In step 404 , the one or more LPMs associated with the devices to be validated and/or configured may obtain the current configuration of the devices and may determine whether any parameters are incompatibly configured with one or more corresponding parameters in other devices of the network path. In instances that the parameters are optimally configured, the exemplary steps may advance to end step 412 . Returning to step 406 : in instances that one or more parameters in one or more of the devices are mismatched and/or sub-optimal, the exemplary steps may, depending on the network policies, circumstances, and/or other implementation details, advance to step 408 and/or step 410 .
  • In step 408 , the LPM(s) may automatically reconfigure one or more of the devices based on factors such as the nature of the parameter(s) that are mismatched and/or sub-optimal, whether the current configuration results in no communication or just reduced communication, the types of the devices, security levels of the devices, the effect of the parameters on other network paths or other network or device functions, and/or any other policy or setting that may be established by network administrators.
  • In step 410 , the LPM that detects the mismatched and/or sub-optimal configuration may generate a notification to other LPMs and/or to one or more network management entities (e.g., an automated or manned management console).
  • The notification may comprise a recommendation as to how to optimize the configuration and/or repair the mismatch.
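  • The flow of steps 402 through 412 might be rendered, hypothetically, as follows; the policy flag and the helper callables are assumptions standing in for implementation detail the patent leaves open:

```python
def validate_path(devices, auto_repair_allowed,
                  find_mismatches, auto_repair, send_notification):
    mismatches = find_mismatches(devices)      # steps 404/406
    if not mismatches:
        return "compatible"                    # end step 412
    if auto_repair_allowed:
        auto_repair(mismatches)                # step 408
    send_notification(mismatches)              # step 410 (may carry a
    return "repaired-or-reported"              # recommended configuration)

result = validate_path(
    devices=[("a", {"pause": True}), ("b", {"pause": False})],
    auto_repair_allowed=True,
    find_mismatches=lambda d: {"pause": dict(d)}
        if len({cfg["pause"] for _, cfg in d}) > 1 else {},
    auto_repair=lambda m: print("repair", list(m)),
    send_notification=lambda m: print("notify", list(m)),
)
print(result)
```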
  • In various embodiments of the invention, one or more circuits and/or processors, such as any of the LPMs 312 a , 312 b , and 312 c or a combination thereof, may be operable to determine a configuration of one or more parameters 318 and 320 in the devices 302 , 304 , and 306 along a network path, and detect whether any of the one or more parameters 318 and 320 is configured such that communication between the devices 302 , 304 , and 306 is disabled and/or suboptimal.
  • Parameters that are configured such that they are compatible with each other may be said to be “matched,” whereas parameters which may be configured such that they are incompatible with each other may be said to be “mismatched.”
  • In instances that one or more parameters are incompatibly or sub-optimally configured, the one or more circuits and/or processors may communicate a notification of the incompatibility to a network management entity 150 and/or generate one or more messages to reconfigure one or more of the parameters 318 and/or 320 in one or more of the devices 302 , 304 , and 306 .
  • Reconfiguring one or more parameters such that they are matched and/or optimally configured may be said to be “repairing” network configuration.
  • The determining, detecting, matching, and repairing may be performed automatically in response to a change in one or more of the parameters 318 and/or 320 in one or more of the devices 302 , 304 , and 306 .
  • The determining and/or detecting may be performed automatically in response to a reconfiguration of the network path.
  • The notification may comprise a recommended configuration of the one or more parameters 318 and/or 320 in one or more of the devices 302 , 304 , and 306 .
  • The one or more circuits and/or processors may enable access to one or more application programming interfaces of one or more of the devices 302 , 304 , and 306 .
  • The one or more circuits and/or processors may be operable to translate between the one or more application programming interfaces and one or more network interfaces.
  • The one or more circuits and/or processors may be operable to configure the one or more parameters in a first portion, such as a first one of the devices 302 , 304 , and 306 , of the plurality of devices 302 , 304 , and 306 .
  • The one or more circuits and/or processors may be operable to communicate with one or more other circuits and/or processors, such as any of the LPMs 312 a , 312 b , and 312 c or a combination thereof, that manage configuration of the one or more parameters in a second portion of the plurality of devices, such as a second one of the devices 302 , 304 , and 306 .
  • The one or more circuits and/or processors may be operable to automatically perform the determining and/or detecting in response to a communication from the one or more other circuits and/or processors. Incompatibilities between parameters 318 and/or 320 in the first portion of the plurality of devices and parameters in the second portion of the plurality of network devices may be reconciled via communications between the one or more circuits and/or processors and the one or more other circuits and/or processors.
  • Other embodiments of the invention may provide a non-transitory computer readable medium and/or storage medium, and/or a non-transitory machine readable medium and/or storage medium, having stored thereon, a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for matching and repairing network configuration.
  • the present invention may be realized in hardware, software, or a combination of hardware and software.
  • the present invention may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.
  • a typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • the present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
  • Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.

Abstract

Aspects of a method and system for matching and repairing network configuration are provided. In this regard, one or more circuits and/or processors may be operable to determine a configuration of one or more parameters in a plurality of devices along a network path, and detect whether any of the one or more parameters are configured such that communication between the plurality of devices is disabled and/or suboptimal. The devices may comprise at least one server and one or more of a network switch, a network bridge, and a router. In instances that one or more parameters are incompatibly or sub-optimally configured, a notification of the incompatibility may be communicated to a network management entity and/or one or more messages may be generated to reconfigure the one or more parameters in one or more of the plurality of devices. The determining and/or detecting may be performed automatically in response to various events.

Description

    1. CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 12/850,858, filed Aug. 5, 2010, now U.S. Pat. No. 8,458,305, issued Jun. 4, 2013, and claims priority to provisional application Ser. No. 61/231,726, filed Aug. 6, 2009, both of which are incorporated herein by reference in their entirety.
  • 2. TECHNICAL FIELD
  • Certain embodiments of the invention relate to networking. More specifically, certain embodiments of the invention relate to a method and system for network configuration.
  • 3. BACKGROUND
  • Information Technology (IT) management may require performing remote management operations of remote systems to perform inventory, monitoring, control, and/or to determine whether remote systems are up-to-date. For example, management devices and/or consoles may perform such operations as discovering and/or navigating management resources in a network, manipulating and/or administrating management resources, requesting and/or controlling subscribing and/or unsubscribing operations, and executing specific management methods and/or procedures. Management devices and/or consoles may communicate with devices in a network to ensure availability of remote systems, to monitor/control remote systems, to validate that systems may be up-to-date, and/or to perform any security patch updates that may be necessary. As networks become increasingly large and complex, network management also becomes increasingly complex.
  • Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with some aspects of the present invention as set forth in the remainder of the present application with reference to the drawings.
  • BRIEF SUMMARY
  • A system and/or method is provided for matching and repairing network configuration, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
  • These and other advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram illustrating an exemplary network in which multiple devices and/or portions of devices are configured via a single logical point of management (LPM), in accordance with an embodiment of the invention.
  • FIG. 2A is a diagram illustrating exemplary devices managed via a logical point of management (LPM), in accordance with an embodiment of the invention.
  • FIG. 2B is a diagram illustrating exemplary devices managed via a logical point of management (LPM) integrated into a top-of-rack switch, in accordance with an embodiment of the invention.
  • FIG. 3 illustrates configuration of devices along a network path, in accordance with an embodiment of the invention.
  • FIG. 4 is a flow chart illustrating exemplary steps for network configuration, in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION
  • Certain embodiments of the invention may be found in a method and system for matching and repairing network configuration. In various embodiments of the invention, one or more circuits and/or processors may be operable to determine a configuration of one or more parameters in a plurality of devices along a network path, and detect whether any of the one or more parameters is configured such that communication between the plurality of devices is disabled and/or suboptimal. In some instances, parameters that are configured such that they are compatible with each other may be said to be “matched,” whereas parameters which may be configured such that they are incompatible with each other may be said to be “mismatched.” In instances that one or more parameters are incompatibly or sub-optimally configured, the one or more circuits and/or processors may communicate a notification of the incompatibility to a network management entity and/or generate one or more messages to reconfigure the one or more parameters in one or more of the plurality of devices. In some instances, reconfiguring one or more parameters such that they are matched and/or optimally configured may be said to be “repairing” network configuration. The determining, detecting, matching, and repairing may be performed automatically in response to a change in one or more of the parameters in one or more of the plurality of devices. The determining, detecting, matching, and repairing may be performed automatically in response to a reconfiguration of the network path. The notification may comprise a recommended configuration of the one or more parameters in one or more of the plurality of devices. The one or more circuits and/or processors may enable access to one or more application programming interfaces of one or more of the plurality of devices. The one or more circuits and/or processors may be operable to translate between the one or more application programming interfaces and one or more network interfaces.
  • The one or more circuits and/or processors may be operable to manage configuration of the one or more parameters in a first portion of the plurality of devices. The one or more circuits and/or processors may be operable to communicate with one or more other circuits and/or processors that manage configuration of the one or more parameters in a second portion of the plurality of devices. The one or more circuits and/or processors may be operable to automatically perform the determining and/or detecting in response to a communication from the one or more other circuits and/or processors. Incompatibilities between parameters in the first portion of the plurality of devices and parameters in the second portion of the plurality of network devices may be reconciled via communications between the one or more circuits and/or processors and the one or more other circuits and/or processors.
  • FIG. 1 is a diagram illustrating an exemplary network in which multiple devices and/or portions of devices are configured via a single logical point of management (LPM), in accordance with an embodiment of the invention. Referring to FIG. 1, there is shown a network 101 comprising subnetwork 120, a management console 150, logical points of management (LPMs) 108 1-108 M, and racks or cabinets 100 1-100 M, where ‘M’ is an integer greater than or equal to 1.
  • The network 101 may comprise a plurality of link layer technologies such as Ethernet, Fibre Channel, and InfiniBand. In an exemplary embodiment of the invention, the network 101 may utilize one or more data center bridging (DCB) techniques and/or protocols such as Congestion Notification (CN), Priority Flow Control (PFC), and Enhanced Transmission Selection (ETS).
  • The racks 100 1-100 m may comprise rack mount networking systems that may house, for example, computing devices such as servers and computers, and network devices such as switches, and/or other devices such as power supplies. In this regard, the term “computing” device is utilized herein to describe a device, such as a server, which is conventionally managed by a first management entity and/or utilizing a first set of management protocols and/or tools. Similarly, the term “network” device is utilized herein to describe a device, such as an Ethernet switch, which is conventionally managed by a second management entity and/or utilizing a second set of management protocols and/or tools. However, in practice, the line between a “network” device and a “computing” device is becoming increasingly blurry. For example, a virtual switch residing in a server may, depending on the circumstances, be referred to by either or both terms. Similarly, other devices such as the UPS 110 X or displays (not shown), may comprise various functionalities which may result in them being considered as computing devices and/or network devices. Accordingly, “computing” devices, “network” devices, and other devices are referred to collectively and individually herein as simply “devices.” Furthermore, aspects of the invention may enable simplifying and/or unifying management of various devices, particularly in instances when multiple management entities need, or desire, to manage various devices and/or portions thereof.
  • Each rack 100 m may comprise ‘N’ servers 102 m1-102 mN, where ‘N’ is an integer greater than or equal to 1 and ‘m’ is an integer between 1 and ‘M.’ The servers 102 m1-102 mN of rack 100 m may each comprise suitable logic, circuitry, interfaces, and/or code that may be operable to provide services to clients such as PCs, mobile devices, and other servers. Each of the servers 102 m1-102 mN may be operable to, for example, run one or more applications that process input from the clients and/or output information to the clients.
  • Each of the servers 102 m1-102 mN may interface with the network via a corresponding one of network interface circuits (NICs) 106 m1-106 mN. The NICs 106 m1-106 mN of the rack 100 m may each comprise suitable logic, circuitry, interfaces, and/or code that may be operable to interface the corresponding servers 102 m1-102 mN to the switch 104 m.
  • Each rack 100 m may also comprise a switch 104 m. Each of the switches 104 1-104 m may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to forward packets between corresponding NICs 106 1-106 N, other ones of the switches 104 1-104 m, and other networks and/or storage area networks, such as the subnetwork 120.
  • Each logical point of management (LPM) 108 m may comprise suitable logic, circuitry, interfaces, and/or code that may enable managing configuration of one or more devices with which it is associated. Each LPM 108 m may comprise, for example, one or more processors, memory devices, and bus controllers. Each LPM 108 m may comprise, for example, dedicated, shared, and/or general purpose hardware and/or may comprise firmware and/or software executed by dedicated, shared, and/or general purpose hardware. Each LPM 108 m, or portions thereof, may reside in various places. For example, for a rack 100 m, portions of the LPM 108 m may reside in the switch 104 m, in one or more of the servers 102 m1-102 mN, in one or more of the NICs 106 m1-106 mN, on the rack itself, or a combination thereof. In various embodiments of the invention, each LPM 108 m may be similar to or the same as a central management unit (CMU) described in U.S. patent application Ser. No. 12/619,221, which is incorporated herein by reference in its entirety.
  • In the exemplary embodiment of the invention depicted in FIG. 1, each rack 100 m may comprise a LPM 108 m, such that all the devices of each rack 100 m may be associated with, and managed via, a corresponding LPM 108 m. As a result, the rack servers 102 m1-102 mN, the NICs 106 m1-106 mN, the switch 104 m, and the UPS 110 m may be managed via the LPM 108 m. Additionally, in some instances, devices external to the rack 100 m, such as devices (not shown) residing in the subnetwork 120, may be associated with, and thus managed via, a LPM 108 m.
  • Each LPM 108 m may enable management of associated devices by, for example, exposing one or more application programming interfaces (APIs) of the associated devices to one or more network management entities such as the console 150. In this regard, each LPM 108 m may be operable to translate between one or more APIs and one or more network interfaces. For instance, each LPM 108 m may be operable to receive a command or request over a network interface and may de-packetize, convert, reformat, and/or otherwise process the command or request to generate a corresponding command or request on one or more APIs to one or more devices.
  • In the exemplary embodiment of the invention depicted in FIG. 1, each LPM 108 m may expose an internal API of the rack 100 m to the one or more management consoles 150.
  • In operation, each LPM 108 m may collect configuration information from its associated devices, make the information available among the associated devices, and make the information available to the management console 150. Similarly, each LPM 108 m may receive configuration information from the management console 150 and/or other LPMs 108 m and distribute or make available the received configuration information among the devices associated with the LPM 108 m. Such exchanged configuration information may be utilized to ensure that devices are compatibly and/or optimally configured to enable and/or optimize communications in the network 101. In this regard, parameters in various devices of the network 101 may be configured based on information communicated to, from, and/or between the LPMs 108 1-108 M.
  • Whether various parameters are compatible, or how to make parameters compatible, may vary with the parameters and the circumstances. For example, one or more parameters in a first device may be compatible with one or more corresponding parameters in a second device when the parameters in the first device have the same values as the corresponding parameters in the second device. As another example, one or more parameters in a first device may be compatible with one or more corresponding parameters in a second device when the parameters in the first device are configured to have values associated with transmission and the corresponding parameters in the second device are configured to have values associated with reception. Whether parameters are compatible may be characterized by, for example, whether messages that utilize or depend on the parameters can be successfully communicated. Similarly, whether parameters are optimal may be characterized by, for example, a data rate and/or error rate that may be achieved in communications that utilize or depend on those parameters.
  • An exchange of configuration information to validate a configuration of, and/or reconfigure, one or more devices, may occur upon, for example, one or more of: a change in network topology (e.g., addition, removal, power up, or power down of a device), a change in one or more parameters, a request from a management console 150, and reception of configuration information at an LPM 108 m from another one of the LPMs 108 1-108 M or from a management entity. In this regard, various events may automatically trigger a collection of configuration information, an exchange of configuration information, and/or a reconfiguration of one or more devices.
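  • As a minimal sketch of such automatic triggering (hypothetical event names, for illustration only):

    # Events that, per the paragraph above, may trigger collection, exchange,
    # and/or reconfiguration
    TRIGGERS = {"topology_change", "parameter_change",
                "console_request", "peer_lpm_update"}

    def maybe_validate(event, validate):
        # Run a validation pass only when a triggering event is observed
        return validate() if event in TRIGGERS else None

    print(maybe_validate("parameter_change", lambda: "validation started"))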
  • The exchanged information may ensure that link partners and devices along a network path are compatibly configured to enable and/or optimize communication along the respective link(s) and path(s). For example, one or more parameters in the server 102 11 and/or the NIC 106 11 may need to match, or be compatible with, a corresponding one or more parameters in the switch 104 1 to enable and/or optimize communication between the switch 104 1 and the server 102 11. That is, if parameters in the server 102 11 and/or NIC 106 11 and the switch 104 1 are incompatibly configured, then communication over the corresponding link 112 11 may fail or be sub-optimal. In this example, the server 102 11, the NIC 106 11, and the switch 104 1 may all be associated with a single LPM 108 1; however, multiple LPMs may interact to achieve the same results. To elaborate, the LPM 108 1 and the LPM 108 M may exchange information with each other and/or with one or more management consoles 150 to ensure that link partners and/or devices along a network path are compatibly and/or optimally configured. For example, the server 102 11 may need or desire to communicate with the server 102 1M. As a result, one or more parameters in the server 102 11 or NIC 106 11, one or more parameters in the server 102 1M or NIC 106 1M, and one or more corresponding parameters in any switches, routers, or other intermediary devices between the server 102 11 and the server 102 1M may need to be compatibly configured to enable and/or optimize communication along the path. Accordingly, the LPM 108 1, the LPM 108 M, and any LPM(s) associated with any intermediary device(s) (if such intermediary devices are present, and if such devices are not associated with either of the LPMs 108 1 and 108 M) may exchange configuration information, with each other and/or with one or more management consoles, to ensure compatible and/or optimal configuration along the network path between the server 102 11 and the server 102 1M.
  • In an exemplary embodiment of the invention, in instances that an exchange of configuration information results in a detection that one or more parameters are incompatibly configured, one or more of LPMs 108 1-108 M may generate a notification to another one or more of the LPMs 108 1-108 M and/or to one or more management entities, such as the console 150. The notification may comprise, for example, a current configuration, possible compatible configurations, and/or a recommended configuration. In this manner, current configuration and/or recommended configuration may be presented to, for example, management software and/or to a network administrator who may view the information and make management decisions based on the notification. In instances that there are multiple management entities, the management entities may then negotiate with each other to determine how to configure the devices. The management entities may then reconfigure one or more parameters to eliminate the incompatibility. The configuration by the management entities may be performed utilizing standard management protocols and/or tools and/or via the LPMs 108 1-108 M.
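  • A minimal sketch of such a notification record (field names hypothetical, for illustration only):

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class MismatchNotification:
        parameter: str
        current: dict                                   # device -> current value
        candidates: list = field(default_factory=list)  # possible compatible configs
        recommended: Optional[dict] = None              # recommended configuration

    note = MismatchNotification(
        parameter="pfc_enabled",
        current={"nic": False, "switch": True},
        candidates=[{"nic": True, "switch": True},
                    {"nic": False, "switch": False}],
        recommended={"nic": True, "switch": True},
    )
    print(note.recommended)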
  • In an exemplary embodiment of the invention, in instances that an exchange of configuration information results in a detection that one or more parameters are mismatched or sub-optimally configured, the corresponding ones of the LPMs 108 1-108 M may automatically reconfigure the parameters into a best-possible configuration. In such instances that LPMs interact to automatically reconfigure one or more parameters, one LPM 108 may act as a master while others may act as slaves. Which LPM is master and which is slave may be determined on a per-parameter, per-link, per-connection, per-network, per-function, and/or on any other basis.
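  • A minimal sketch of master/slave reconciliation under a per-parameter precedence (hypothetical names, for illustration only):

    def reconcile(master_value, slave_value, policy="master_wins"):
        # The master's value controls; other bases (per-link, per-network, ...)
        # could select a different master for a different parameter.
        if policy == "master_wins":
            return master_value
        raise ValueError("unknown policy")

    lpm_a = {"role": "master", "mtu": 9000}
    lpm_b = {"role": "slave", "mtu": 1500}
    lpm_b["mtu"] = reconcile(lpm_a["mtu"], lpm_b["mtu"])
    print(lpm_b["mtu"])  # 9000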
  • In various embodiments of the invention, configuration compatibility may be determined per link and/or end-to-end for various parameters. Exemplary parameters may comprise: virtual local area networking (VLAN) identifiers and/or other VLAN parameters; teaming parameters; MAC addresses; parameters regarding data rates supported and/or utilized; parameters regarding whether communications are to be simplex, duplex, and/or unidirectional; an indication of whether or not IEEE 802.3 PAUSE is to be enabled; energy efficient networking (EEN) or energy efficient Ethernet (EEE) parameters; and parameters regarding supported and/or expected formatting of packets. Also, the parameters may be associated with data center bridging (DCB) policies and/or settings such as: whether priority flow control (PFC) support is enabled, the number of priority classes utilized for PFC, the number of lossless priority classes, whether enhanced transmission selection (ETS) is enabled, whether and how quantized congestion notification (QCN) is supported, whether and how iSCSI is supported, whether and how Fibre Channel over Ethernet (FCoE) is supported, the maximum transmission unit (MTU) supported for a lossless priority, the maximum MTU for all priorities, and whether and how a lossless performance mode is supported.
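  • For illustration only, a hypothetical record grouping a few of the DCB-related settings enumerated above, so that link partners can be compared field by field:

    from dataclasses import dataclass

    @dataclass
    class DcbSettings:
        pfc_enabled: bool
        pfc_priority_classes: int
        lossless_priority_classes: int
        ets_enabled: bool
        lossless_mtu: int
        max_mtu: int

    nic = DcbSettings(True, 2, 1, True, 2500, 9000)
    switch = DcbSettings(True, 2, 1, True, 2500, 9000)
    print(nic == switch)  # dataclass equality gives a simple per-field match test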
  • In some embodiments of the invention, FCoE, iSCSI, or other protocols that require lossless communication may be utilized. Accordingly, aspects of the invention may ensure that support for lossless behavior is compatibly configured among link partners and end-to-end. In some instances, if a mismatch (either among link partners or at some point along an end-to-end path) is detected because a NIC 106 XY does not support lossless behavior but a corresponding switch 104 X does, then IEEE 802.3 PAUSE may be enabled on the link between the NIC 106 XY and the switch 104 X.
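  • A minimal sketch of that fallback decision (hypothetical names, for illustration only):

    def choose_flow_control(nic_lossless, switch_lossless):
        if nic_lossless and switch_lossless:
            return "pfc"          # per-priority lossless operation
        if switch_lossless and not nic_lossless:
            return "802.3_pause"  # link-wide pause as a compatible fallback
        return "none"

    print(choose_flow_control(nic_lossless=False, switch_lossless=True))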
  • In the exemplary embodiment of the invention depicted in FIG. 1, the network 101 comprises an LPM 108 for each of the racks 100 1-100 M. However, the network 101 and the racks 100 1 and 100 M are for illustration purposes only, and the invention is not limited with regard to network topology, the particular devices within a network, the particular devices within a rack, and/or the particular devices managed by an associated LPM 108.
  • FIG. 2A is a diagram illustrating exemplary devices managed via a logical point of management (LPM), in accordance with an embodiment of the invention. Referring to FIG. 2A, there is shown a server rack 200 comprising a variety of components including a LPM 224, a top-of-rack (TOR) switch 202, blade switches 204 1-204 K, UPS 110, and servers 206 1-206 P, where K and P are integers greater than or equal to 1. Each of the servers 206 1-206 P may comprise a variety of components such as a network interface circuit (NIC) 208, a baseboard management controller (BMC) 210, storage 212, memory 213, a processor 214, a hypervisor 216, a virtual switch (vSwitch) 222, and virtual machines and/or operating systems (VMs/OSs) 218 1-218 Q, where Q is an integer greater than or equal to 1.
  • The LPM 224 may be substantially similar to the LPMs 108 m described with respect to FIG. 1. In various exemplary embodiments of the invention, the LPM 224 may be implemented via any combination of dedicated and/or shared logic, circuitry, interfaces, and/or code that resides anywhere on or within the rack 200. In the exemplary embodiment of the invention depicted in FIG. 2A, the LPM 224 is implemented in dedicated logic, circuitry, interfaces, and/or code residing on the rack 200 separately from the servers 206, the blade switches 204, and the TOR switch 202.
  • The NIC 208 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to transmit and receive data in adherence with one or more networking standards. With reference to the OSI model, the NIC 208 may implement physical layer functions, data link layer functions, and, in some instances, functions associated with OSI layer 3 and higher OSI layers. Similarly, with reference to the TCP/IP model, the NIC 208 may be operable to implement network interface layer functions, Internet layer functions, and, in some instances, transport layer functions and application layer functions. The NIC 208 may, for example, communicate in adherence with one or more Ethernet standards defined in IEEE 802.3. The NIC 208 may be enabled to utilize virtualization such that it may present itself as multiple network adapters.
  • The BMC 210 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to monitor various conditions on the corresponding server 206, and control various functions and/or hardware components of the corresponding server 206 based on the monitored conditions. For example, the BMC 210 may receive data from one or more sensors and determine that the server 206 needs to be reset based on the sensor data. As another example, the BMC 210 may monitor temperature of the server 206 and adjust fan speed accordingly. The BMC 210 may also comprise suitable logic, circuitry, interfaces, and/or code that may be operable to communicate with a management entity via an internal data bus and/or a network link. For example, a management entity may request a status report from the BMC 210, and, in response, the BMC 210 may gather information from various sensors and communicate the information to the management entity.
  • The storage 212 may comprise, for example, a hard drive or solid state memory. The storage 212 may store, for example, data that may be read, written, and/or executed locally or remotely via the NIC 208.
  • The processor 214 and the memory 213 may comprise suitable logic, circuitry, interfaces and/or code that may enable processing data and/or controlling operations of the server 206. The memory 213 may comprise, for example, SRAM, DRAM, and/or non-volatile memory that stores data and/or instructions. The processor 214, utilizing the memory 213, may be operable to execute code to effectuate operation of the server 206. For example, the processor 214 may execute code stored in the memory 213 to realize the hypervisor 216 and the VMs/OSs 218.
  • In operation, a configuration of any one or more of the TOR switch 202, the blade switches 204 1-204 K, and any one or more components of any one or more of the servers 206 1-206 P may be managed via the LPM 224. In this regard, the LPM 224 may be operable to determine a configuration of one or more parameters in any one or more components of the rack 200 and may be operable to determine whether those parameters are compatibly and/or optimally configured. For instance, if one or more parameters in the blade switch 204 1 is configured incompatibly with one or more parameters in the NIC 208 of the server 206 1, then traffic to and/or from the server 206 1 may be impossible or sub-optimal. Similarly, if a remote client (not shown) requests a datastream from the VM/OS 218 1 of the server 206 1 but one or more parameters in the vSwitch 222 of the server 206 1 is incompatibly or sub-optimally configured with respect to one or more corresponding parameters in the NIC 208 of the server 206 1, the blade switch 204 1, and/or the remote client, then the datastream may not be communicated or may be communicated sub-optimally. In instances that one or more parameters are incompatible or sub-optimal, messages may be exchanged between the LPM 224 and a LPM associated with the remote client to automatically reconfigure the parameter(s) and/or notify one or more management entities, based on network and/or device policies or settings. Reconciliation of one or more incompatible parameters may depend, for example, on an order of precedence, or hierarchy, of LPMs and/or management entities. For example, in resolving conflicts in parameter configuration, the decisions of certain management entities may control over the decisions of other management entities and/or the decisions of certain LPMs may control over the decisions of other LPMs.
  • The LPM 224 may determine and/or configure parameters of various components of the rack 200 via one or more data busses, such as a PCI-X bus. Additionally or alternatively, the LPM 224 may determine and/or configure parameters of various components of the rack 200 via one or more network connections, such as an Ethernet connection. For example, in an exemplary embodiment of the invention, the LPM 224 may manage various components of the server 206 1 utilizing Ethernet packets communicated via the TOR switch 202 and the blade switch 204 1. In such an embodiment, a portion of the LPM 224 may comprise an agent, for example, a virtual machine or software agent, running on the server 206 1 such that the agent may be operable to receive, process, implement, generate, and/or transmit configuration messages. In this regard, the agent may be operable to collect configuration information from various components of the server, generate one or more network messages, for example, an Ethernet frame, comprising the configuration information, and communicate the message(s) to another portion of the LPM 224 and/or to a management entity, such as the console 150 described with respect to FIG. 1.
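  • A minimal sketch of such an agent packaging collected configuration into a network message (hypothetical names, for illustration only):

    import json

    def collect_configuration(server_components):
        # A real agent would query the NIC, vSwitch, hypervisor, BMC, etc.
        return {name: dict(params) for name, params in server_components.items()}

    def build_config_frame(server_id, config):
        # The payload of, e.g., an Ethernet frame sent to another LPM portion
        # or to a management entity
        return json.dumps({"server": server_id, "config": config}).encode()

    components = {"nic": {"vlan_id": 10}, "vswitch": {"vlan_id": 20}}
    print(build_config_frame("206_1", collect_configuration(components)))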
  • FIG. 2B is a diagram illustrating exemplary devices managed via a logical point of management (LPM) integrated into a top-of-rack switch, in accordance with an embodiment of the invention. The rack 250 of FIG. 2B may be substantially similar to the rack 200 described with respect to FIG. 2A.
  • In the exemplary embodiment of FIG. 2B, a portion 224 a of the LPM resides in the TOR switch 252 and a portion 224 b resides in each server 206. The portion 224 a may communicate over one or more network links, for example, Ethernet links, with one or more management entities such as console 150 (FIG. 1), and may communicate with each of the servers 206 1-206 P via one or more network links internal to the rack 250. The portion 224 b may comprise, for example, a software agent. Each portion 224 b on each of the servers 206 1-206 P may receive packets from the portion 224 a and generate corresponding signals and/or commands on the corresponding server 206 to implement and/or respond to requests and/or commands contained in the received packets.
  • In an exemplary operation, a management entity may send a request for configuration information to the LPM 224 a. The LPM 224 a may send corresponding requests to the agents 224 b. Each agent may collect the configuration information for its respective server 206 and report such information to the portion 224 a. The portion 224 a may then aggregate the configuration and report the information to the management entity.
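  • A minimal sketch of that fan-out and aggregation (hypothetical names, for illustration only):

    def query_agents(agents):
        # Fan the request out to each per-server agent and aggregate the replies
        return {server_id: agent() for server_id, agent in agents.items()}

    agents = {
        "206_1": lambda: {"nic": {"mtu": 9000}},
        "206_2": lambda: {"nic": {"mtu": 1500}},
    }
    print(query_agents(agents))  # aggregated view reported to the management entity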
  • In an exemplary operation, a portion 224 b may detect a change in one or more parameters in the server 206 on which it resides. The portion 224 b may report this change to the portion 224 a, which may determine whether the new configuration is incompatible with one or more parameters on any of the other servers 206 and/or with one or more devices external to the server 206. In some instances, if there is an incompatibility internal to the rack 250, the portion 224 a may automatically reconfigure one or more of the parameters, via the corresponding portion(s) 224 b, to reconcile the incompatibility. In some instances, if there is an incompatibility and/or sub-optimal configuration internal to the rack 250, the portion 224 a may send a notification of the incompatibility and/or sub-optimal configuration to a management entity. The notification may comprise a possible and/or recommended reconfiguration to optimize the configuration and/or eliminate the incompatibility. In some instances, if there is an incompatibility and/or sub-optimal configuration with respect to a device that is external to the rack 250, the portion 224 a may communicate with a LPM associated with the external device to optimize the configuration and/or reconcile the incompatibility.
  • FIG. 3 illustrates configuration of devices along a network path, in accordance with an embodiment of the invention. Referring to FIG. 3, there is shown a network path comprising devices 302, 304, and 306. Each of the devices 302, 304, and 306 comprises parameters 318 and 320, and each may be associated with a respective one of the LPMs 312 a, 312 b, and 312 c. The management entities 332 and 334 may be the same as the console 150 described with respect to FIG. 1. In an exemplary embodiment of the invention, the device 306 may comprise a server, the device 304 may comprise a switch, and the device 302 may comprise a personal computer.
  • In operation, communication may be desired between devices 302 and 306, and enabling and/or optimizing such communication may require that the parameters 318 and 320 be compatibly configured in each of the devices 302, 304, and 306. For illustration, to enable and/or optimize delivery of a media stream from the device 302 to the device 306, each of the parameters 318 and 320 may need to be set to ‘A.’
  • Accordingly, configuration information, as indicated by dashed lines, may be communicated among the LPMs 312 a, 312 b, and 312 c and/or the management entities 332 and 334. In this regard, the LPM 312 a may collect the current configuration of the parameters 318 a and 320 a, the LPM 312 b may collect the current configuration of the parameters 318 b and 320 b, and the LPM 312 c may collect the current configuration of the parameters 318 c and 320 c, and then the LPMs 312 a, 312 b, and 312 c may exchange the collected configuration information. Upon receiving the configuration information from other LPMs, one or more of the LPMs 312 a, 312 b, and 312 c and/or the management entities 332 and 334 may inspect the information to detect any incompatibilities or sub-optimal configurations. In the exemplary embodiment of the invention, it may be detected that parameters 320 a, 320 b, and 320 c are incompatibly configured.
  • Upon detection, the incompatibility may be resolved in a variety of ways. The LPMs 312 a, 312 b, and 312 c may, for example, automatically determine a compatible configuration to enable and/or optimize communication of the media stream. In this regard, the LPMs 312 a, 312 b, and 312 c may negotiate, and/or one of the LPMs 312 a, 312 b, and 312 c may take precedence over the other ones of the LPMs 312 a, 312 b, and 312 c, to reconcile the incompatibility and resolve any conflicts. Similarly, the management entities 332 and 334 may negotiate to resolve the conflict, or if one of the management entities 332 and 334 takes precedence over the other, then that management entity may decide how to resolve the incompatibility.
  • In an exemplary embodiment of the invention, the LPM 312 b may have priority, for example, be a “master,” and may decide that the optimum configuration is to reconfigure the parameters 320 a and 320 b to a value of ‘A.’ Accordingly, the LPM 312 b may reconfigure the parameter 320 b to ‘A,’ send a command to the LPM 312 a to reconfigure the parameter 320 a to a value of ‘A,’ and then notify the LPM 312 c and/or the management entity 332 of the reconfiguration. The LPM 312 c may then notify the management entity 334.
  • In an exemplary embodiment of the invention, the management entity 332 may decide that the optimum configuration is to reconfigure the parameters 320 a and 320 b to a value of ‘A.’ Accordingly, the management entity 332 may communicate such a decision to the LPM 312 b. In response, the LPM 312 b may reconfigure the parameter 320 b to ‘A,’ send a command to the LPM 312 a to reconfigure the parameter 320 a to a value of ‘A,’ and then notify the LPM 312 c and/or the management entity 332 of the reconfiguration. The LPM 312 c may then notify the management entity 334. In this manner, a single management entity may need to submit configuration parameters only once, to the LPM 312 b, and configuration of the entire network path may occur as a result of that single message.
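  • A minimal sketch of that single-message fan-out along the path (hypothetical names, for illustration only):

    def apply_path_decision(path_lpms, parameter, value):
        # The master reconfigures its own device, then commands/notifies the
        # remaining LPMs along the path in turn.
        for lpm in path_lpms:
            lpm[parameter] = value
        return path_lpms

    lpm_a, lpm_b, lpm_c = {"320": "B"}, {"320": "C"}, {"320": "A"}
    print(apply_path_decision([lpm_a, lpm_b, lpm_c], "320", "A"))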
  • Although FIG. 3 depicts an LPM associated with each of the devices 302, 304, and 306, the invention is not so limited. For example, each of devices 302, 304, 306 may represent a plurality of devices. Similarly, the LPMs 312 a, 312 b, and 312 c may each correspond to portions, logical and/or physical, of a single LPM. In this regard, an LPM 312 may be distributed and/or virtualized across multiple devices.
  • FIG. 4 is a flow chart illustrating exemplary steps for network configuration, in accordance with an embodiment of the invention. Referring to FIG. 4, the exemplary steps may begin with step 402 when configuration, or validation of configuration, is triggered for link partners and/or devices along a network path. Exemplary triggers may comprise, for example, a change in one or more parameters, a command or request from a management entity, a reconfiguration of the network (e.g., a device being added, removed, powered up, or powered down), and a request or attempt to establish a network path, such as an attempt to set up a path utilizing Audio Video Bridging protocols such as IEEE 802.1Qat, IEEE 802.1Qav, IEEE 802.1AS, or similar or related protocols. Subsequent to step 402, the exemplary steps may advance to step 404.
  • In step 404, the one or more LPMs associated with the devices to be validated and/or configured may obtain the current configuration of the devices. Subsequent to step 404, the exemplary steps may advance to step 406. In step 406, it may be determined whether any parameters are incompatibly and/or sub-optimally configured with respect to one or more corresponding parameters in other devices of the network path. In instances that the parameters are compatibly and optimally configured, the exemplary steps may advance to end step 412. In instances that one or more parameters in one or more of the devices are mismatched and/or sub-optimal, the exemplary steps may, depending on the network policies, circumstances, and/or other implementation details, advance to step 408 and/or step 410.
  • In step 408, the LPM(s) may automatically reconfigure one or more of the devices based on factors such as the nature of the parameter(s) that are mismatched and/or sub-optimal, whether the current configuration results in no communication or merely degraded communication, the types of the devices, the security levels of the devices, the effect of the parameters on other network paths or other network or device functions, and/or any other policy or setting that may be established by network administrators.
  • In step 410, the LPM that detects the mismatched and/or sub-optimal configuration may generate a notification to other LPMs and/or to one or more network management entities. The notification may comprise a recommendation as to how to optimize the configuration and/or repair the mismatch. A network management entity (e.g., an automated or manned management console) may utilize the notification and/or recommendation to determine how to optimize configuration.
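  • A minimal sketch of the overall flow of FIG. 4 (steps 402 through 412), using hypothetical helpers, for illustration only:

    def on_trigger(devices, policy):
        # Steps 404/406: obtain configurations and find mismatches
        mismatches = [d for d in devices if not d["compatible"]]
        if not mismatches:
            return "done"               # step 412: configuration already valid
        if policy == "auto":
            for d in mismatches:        # step 408: automatic reconfiguration
                d["compatible"] = True  # stand-in for an actual repair
            return "repaired"
        return "notified"               # step 410: notify LPMs/management entities

    devices = [{"compatible": True}, {"compatible": False}]
    print(on_trigger(devices, policy="auto"))  # repaired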
  • Various aspects of a method and system for matching and repairing network configuration are provided. In an exemplary embodiment of the invention, one or more circuits and/or processors, such as any of LPMs 312 a, 312 b, and 312 c or a combination thereof, may be operable to determine a configuration of one or more parameters 318 and 320 in devices 302, 304, and 306 along a network path, and detect whether any of the one or more parameters 318 and 320 is configured such that communication between the devices 302, 304, and 306 is disabled and/or suboptimal. In some instances, parameters that are configured such that they are compatible with each other may be said to be “matched,” whereas parameters which may be configured such that they are incompatible with each other may be said to be “mismatched.” In instances that one or more parameters 318 and/or 320 are incompatibly or sub-optimally configured, the one or more circuits and/or processors may communicate a notification of the incompatibility to a network management entity 150 and/or generate one or more messages to reconfigure one or more of the parameters 318 and/or 320 in one or more of devices 302, 304, and 306. In some instances, reconfiguring one or more parameters such that they are matched and/or optimally configured may be said to be “repairing” network configuration. The determining, detecting, matching, and repairing may be performed automatically in response to a change in one or more of the parameters 318 and/or 320 in one or more of devices 302, 304, and 306. The determining and/or detecting may be performed automatically in response to a reconfiguration of the network path. The notification may comprise a recommended configuration of the one or more parameters 318 and/or 320 in one or more of devices 302, 304, and 306. The one or more circuits and/or processors may enable access to one or more application programming interfaces of one or more of devices 302, 304, and 306. The one or more circuits and/or processors may be operable to translate between the one or more application programming interfaces and one or more network interfaces.
  • The one or more circuits and/or processors may be operable to configure the one or more parameters in a first portion, such as a first one of devices 302, 304, and 306, of the plurality of devices 302, 304, and 306. The one or more circuits and/or processors may be operable to communicate with one or more other circuits and/or processors such as any of LPMs 312 a, 312 b, and 312 c or a combination thereof, that manage configuration of the one or more parameters in a second portion of the plurality of devices, such as a second one of devices 302, 304, and 306. The one or more circuits and/or processors may be operable to automatically perform the determining and/or detecting in response to a communication from the one or more other circuits and/or processors. Incompatibilities between parameters 318 and/or 320 in the first portion of the plurality of devices and parameters in the second portion of the plurality of network devices may be reconciled via communications between the one or more circuits and/or processors and the one or more other circuits and/or processors.
  • Other embodiments of the invention may provide a non-transitory computer readable medium and/or storage medium, and/or a non-transitory machine readable medium and/or storage medium, having stored thereon, a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for matching and repairing network configuration.
  • Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

Claims (20)

What is claimed is:
1. A method comprising:
at a network management entity in data communication with a plurality of network elements on a communication network,
receiving from a first network element configuration information which indicates incompatible communication configuration at the first network element;
determining a reconfiguration for the first network element to overcome the incompatible communication configuration;
automatically generating a message including information to reconfigure the first network element to correct the incompatible communication configuration; and
communicating the message to the first network element.
2. The method of claim 1 further comprising:
at the network management entity, automatically negotiating a reconfiguration for the first network element to correct the incompatible communication configuration.
3. The method of claim 1 further comprising:
at the network management entity, sending a request for configuration information to the first network element.
4. The method of claim 1 further comprising:
receiving second configuration information from a second network element;
determining the second configuration information is incompatible with the received configuration information from the first network element; and
wherein determining the reconfiguration for the first network element comprises determining a reconfiguration resolving the incompatibility.
5. The method of claim 4 wherein resolving the incompatibility comprises:
automatically negotiating a reconfiguration for the first network element to correct a communication configuration of one of the first network element and the second network element to improve subsequent communication performance between the first network element and the second network element.
6. The method of claim 4 wherein resolving the incompatibility comprises:
yielding the resolution of the incompatibility to another network management entity that takes precedence over the network management entity.
7. The method of claim 4 further comprising:
determining a reconfiguration for the second network element that, together with the reconfiguration for the first network element, comprises a reconfiguration resolving the incompatibility.
8. The method of claim 7 wherein automatically generating a message comprises:
automatically generating a single message that includes information to reconfigure the first network element and to reconfigure the second network element to correct the incompatible communication configuration.
9. The method of claim 8 wherein communicating the message to the first network element comprises communicating the single message to only the first network element with a command to instruct the first network element to notify the second network element of the reconfiguration to correct the incompatible communication configuration.
10. The method of claim 1 wherein receiving configuration information comprises receiving information about possible compatible configurations for the first network element and a second network element.
11. The method of claim 1 wherein receiving configuration information comprises receiving information about a recommended reconfiguration for the first network element.
12. A method comprising:
at a network management entity managing communication among a plurality of network elements,
receiving from a first network element of the plurality of network elements data indicating that one or more communication parameters is mismatched with one or more corresponding communication parameters of a second network element of the plurality of network elements;
determining a reconfiguration suitable to repair network configuration; and
communicating data for the reconfiguration to one or both of the first network element and the second network element.
13. The method of claim 12 further comprising:
at the network management entity, negotiating with a second network management entity the reconfiguration suitable to repair the network configuration.
14. The method of claim 13 further comprising:
yielding resolution of the mismatch of the one or more communication parameters to the second network management entity according to an established network precedence for network management.
15. The method of claim 12 wherein determining a reconfiguration suitable to repair network configuration comprises:
selecting matching values of the one or more communication parameters and the one or more corresponding communication parameters, the matching values suitable to optimize subsequent data communication between the first network element and the second network element.
16. The method of claim 12 wherein communicating data for the reconfiguration comprises:
communicating a message to the first network element to cause the first network element to
reconfigure the one or more communication parameters, and
communicate a notification of the reconfiguration to the second network element.
17. The method of claim 12 further comprising:
receiving from the first network element information about possible compatible reconfigurations for the first network element and the second network element suitable to repair the network configuration.
18. A method comprising:
at a network management entity managing communication in a network,
receiving a communication from a first logical point of management, the logical point of management managing configuration of a plurality of network elements, the communication including data indicating that one or more communication parameters of a first network element under management of the logical point of management is mismatched with one or more corresponding communication parameters of a second network element;
determining a reconfiguration suitable to optimize communication between the first network element and the second network element; and
communicating a message including data for the reconfiguration to the first logical point of management.
19. The method of claim 18 wherein communicating the message comprises:
communicating instructions to the first logical point of management to reconfigure the first network element, and
communicating instructions to the first logical point of management to notify a second logical point of management which manages configuration of the second network element.
20. The method of claim 18 wherein receiving a communication from a first logical point of management comprises receiving information from the first logical point of management about possible compatible configurations for the first network element and a second network element.
US13/897,918 2009-08-06 2013-05-20 Method and system for matching and repairing network configuration Abandoned US20130262604A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/897,918 US20130262604A1 (en) 2009-08-06 2013-05-20 Method and system for matching and repairing network configuration

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US23172609P 2009-08-06 2009-08-06
US12/850,858 US8458305B2 (en) 2009-08-06 2010-08-05 Method and system for matching and repairing network configuration
US13/897,918 US20130262604A1 (en) 2009-08-06 2013-05-20 Method and system for matching and repairing network configuration

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/850,858 Continuation US8458305B2 (en) 2009-08-06 2010-08-05 Method and system for matching and repairing network configuration

Publications (1)

Publication Number Publication Date
US20130262604A1 true US20130262604A1 (en) 2013-10-03

Family

ID=43535632

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/850,858 Expired - Fee Related US8458305B2 (en) 2009-08-06 2010-08-05 Method and system for matching and repairing network configuration
US13/897,918 Abandoned US20130262604A1 (en) 2009-08-06 2013-05-20 Method and system for matching and repairing network configuration

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/850,858 Expired - Fee Related US8458305B2 (en) 2009-08-06 2010-08-05 Method and system for matching and repairing network configuration

Country Status (1)

Country Link
US (2) US8458305B2 (en)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102340411B (en) * 2010-07-26 2016-01-20 深圳市腾讯计算机系统有限公司 A kind of server information data management method and system
WO2012050590A1 (en) * 2010-10-16 2012-04-19 Hewlett-Packard Development Company, L.P. Device hardware agent
US8462666B2 (en) * 2011-02-05 2013-06-11 Force10 Networks, Inc. Method and apparatus for provisioning a network switch port
US20130205038A1 (en) * 2012-02-06 2013-08-08 International Business Machines Corporation Lossless socket-based layer 4 transport (reliability) system for a converged ethernet network
US9253043B2 (en) 2013-11-30 2016-02-02 At&T Intellectual Property I, L.P. Methods and apparatus to convert router configuration data
US9338231B2 (en) * 2014-03-18 2016-05-10 Sling Media, Inc Methods and systems for recommending communications configurations
US10057330B2 (en) * 2014-11-04 2018-08-21 Intel Corporation Apparatus and method for deferring asynchronous events notifications
WO2016171682A1 (en) * 2015-04-22 2016-10-27 Hewlett Packard Enterprise Development Lp Configuring network devices
WO2017165716A1 (en) 2016-03-23 2017-09-28 Lutron Electronics Co., Inc. Configuring control devices operable for a load control environment
US11606257B2 (en) * 2019-01-09 2023-03-14 Vmware, Inc. Topology-aware control information dissemination in software-defined networking environments
US10985980B2 (en) * 2019-04-16 2021-04-20 International Business Machines Corporation Detecting cloned members of an enterprise messaging environment
CN114039798B (en) * 2021-11-30 2023-11-03 绿盟科技集团股份有限公司 Data transmission method and device and electronic equipment


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6938086B1 (en) * 2000-05-23 2005-08-30 Network Appliance, Inc. Auto-detection of duplex mismatch on an ethernet
US6920485B2 (en) * 2001-10-04 2005-07-19 Hewlett-Packard Development Company, L.P. Packet processing in shared memory multi-computer systems
JP2006525729A (en) * 2003-05-01 2006-11-09 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Ad hoc network, network device, and configuration management method thereof
US20070143471A1 (en) * 2005-12-20 2007-06-21 Hicks Jeffrey T Methods, systems and computer program products for evaluating suitability of a network for packetized communications
US20090086642A1 (en) * 2007-09-28 2009-04-02 Cisco Technology, Inc. High availability path audit
US8131992B2 (en) * 2009-07-01 2012-03-06 Infoblox Inc. Methods and apparatus for identifying the impact of changes in computer networks

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6859843B1 (en) * 2000-11-20 2005-02-22 Hewlett-Packard Development Company, L.P. Systems and methods for reconfiguring network devices
US20040236843A1 (en) * 2001-11-15 2004-11-25 Robert Wing Online diagnosing of computer hardware and software
US7023973B2 (en) * 2003-04-28 2006-04-04 Microsoft Corporation Dual-band modem and service
US20050044196A1 (en) * 2003-08-08 2005-02-24 Pullen Benjamin A. Method of and system for host based configuration of network devices
US7620848B1 (en) * 2003-11-25 2009-11-17 Cisco Technology, Inc. Method of diagnosing and repairing network devices based on scenarios
US20050204215A1 (en) * 2004-02-13 2005-09-15 Nokia Corporation Problem solving in a communications system
US20060064485A1 (en) * 2004-09-17 2006-03-23 Microsoft Corporation Methods for service monitoring and control
US20080215711A1 (en) * 2005-02-23 2008-09-04 Danny Ben Shitrit Configuring output on a communication device
US9292374B2 (en) * 2005-03-14 2016-03-22 Rhapsody International Inc. System and method for automatically uploading analysis data for customer support
US20060233114A1 (en) * 2005-04-15 2006-10-19 Microsoft Corporation Method and apparatus for performing wireless diagnsotics and troubleshooting
US7586854B2 (en) * 2006-03-13 2009-09-08 Alcatel Lucent Dynamic data path component configuration apparatus and methods
US7925765B2 (en) * 2006-04-07 2011-04-12 Microsoft Corporation Cooperative diagnosis in a wireless LAN
US20100112997A1 (en) * 2006-08-16 2010-05-06 Nuance Communications, Inc. Local triggering methods, such as applications for device-initiated diagnostic or configuration management
US8024618B1 (en) * 2007-03-30 2011-09-20 Apple Inc. Multi-client and fabric diagnostics and repair
US20090216867A1 (en) * 2008-02-15 2009-08-27 !J Incorporated Vendor-independent network configuration tool
US20100149994A1 (en) * 2008-12-15 2010-06-17 At&T Intellectual Property I, L.P. Systems Configured to Automatically Identify Open Shortest Path First (OSPF) Protocol Problems in a Network and Related Computer Program Products and Methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dictionary.com, definition of negotiate, 3 pages, September 20, 2008 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130227338A1 (en) * 2012-02-28 2013-08-29 International Business Machines Corporation Reconfiguring interrelationships between components of virtual computing networks
US9270523B2 (en) * 2012-02-28 2016-02-23 International Business Machines Corporation Reconfiguring interrelationships between components of virtual computing networks
US20160134487A1 (en) * 2012-02-28 2016-05-12 International Business Machines Corporation Reconfiguring interrelationships between components of virtual computing networks
US9686146B2 (en) * 2012-02-28 2017-06-20 International Business Machines Corporation Reconfiguring interrelationships between components of virtual computing networks
US9860147B2 (en) 2014-05-27 2018-01-02 Huawei Technologies Co., Ltd. Method and device for generating CNM
US20230224999A1 (en) * 2021-05-10 2023-07-13 Apple Inc. Power saving for sdt procedure

Also Published As

Publication number Publication date
US20110035474A1 (en) 2011-02-10
US8458305B2 (en) 2013-06-04

Similar Documents

Publication Publication Date Title
US8458305B2 (en) Method and system for matching and repairing network configuration
CN107409066B (en) System and method for automatic detection and configuration of server uplink network interfaces
US7941539B2 (en) Method and system for creating a virtual router in a blade chassis to maintain connectivity
US7962587B2 (en) Method and system for enforcing resource constraints for virtual machines across migration
CN112398676B (en) Vendor-independent profile-based modeling of service access endpoints in a multi-tenant environment
US10938660B1 (en) Automation of maintenance mode operations for network devices
US8386825B2 (en) Method and system for power management in a virtual machine environment without disrupting network connectivity
US11201782B1 (en) Automation of maintenance mode operations for network devices
US7984123B2 (en) Method and system for reconfiguring a virtual network path
US9806911B2 (en) Distributed virtual gateway appliance
US20120131662A1 (en) Virtual local area networks in a virtual machine environment
US20110292807A1 (en) Method and system for sideband communication architecture for supporting manageability over wireless lan (wlan)
US8095661B2 (en) Method and system for scaling applications on a blade chassis
US9077552B2 (en) Method and system for NIC-centric hyper-channel distributed network management
US9071508B2 (en) Distributed fabric management protocol
WO2014000292A1 (en) Migration method, serving control gateway and system for virtual machine across data centres
EP2656212B1 (en) Activate attribute for service profiles in unified computing system
US10581669B2 (en) Restoring control-plane connectivity with a network management entity
CN114521322A (en) Dynamic discovery of service nodes in a network
US9794146B2 (en) Methods and systems for a monitoring device to execute commands on an attached switch
JP2016105576A (en) Communication path changeover device, control method therefor and program
US11652734B2 (en) Multicasting within a mutual subnetwork
CN110417568B (en) NFV strategy negotiation method and system
US9385921B1 (en) Provisioning network services

Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ELZUR, URI;SHAH, HEMAL;THALER, PATRICIA;SIGNING DATES FROM 20100730 TO 20100802;REEL/FRAME:030457/0841

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201


AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120


AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION