US20140280841A1 - Scalable distributed control plane for network switching systems - Google Patents

Scalable distributed control plane for network switching systems Download PDF

Info

Publication number
US20140280841A1
US20140280841A1 (application US14/062,817; US201314062817A)
Authority
US
United States
Prior art keywords
networking
packet
information
networking packet
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/062,817
Inventor
Keshav G. Kamble
Dar-Ren Leu
Vijoy A. Pandey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Enterprise Solutions Singapore Pte Ltd
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US14/062,817 priority Critical patent/US20140280841A1/en
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KAMBLE, KESHAV G., LEU, DAR-REN, PANDEY, VIJOY A.
Publication of US20140280841A1 publication Critical patent/US20140280841A1/en
Assigned to LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD. reassignment LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2101/00Indexing scheme associated with group H04L61/00
    • H04L2101/60Types of network addresses
    • H04L2101/618Details of network addresses
    • H04L2101/622Layer-2 addresses, e.g. medium access control [MAC] addresses

Definitions

  • the present invention relates to data center infrastructure, and more particularly, this invention relates to a scalable distributed control plane for network switching systems.
  • as the size of a network increases, the demands made of a control plane for the network (which contains processors, buses, I/O, and other associated resources across many different physical entities) also increase.
  • a control plane may have limited physical capabilities and is limited to a single switching component in the network. Therefore, each control plane would be incapable of scaling to the degree necessary to handle all demands made of it when the size of a physical networking system increases beyond a certain point.
  • a method for processing a first networking packet within a networking system includes receiving a first networking packet using a physical networking switch, classifying, using the physical networking switch, the first networking packet to produce a packet classification, generating, using the physical networking switch, a second networking packet based on the first networking packet, forwarding the second networking packet using the physical networking switch, receiving the second networking packet using a physical host server, wherein the physical host server is adapted to host a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol, receiving, using a VM hosted by the physical host server, the second networking packet, decapsulating, using the VM, the second networking packet to retrieve information about the first networking packet, handling, using the VM, processing of the first networking packet using the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination, encapsulating, using the VM, the first networking packet into a third networking packet including the forwarding information, and forwarding, using the VM, the third networking packet according to the forwarding information.
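The method above can be pictured as a minimal sketch in Python, using plain dicts as stand-in packets. All names and field layouts (`classify`, `encapsulate`, the `egress_port` value, the MAC address) are illustrative assumptions, not details taken from the patent.

```python
def classify(packet):
    # Produce a packet classification from the first networking packet.
    return packet["protocol"]

def encapsulate(inner, dst_mac, vlan_tag):
    # Build an outer (second) packet addressed to a control-plane VM.
    return {"dst_mac": dst_mac, "vlan": vlan_tag, "payload": inner}

def decapsulate(outer):
    # Retrieve the encapsulated packet (or information about it).
    return outer["payload"]

def vm_handle(outer):
    # A VM decapsulates the second packet, obtains forwarding information
    # for the first packet, and re-encapsulates it into a third packet.
    first = decapsulate(outer)
    forwarding_info = {"egress_port": 7}  # stand-in for a real lookup
    return {"forwarding": forwarding_info, "payload": first}

first_packet = {"protocol": "STP", "data": b"bpdu"}
classification = classify(first_packet)
second_packet = encapsulate(first_packet, "02:00:00:00:00:01", 100)
third_packet = vm_handle(second_packet)
```

The third packet carries both the original packet and forwarding information sufficient for delivery to its intended destination.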
  • FIG. 1 illustrates a network architecture, in accordance with one embodiment
  • FIG. 2 shows a representative hardware environment that may be associated with the servers and/or clients of FIG. 1 , in accordance with one embodiment.
  • FIG. 3 shows a simplified diagram of a network switching system, according to one embodiment.
  • FIG. 4 shows a more detailed view of a network switching system, according to one embodiment.
  • FIG. 5 shows a more detailed view of a control plane server, in accordance with one embodiment.
  • FIG. 6 is a flowchart of a method in one embodiment.
  • FIG. 7 shows some exemplary records of information, according to one embodiment.
  • FIG. 8 is a flowchart of a method, according to one embodiment.
  • an improved networking system comprises a scalable and distributed virtual control plane and at least one physical networking switch.
  • the scalable and distributed virtual control plane comprises at least one physical host server that is capable of hosting a plurality of virtual machines (VMs).
  • Each VM is capable of handling the processing of at least one type of networking packet that is received by the physical networking switch.
  • Each server hosting a VM which is capable of handling the processing of at least one type of networking packet is identifiable and accessible using a physical CPU port and a virtual local area network (VLAN) tag, while each VM is accessible and identified by a unique VM media access control (MAC) address, according to one embodiment.
  • the distributed virtual control plane enables offloading of the processing of control packets to one or more VMs, and also the flexibility and ability to easily scale the control plane in response to changes in the networking system's scalability requirements due to additions resulting in the expansion of the data plane.
  • the distributed virtual control plane comprises multiple VMs, with each VM being capable of processing control packets that are of a different type than the other VMs. In this way, divisions in the control plane may be made across the various VMs. There are many different possible combinations to optimize the usage of the distributed virtual control plane given specifications and additional user requirements or conditions that are desired to be met.
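The per-protocol division of the control plane described above can be pictured as a simple dispatch table; the protocol names, CPU ports, VLAN tags, and MAC addresses below are purely illustrative assumptions:

```python
# Hypothetical dispatch table: each control-packet type is handled by a
# dedicated VM, identified by a server CPU port, a VLAN tag, and a VM MAC.
CONTROL_PLANE_VMS = {
    "STP":   {"server_port": 1, "vlan": 10, "vm_mac": "02:00:00:00:00:01"},
    "TRILL": {"server_port": 1, "vlan": 10, "vm_mac": "02:00:00:00:00:02"},
    "VXLAN": {"server_port": 2, "vlan": 20, "vm_mac": "02:00:00:00:00:03"},
}

def select_vm(packet_type):
    # Offload processing of this packet type to its dedicated VM.
    return CONTROL_PLANE_VMS[packet_type]
```

Scaling the control plane then amounts to adding entries (and VMs) to this table as the data plane grows.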
  • a networking system includes at least one physical networking switch and a scalable and distributed virtual control plane.
  • the switch has logic adapted to receive a first networking packet, logic adapted to classify the first networking packet to produce a packet classification, logic adapted to generate a second networking packet based on the first networking packet, and logic adapted to forward the second networking packet.
  • the scalable and distributed virtual control plane has at least one physical host server adapted to host a plurality of virtual machines (VMs), each VM being adapted for providing a control plane for a particular protocol, and a network connecting the at least one physical networking switch to the at least one physical host server.
  • the plurality of VMs include logic adapted to receive the second networking packet, logic adapted to decapsulate the second networking packet to retrieve information about the first networking packet, logic adapted to handle processing of the first networking packet using the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination, logic adapted to encapsulate the first networking packet into a third networking packet including the forwarding information, and logic adapted to forward the third networking packet according to the forwarding information.
  • a method for processing a first networking packet within a networking system includes receiving a first networking packet using a physical networking switch, classifying, using the physical networking switch, the first networking packet to produce a packet classification, generating, using the physical networking switch, a second networking packet based on the first networking packet, forwarding the second networking packet using the physical networking switch, receiving the second networking packet using a physical host server, wherein the physical host server is adapted to host a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol, receiving, using a VM hosted by the physical host server, the second networking packet, decapsulating, using the VM, the second networking packet to retrieve information about the first networking packet, handling, using the VM, processing of the first networking packet using the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination, encapsulating, using the VM, the first networking packet into a third networking packet including the forwarding information, and forwarding, using the VM, the third networking packet according to the forwarding information.
  • a computer program product for processing a first networking packet within a networking system includes a computer readable storage medium having computer readable program code embodied therewith.
  • the computer readable program code includes computer readable program code configured to receive a first networking packet, computer readable program code configured to classify the first networking packet to produce a packet classification, computer readable program code configured to determine a destination address to send the first networking packet based on the packet classification in order to provide processing for the first networking packet by: selecting a condition from a look-up table to which the packet classification adheres, selecting an entry associated with the selected condition, and determining a set of information associated with the selected entry, computer readable program code configured to generate a second networking packet by encapsulating the first networking packet into the second networking packet, and computer readable program code configured to forward the second networking packet to at least one physical host server to handle processing of the first networking packet.
  • the look-up table includes a plurality of entries, a plurality of conditions, and a plurality of sets of information, each condition is associated with one entry, each entry is associated with one set of information, each set of information includes destination information for one of a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol, any entry is capable of being associated with more than one condition, each set of information includes at least: an address identifier (ID), a service tag, and a media access control (MAC) ID, the address ID includes information that corresponds to one or more egress ports of a physical networking switch, the MAC ID includes a destination address that corresponds to at least one VM of the plurality of VMs designated to receive the second networking packet hosted by the at least one physical host server based on the packet classification, and the service tag includes information that corresponds to a membership of the second networking packet within one virtual local area network (VLAN) in accordance with any one of: a networking protocol, a port based protocol, and a port
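The look-up table structure above (conditions sharing entries, each entry tied to one set of information) might be sketched as follows; every concrete value here is an illustrative assumption:

```python
# Illustrative look-up table: several conditions may share one entry, and
# each entry maps to one set of information holding an address ID, a
# service (VLAN) tag, and a destination VM MAC ID.
CONDITIONS = {
    "STP":   0,   # two conditions ...
    "RSTP":  0,   # ... associated with the same entry
    "VXLAN": 1,
}

INFO_SETS = [
    {"address_id": [3],    "service_tag": 100, "mac_id": "02:00:00:00:00:01"},
    {"address_id": [4, 5], "service_tag": 200, "mac_id": "02:00:00:00:00:03"},
]

def lookup(classification):
    # Select the condition the classification adheres to, then its entry,
    # then the set of information associated with that entry.
    return INFO_SETS[CONDITIONS[classification]]
```

The returned set of information supplies the egress port(s), the VLAN membership tag, and the MAC address of the VM designated to receive the second networking packet.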
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as “logic,” a “circuit,” a “module,” or a “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a non-transitory computer readable storage medium.
  • a non-transitory computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples (a non-exhaustive list) of the non-transitory computer readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), a Blu-ray disc read-only memory (BD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a non-transitory computer readable storage medium may be any tangible medium that is capable of containing or storing a program or application for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a non-transitory computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device, such as an electrical connection having one or more wires, an optical fibre, etc.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fibre cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer or server may be connected to the user's computer through any type of network, including a local area network (LAN), storage area network (SAN), and/or a wide area network (WAN), or the connection may be made to an external computer, for example through the Internet using an Internet Service Provider (ISP).
  • These computer program instructions may also be stored in a computer readable medium that may direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • FIG. 1 illustrates a network architecture 100 , in accordance with one embodiment.
  • a plurality of remote networks 102 are provided including a first remote network 104 and a second remote network 106 .
  • a gateway 101 may be coupled between the remote networks 102 and a proximate network 108 .
  • the networks 104 , 106 may each take any form including, but not limited to, a LAN, a WAN such as the Internet, a public switched telephone network (PSTN), an internal telephone network, etc.
  • the gateway 101 serves as an entrance point from the remote networks 102 to the proximate network 108 .
  • the gateway 101 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 101 , and a switch, which furnishes the actual path in and out of the gateway 101 for a given packet.
  • At least one data server 114 is coupled to the proximate network 108 , and is accessible from the remote networks 102 via the gateway 101 .
  • the data server(s) 114 may include any type of computing device/groupware. Coupled to each data server 114 is a plurality of user devices 116 .
  • Such user devices 116 may include a desktop computer, laptop computer, handheld computer, printer, and/or any other type of logic-containing device.
  • a user device 111 may also be directly coupled to any of the networks, in some embodiments.
  • a peripheral 120 or series of peripherals 120 may be coupled to one or more of the networks 104 , 106 , 108 .
  • databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 104 , 106 , 108 .
  • a network element may refer to any component of a network.
  • methods and systems described herein may be implemented with and/or on virtual systems and/or systems which emulate one or more other systems, such as a UNIX system which emulates an IBM z/OS environment, a UNIX system which virtually hosts a MICROSOFT WINDOWS environment, a MICROSOFT WINDOWS system which emulates an IBM z/OS environment, etc.
  • This virtualization and/or emulation may be enhanced through the use of VMWARE software, in some embodiments.
  • one or more networks 104 , 106 , 108 may represent a cluster of systems commonly referred to as a “cloud.”
  • cloud computing shared resources, such as processing power, peripherals, software, data, servers, etc., are provided to any system in the cloud in an on-demand relationship, thereby allowing access and distribution of services across many computing systems.
  • Cloud computing typically involves an Internet connection between the systems operating in the cloud, but other techniques of connecting the systems may also be used, as known in the art.
  • FIG. 2 shows a representative hardware environment associated with a user device 116 and/or server 114 of FIG. 1 , in accordance with one embodiment.
  • FIG. 2 illustrates a typical hardware configuration of a workstation having a central processing unit (CPU) 210 , such as a microprocessor, and a number of other units interconnected via one or more buses 212 which may be of different types, such as a local bus, a parallel bus, a serial bus, etc., according to several embodiments.
  • the workstation shown in FIG. 2 includes a Random Access Memory (RAM) 214 , a Read Only Memory (ROM) 216 , an I/O adapter 218 for connecting peripheral devices such as disk storage units 220 to the one or more buses 212 , a user interface adapter 222 for connecting a keyboard 224 , a mouse 226 , a speaker 228 , a microphone 232 , and/or other user interface devices such as a touch screen, a digital camera (not shown), etc., to the one or more buses 212 , a communication adapter 234 for connecting the workstation to a communication network 235 (e.g., a data processing network), and a display adapter 236 for connecting the one or more buses 212 to a display device 238 .
  • the workstation may have resident thereon an operating system such as the MICROSOFT WINDOWS Operating System (OS), a MAC OS, a UNIX OS, etc. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned.
  • a preferred embodiment may be written using JAVA, XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology.
  • Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.
  • distributed switches relying on a cell-based fabric interconnect have an advantage of providing predictable, low latency for setups in which interconnectivity between a large number of ports is desired.
  • a distributed switch appears to be a single, very large switch, with the single ingress lookup specifying the index needed to traverse the network.
  • the edge facing switches are interconnected using cell-based Clos fabrics, which are wired in a fixed fashion and rely on the path selection made at the ingress.
  • the networking system 300 may also be referred to as a switch.
  • the networking system 300 comprises at least one networking switch 310 a . . . 310 n which may be arranged in a network switching system 304 .
  • the network switching system 304 is coupled to a scalable and distributed virtual control plane 302 via a network 308 .
  • the network 308 may be any type of network comprising any number of networking elements therein, such as switches, routers, subnets, wires, cables, etc., as would be known to one of skill in the art.
  • the network 308 may be as simple as a cable plant and/or communication channels utilizing a communication network.
  • the scalable and distributed virtual control plane 302 may comprise at least one physical host server 306 a , and may include a plurality of physical host servers 306 a , . . . , 306 m , which may or may not be physically located in the same location or area, geographically, as the network switching system 304 . That is to say, the network switching system 304 and the scalable and distributed virtual control plane 302 may be located remotely of one another.
  • Each physical host server 306 a , . . . , 306 m comprises at least one port 312 for connecting to other network elements and forwarding and/or receiving packets. Any connection type known in the art may be used with the at least one port 312 .
  • the network switching system 304 may comprise at least one physical networking switch 310 a , . . . , 310 n , such as a networking switch of a type known in the art.
  • Each of the physical networking switches 310 a , . . . 310 n may include at least one ingress port to receive an incoming packet, and at least one egress port to transmit or forward an outgoing packet, shown collectively as ports 314 .
  • Each ingress and/or egress port 314 may be established using a dedicated and separate physical port, each of which is identified by a physical port address, or using a dedicated virtual port identified by a virtual port address, where a single physical port may be programmed to incorporate a communication link to one or more virtual ports, in various approaches.
  • Each of the ingress ports may be established as a first virtual port identified with a first virtual address, e.g., a first MAC address, and each of the egress ports may be established as a second virtual port identified with a second virtual address, e.g., a second MAC address.
  • first virtual port may be associated with a first physical port address
  • second virtual port may be associated with a second physical port address
  • the first and second virtual ports may be associated with a single physical port address, etc.
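The virtual-to-physical port associations above can be sketched minimally; the MAC and port identifiers below are illustrative assumptions, not values from the patent:

```python
# Two virtual ports (e.g., an ingress MAC and an egress MAC) may be
# associated with a single physical port address, or with separate ones.
VIRTUAL_TO_PHYSICAL = {}

def bind_virtual_port(virtual_mac, physical_port):
    # Associate a virtual port (identified by a virtual MAC address)
    # with a physical port address.
    VIRTUAL_TO_PHYSICAL[virtual_mac] = physical_port

bind_virtual_port("02:00:00:00:01:01", "phy0")  # first virtual port (ingress)
bind_virtual_port("02:00:00:00:01:02", "phy0")  # second virtual port (egress)
```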
  • the physical networking switch 310 may comprise a network interface 406 , a switching processor 402 , and a CPU subsystem 404 (having a CPU or some other processor therein).
  • the network interface 406 includes a plurality of physical ports 314 , each physical port 314 being identified using a unique physical port address, in one approach.
  • the switching processor 402 may be any type of processor known in the art, such as a CPU, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller, a microprocessor, etc., and may be included in a switching platform that provides the capability to receive an incoming packet via an ingress port, process the incoming packet to acquire or generate packet control information, and transmit (or forward) an outgoing packet, corresponding to the incoming packet, via an egress port using the packet control information.
  • the ingress and egress ports are generally labeled ports 314 for the sake of these descriptions, and a port 314 used as an ingress port in one situation may be used as an egress port in another situation.
  • the at least one physical networking switch 310 may comprise logic adapted to receive a first networking packet, logic adapted to generate a second networking packet that encapsulates at least a payload of the first networking packet, and logic adapted to forward the second networking packet (such as to the physical host server 306 ).
  • the switching processor 402 may use the local CPU subsystem 404 to provide processing capability to manage the generation and transmission of the second networking packet (outgoing packet), which may encapsulate the first networking packet or portions thereof (including information based on the first networking packet).
  • the CPU subsystem 404 may offload the classification and management of packet processing to the scalable and distributed control plane 302 .
  • the switching processor 402 may perform these functions itself.
  • the physical host server 306 may comprise a network interface 502 , a virtual network interface 504 , and a plurality of VMs 508 (VM0, VM1, . . . , VMk). Each of the VMs 508 may be associated with one virtual network interface port 506 identified using a dedicated MAC address (MAC-0, MAC-1, . . . , MAC-k), in one approach. Furthermore, each of the MAC addresses of the virtual network interface 504 may be associated with at least one physical port 312 of the network interface 502 , each physical port 312 being identified using a physical port address, in one approach.
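A sketch of how the virtual network interface might demultiplex an incoming packet to a VM by its destination MAC address; the MAC addresses and VM names are illustrative assumptions, not the patent's implementation:

```python
# VM MAC address -> VM identifier, mirroring the dedicated MAC address
# (MAC-0, MAC-1, ...) assigned to each VM's virtual interface port.
VNIC_PORTS = {
    "02:00:00:00:00:00": "VM0",
    "02:00:00:00:00:01": "VM1",
}

def deliver_to_vm(packet):
    # Forward the incoming packet to the VM whose virtual network
    # interface port owns the packet's destination MAC address.
    return VNIC_PORTS[packet["dst_mac"]]
```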
  • An incoming packet (such as the second networking packet) received using one of the physical ports 312 of the network interface 502 may be processed and appropriately forwarded to a corresponding VM 508 using information included within the incoming packet, such as header information of an external packet or an internal, encapsulated packet.
  • each VM 508 may be adapted to process at least one type of networking packet.
  • the VM 508 may include logic adapted to receive the second networking packet, logic adapted to decapsulate the second networking packet to retrieve at least the first networking packet or some portion thereof (if necessary, such as when the second networking packet is designed to encapsulate another packet or portion thereof, such as an overlay packet or some other type of packet capable of encapsulating another packet, such as the first networking packet), and logic adapted to handle processing of the first networking packet to obtain forwarding information sufficient to allow the at least one physical networking switch 310 or some other component of the network 308 to deliver the first networking packet to its intended destination. Any processing that is helpful or useful in performing any functions related to the networking packets may be performed, such as reading the networking packet, classifying the networking packet, resending the networking packet, etc.
  • the at least one VM 508 also includes logic adapted to encapsulate the first networking packet into a third networking packet comprising the forwarding information.
  • the third networking packet may be forwarded to another VM 508 , another physical host server 306 , another networking switch 310 , or any other component capable of delivering the third networking packet to its intended destination.
  • the third networking packet is adapted to encapsulate the first networking packet or a portion thereof such that once the third networking packet is delivered to its intended destination, the first networking packet may be decapsulated from the third networking packet and delivered to its intended destination as the first networking packet.
  • the at least one VM 508 also includes logic adapted to forward the third networking packet, such as to the at least one physical networking switch 310 or some other component of the network 308 capable of handling the third networking packet, such that the third networking packet may be delivered to its intended destination.
  • the third networking packet may be transmitted via the VM 508 to the virtual network interface 504 which appropriately forwards the third networking packet to an associated physical port 312 of the network interface 502 of the physical host server 306 .
  • the physical host server 306 is programmable in order to establish at least one virtual network interface port 506 that may be used to establish a communication channel between:
  • a scalable and distributed virtual control plane 302 may include at least one physical host server 306 having a plurality of VMs 508 running thereon which are configured to use resources of the physical host server 306 .
  • the scalable and distributed virtual control plane 302 may comprise a first physical host server 306 a hosting a first VM 508 (e.g., VM0) used to process networking packets: (i) that are of certain types, and/or (ii) when a certain condition is met for a networking packet, and/or (iii) that require a specific handling requirement, e.g., the networking packets utilize a certain protocol type, such as overlay (NVGRE, VXLAN etc.), TRILL, STP, etc.
  • some additional requirements may comprise the processing of certain networking packets that require special handling, e.g., security or service priority, where a dedicated VM 508 hosted by a certain physical host server 306 may be used to meet such demand.
  • many VMs 508 may be established where each VM 508 is programmed to handle any user defined requirement or specified condition, such that the processing of networking packets may be split according to any user desired methodology and/or according to an optimum setting designed by the scalable and distributed control plane 302 .
  • Exemplary records of information that may be used in the classification processing of the first networking packet on at least one of the plurality of VMs 508 hosted by the at least one physical host server 306 , as described above in FIGS. 3-5 , are shown in FIG. 6 , according to one embodiment.
  • As shown in FIG. 6 , there are many options available to those of skill in the art for organizing, programming, and/or storing such information.
  • a simple look-up table 600 is shown in FIG. 6 for this example.
  • Other types of ways to organize and/or store the information may be used, such as a database, a list, a chart, an indexed file system, etc.
  • each record (CPU_Entry_N) 602 in the look-up table 600 includes a pre-programmed set of information 604 . Furthermore, there is a set of predetermined conditions 606 , with each predetermined condition 606 being matched to at least one of the various records 602 .
  • the set of information 604 stored for each record 602 includes: (a) egress port information (or CPU_Port) that may also include the physical address of the network interface port to be used for transmitting the second networking packet, where the egress port information may be a multicast (MC) group identifier intended for one or more CPU_Ports; (b) information associated with a service tag (e.g., IEEE 802.1q S-tag); and (c) MAC address information corresponding to the VM designated to receive and process the second networking packet (which includes the first networking packet or portion thereof), where the MAC address information may be a MC MAC address intended to forward the second networking packet to one or more VMs.
  • the pre-programmed set of information 604 may be retrieved upon activation of a corresponding CPU_Port_entry process.
  • Each CPU_Port_entry process may be started when a certain condition 606 is met during the classification of the first networking packet by the networking switch or some other component of the network.
  • At least one of the preprogrammed conditions 606 may be used to select any one of the egress ports (e.g., CPU_Port_entry-1 through CPU_Port_entry-N), such that it is feasible to direct one or more conditions 606 to select the same CPU_Port_entry. This ensures that flexibility is provided to match and distribute the virtual control plane processing capability based on the networking switch processing requirements and/or desired performance.
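The record layout and condition matching described above can be sketched as a small data structure. The field names (CPU_Port, S-tag, VM MAC) follow the description; the condition names and example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CpuEntry:
    """Pre-programmed set of information (604) for one CPU_Entry record (602)."""
    egress_port: str   # CPU_Port: physical egress port or MC group identifier
    service_tag: int   # IEEE 802.1q S-tag value
    vm_mac: str        # MAC of the VM (or MC MAC addressing multiple VMs)

# Hypothetical table: several conditions (606) may select the same entry (602).
LOOKUP_TABLE = {
    "is_stp":     CpuEntry("CPU_Port_entry-1", 100, "02:00:00:00:00:01"),
    "is_bgp":     CpuEntry("CPU_Port_entry-2", 200, "02:00:00:00:00:02"),
    "is_overlay": CpuEntry("CPU_Port_entry-2", 200, "02:00:00:00:00:02"),
}

def select_entry(conditions_met):
    """Return the first pre-programmed entry whose condition is met, if any."""
    for cond in conditions_met:
        if cond in LOOKUP_TABLE:
            return LOOKUP_TABLE[cond]
    return None
```

Note that, as described above, two distinct conditions ("is_bgp" and "is_overlay") can point at the same CPU_Port_entry.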
  • This mechanism enables partitioning of the whole control plane of a networking switch into multiple smaller sub-control planes.
  • the control plane is effectively split into multiple sub-control planes.
  • each sub-control plane may process one or more control protocols, and/or each sub-control plane may execute as a VM on a virtualization platform (such as Hypervisor, Hyper-V, etc.) of a physical host server.
  • control packets may be received for processing based on entries in a look-up table 600 stored locally to a switching processor using each sub-control plane, and a redundant sub-control plane may be created for each sub-control plane to provide for high availability.
  • Now referring to FIG. 7 , a method 700 for processing an incoming networking packet, such as by using a remote VM, is shown according to one embodiment.
  • the method 700 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-6 , among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 7 may be included in method 700 , as would be understood by one of skill in the art upon reading the present descriptions.
  • Each of the steps of the method 700 may be performed by any suitable component of the operating environment.
  • the method 700 may be partially or entirely performed by a virtual control plane, a physical networking switch, a combination thereof, etc.
  • Any type of networking packet may be used in conjunction with method 700 , particularly a control packet in some approaches.
  • method 700 may initiate with operation 702 , where a control packet is classified to obtain a control packet classification.
  • a physical networking switch may receive a first control packet via an ingress port, and the control packet may be classified using any available information, such as a source address of the control packet, specific routing information of the control packet, and/or information that corresponds to a certain property of the control packet.
  • a destination address to send the first networking packet is determined based on the packet classification in order to provide processing for the first networking packet.
  • the determination may be based on: (i) information that is included within the control packet; and/or (ii) information that is extracted from the control packet.
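Operations 702-704 above (classify the control packet, then determine a destination based on the classification) can be sketched as follows. This is a minimal illustration under stated assumptions: the dict-based field names and the classification-to-MAC mapping are hypothetical stand-ins for parsed header fields and programmed table contents:

```python
def classify_control_packet(pkt):
    """Classify a control packet from available information: a reserved
    destination group address, the EtherType, or the transport port."""
    if pkt.get("dst_mac", "").startswith("01:80:c2"):
        return "stp"                      # IEEE reserved group address
    if pkt.get("ethertype") == 0x22F3:
        return "trill"                    # TRILL EtherType
    if pkt.get("dst_port") == 179:
        return "bgp"                      # BGP runs over TCP port 179
    return "default"

# Hypothetical mapping from classification to the MAC address of the VM
# designated to process that class of control packet.
DESTINATIONS = {
    "stp": "02:00:00:00:00:01",
    "bgp": "02:00:00:00:00:02",
}

def destination_for(pkt):
    """Determine the destination address based on the packet classification."""
    return DESTINATIONS.get(classify_control_packet(pkt))
```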
  • a look-up table may be used to determine the destination address. Specifically, a condition may be selected from the look-up table to which the packet classification adheres, an entry may be selected that is associated with the selected condition, and a set of information may be determined that is associated with the selected entry.
  • the look-up table may comprise a plurality of entries, a plurality of conditions, and a plurality of sets of information from which the selected entry and the selected set of information is chosen. According to one embodiment, each condition is associated with one entry, each entry is associated with one set of information, and any entry is capable of being associated with more than one condition.
  • Each set of information may comprise at least: an address ID, a service tag, and a MAC ID.
  • the address ID may comprise information that corresponds to one or more egress ports of the at least one physical networking switch
  • the MAC ID may include a destination address that corresponds to at least one VM designated to receive the second networking packet based on the packet classification
  • the service tag may comprise information that corresponds to a membership of the second networking packet within one VLAN in accordance with any one of a networking protocol, a port based protocol, and a port based networking protocol.
  • the address determination procedures for the control packet may comprise the determination of the following address data information: (i) determining an egress port (or CPU port) of a physical networking switch to be used to transmit a second networking packet encapsulating the control packet, which may also include determining the address of a network interface physical port associated with the determined egress port; (ii) determining information associated with a service tag (e.g., IEEE 802.1q S-tag); and (iii) determining a MAC address corresponding to a VM designated to receive and process the second networking packet.
  • Each of the one or more egress ports may comprise information that corresponds to an egress port of the at least one physical networking switch which is used to transmit the second networking packet, and/or information that corresponds to an address of a network interface physical port associated with the egress port.
  • the look-up table may be used to look-up programmable information that identifies at least one of: (i) an egress port (or CPU port) of the at least one physical networking switch to be used to transmit the second networking packet, which may also include the address of the network interface physical port associated with the egress port, where each CPU entry may identify one or multiple VMs; (ii) a service tag (e.g., IEEE 802.1q S-tag); and (iii) a MAC address corresponding to the VM designated to receive and process the second packet.
  • the identification of a condition for which the CPU Entry or port is to be used to process the content of the second networking packet is based on at least some information included within the control packet, in some approaches.
  • a CPU entry may be used as a pointer to an entry in the look-up table that includes information to be used for generating content for the second networking packet. It is important to note that each CPU entry may identify one or multiple VMs.
  • the information used to determine the destination address may also include at least one of the following: (i) a source address of the control packet; (ii) specific routing information, such as a VLAN; (iii) a specific request to process the second networking packet in accordance with a programmable (predetermined) procedure; and (iv) information that is related to a certain property of the control packet.
  • a second networking packet may be generated that includes at least a portion of the control packet.
  • This second networking packet may be a L2 packet, and may include an outer MAC header where the S-Channel is the S-Tag, and includes the MAC address(es) of the VM(s).
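The L2 framing just described (outer MAC header addressed to the VM, carrying the S-Tag and the encapsulated control packet) can be sketched with a few bytes of header construction. The TPID 0x88A8 is the value commonly used for 802.1ad service tags, and the inner EtherType 0x88B5 (local experimental) is a hypothetical placeholder:

```python
import struct

def build_second_packet(control_packet: bytes, vm_mac: bytes,
                        cpu_port_mac: bytes, s_tag_vid: int) -> bytes:
    """Sketch of the second (L2) networking packet: an outer Ethernet
    header whose destination is the designated VM's MAC address,
    followed by an S-tag carrying the S-Channel VID, an inner
    EtherType, and the encapsulated control packet."""
    s_tag = struct.pack("!HH", 0x88A8, s_tag_vid & 0x0FFF)  # TPID + VID
    inner_type = struct.pack("!H", 0x88B5)  # placeholder inner EtherType
    return vm_mac + cpu_port_mac + s_tag + inner_type + control_packet
```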
  • the second packet may be transmitted (forwarded) to a designated port (CPU_Port) of a physical host server, such as by using an egress port of the physical networking switch.
  • the CPU_Port may be a physical egress port of the physical host server.
  • the physical host server may then receive and process the second networking packet using one or more VMs thereof, which in turn generate a third networking packet based on information included within the second networking packet about the control packet. Furthermore, one VM may transmit the third networking packet back to the networking switch using at least some of the information included within the second networking packet.
  • Referring to FIG. 8 , a flowchart of a method 800 for processing a packet within a networking system is shown, according to one embodiment.
  • the method 800 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-7 , among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 8 may be included in method 800 , as would be understood by one of skill in the art upon reading the present descriptions.
  • Each of the steps of the method 800 may be performed by any suitable component of the operating environment.
  • the method 800 may be partially or entirely performed by a virtual control plane, a physical control plane, a combination thereof, etc.
  • method 800 may initiate with operation 802 , where a first networking packet is received using a first physical networking switch, such as via an ingress port thereof.
  • the first packet is classified using the first physical networking switch.
  • a second networking packet based on the first networking packet is generated using the first physical networking switch.
  • the second networking packet is forwarded using the physical networking switch.
  • the second networking packet is received using a physical host server, wherein the physical host server is adapted to host a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol.
  • each separate protocol, such as open shortest path first (OSPF), spanning tree, border gateway protocol (BGP), or any other protocol known in the art, may have one or more dedicated VMs which are adapted for processing control plane packets for that particular protocol.
  • more than one protocol may be supported by a single VM, when the processing requirements are low enough to be handled by a single VM. However, it is most easily understood and compartmentalized to have a separate VM handle the responsibilities for each separate protocol on a one-to-one basis.
  • the second networking packet is received using a VM hosted by the physical host server.
  • the second networking packet is decapsulated, using the VM, to retrieve information about the first networking packet.
  • processing of the first networking packet is handled, using the VM, according to the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination.
  • the first networking packet is encapsulated into a third networking packet comprising the forwarding information.
  • the third networking packet is forwarded according to the forwarding information.
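The VM-side steps above (receive the second packet, decapsulate it, process the inner first packet to obtain forwarding information, encapsulate it into a third packet, and forward it) can be sketched as one handler. The dict-based packet layout and the two callbacks are hypothetical illustrations, not the actual implementation:

```python
def vm_handle_second_packet(second_packet, control_plane, forward):
    """Sketch of the VM's role: decapsulate the second packet, let the
    control-plane logic derive forwarding information for the inner
    first packet, wrap it in a third packet, and forward that packet
    according to the forwarding information."""
    first_packet = second_packet["payload"]            # decapsulate
    fwd_info = control_plane(first_packet)             # obtain forwarding info
    third_packet = {"fwd": fwd_info, "payload": first_packet}  # encapsulate
    forward(third_packet, fwd_info)                    # forward per fwd info
    return third_packet
```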
  • the first classification data may be generated based on the second set of information, wherein the second set of information may include programmable records of information that are stored within the at least one physical networking switch.
  • the first classification data may include at least: a first address ID comprising information that corresponds to one or more egress ports of the at least one physical networking switch, a service tag comprising information that corresponds to a membership of the second packet within exactly one VLAN in accordance with any one of: a networking protocol, a port based protocol, and a port based networking protocol, and a MAC ID including information that corresponds to at least one VM designated to receive the second packet.
  • each of the one or more egress ports of the at least one physical networking switch may include information that corresponds to an address of a network interface physical port associated with each egress port.
  • Metadata is determined based on information included within the first packet using the first physical networking switch.
  • determining the metadata may comprise using information included within the first packet including at least: a source address of the first packet, specific routing information, a specific request to process the second packet in accordance with a programmable procedure, and information that corresponds to a certain property of the first packet.
  • a second packet is generated using at least the first classification data, the determined metadata, and at least a third set of information using the first physical networking switch, wherein the third set of information is based on information included within the first packet.
  • generating the second packet may comprise using at least: the first classification data, the metadata, and a set of information based on content of the first packet.
  • the second packet is forwarded using an egress port of the first physical networking switch.
  • the second packet is received using an ingress port of a first control plane server.
  • the first physical networking switch and the first control plane server are linked via a network.
  • the network comprises at least one of: a direct link, a wired communication channel, and a wireless communication channel.
  • the method 800 may further include processing the received second packet using the first control plane server, generating a third control packet using information included within the second packet, and transmitting the third packet to the first physical networking switch using an egress port of the first control plane server.
  • the method 800 may be embodied in a computer program product for processing a first networking packet within a networking system.
  • the computer program product may comprise a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to receive a first networking packet; computer readable program code configured to classify the first networking packet to produce a packet classification; computer readable program code configured to determine a destination address to send the first networking packet based on the packet classification in order to provide processing for the first networking packet by: selecting a condition from a look-up table to which the packet classification adheres, selecting an entry associated with the selected condition, and determining a set of information associated with the selected entry, wherein the look-up table comprises a plurality of entries, a plurality of conditions, and a plurality of sets of information, each condition is associated with one entry, each entry is associated with one set of information, each set of information comprises destination information for one of a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol, and any entry is capable of being associated with more than one condition.
  • each set of information may comprise at least: an address ID, a service tag, and a MAC ID, computer readable program code configured to generate a second networking packet by encapsulating the first networking packet into the second networking packet, and computer readable program code configured to forward the second networking packet to at least one physical host server to handle processing of the first networking packet, wherein the address ID comprises information that corresponds to one or more egress ports of a physical networking switch, the MAC ID includes a destination address that corresponds to at least one VM, of the plurality of VMs, designated to receive the second networking packet hosted by the at least one physical host server based on the packet classification, and the service tag comprises information that corresponds to a membership of the second networking packet within one VLAN in accordance with any one of a networking protocol, a port based protocol, and a port based networking protocol.

Abstract

Various aspects relate to processing a first networking packet within a networking system. In one embodiment, a first networking packet is received and classified to produce a packet classification. A second networking packet is generated based on the first networking packet, and forwarded. The second networking packet is received using a physical host server, where the physical host server is adapted to host a plurality of virtual machines (VMs), each VM being configured to provide a control plane for a particular protocol. The second networking packet is received and decapsulated using a VM hosted by the physical host server to retrieve information about the first networking packet. Using the VM, the first networking packet is processed using the information about the first networking packet to obtain forwarding information. Using the VM, the first networking packet is encapsulated into a third networking packet comprising the forwarding information; and forwarded.

Description

    RELATED APPLICATIONS
  • This application is a continuation of copending U.S. patent application Ser. No. 13/831,029, filed Mar. 14, 2013, which is herein incorporated by reference.
  • BACKGROUND
  • The present invention relates to data center infrastructure, and more particularly, this invention relates to a scalable distributed control plane for network switching systems.
  • As the size of a physical networking system increases (e.g., number of ports, an aggregation of multiple ports, multiple physical networking systems used to simulate a larger physical networking system, number of network control protocols, etc.), the requirements and demands made on a control plane for the network (which contains processors, buses, I/O, and other associated resources across many different physical entities) also increase. However, typically a control plane may have limited physical capabilities and is limited to a single switching component in the network. Therefore, each control plane would be incapable of scaling to the degree necessary to handle all demands made thereof when the size of a physical networking system increases to a certain degree.
  • Therefore, there is a need for improved control plane capabilities, particularly capabilities for scaling up to handle the additional control packet processing that is required in a fast and large networking system.
  • SUMMARY
  • In another embodiment, a method for processing a first networking packet within a networking system includes receiving a first networking packet using a physical networking switch, classifying, using the physical networking switch, the first networking packet to produce a packet classification, generating, using the physical networking switch, a second networking packet based on the first networking packet, forwarding the second networking packet using the physical networking switch, receiving the second networking packet using a physical host server, wherein the physical host server is adapted to host a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol, receiving, using a VM hosted by the physical host server, the second networking packet, decapsulating, using the VM, the second networking packet to retrieve information about the first networking packet, handling, using the VM, processing of the first networking packet using the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination, encapsulating, using the VM, the first networking packet into a third networking packet including the forwarding information, and forwarding, using the VM, the third networking packet according to the forwarding information.
  • Other aspects and embodiments of the present invention will become apparent from the following detailed description, which, when taken in conjunction with the drawings, illustrate by way of example the principles of the invention.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • FIG. 1 illustrates a network architecture, in accordance with one embodiment
  • FIG. 2 shows a representative hardware environment that may be associated with the servers and/or clients of FIG. 1, in accordance with one embodiment.
  • FIG. 3 shows a simplified diagram of a network switching system, according to one embodiment.
  • FIG. 4 shows a more detailed view of a network switching system, according to one embodiment.
  • FIG. 5 shows a more detailed view of a control plane server, in accordance with one embodiment.
  • FIG. 6 is a flowchart of a method in one embodiment.
  • FIG. 7 shows some exemplary records of information, according to one embodiment.
  • FIG. 8 is a flowchart of a method, according to one embodiment.
  • DETAILED DESCRIPTION
  • The following description is made for the purpose of illustrating the general principles of the present invention and is not meant to limit the inventive concepts claimed herein. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations.
  • Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc.
  • It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless otherwise specified.
  • In one approach, an improved networking system comprises a scalable and distributed virtual control plane and at least one physical networking switch. The scalable and distributed virtual control plane comprises at least one physical host server that is capable of hosting a plurality of virtual machines (VMs). Each VM is capable of handling the processing of at least one type of networking packet that is received by the physical networking switch. Each server hosting a VM which is capable of handling the processing of at least one type of networking packet is identifiable and accessible using a physical CPU port and a virtual local area network (VLAN) tag, while each VM is accessible and identified by a unique VM media access control (MAC) address, according to one embodiment.
  • The distributed virtual control plane enables offloading of the processing of control packets to one or more VMs, and also the flexibility and ability to easily scale the control plane in response to changes in the networking system's scalability requirements due to additions resulting in the expansion of the data plane. In one embodiment, the distributed virtual control plane comprises multiple VMs, with each VM being capable of processing control packets that are of a different type than the other VMs. In this way, divisions in the control plane may be made across the various VMs. There are many different possible combinations to optimize the usage of the distributed virtual control plane given specifications and additional user requirements or conditions that are desired to be met.
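The addressing scheme above (a host server reached through a physical CPU port plus a VLAN tag, and a specific VM on that server identified by its unique MAC address) can be sketched as a small registry. The port numbers, VLAN tags, MAC values, and packet-type keys are hypothetical examples:

```python
from collections import namedtuple

VmAddress = namedtuple("VmAddress", ["cpu_port", "vlan_tag", "vm_mac"])

# Hypothetical registry: the switch reaches a host server through a
# physical CPU port and VLAN tag, and a specific control-plane VM on
# that server through the VM's unique MAC address.
CONTROL_PLANE_VMS = {
    "stp": VmAddress(cpu_port=49, vlan_tag=100, vm_mac="02:00:00:00:00:01"),
    "bgp": VmAddress(cpu_port=50, vlan_tag=200, vm_mac="02:00:00:00:00:02"),
}

def resolve(packet_type):
    """Return the address of the VM designated to process this packet type."""
    return CONTROL_PLANE_VMS.get(packet_type)
```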
  • In one general embodiment, a networking system includes at least one physical networking switch and a scalable and distributed virtual control plane. The switch has logic adapted to receive a first networking packet, logic adapted to classify the first networking packet to produce a packet classification, logic adapted to generate a second networking packet based on the first networking packet, and logic adapted to forward the second networking packet. The scalable and distributed virtual control plane has at least one physical host server adapted to host a plurality of virtual machines (VMs), each VM being adapted for providing a control plane for a particular protocol, and a network connecting the at least one physical networking switch to the at least one physical host server. In addition, the plurality of VMs include logic adapted to receive the second networking packet, logic adapted to decapsulate the second networking packet to retrieve information about the first networking packet, logic adapted to handle processing of the first networking packet using the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination, logic adapted to encapsulate the first networking packet into a third networking packet including the forwarding information, and logic adapted to forward the third networking packet according to the forwarding information.
  • In another general embodiment, a method for processing a first networking packet within a networking system includes receiving a first networking packet using a physical networking switch, classifying, using the physical networking switch, the first networking packet to produce a packet classification, generating, using the physical networking switch, a second networking packet based on the first networking packet, forwarding the second networking packet using the physical networking switch, receiving the second networking packet using a physical host server, wherein the physical host server is adapted to host a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol, receiving, using a VM hosted by the physical host server, the second networking packet, decapsulating, using the VM, the second networking packet to retrieve information about the first networking packet, handling, using the VM, processing of the first networking packet using the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination, encapsulating, using the VM, the first networking packet into a third networking packet including the forwarding information, and forwarding, using the VM, the third networking packet according to the forwarding information.
  • According to yet another general embodiment, a computer program product for processing a first networking packet within a networking system includes a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code includes computer readable program code configured to receive a first networking packet, computer readable program code configured to classify the first networking packet to produce a packet classification, computer readable program code configured to determine a destination address to send the first networking packet based on the packet classification in order to provide processing for the first networking packet by: selecting a condition from a look-up table to which the packet classification adheres, selecting an entry associated with the selected condition, and determining a set of information associated with the selected entry, computer readable program code configured to generate a second networking packet by encapsulating the first networking packet into the second networking packet, and computer readable program code configured to forward the second networking packet to at least one physical host server to handle processing of the first networking packet. 
The look-up table includes a plurality of entries, a plurality of conditions, and a plurality of sets of information, each condition is associated with one entry, each entry is associated with one set of information, each set of information includes destination information for one of a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol, any entry is capable of being associated with more than one condition, each set of information includes at least: an address identifier (ID), a service tag, and a media access control (MAC) ID, the address ID includes information that corresponds to one or more egress ports of a physical networking switch, the MAC ID includes a destination address that corresponds to at least one VM of the plurality of VMs designated to receive the second networking packet hosted by the at least one physical host server based on the packet classification, and the service tag includes information that corresponds to a membership of the second networking packet within one virtual local area network (VLAN) in accordance with any one of: a networking protocol, a port based protocol, and a port based networking protocol.
  • As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as “logic,” a “circuit,” “module,” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a non-transitory computer readable storage medium. A non-transitory computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the non-transitory computer readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), a Blu-ray disc read-only memory (BD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a non-transitory computer readable storage medium may be any tangible medium that is capable of containing or storing a program or application for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a non-transitory computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device, such as an electrical connection having one or more wires, an optical fibre, etc.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fibre cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer or server may be connected to the user's computer through any type of network, including a local area network (LAN), storage area network (SAN), and/or a wide area network (WAN), or the connection may be made to an external computer, for example through the Internet using an Internet Service Provider (ISP).
  • Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems), and computer program products according to various embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that may direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • FIG. 1 illustrates a network architecture 100, in accordance with one embodiment. As shown in FIG. 1, a plurality of remote networks 102 are provided including a first remote network 104 and a second remote network 106. A gateway 101 may be coupled between the remote networks 102 and a proximate network 108. In the context of the present network architecture 100, the networks 104, 106 may each take any form including, but not limited to, a LAN, a WAN such as the Internet, a public switched telephone network (PSTN), an internal telephone network, etc.
  • In use, the gateway 101 serves as an entrance point from the remote networks 102 to the proximate network 108. As such, the gateway 101 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 101, and a switch, which furnishes the actual path in and out of the gateway 101 for a given packet.
  • Further included is at least one data server 114 coupled to the proximate network 108, and which is accessible from the remote networks 102 via the gateway 101. It should be noted that the data server(s) 114 may include any type of computing device/groupware. Coupled to each data server 114 is a plurality of user devices 116. Such user devices 116 may include a desktop computer, laptop computer, handheld computer, printer, and/or any other type of logic-containing device. It should be noted that a user device 116 may also be directly coupled to any of the networks, in some embodiments.
  • A peripheral 120 or series of peripherals 120, e.g., facsimile machines, printers, scanners, hard disk drives, networked and/or local storage units or systems, etc., may be coupled to one or more of the networks 104, 106, 108. It should be noted that databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 104, 106, 108. In the context of the present description, a network element may refer to any component of a network.
  • According to some approaches, methods and systems described herein may be implemented with and/or on virtual systems and/or systems which emulate one or more other systems, such as a UNIX system which emulates an IBM z/OS environment, a UNIX system which virtually hosts a MICROSOFT WINDOWS environment, a MICROSOFT WINDOWS system which emulates an IBM z/OS environment, etc. This virtualization and/or emulation may be enhanced through the use of VMWARE software, in some embodiments.
  • In more approaches, one or more networks 104, 106, 108, may represent a cluster of systems commonly referred to as a "cloud." In cloud computing, shared resources, such as processing power, peripherals, software, data, servers, etc., are provided to any system in the cloud in an on-demand relationship, thereby allowing access and distribution of services across many computing systems. Cloud computing typically involves an Internet connection between the systems operating in the cloud, but other techniques of connecting the systems may also be used, as known in the art.
  • FIG. 2 shows a representative hardware environment associated with a user device 116 and/or server 114 of FIG. 1, in accordance with one embodiment. FIG. 2 illustrates a typical hardware configuration of a workstation having a central processing unit (CPU) 210, such as a microprocessor, and a number of other units interconnected via one or more buses 212 which may be of different types, such as a local bus, a parallel bus, a serial bus, etc., according to several embodiments.
  • The workstation shown in FIG. 2 includes a Random Access Memory (RAM) 214, Read Only Memory (ROM) 216, an I/O adapter 218 for connecting peripheral devices such as disk storage units 220 to the one or more buses 212, a user interface adapter 222 for connecting a keyboard 224, a mouse 226, a speaker 228, a microphone 232, and/or other user interface devices such as a touch screen, a digital camera (not shown), etc., to the one or more buses 212, a communication adapter 234 for connecting the workstation to a communication network 235 (e.g., a data processing network), and a display adapter 236 for connecting the one or more buses 212 to a display device 238.
  • The workstation may have resident thereon an operating system such as the MICROSOFT WINDOWS Operating System (OS), a MAC OS, a UNIX OS, etc. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned. A preferred embodiment may be written using JAVA, XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology. Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.
  • Currently, methods of achieving interconnectivity between a large number of layer 2 ports rely on having numerous discrete switches each running spanning tree protocol (STP) or transparent interconnect of lots of links (TRILL). Unfortunately, by using discrete switches, a lookup needs to be performed using a lookup table at each hop between two of the discrete switches, which not only adds latency to the process, but also makes the process latency unpredictable as the network evolves and changes.
  • On the other hand, distributed switches relying on a cell-based fabric interconnect have an advantage of providing predictable, low latency for setups in which interconnectivity between a large number of ports is desired. A distributed switch appears to be a single, very large switch, with the single ingress lookup specifying the index needed to traverse the network. The edge facing switches are interconnected using cell-based Clos fabrics, which are wired in a fixed fashion and rely on the path selection made at the ingress.
  • Unfortunately, as the number of ports in a distributed switch grows, software that manages the network must struggle to accommodate the increased number of link up and/or link down events (link events) and processing. Control protocols, like STP and intermediate system to intermediate system (ISIS), will see a large number of link events, which will stress their convergence times if they continue to exist as monolithic elements.
  • Now referring to FIG. 3, a networking system 300 is shown according to one embodiment. The networking system 300 may also be referred to as a switch. In one embodiment, the networking system 300 comprises at least one networking switch 310 a . . . 310 n which may be arranged in a network switching system 304. The network switching system 304 is coupled to a scalable and distributed virtual control plane 302 via a network 308. The network 308 may be any type of network comprising any number of networking elements therein, such as switches, routers, subnets, wires, cables, etc., as would be known to one of skill in the art. In one embodiment, the network 308 may be as simple as a cable plant and/or communication channels utilizing a communication network.
  • The scalable and distributed virtual control plane 302 may comprise at least one physical host server 306 a, and may include a plurality of physical host servers 306 a, . . . , 306 m, which may or may not be physically located in the same location or area, geographically, as the network switching system 304. That is to say, the network switching system 304 and the scalable and distributed virtual control plane 302 may be located remotely of one another. Each physical host server 306 a, . . . , 306 m comprises at least one port 312 for connecting to other network elements and forwarding and/or receiving packets. Any connection type known in the art may be used with the at least one port 312.
  • In addition, the network switching system 304 may comprise at least one physical networking switch 310 a, . . . , 310 n, such as a networking switch of a type known in the art. Each of the physical networking switches 310 a, . . . 310 n may include at least one ingress port to receive an incoming packet, and at least one egress port to transmit or forward an outgoing packet, shown collectively as ports 314. Each ingress and/or egress port 314 may be established using a dedicated and separate physical port, each of which is identified by a physical port address, or using a dedicated virtual port identified by a virtual port address, where a single physical port may be programmed to incorporate a communication link to one or more virtual ports, in various approaches.
  • Each of the ingress ports may be established as a first virtual port identified with a first virtual address, e.g., a first MAC address, and each of the egress ports may be established as a second virtual port identified with a second virtual address, e.g., a second MAC address. For example, the first virtual port may be associated with a first physical port address, and the second virtual port may be associated with a second physical port address, the first and second virtual ports may be associated with a single physical port address, etc.
  • A more detailed view of a physical networking switch 310 is shown in FIG. 4, according to one embodiment. The physical networking switch 310, in one approach, may comprise a network interface 406, a switching processor 402, and a CPU subsystem 404 (having a CPU or some other processor therein). The network interface 406 includes a plurality of physical ports 314, each physical port 314 being identified using a unique physical port address, in one approach. The switching processor 402 may be any type of processor known in the art, such as a CPU, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller, a microprocessor, etc., and may be included in a switching platform that provides the capability to receive an incoming packet via an ingress port, process the incoming packet to acquire or generate packet control information, and transmit (or forward) an outgoing packet, corresponding to the incoming packet, via an egress port using the packet control information. The ingress and egress ports are generally labeled ports 314 for the sake of these descriptions, and a port 314 used as an ingress port in one situation may be used as an egress port in another situation.
  • In one embodiment, the at least one physical networking switch 310 may comprise logic adapted to receive a first networking packet, logic adapted to generate a second networking packet that encapsulates at least a payload of the first networking packet, and logic adapted to forward the second networking packet (such as to the physical host server 306).
  • The switching processor 402 may use the local CPU subsystem 404 to provide processing capability to manage the generation and transmission of the second networking packet (outgoing packet), which may encapsulate the first networking packet or portions thereof (including information based on the first networking packet). In another embodiment, the CPU subsystem 404 may offload the classification and management of packet processing to the scalable and distributed control plane 302. In yet another embodiment, the switching processor 402 may perform these functions itself.
  • Now referring to FIG. 5, a more detailed view of a physical host server 306 is shown, according to one embodiment. The physical host server 306 may comprise a network interface 502, a virtual network interface 504, and a plurality of VMs 508 (VM0, VM1, . . . , VMk). Each of the VMs 508 may be associated with one virtual network interface port 506 identified using a dedicated MAC address (MAC-0, MAC-1, . . . , MAC-k), in one approach. Furthermore, each of the MAC addresses of the virtual network interface 504 may be associated with at least one physical port 312 of the network interface 502, each physical port 312 being identified using a physical port address, in one approach.
  • An incoming packet (such as the second networking packet) received using one of the physical ports 312 of the network interface 502 may be processed and appropriately forwarded to a corresponding VM 508 using information included within the incoming packet, such as header information of an external packet or an internal, encapsulated packet.
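  • The forwarding step above can be sketched in code. The following is a minimal, hypothetical Python illustration (the class and function names are assumptions for illustration, not from this disclosure) of a virtual network interface delivering an incoming packet to the VM whose virtual port owns the packet's destination MAC address:

```python
# Hypothetical sketch: a host server's virtual network interface maps each
# virtual port's MAC address to a VM handler and forwards incoming packets
# by destination MAC. Names are illustrative only.

class VirtualNetworkInterface:
    def __init__(self):
        # MAC address -> VM handler, one entry per virtual network interface port
        self.mac_to_vm = {}

    def attach_vm(self, mac, vm_handler):
        self.mac_to_vm[mac] = vm_handler

    def forward(self, packet):
        """Deliver a packet (dict) to the VM owning its destination MAC."""
        handler = self.mac_to_vm.get(packet["dst_mac"])
        if handler is None:
            return None  # no VM owns this MAC; drop the packet
        return handler(packet)

vnic = VirtualNetworkInterface()
vnic.attach_vm("02:00:00:00:00:01", lambda pkt: ("VM0", pkt["payload"]))
result = vnic.forward({"dst_mac": "02:00:00:00:00:01", "payload": b"ctrl"})
```

In this sketch, a packet addressed to an unregistered MAC is simply dropped; a real implementation would also consult the external and encapsulated headers mentioned above.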
  • Referring now to FIGS. 3-5, each VM 508 may be adapted to process at least one type of networking packet. In this way, the VM 508 may include logic adapted to receive the second networking packet, logic adapted to decapsulate the second networking packet to retrieve at least the first networking packet or some portion thereof (if necessary, such as when the second networking packet is designed to encapsulate another packet or portion thereof, such as an overlay packet or some other type of packet capable of encapsulating another packet, such as the first networking packet), and logic adapted to handle processing of the first networking packet to obtain forwarding information sufficient to allow the at least one physical networking switch 310 or some other component of the network 308 to deliver the first networking packet to its intended destination. Any processing that is helpful or useful in performing any functions related to the networking packets may be performed, such as reading the networking packet, classifying the networking packet, resending the networking packet, etc.
  • The at least one VM 508 also includes logic adapted to encapsulate the first networking packet into a third networking packet comprising the forwarding information. In this way, the third networking packet may be forwarded to another VM 508, another physical host server 306, another networking switch 310, or any other component capable of delivering the third networking packet to its intended destination. The third networking packet is adapted to encapsulate the first networking packet or a portion thereof such that once the third networking packet is delivered to its intended destination, the first networking packet may be decapsulated from the third networking packet and delivered to its intended destination as the first networking packet.
  • Accordingly, the at least one VM 508 also includes logic adapted to forward the third networking packet, such as to the at least one physical networking switch 310 or some other component of the network 308 capable of handling the third networking packet, such that the third networking packet may be delivered to its intended destination. The third networking packet may be transmitted via the VM 508 to the virtual network interface 504, which appropriately forwards the third networking packet to an associated physical port 312 of the network interface 502 of the physical host server 306.
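  • The encapsulation chain described above (first packet wrapped into a second packet for the control-plane VM, then re-wrapped into a third packet carrying forwarding information) can be sketched as follows. This is an illustrative Python sketch under stated assumptions; the packet representation and field names are made up for clarity and are not part of this disclosure:

```python
# Illustrative sketch of the first -> second -> third packet chain.
# Packets are modeled as dicts; headers and values are assumptions.

def encapsulate(inner, outer_header):
    """Wrap an inner packet inside an outer packet with the given header."""
    return {**outer_header, "inner": inner}

def decapsulate(outer):
    """Recover the encapsulated inner packet."""
    return outer["inner"]

first = {"proto": "STP", "payload": b"bpdu"}

# Switch -> VM: second packet addressed to the control-plane VM's MAC,
# tagged with a service tag (values are illustrative).
second = encapsulate(first, {"dst_mac": "02:00:00:00:00:01", "s_tag": 100})

# VM: decapsulate, run control-plane processing, then re-encapsulate the
# original packet together with the computed forwarding information.
recovered = decapsulate(second)
forwarding_info = {"egress_port": 7}  # hypothetical processing result
third = encapsulate(recovered,
                    {"dst_mac": "02:00:00:00:00:ff", **forwarding_info})
```

The key property, per the description above, is that decapsulating the third packet at its destination yields the original first networking packet unchanged.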
  • The physical host server 306 is programmable in order to establish at least one virtual network interface port 506 that may be used to establish a communication channel between:
      • 1. a network interface physical port 312 of the physical host server 306, and
      • 2. at least one VM 508 established within the physical host server 306, where each VM 508 is accessible via a unique MAC address (via a virtual network interface port 506).
  • In one embodiment, a scalable and distributed virtual control plane 302 may include at least one physical host server 306 having a plurality of VMs 508 running thereon which are configured to use resources of the physical host server 306. In another embodiment, the scalable and distributed virtual control plane 302 may comprise a first physical host server 306 a hosting a first VM 508 (e.g., VM0) used to process networking packets: (i) that are of certain types, and/or (ii) when a certain condition is met for a networking packet, and/or (iii) that require a specific handling requirement, e.g., the networking packets utilize a certain protocol type, such as overlay (NVGRE, VXLAN, etc.), TRILL, STP, etc.
  • For example, some additional requirements may comprise the processing of certain networking packets that require special handling, e.g., security or service priority, where a dedicated VM 508 hosted by a certain physical host server 306 may be used to meet such demand. Of course, within each scalable and distributed control plane 302, many VMs 508 may be established where each VM 508 is programmed to handle any user defined requirement or specified condition, such that the processing of networking packets may be split according to any user desired methodology and/or according to an optimum setting designed by the scalable and distributed control plane 302.
  • Exemplary records of information that may be used in the classification processing of the first networking packet on at least one of the plurality of VMs 508 hosted by the at least one physical host server 306, as described above in FIGS. 3-5, are shown in FIG. 6, according to one embodiment. Many options are possible to those of skill in the art to organize, program, and/or store such information. A simple look-up table 600 is shown in FIG. 6 for this example. Other ways to organize and/or store the information may be used, such as a database, a list, a chart, an indexed file system, etc.
  • As shown in FIG. 6, each record (CPU_Entry_N) 602 in the look-up table 600 includes a pre-programmed set of information 604. Furthermore, there is a set of predetermined conditions 606, with each predetermined condition 606 being matched to at least one of the various records 602. In this example, the set of information 604 stored for each record 602 includes: (a) egress port information (or CPU_Port) that may also include the physical address of the network interface port to be used for transmitting the second networking packet, where the egress port information may be a multicast (MC) group identifier intended for one or more CPU_Ports; (b) information associated with a service tag (e.g., IEEE 802.1q S-tag); and (c) MAC address information corresponding to the VM designated to receive and process the second networking packet (which includes the first networking packet or portion thereof), where the MAC address information may be a MC MAC address intended to forward the second networking packet to one or more VMs.
  • The pre-programmed set of information 604 may be retrieved upon activation of a corresponding CPU_Port_entry process. Each CPU_Port_entry process may be started when a certain condition 606 is met during the classification of the first networking packet by the networking switch or some other component of the network.
  • Furthermore, at least one of the preprogrammed conditions 606 (e.g., Condition-1 through Condition-T) may be used to select any one of the egress ports (e.g., CPU_Port_entry-1 through CPU_Port_entry-N), such that it is feasible to direct one or more conditions 606 to select the same CPU_Port_entry. This provides the flexibility to match and distribute the virtual control plane processing capability based on the networking switch processing requirements and/or desired performance.
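  • The many-to-one mapping of conditions to entries described above can be sketched as a small Python look-up. The table structure (condition → entry → set of information) follows the description of table 600; the concrete condition names, ports, tags, and MAC addresses are illustrative assumptions:

```python
# Minimal sketch of look-up table 600: each condition maps to exactly one
# CPU_Port_entry, each entry maps to one set of information (CPU_Port,
# service tag, destination VM MAC), and two conditions may share an entry.
# All concrete values are made up for illustration.

ENTRY_INFO = {
    "CPU_Port_entry-1": {"cpu_port": 1, "s_tag": 100,
                         "vm_mac": "02:00:00:00:00:01"},
    "CPU_Port_entry-2": {"cpu_port": 2, "s_tag": 200,
                         "vm_mac": "02:00:00:00:00:02"},
}

CONDITION_TO_ENTRY = {
    "Condition-1": "CPU_Port_entry-1",  # e.g., STP control packets
    "Condition-2": "CPU_Port_entry-1",  # two conditions sharing one entry
    "Condition-3": "CPU_Port_entry-2",  # e.g., ISIS control packets
}

def lookup(classification):
    """Return the set of information for the condition a packet matches."""
    entry = CONDITION_TO_ENTRY[classification]
    return ENTRY_INFO[entry]

info = lookup("Condition-2")  # resolves to CPU_Port_entry-1's information
```

Because Condition-1 and Condition-2 both select CPU_Port_entry-1, two different classifications can be steered to the same control-plane VM, matching the flexibility described above.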
  • This mechanism enables partitioning of the whole control plane of a networking switch into multiple smaller sub-control planes. By creating different conditions 606 in the table 600 which identify a specific control packet protocol on each switching port, and then sending control packets that use that designated protocol to the specified CPU_Port_entry after pre-processing and encapsulating the packet into another packet, the control plane is effectively split into multiple sub-control planes.
  • Also, in some approaches, each sub-control plane may process one or more control protocols, and/or each sub-control plane may execute as a VM on a virtualization platform (such as Hypervisor, Hyper-V, etc.) of a physical host server.
  • In another embodiment, control packets may be received for processing based on entries in a look-up table 600 stored locally to a switching processor using each sub-control plane, and a redundant sub-control plane may be created for each sub-control plane to provide high availability.
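  • The partitioning into per-protocol sub-control planes with redundant backups, as described above, can be sketched as follows. This is a hedged illustration; the class name, protocol keys, and failover trigger are assumptions, not part of this disclosure:

```python
# Hypothetical sketch: the control plane is split into per-protocol
# sub-control planes, each paired with a redundant backup for high
# availability. Dispatch selects the sub-control plane by protocol.

class SubControlPlane:
    def __init__(self, protocol):
        self.protocol = protocol
        self.handled = []  # packets this sub-control plane has processed

    def process(self, packet):
        self.handled.append(packet)
        return f"{self.protocol}:processed"

# One (primary, backup) pair of sub-control planes per protocol.
planes = {
    proto: (SubControlPlane(proto), SubControlPlane(proto))
    for proto in ("STP", "ISIS", "BGP")
}

def dispatch(packet, primary_alive=True):
    """Send a control packet to its protocol's sub-control plane,
    falling back to the redundant backup if the primary is down."""
    primary, backup = planes[packet["proto"]]
    target = primary if primary_alive else backup
    return target.process(packet)

out = dispatch({"proto": "ISIS", "payload": b"hello"})
failover = dispatch({"proto": "ISIS", "payload": b"hello"},
                    primary_alive=False)
```

The design choice sketched here is the one the description motivates: link events for one protocol stress only that protocol's sub-control plane, rather than a monolithic control plane.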
  • Now referring to FIG. 7, a method 700 for processing an incoming networking packet, such as using a remote VM, is shown according to one embodiment. The method 700 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-6, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 7 may be included in method 700, as would be understood by one of skill in the art upon reading the present descriptions.
  • Each of the steps of the method 700 may be performed by any suitable component of the operating environment. For example, in some embodiments, the method 700 may be partially or entirely performed by a virtual control plane, a physical networking switch, a combination thereof, etc.
  • Any type of networking packet may be used in conjunction with method 700, particularly a control packet in some approaches.
  • As shown in FIG. 7, method 700 may initiate with operation 702, where a control packet is classified to obtain a control packet classification. In one embodiment, a physical networking switch may receive a first control packet via an ingress port, and the control packet may be classified using any available information, such as a source address of the control packet, specific routing information of the control packet, and/or information that corresponds to a certain property of the control packet.
  • In operation 704, a destination address to send the first networking packet is determined based on the packet classification in order to provide processing for the first networking packet.
  • In one embodiment, the determination may be based on: (i) information that is included within the control packet; and/or (ii) information that is extracted from the control packet.
  • For example, a look-up table may be used to determine the destination address. Specifically, a condition may be selected from the look-up table to which the packet classification adheres, an entry may be selected that is associated with the selected condition, and a set of information may be determined that is associated with the selected entry. The look-up table may comprise a plurality of entries, a plurality of conditions, and a plurality of sets of information from which the selected entry and the selected set of information is chosen. According to one embodiment, each condition is associated with one entry, each entry is associated with one set of information, and any entry is capable of being associated with more than one condition.
  • Each set of information may comprise at least: an address ID, a service tag, and a MAC ID. The address ID may comprise information that corresponds to one or more egress ports of the at least one physical networking switch, the MAC ID may include a destination address that corresponds to at least one VM designated to receive the second networking packet based on the packet classification, and the service tag may comprise information that corresponds to a membership of the second networking packet within one VLAN in accordance with any one of a networking protocol, a port based protocol, and a port based networking protocol.
  • Alternatively, in another embodiment, the address determination procedures for the control packet, as described above, may comprise the determination of the following address data information: (i) determining an egress port (or CPU port) of a physical networking switch to be used to transmit a second networking packet encapsulating the control packet, which may also include determining the address of a network interface physical port associated with the determined egress port; (ii) determining information associated with a service tag (e.g., IEEE 802.1q S-tag); and (iii) determining a MAC address corresponding to a VM designated to receive and process the second networking packet. Of course, in some embodiments, more information may be determined and the determination is not limited to the information described above.
  • Each of the one or more egress ports may comprise information that corresponds to an egress port of the at least one physical networking switch which is used to transmit the second networking packet, and/or information that corresponds to an address of a network interface physical port associated with the egress port.
  • The look-up table may be used to look up programmable information that identifies at least one of: (i) an egress port (or CPU port) of the at least one physical networking switch to be used to transmit the second networking packet, which may also include the address of the network interface physical port associated with the egress port, where each CPU entry may identify one or multiple VMs; (ii) a service tag (e.g., IEEE 802.1q S-tag); and (iii) a MAC address corresponding to the VM designated to receive and process the second packet. The identification of a condition for which the CPU entry or port is to be used to process the content of the second networking packet is based on at least some information included within the control packet, in some approaches.
  • In one approach, a CPU entry may be used as a pointer to an entry in the look-up table that includes information to be used for generating content for the second networking packet. It is important to note that each CPU entry may identify one or multiple VMs. Furthermore, the information used to determine the destination address may also include at least one of the following: (i) a source address of the control packet; (ii) specific routing information, such as a VLAN; (iii) a specific request to process the second networking packet in accordance with a programmable (predetermined) procedure; and (iv) information that is related to a certain property of the control packet.
  • In operation 706, a second networking packet may be generated that includes at least a portion of the control packet. This second networking packet may be a L2 packet, and may include an outer MAC header where the S-Channel is the S-Tag, and includes the MAC address(es) of the VM(s).
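  • The outer L2 header of such a second networking packet can be sketched in a few lines of Python. This is an illustrative byte-level sketch, not the disclosed format: the field layout is standard Ethernet with an IEEE 802.1Q tag, and for simplicity it uses TPID 0x8100 (an 802.1ad S-tag would use TPID 0x88A8) and assumes an IPv4 inner EtherType:

```python
# Hedged sketch: build an outer Ethernet frame (destination VM MAC, source
# MAC, 802.1Q tag carrying the service tag VID) around an inner packet.
# Layout is standard Ethernet framing; concrete values are assumptions.

import struct

def build_outer_frame(dst_mac, src_mac, s_tag_vid, inner):
    """Prepend an Ethernet header with an 802.1Q tag to the inner bytes."""
    dst = bytes.fromhex(dst_mac.replace(":", ""))
    src = bytes.fromhex(src_mac.replace(":", ""))
    tpid = 0x8100                 # 802.1Q TPID (0x88A8 for 802.1ad S-tags)
    tci = s_tag_vid & 0x0FFF      # priority/DEI bits left at zero
    ethertype = 0x0800            # assume the inner packet is IPv4
    header = dst + src + struct.pack("!HHH", tpid, tci, ethertype)
    return header + inner

frame = build_outer_frame("02:00:00:00:00:01", "02:00:00:00:00:aa",
                          100, b"inner")
```

The resulting frame carries the VM's destination MAC in the first six bytes and the service-tag VID in the tag control information field, so the host server's network interface can forward it to the designated VM as described above.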
  • In operation 708, the second packet may be transmitted (forwarded) to a designated port (CPU_Port) of a physical host server, such as by using an egress port of the physical networking switch. In one embodiment, the CPU_Port may be a physical egress port of the physical host server.
  • The physical host server may then receive and process the second networking packet using one or more VMs thereof, which in turn generate a third networking packet based on information included within the second networking packet about the control packet. Furthermore, one VM may transmit the third networking packet back to the networking switch using at least some of the information included within the second networking packet.
  • Now referring to FIG. 8, a flowchart of a method 800 for processing a packet within a networking system is shown, according to one embodiment. The method 800 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-7, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 8 may be included in method 800, as would be understood by one of skill in the art upon reading the present descriptions.
  • Each of the steps of the method 800 may be performed by any suitable component of the operating environment. For example, in some embodiments, the method 800 may be partially or entirely performed by a virtual control plane, a physical control plane, a combination thereof, etc.
  • As shown in FIG. 8, method 800 may initiate with operation 802, where a first networking packet is received using a first physical networking switch, such as via an ingress port thereof.
  • In operation 804, the first packet is classified using the first physical networking switch.
  • In operation 806, a second networking packet based on the first networking packet is generated using the first physical networking switch.
  • In operation 808, the second networking packet is forwarded using the physical networking switch.
  • In operation 810, the second networking packet is received using a physical host server, wherein the physical host server is adapted to host a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol. In this way, each separate protocol, such as open shortest path first (OSPF), spanning tree, border gateway protocol (BGP), or any other protocol known in the art, may have one or more dedicated VMs which are adapted for processing control plane packets for that particular protocol. In another embodiment, more than one protocol may be supported by a single VM when the processing requirements are low enough to be handled by a single VM. However, it is most easily understood and compartmentalized to have a separate VM handle the responsibilities for each separate protocol on a one-to-one basis.
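The protocol-to-VM layout described above can be sketched as a simple dispatch table. The VM names are invented for the example, and two low-load protocols are shown sharing one VM, as the text allows:

```python
# Toy dispatch table for the one-VM-per-protocol control-plane layout.
# VM identifiers are invented; a real system would map these to actual
# VM endpoints on the physical host server.

PROTOCOL_VMS = {
    "OSPF": "vm-ospf-0",
    "BGP":  "vm-bgp-0",
    "STP":  "vm-stp-0",
    "LLDP": "vm-misc-0",   # low-load protocols may share a single VM
    "LACP": "vm-misc-0",
}

def control_plane_vm(protocol: str) -> str:
    """Return the VM designated to process the given protocol."""
    return PROTOCOL_VMS[protocol]
```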
  • In operation 812, the second networking packet is received using a VM hosted by the physical host server.
  • In operation 814, the second networking packet is decapsulated, using the VM, to retrieve information about the first networking packet.
  • In operation 816, processing of the first networking packet is handled, using the VM, according to the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination.
  • In operation 818, using the VM, the first networking packet is encapsulated into a third networking packet comprising the forwarding information.
  • In operation 820, using the VM, the third networking packet is forwarded according to the forwarding information.
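Operations 812-820 can be sketched end to end as they might run inside a control-plane VM. This assumes a 16-byte outer header (12-byte MAC header plus 4-byte S-Tag), and the processing step is a stand-in that merely invents a next-hop address, since real forwarding logic is protocol specific:

```python
# Minimal sketch of operations 812-820 inside a control-plane VM.
# All formats are illustrative, not the patent's actual wire format.

OUTER_HEADER_LEN = 16  # 12-byte MAC header + 4-byte S-Tag (assumed)

def decapsulate(second_packet: bytes) -> bytes:
    """Op 814: strip the outer header to recover the first packet."""
    return second_packet[OUTER_HEADER_LEN:]

def obtain_forwarding_info(first_packet: bytes) -> bytes:
    """Op 816: stand-in for protocol processing; yields a next hop."""
    return b"\x02\x00\x00\x00\x00\x42"

def encapsulate_third(first_packet: bytes, fwd_info: bytes) -> bytes:
    """Op 818: wrap the first packet with its forwarding information."""
    return fwd_info + first_packet

def vm_pipeline(second_packet: bytes) -> bytes:
    """Ops 812-818; op 820 would transmit the returned third packet."""
    first = decapsulate(second_packet)
    fwd = obtain_forwarding_info(first)
    return encapsulate_third(first, fwd)

third_packet = vm_pipeline(b"\x00" * 16 + b"ospf-hello")
```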
  • In another embodiment, the first classification data may be generated based on the second set of information, wherein the second set of information may include programmable records of information that are stored within the at least one physical networking switch. Also, the first classification data may include at least: a first address ID comprising information that corresponds to one or more egress ports of the at least one physical networking switch, a service tag comprising information that corresponds to a membership of the second packet within exactly one VLAN in accordance with any one of: a networking protocol, a port based protocol, and a port based networking protocol, and a MAC ID including information that corresponds to at least one VM designated to receive the second packet.
  • In accordance with another embodiment, each of the one or more egress ports of the at least one physical networking switch may include information that corresponds to an address of a network interface physical port associated with each egress port.
  • In operation 808, metadata is determined based on information included within the first packet using the first physical networking switch. In one approach, determining the metadata may comprise using information included within the first packet including at least: a source address of the first packet, specific routing information, a specific request to process the second packet in accordance with a programmable procedure, and information that corresponds to a certain property of the first packet.
  • In operation 810, a second packet is generated, using the first physical networking switch, from at least the first classification data, the determined metadata, and a third set of information, wherein the third set of information is based on information included within the first packet.
  • In one approach, generating the second packet may comprise using at least: the first classification data, the metadata, and a set of information based on content of the first packet.
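Under assumed data shapes, combining those three inputs (the classification data with its address ID, service tag, and MAC ID; the metadata from operation 808; and content drawn from the first packet) might look like this; every field name is illustrative:

```python
# Sketch of operation 810: assemble the second packet from the first
# classification data, the determined metadata, and a third set of
# information based on the first packet's content. Field names are
# assumptions, not the patent's.

def generate_second_packet(classification: dict, metadata: dict,
                           first_packet_content: bytes) -> dict:
    return {
        "address_id": classification["address_id"],    # egress port(s)
        "service_tag": classification["service_tag"],  # VLAN membership
        "mac_id": classification["mac_id"],            # destination VM(s)
        "metadata": metadata,
        "payload": first_packet_content,
    }

second = generate_second_packet(
    {"address_id": 7, "service_tag": 100, "mac_id": "02:00:00:00:00:01"},
    {"src_addr": "10.0.0.1", "routing": "vlan-100"},
    b"ospf-hello",
)
```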
  • In operation 812, the second packet is forwarded using an egress port of the first physical networking switch.
  • In operation 814, the second packet is received using an ingress port of a first control plane server.
  • In operation 816, the first physical networking switch and the first control plane server are linked via a network. The network comprises at least one of: a direct link, a wired communication channel, and a wireless communication channel.
  • In a further embodiment, the method 800 may further include processing the received second packet using the first control plane server, generating a third control packet using information included within the second packet, and transmitting the third packet to the first physical networking switch using an egress port of the first control plane server.
  • In one embodiment, the method 800 may be embodied in a computer program product for processing a first networking packet within a networking system. The computer program product may comprise a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to receive a first networking packet; computer readable program code configured to classify the first networking packet to produce a packet classification; computer readable program code configured to determine a destination address to send the first networking packet based on the packet classification in order to provide processing for the first networking packet by: selecting a condition from a look-up table to which the packet classification adheres, selecting an entry associated with the selected condition, and determining a set of information associated with the selected entry, wherein the look-up table comprises a plurality of entries, a plurality of conditions, and a plurality of sets of information, each condition is associated with one entry, each entry is associated with one set of information, each set of information comprises destination information for one of a plurality of VMs, each VM being adapted for providing a control plane for a particular protocol (such that the destination information leads the packet to be delivered to a VM capable of providing a control plane for that packet), and any entry is capable of being associated with more than one condition.
Furthermore, each set of information may comprise at least: an address ID, a service tag, and a MAC ID; computer readable program code configured to generate a second networking packet by encapsulating the first networking packet into the second networking packet; and computer readable program code configured to forward the second networking packet to at least one physical host server to handle processing of the first networking packet, wherein the address ID comprises information that corresponds to one or more egress ports of a physical networking switch, the MAC ID includes a destination address that corresponds to at least one VM, of the plurality of VMs, designated to receive the second networking packet hosted by the at least one physical host server based on the packet classification, and the service tag comprises information that corresponds to a membership of the second networking packet within one VLAN in accordance with any one of: a networking protocol, a port based protocol, and a port based networking protocol.
  • While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of an embodiment of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (9)

What is claimed is:
1. A method for processing a first networking packet within a networking system, the method comprising:
receiving a first networking packet using a physical networking switch;
classifying, using the physical networking switch, the first networking packet to produce a packet classification;
generating, using the physical networking switch, a second networking packet based on the first networking packet;
forwarding the second networking packet using the physical networking switch;
receiving the second networking packet using a physical host server, wherein the physical host server is adapted to host a plurality of virtual machines (VMs), each VM being adapted for providing a control plane for a particular protocol;
receiving, using a VM hosted by the physical host server, the second networking packet;
decapsulating, using the VM, the second networking packet to retrieve information about the first networking packet;
handling, using the VM, processing of the first networking packet using the information about the first networking packet to obtain forwarding information sufficient to allow the first networking packet to be delivered to its intended destination;
encapsulating, using the VM, the first networking packet into a third networking packet comprising the forwarding information; and
forwarding, using the VM, the third networking packet according to the forwarding information.
2. The method as recited in claim 1, wherein the generating the second networking packet based on the first networking packet comprises encapsulating the first networking packet into the second networking packet.
3. The method as recited in claim 1, further comprising determining a destination address to send the first networking packet based on the packet classification in order to provide processing for the first networking packet.
4. The method as recited in claim 3, wherein the determining the destination address to provide processing for the first networking packet comprises logic adapted to use a look-up table to determine the destination address.
5. The method as recited in claim 4, wherein the using the look-up table to determine the destination address comprises:
selecting a condition from the look-up table to which the packet classification adheres;
selecting an entry associated with the selected condition; and
determining a set of information associated with the selected entry,
wherein the look-up table comprises a plurality of entries, a plurality of conditions, and a plurality of sets of information.
6. The method as recited in claim 5,
wherein each condition is associated with one entry,
wherein each entry is associated with one set of information,
wherein each set of information comprises destination information for one of the plurality of VMs, and
wherein any entry is capable of being associated with more than one condition.
7. The method as recited in claim 5,
wherein each set of information comprises at least: an address identifier (ID), a service tag, and a media access control (MAC) ID,
wherein the address ID comprises information that corresponds to one or more egress ports of the at least one physical networking switch,
wherein the MAC ID includes a destination address that corresponds to at least one VM of the plurality of VMs designated to receive the second networking packet based on the packet classification, and
wherein the service tag comprises information that corresponds to a membership of the second networking packet within one virtual local area network (VLAN) in accordance with any one of: a networking protocol, a port based protocol, and a port based networking protocol.
8. The method as recited in claim 7, wherein each of the one or more egress ports comprises:
information that corresponds to an egress port of the at least one physical networking switch which is used to transmit the second networking packet; and/or
information that corresponds to an address of a network interface physical port associated with the egress port.
9. The method as recited in claim 1, wherein the classifying the first networking packet to produce the packet classification utilizes at least:
a source address of the first networking packet;
specific routing information of the first networking packet; and
information that corresponds to a certain property of the first networking packet.
US14/062,817 2013-03-14 2013-10-24 Scalable distributed control plane for network switching systems Abandoned US20140280841A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/062,817 US20140280841A1 (en) 2013-03-14 2013-10-24 Scalable distributed control plane for network switching systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/831,029 US9571338B2 (en) 2013-03-14 2013-03-14 Scalable distributed control plane for network switching systems
US14/062,817 US20140280841A1 (en) 2013-03-14 2013-10-24 Scalable distributed control plane for network switching systems

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/831,029 Continuation US9571338B2 (en) 2013-03-14 2013-03-14 Scalable distributed control plane for network switching systems

Publications (1)

Publication Number Publication Date
US20140280841A1 true US20140280841A1 (en) 2014-09-18

Family

ID=51533624

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/831,029 Expired - Fee Related US9571338B2 (en) 2013-03-14 2013-03-14 Scalable distributed control plane for network switching systems
US14/062,817 Abandoned US20140280841A1 (en) 2013-03-14 2013-10-24 Scalable distributed control plane for network switching systems

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/831,029 Expired - Fee Related US9571338B2 (en) 2013-03-14 2013-03-14 Scalable distributed control plane for network switching systems

Country Status (1)

Country Link
US (2) US9571338B2 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9357484B2 (en) * 2013-11-11 2016-05-31 Avaya Inc. Elastic wireless control plane
CN104243318B (en) 2014-09-29 2018-10-09 新华三技术有限公司 MAC address learning method and device in VXLAN networks
US10645031B2 (en) 2015-06-02 2020-05-05 At&T Intellectual Property I, L.P. Virtual network element and methods for use therewith
US10033622B2 (en) * 2015-08-07 2018-07-24 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Controller-based dynamic routing in a software defined network environment
US10218605B2 (en) * 2017-04-21 2019-02-26 Cisco Technology, Inc. On-demand control plane redundancy

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7502884B1 (en) * 2004-07-22 2009-03-10 Xsigo Systems Resource virtualization switch
US20100061242A1 (en) * 2008-09-11 2010-03-11 Pradeep Sindhu Methods and apparatus related to a flexible data center security architecture
US20100214949A1 (en) * 2009-02-23 2010-08-26 Cisco Technology, Inc. Distributed data center access switch
US20100257263A1 (en) * 2009-04-01 2010-10-07 Nicira Networks, Inc. Method and apparatus for implementing and managing virtual switches
US20110007744A1 (en) * 2007-02-14 2011-01-13 Melman David Packet forwarding apparatus and method
US20110019552A1 (en) * 2009-07-24 2011-01-27 Jeyhan Karaoguz Method and system for network aware virtual machines
US20110103259A1 (en) * 2009-11-04 2011-05-05 Gunes Aybay Methods and apparatus for configuring a virtual network switch
US20110142053A1 (en) * 2009-12-15 2011-06-16 Jacobus Van Der Merwe Methods and apparatus to communicatively couple virtual private networks to virtual machines within distributive computing networks
US20110292836A1 (en) * 2007-12-21 2011-12-01 Nigel Bragg Evolution of ethernet networks
US20130044636A1 (en) * 2011-08-17 2013-02-21 Teemu Koponen Distributed logical l3 routing

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070036178A1 (en) 2005-02-02 2007-02-15 Susan Hares Layer 2 virtual switching environment
US8566822B2 (en) 2009-07-22 2013-10-22 Broadcom Corporation Method and system for distributing hypervisor functionality over multiple physical devices in a network and configuring sub-hypervisor to control the virtual machines
US9461840B2 (en) 2010-06-02 2016-10-04 Brocade Communications Systems, Inc. Port profile management for virtual cluster switching
US8456984B2 (en) 2010-07-19 2013-06-04 Ciena Corporation Virtualized shared protection capacity
US9571338B2 (en) 2013-03-14 2017-02-14 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Scalable distributed control plane for network switching systems


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks, IEEE Std 802.1Q™, May 16, 2011 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9571338B2 (en) 2013-03-14 2017-02-14 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Scalable distributed control plane for network switching systems
US9264362B2 (en) * 2013-10-17 2016-02-16 Cisco Technology, Inc. Proxy address resolution protocol on a controller device
US9621373B2 (en) 2013-10-17 2017-04-11 Cisco Technology, Inc. Proxy address resolution protocol on a controller device
US20150109923A1 (en) * 2013-10-17 2015-04-23 Cisco Technology, Inc. Proxy Address Resolution Protocol on a Controller Device
US9575689B2 (en) 2015-06-26 2017-02-21 EMC IP Holding Company LLC Data storage system having segregated control plane and/or segregated data plane architecture
US10091295B1 (en) 2015-09-23 2018-10-02 EMC IP Holding Company LLC Converged infrastructure implemented with distributed compute elements
US10873630B2 (en) 2015-11-25 2020-12-22 EMC IP Holding Company LLC Server architecture having dedicated compute resources for processing infrastructure-related workloads
US10104171B1 (en) 2015-11-25 2018-10-16 EMC IP Holding Company LLC Server architecture having dedicated compute resources for processing infrastructure-related workloads
US10003554B1 (en) * 2015-12-22 2018-06-19 Amazon Technologies, Inc. Assisted sideband traffic management
US10917362B1 (en) 2015-12-22 2021-02-09 Amazon Technologies, Inc. Assisted sideband traffic management
US11818040B2 (en) 2020-07-14 2023-11-14 Oracle International Corporation Systems and methods for a VLAN switching and routing service
US11831544B2 (en) 2020-07-14 2023-11-28 Oracle International Corporation Virtual layer-2 network
US11876708B2 (en) 2020-07-14 2024-01-16 Oracle International Corporation Interface-based ACLs in a layer-2 network
US20220210063A1 (en) * 2020-12-30 2022-06-30 Oracle International Corporation Layer-2 networking information in a virtualized cloud environment
US11757773B2 (en) 2020-12-30 2023-09-12 Oracle International Corporation Layer-2 networking storm control in a virtualized cloud environment
US11765080B2 (en) 2020-12-30 2023-09-19 Oracle International Corporation Layer-2 networking span port in a virtualized cloud environment
US11909636B2 (en) 2020-12-30 2024-02-20 Oracle International Corporation Layer-2 networking using access control lists in a virtualized cloud environment

Also Published As

Publication number Publication date
US20140280827A1 (en) 2014-09-18
US9571338B2 (en) 2017-02-14

Similar Documents

Publication Publication Date Title
US9571338B2 (en) Scalable distributed control plane for network switching systems
US10158563B2 (en) Flow based overlay network
US10582420B2 (en) Processing of overlay networks using an accelerated network interface card
US10182005B2 (en) Software defined network (SDN) switch clusters having layer-3 distributed router functionality
US9736070B2 (en) Load balancing overlay network traffic using a teamed set of network interface cards
US9544248B2 (en) Overlay network capable of supporting storage area network (SAN) traffic
US9602307B2 (en) Tagging virtual overlay packets in a virtual networking system
US10103998B2 (en) Overlay network priority inheritance
US9503313B2 (en) Network interface card having overlay gateway functionality
US8908691B2 (en) Virtual ethernet port aggregation (VEPA)-enabled multi-tenant overlay network
CN113326228B (en) Message forwarding method, device and equipment based on remote direct data storage
US10911493B2 (en) Identifying communication paths between servers for securing network communications
US8966148B2 (en) Providing real-time interrupts over Ethernet
US20200351286A1 (en) Configuring an island virtual switch for provisioning of network security services
US9853885B1 (en) Using packet duplication in a packet-switched network to increase reliability

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KAMBLE, KESHAV G.;LEU, DAR-REN;PANDEY, VIJOY A.;REEL/FRAME:031474/0446

Effective date: 20130314

AS Assignment

Owner name: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:034194/0353

Effective date: 20140926


STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION