WO2002069104A2 - Node architecture and management system for optical networks - Google Patents

Node architecture and management system for optical networks

Info

Publication number
WO2002069104A2
WO2002069104A2 (PCT/US2002/005826)
Authority
WO
WIPO (PCT)
Prior art keywords
optical
line card
manager
architecture
network
Prior art date
Application number
PCT/US2002/005826
Other languages
French (fr)
Other versions
WO2002069104A3 (en)
Inventor
Mario F. Alvarez
Michael S. Amoroso
Abdella Battou
David M. Brooks
Chin J. Chen
Richard J. Garrick
Moon W. Kim
Gyi-Hong Liu
Michael I. Mandelberg
Wu Shi Shung
Anastasios Tzathas
David H. Walters
Original Assignee
Firstwave Intelligent Optical Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (Darts-ip: https://patents.darts-ip.com/?family=27569919). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Firstwave Intelligent Optical Networks, Inc.
Priority to AU2002254037A (AU2002254037A1)
Publication of WO2002069104A2
Publication of WO2002069104A3

Classifications

    • H04J 14/0284 WDM mesh architectures
    • H04J 14/0286 WDM hierarchical architectures
    • G06F 8/65 Updates (software deployment)
    • H04J 14/0227 Operation, administration, maintenance or provisioning [OAMP] of WDM networks, e.g. media access, routing or wavelength allocation
    • H04J 14/0238 Wavelength allocation for one-to-many communications, e.g. multicasting wavelengths
    • H04J 14/0241 Wavelength allocation for one-to-one communications, e.g. unicasting wavelengths
    • H04J 14/029 Dedicated protection at the optical multiplex section (1+1)
    • H04J 14/0291 Shared protection at the optical multiplex section (1:1, n:m)
    • H04J 14/0294 Dedicated protection at the optical channel (1+1)
    • H04J 14/0295 Shared protection at the optical channel (1:1, n:m)
    • H04J 14/0297 Optical equipment protection
    • H04L 41/044 Network management architectures comprising hierarchical management structures
    • H04L 41/046 Network management architectures comprising network management agents or mobile agents
    • H04L 41/0631 Management of faults, events, alarms or notifications using root cause analysis
    • H04L 41/0659 Network fault recovery by isolating or reconfiguring faulty entities
    • H04L 41/0681 Configuration of triggering conditions for faults, events, alarms or notifications
    • H04L 41/0806 Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • H04L 41/0843 Configuration using pre-existing information, based on generic templates
    • H04L 41/0856 Retrieval or tracking of network configuration by backing up or archiving configuration information
    • H04L 41/0893 Assignment of logical groups to network elements
    • H04L 41/22 Network management arrangements comprising specially adapted graphical user interfaces [GUI]
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/022 Capturing of monitoring data by sampling
    • H04L 43/0811 Monitoring availability by checking connectivity
    • H04L 43/0817 Monitoring availability by checking functioning
    • H04L 43/0847 Transmission error monitoring
    • H04L 43/10 Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L 43/106 Active monitoring using time-related information in packets, e.g. timestamps
    • H04L 43/16 Threshold monitoring
    • H04L 67/34 Network arrangements or protocols involving the movement of software or configuration parameters
    • H04L 69/329 Intralayer communication protocols in the application layer [OSI layer 7]
    • H04L 9/40 Network security protocols
    • H04Q 11/0005 Switch and router aspects (optical switching)
    • H04Q 11/0062 Network aspects (optical switching)
    • H04Q 11/0066 Provisions for optical burst or packet networks
    • H04Q 11/0071 Provisions for the electrical-optical layer interface
    • H04Q 2011/0015 Construction using splitting/combining
    • H04Q 2011/0016 Construction using wavelength multiplexing or demultiplexing
    • H04Q 2011/0024 Construction using space switching
    • H04Q 2011/003 Construction using free-space propagation with switches based on micro-electro-mechanical systems [MEMS]
    • H04Q 2011/0039 Electrical control (switch operation)
    • H04Q 2011/0081 Fault tolerance; redundancy; recovery; reconfigurability
    • H04Q 2011/0083 Testing; monitoring

Definitions

  • Fiber optic and laser technology have enabled the communication of data at ever-higher rates.
  • The use of optical signals has proven particularly suitable for long-haul links.
  • Conventional approaches use switches (i.e., network nodes) that receive an optical signal, convert it to the electrical domain for switching, then convert the switched electrical signal back to the optical domain for communication to the next switch.
  • Some all-optical switches have been developed that avoid the need for O-E-O conversion, but they possess many limitations, such as being time-consuming and difficult to configure or re-configure.
  • a network management system typically interfaces with the individual nodes or exchanges of a data communications network through an overlay network, e.g., an out-of-band data transmission infrastructure dedicated to handling network management traffic.
  • the NMS provides a variety of functions required to effectively manage the network from a system-wide perspective.
  • These functionalities, as conceptualized for instance by the M Series Recommendations of the ITU-T Telecommunication Management Network (TMN) standards, include system-wide issues such as fault management, configuration management, accounting, security, and performance management.
  • configuration management functionality could include the ability to establish or provision a permanent virtual circuit or light path using a graphical user interface (GUI) provided by the NMS.
  • the NMS may be capable of computing the route across the communications network for the bearer channel path and, by interfacing with the nodes, configuring and establishing the individual cross-connects on each node in the bearer channel path.
  • the nodes can inform the NMS about a failed bearer channel link.
  • the NMS can then take corrective action such as automatically re-routing any bearer channel paths associated with the failed link. This is an example of fault management functionality provided by the NMS.
  • Fault tolerance is an important issue for service providers, particularly since one of the business parameters service providers often negotiate with their customers is network availability or permissible "down" time.
  • the invention seeks to provide a fault-tolerant NMS, and more particularly a fault-tolerant NMS attuned to the complexities introduced by a hierarchical structure.
  • Path protection is the provisioning of a pre-determined redundant or pre-emptable communication path(s) that can be switched into operation whenever a problem is detected with a connection over a first communication path.
  • Path restoration is the ability to automatically re-establish or re-signal a connection using a different path through the network.
  • optical networks are intended for deployment in network backbones and as such will likely carry massive levels of communications traffic, the loss of which can be disruptive to many organizations.
  • optical switches are likely to employ state-of-the-art optical devices such as switching fabrics. The long-term performance characteristics and longevity of such devices are not yet fully understood.
  • the new all-optical networks are likely to embody complex mesh topologies, unlike the linear and ring topologies of current SONET networks. In view of these and other factors a robust set of protection and restoration features is desired.
  • SUMMARY OF THE INVENTION: It is an object of the invention to provide an all-optical switch that can be selectively configured to provide cross-connection and add/drop multiplexing functions. It is an object of the invention to provide an all-optical switch for use in an optical network that interfaces with access networks, including Gigabit Ethernet over fiber networks, SONET OC-n networks, and others.
  • a node of an optical network in accordance with the present invention is also referred to as an optical transport switching system (or simply an "OTS").
  • a node, in one aspect of the invention, includes an optical access ingress subsystem, an optical access egress subsystem, a transport ingress subsystem, a transport egress subsystem, and an optical switch subsystem.
  • the optical switch subsystem selectively provides optical coupling between (a) the transport egress subsystem and at least one of (a)(1) the optical access ingress subsystem and (a)(2) the transport ingress subsystem, and between (b) the transport ingress subsystem and at least one of (b)(1) the optical access egress subsystem and (b)(2) the transport egress subsystem.
  • a node includes a chassis having a plurality of receiving locations, where at least some of the receiving locations are adapted to receive optical circuit cards.
  • an optical backplane is associated with the chassis for providing selective optical coupling between the optical circuit cards, and a local area network enables control and monitoring of the optical circuit cards by a central controller.
  • Upper layers can rely on the fact that if a lower layer determines it cannot handle a problem, it will notify the next higher layer, e.g., via an event such as a fault or alarm.
  • a given layer may also periodically provide performance data to the next higher layer based on a request by the next higher layer, or according to a predetermined schedule.
  • the increased response time at each higher layer is due to the time spent in aggregating data from the lower layer or layers.
  • a lower layer such as a line card layer may observe laser current consumption over a period of time. When the current consumption eventually exceeds a threshold, the line card layer informs the node manager layer, which reacts accordingly, e.g., by sending an alarm to the network management system (a minimal sketch of this pattern follows).
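This sketch assumes invented names (LaserMonitor, THRESHOLD_MA, notify_node_manager) and an arbitrary 150 mA threshold; the patent does not specify these values, only the escalate-on-threshold behavior.

```python
# Minimal sketch of lower-layer threshold monitoring with escalation.
# Names and the 150 mA threshold are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable, List

THRESHOLD_MA = 150.0  # hypothetical laser-current alarm threshold

@dataclass
class LaserMonitor:
    """Line-card-layer monitor: handles samples locally, escalates on threshold."""
    notify_node_manager: Callable[[str, float], None]
    samples: List[float] = field(default_factory=list)
    alarmed: bool = False

    def observe(self, current_ma: float) -> None:
        self.samples.append(current_ma)          # local bookkeeping only
        if current_ma > THRESHOLD_MA and not self.alarmed:
            self.alarmed = True                  # escalate once, to the next layer up
            self.notify_node_manager("LASER_CURRENT_HIGH", current_ma)

# Usage: the node-manager layer would forward this as an alarm to the NMS.
monitor = LaserMonitor(lambda event, v: print(f"node manager notified: {event} ({v} mA)"))
for sample in (120.0, 140.0, 155.0):
    monitor.observe(sample)
```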
  • a control architecture for an optical network having a plurality of optical switches.
  • the control architecture includes, for each optical switch, respective line card managers for managing respective line cards associated therewith, and a node manager for managing the line card managers.
  • the control architecture further includes a centralized network management system for managing the node managers of the optical switches.
  • the node manager includes an event manager for enabling software components running at the node manager to register for, and receive, events, and/or post events.
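As a rough illustration of the register/post pattern just described, the following sketch implements a topic-based event manager; the class name, topic strings, and dict payloads are assumptions, not details from the patent.

```python
# Hedged sketch of an event manager with register/post semantics.
# Topic names and class names are illustrative.

from collections import defaultdict
from typing import Callable, Dict, List

class EventManager:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def register(self, topic: str, handler: Callable[[dict], None]) -> None:
        """A software component registers interest in a topic."""
        self._subscribers[topic].append(handler)

    def post(self, topic: str, event: dict) -> None:
        """Any component may post; all registrants for the topic receive it."""
        for handler in self._subscribers[topic]:
            handler(event)

em = EventManager()
em.register("fault.laser", lambda e: print("protection app saw:", e))
em.post("fault.laser", {"card": "TP-ingress-1", "severity": "major"})
```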
  • a management architecture for use at an optical switch in an optical network.
  • the architecture includes a line card manager for managing an associated line card at the optical switch.
  • the line card manager includes: (a) a first interface for receiving monitored parameter values from the line card, (b) processing resources for setting an event regarding the line card when criteria for setting the event are met by the monitored parameter values, and (c) a second interface for communicating the event to a node manager at the optical switch that manages the line card manager.
  • a management architecture for use at an optical switch in an optical network includes a number of respective line card managers for managing respective line cards at the optical switch.
  • Each line card manager includes: (a) a first interface for receiving monitored parameter values from the respective line card, (b) processing resources for setting an event regarding the respective line card when criteria for setting the event are met by the monitored parameter values, and (c) a second interface for communicating the event to a node manager at the optical switch.
  • the node manager manages each of the respective line card managers.
  • a management architecture for use at an optical switch in an optical network includes a line card manager for managing an associated line card, where the line card manager includes a message-passing interface for communicating with a node manager at the optical switch that manages the line card manager.
  • an interface for use at an optical switch that includes a node manager for managing a number of respective line card managers, where each respective line card manager manages an associated respective line card at the optical switch.
  • the interface includes a message-passing interface for enabling the node manager to exchange messages with each of the line card managers.
  • the messages include: (a) line card manager-to-node manager messages for enabling the line card managers to report events regarding the respective line cards to the node manager, and (b) node manager-to-line card manager messages for enabling the node manager to provide commands to the line card managers (illustrative message shapes are sketched below).
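The field names, command strings, and JSON encoding in this sketch are assumptions; the patent specifies only that events flow up and commands flow down over the internal LAN.

```python
# Illustrative message shapes for the two directions of the LCM/Node Manager
# interface. Field names and the JSON encoding are assumptions.

from dataclasses import dataclass, asdict
import json

@dataclass
class LcmToNodeManagerEvent:          # line card manager -> node manager
    line_card_id: str
    event_type: str                   # e.g. "LOSS_OF_SIGNAL" (invented label)
    value: float

@dataclass
class NodeManagerToLcmCommand:        # node manager -> line card manager
    line_card_id: str
    command: str                      # e.g. "SWITCHOVER_TO_BACKUP" (invented label)
    arguments: dict

def encode(msg) -> bytes:
    """Serialize a message for the internal LAN (encoding assumed, not specified)."""
    return json.dumps(asdict(msg)).encode()

event = LcmToNodeManagerEvent("TP-egress-3", "LOSS_OF_SIGNAL", 0.0)
command = NodeManagerToLcmCommand("TP-egress-3", "SWITCHOVER_TO_BACKUP", {})
print(encode(event), encode(command))
```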
  • a management architecture for use at an optical switch in an optical communication network includes a node manager for managing a number of line card managers at the optical switch. Each line card manager manages an associated line card at the optical switch.
  • the node manager includes: (a) an interface for communicating with the line card managers, and (b) processing resources for enabling at least one of applications and system services.
  • the invention provides a hierarchical network management system in which a plurality of NMS managers, each responsible for different portions or aggregations of a communications network, are logically arranged in a tree structure.
  • the NMS managers are further organized into various sub-groups.
  • the NMS managers within each sub-group monitor the status of one another in order to detect when one of them is no longer operational. If this happens, the remaining operational NMS managers of the sub-group collectively elect one of them to assume the responsibility of the non-operational NMS manager.
  • a method for managing a network includes organizing a plurality of network management system (NMS) managers in a hierarchy.
  • the hierarchy has at least a root level and a leaf level, wherein each non-leaf level NMS manager supervises at least one child NMS manager and each leaf-level NMS manager supervises one or more network nodes.
  • each NMS manager receives and stores state information pertaining to the network nodes supervised by sibling NMS managers, thereby synchronizing network state information amongst siblings.
  • An event service is the preferred mechanism for carrying this out.
  • within each group of sibling NMS managers, only one NMS manager aggregates state information pertaining to all nodes supervised by the group up to the common parent NMS manager.
  • a heartbeat process is preferably established between at least two NMS manager siblings. In the preferred heartbeat process, each NMS manager transmits a "hello" message to every other NMS manager in the same sibling group (a sketch follows).
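In this sketch, the timeout value and the lowest-identifier election rule are assumptions; the patent states only that siblings exchange "hello" messages and collectively elect a replacement for a failed sibling.

```python
# Sketch of sibling heartbeat bookkeeping and a takeover election.
# The 10 s timeout and lowest-id election rule are assumptions.

import time

HELLO_TIMEOUT_S = 10.0  # assumed: a sibling silent this long is presumed down

class SiblingMonitor:
    def __init__(self, my_id: str, sibling_ids: list[str]) -> None:
        self.my_id = my_id
        self.last_hello = {s: time.monotonic() for s in sibling_ids}

    def on_hello(self, sibling_id: str) -> None:
        """Record receipt of a "hello" from a sibling."""
        self.last_hello[sibling_id] = time.monotonic()

    def failed_siblings(self) -> list[str]:
        now = time.monotonic()
        return [s for s, t in self.last_hello.items()
                if now - t > HELLO_TIMEOUT_S]

    def should_take_over(self, failed: str) -> bool:
        # Assumed election rule: the live manager with the lowest id wins.
        live = [s for s in self.last_hello if s not in self.failed_siblings()]
        return self.my_id == min(live + [self.my_id])

m = SiblingMonitor("nms-b", ["nms-a", "nms-c"])
m.on_hello("nms-a")
m.on_hello("nms-c")
```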
  • the invention provides protection-switching capability for an all-optical communications network.
  • an optical communications network having at least one optical switch connected to a network access device.
  • the optical switch includes a first line card disposed along a first communications path over which a first optical signal is transmitted.
  • the first line card is connected to the network access device.
  • a second line card is disposed along a second communications path over which a second optical signal is transmitted.
  • An inter-card communication channel is provided for bridging the second communications path to the first line card.
  • the optical switch preferably includes a Node Manager for controlling the first and second line card managers.
  • a first line card manager is associated with the first line card for monitoring the quality of the first optical signal. In the event of poor quality, the first line card manager notifies the Node Manager, which consequently instructs the second line card manager to transmit the high-priority traffic using the second optical signal and second path.
  • the Node Manager also communicates the fault to a network management system which informs the egress switch. Similar actions are initiated at the egress switch so as to provide traffic carried over the second optical signal to user equipment connected to a line card disposed along the first communication path.
  • an optical communications network comprising: an ingress optical switch having a first ingress line card and first and second egress line cards connected to a first switch fabric, wherein the first switch fabric bridges an ingress optical signal onto the first and second egress line cards, thereby providing first and second copies of the optical signal; transit optical switches for transiting the first and second copies of the optical signal across the network over first and second optical paths; an egress optical switch having second and third ingress line cards and a third egress line card connected to a second switch fabric, wherein said second and third ingress line cards respectively receive the first and second optical signal copies and the second switch fabric cross-connects only one of the optical signal copies to the third egress line card.
  • FIG. 2 illustrates a logical node architecture in accordance with the present invention
  • FIG. 3 illustrates an optical transport switching system hardware architecture in accordance with the present invention
  • FIG. 4 illustrates a control architecture for an OTS in accordance with the present invention
  • FIG. 5 illustrates a single Node Manager architecture in accordance with the present invention
  • FIG. 6 illustrates a Line Card Manager architecture in accordance with the present invention
  • FIG. 7 illustrates an OTS configuration in accordance with the present invention
  • FIG. 9 illustrates the operation of a control architecture and Optical Signaling Module in accordance with the present invention.
  • FIG. 10 illustrates an optical switch fabric module in accordance with the present invention
  • FIG. 11 illustrates a Transport Ingress Module in accordance with the present invention
  • FIG. 12 illustrates a Transport Egress Module in accordance with the present invention
  • FIG. 15 illustrates a Gigabit Ethernet Access Line Interface module in accordance with the present invention
  • FIG. 16 illustrates a SONET OC-12 Access Line Interface module in accordance with the present invention
  • FIG. 17 illustrates a SONET OC-48 Access Line Interface module in accordance with the present invention
  • FIG. 18 illustrates a SONET OC-192 Access Line Interface module in accordance with the present invention
  • FIG. 19 illustrates an Optical Performance Monitoring module in accordance with the present invention
  • FIG. 20 illustrates a physical architecture of an OTS chassis in an OXC configuration in accordance with the present invention
  • FIG. 21 illustrates a physical architecture of an OTS chassis in an OXC/OADM configuration in accordance with the present invention
  • FIG. 22 illustrates a physical architecture of an OTS chassis in an ALI configuration in accordance with the present invention
  • FIG. 23 illustrates a full wavelength cross-connect configuration in accordance with the present invention
  • FIG. 24 illustrates an optical add/drop multiplexer configuration with compliant wavelengths in accordance with the present invention
  • FIG. 25 illustrates an optical add multiplexer configuration in accordance with the present invention
  • FIG. 26 illustrates an optical drop multiplexer configuration in accordance with the present invention
  • FIG. 27 illustrates an example data flow through optical switches, including add/drop multiplexers and wavelength cross-connects, in accordance with the present invention
  • FIG. 29 illustrates SONET networks accessing a managed optical network in accordance with the present invention
  • FIG. 30 illustrates a hierarchical optical network structure in accordance with the present invention
  • FIGs 33(a)-(c) illustrate a normal data flow, a data flow with line protection, and a data flow with path protection, respectively, in accordance with the present invention
  • FIG. 34 illustrates a high-level Network Management System functional architecture in accordance with the present invention
  • FIG. 35 illustrates a Network Management System hierarchy in accordance with the present invention
  • FIG. 36 illustrates a Node Manager software architecture in accordance with the present invention
  • FIG. 40 illustrates an NMS Database/Server Client context diagram in accordance with the present invention
  • FIG. 41 illustrates a Routing context diagram in accordance with the present invention
  • FIG. 42 illustrates an NMS Agent context diagram in accordance with the present invention
  • FIG. 44 illustrates an Event Manager context diagram in accordance with the present invention
  • FIG. 45 illustrates a Software Version Manager context diagram in accordance with the present invention
  • FIG. 46 illustrates a Configuration Manager context diagram in accordance with the present invention
  • FIG. 47 illustrates a Logger context diagram in accordance with the present invention
  • FIG. 48 illustrates a Flash Interface context diagram in accordance with the present invention
  • FIG. 51 shows a working and a protection path configured over a reference optical network for the purposes of network managed protection
  • FIG. 52 shows the architecture, in relevant part, of an optical switch in accordance with another embodiment of the invention for establishing the working and protection paths of FIG. 51;
  • FIG. 53 shows a high priority path and a low-priority, pre-emptable path configured over a reference optical network for the purposes of network managed protection
  • FIG. 54 shows the architecture, in relevant part, of an optical switch in accordance with one embodiment of the invention for establishing the high and low priority paths of FIG. 53;
  • FIG. 55 is a system block diagram of a line card which supports the protection features shown in FIGs. 53-54;
  • FIG. 56 is a datapath diagram showing how SONET framing components of two line cards (one being illustrated in FIG. 55) are connected in order to support the protection features shown in FIGs. 53-54 on ingress;
  • FIG. 58 is a schematic diagram showing how a protection and a working path may be created over an optical network using switch fabric bridging, in accordance with another embodiment of the invention.
  • FIG. 59A illustrates a responsibility hierarchy for managers of a multi-tiered network management system (NMS) in accordance with an embodiment of the invention
  • FIG. 59B illustrates a hardware and software architecture for implementing the multi-tiered NMS shown in FIG. 59A
  • FIG. 59C illustrates an alternative hardware and software architecture for implementing the multi-tiered NMS shown in FIG. 59A;
  • FIG. 59D illustrates a revised responsibility hierarchy for the multi-tiered NMS shown in FIG. 59A when one of the NMS managers thereof ceases to function;
  • FIG. 59E illustrates a control hierarchy employed in an optical switching network
  • FIG. 59G illustrates an event topic tree
  • FIG. 59H illustrates software components employed in an optical network switch
  • FIG. 59I illustrates a software architecture for an NMS manager in accordance with an embodiment of the invention geared towards optical switching networks.
  • Section 27 in particular discusses protection and restoration features.
  • Section 28 in particular discusses a generic embodiment of a hierarchical network management system (NMS) which is applicable to a wide variety of network types.
  • Section 29 in particular discusses an implementation of the generic embodiment in relation to the novel optical switching network, which, when configured as a large complex network producing vast amounts of telemetric data, is particularly well-suited to benefit from the increased reliability provided by the present invention.
  • the present invention provides for an all-optical configurable switch (i.e., network node or OTS) that can operate as an optical cross-connect (OXC) (also referred to as a wavelength cross-connect, or WXC), which switches individual wavelengths, and/or an optical add/drop multiplexer (OADM).
  • the switch is typically utilized with a Network Management System also discussed herein.
  • the switch ofthe present invention operates independently of bit rates and protocols.
  • the all-optical switching between inputs and outputs of the OTS is achieved through the use of Micro-Electro-Mechanical System (MEMS) technology.
  • this optical switch offers an on-demand switching capability to support, e.g., either SONET ring based or mesh configurations.
  • the OTS also provides the capability to achieve an optimized network architecture since multiple topologies, such as ring and mesh, can be supported. Thus, the service provider can tailor its network design to best meet its traffic requirements.
  • the OTS also enables flexible access interconnection supporting SONET circuits, Gigabit Ethernet (GbE) (IEEE 802.3z), conversion from non-ITU compliant optical wavelengths, and ITU-compliant wavelength connectivity. With these interfaces, the service provider is able to support a broad variety of protocols and data rates and ultimately provide IP services directly over DWDM without SONET equipment.
  • the OTS further enables a scalable equipment architecture that is provided by a small form factor and modular design such that the service provider can minimize its floor space and power requirements and thereby incrementally expand its network within the same footprint.
  • FIG. 1 illustrates an all-optical "metro core" network architecture that utilizes the present invention in accordance with a variety of configurations.
  • OTS equipment is shown within the optical network boundary 105, and is designed for deployment both at the edges of a metro core network (when operating as an OADM), and internally to a metro core network (when operating as an OXC).
  • the OTSs at the edge of the network include OADMs 106, 108, 110 and 112.
  • the OTSs internal to the network include WXCs 115, 117, 119, 121 and 123.
  • Each OTS is a node of the network.
  • External devices such as SONET and GbE equipment may be connected directly to the optical network 105 via the edge OTSs.
  • SONET equipment 130 and 134, and GbE equipment 132 connect to the network 105 via the OADM 106.
  • GbE 136 and SONET 138 equipment connect to the network via OADM 108.
  • GbE 140 and SONET equipment 142 and 144 connect to the network via OADM 110.
  • SONET equipment 146 connects to the network via OADM 112.
  • the network architecture may also support other network protocols as indicated, such as IP, MPLS, ATM, and Fibre Channel operating over the SONET and GbE interfaces.
  • FIG. 2 depicts a logical node architecture 200, which includes an Optical Switch Fabric 210, Access Line Interface 220, Optical Access Ingress 230, Optical Access Egress 235, Transport Ingress 240, and Transport Egress 245 functions. These functions (described below) are implemented on respective line cards, also referred to as optical circuit cards or circuit packs or packages, that are receivable or deployed in a common chassis. Moreover, multiple line cards of the same type may be used at an OTS to provide scalability as the bandwidth needs of the OTS grow over time.
  • a Node Manager 250 and Optical Performance Monitoring (OPM) module 260 may also be implemented on respective line cards in the chassis.
  • Node Manager 250 typically communicates with the rest of the OTS 200 through a 100 BaseT Ethernet internal LAN distributed to every line card and module and terminated by the Line Card Manager module 270 residing on every line card.
  • OPM 260 is responsible for monitoring optical hardware of OTS 200, and typically communicates its findings to the Node Manager 250 via the internal LAN and the OPM's LCM.
  • the Node Manager may process this performance information to determine whether the hardware is functioning properly.
  • the Node Manager may apply control signals to the line cards, switchover to backup components on the line cards or to backup line cards, set alarms for the NMS, or take other appropriate action.
  • Each of the line cards, including the OPM 260 and the line cards that carry the optical signals in the network (shown within the dashed line 265), is controlled by a respective LCM 270.
  • the Node Manager 250 may control the line cards, and receive data from the line cards, via the LCMs 270.
  • Being interfaced to all other cards of the OTS 200 via the internal LAN and LCMs, the Node Manager 250 is responsible for the overall management and operation of the OTS 200, including signaling, routing, and fault protection. The responsibility for telemetry of all control and status information is delegated to the LCMs. There are also certain local functions that are completely abstracted away from the Node Manager and handled solely by the LCMs, such as laser failsafe protection. Whenever a light path is created between OTSs, the Node Manager 250 of each OTS performs the necessary signaling, routing and switch configuration to set up the path. The Node Manager 250 also continuously monitors switch and network status such that fault conditions can be detected, isolated, and repaired.
  • the OPM 260 may be used in this regard to detect a loss of signal or poor quality signal, or to measure signal parameters such as power, at any of the line cards using appropriate optical taps and processing circuitry.
  • Three levels of fault recovery may be supported: (1) Component Switchover - replacement of failed switch components with backup, (2) Line Protection - rerouting of all light paths around a failed link; and (3) Path Protection - rerouting of individual light paths affected by a link or node failure.
  • Component Switchover is preferably implemented within microseconds, while Line Protection is preferably implemented within milliseconds of failure, and Path Restoration may take several seconds (a rough dispatch sketch follows).
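In this sketch the fault-scope keys are invented labels; the recovery names and time scales restate the text above.

```python
# Rough sketch mapping fault scope to the three recovery levels and their
# target time scales. Fault-scope keys are illustrative assumptions.

RECOVERY = {
    "component": ("Component Switchover", "microseconds"),
    "link":      ("Line Protection", "milliseconds"),
    "path":      ("Path Protection/Restoration", "seconds"),
}

def choose_recovery(fault_scope: str) -> str:
    """Return the recovery action and its target time budget for a fault scope."""
    action, budget = RECOVERY[fault_scope]
    return f"{action} (target: {budget})"

print(choose_recovery("link"))  # -> Line Protection (target: milliseconds)
```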
  • the all-optical switch fabric 210 is preferably implemented using MEMS technology.
  • MEMS devices have arrays of tiny mirrors that are aimed in response to an electrostatic control signal. By aiming the mirrors, any optical signal from an input fiber (e.g., of a transport ingress or optical access ingress line card) can be routed to any output fiber (e.g., of a transport egress or optical access egress line card).
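A toy Python model of that routing behavior follows; the class and method names are invented, and the model captures only the input-to-output mapping, not the electrostatic mirror control.

```python
# Toy model of an 8x8 MEMS switch element: aiming mirror i at output j
# creates an input-to-output coupling. Class and method names are assumptions.

class Mems8x8:
    SIZE = 8

    def __init__(self) -> None:
        self.mirror = {}  # input port -> output port

    def aim(self, input_port: int, output_port: int) -> None:
        if not (0 <= input_port < self.SIZE and 0 <= output_port < self.SIZE):
            raise ValueError("port out of range for an 8x8 element")
        if output_port in self.mirror.values():
            raise ValueError("output already in use")  # one signal per output
        self.mirror[input_port] = output_port

    def route(self, input_port: int) -> int | None:
        """Where does light entering this input currently exit?"""
        return self.mirror.get(input_port)

fabric = Mems8x8()
fabric.aim(2, 5)  # e.g. ingress fiber 2 routed to egress fiber 5
assert fabric.route(2) == 5
```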
  • the Optical Access Network 205 may support various voice and data services, including switched services such as telephony, ISDN, interactive video, Internet access, videoconferencing and business services, as well as multicast services such as video.
  • Service provider equipment in the Optical Access Network 205 can access the OTS 200 in two primary ways. Specifically, if the service provider equipment operates with wavelengths that are supported by the OTS 200 of the optical network, such as selected OC-n ITU-compliant wavelengths, it can directly interface with the Optical Access (OA) ingress module 230 and egress module 235.
  • the service provider equipment accesses the OTS 200 via an ALI card 220.
  • Because a GbE network can be directly bridged to the OTS without a SONET Add/Drop Multiplexer (ADM) and a SONET/SDH terminal, this relatively more expensive equipment is not required, so service provider costs are reduced. That is, legacy electronic infrastructure equipment typically must connect through a SONET terminator and add/drop multiplexer (ADM). In contrast, these functions are integrated in the OTS of the present invention, resulting in good cost benefits and a simpler network design.
  • Table 1 summarizes the access card interface parameters associated with each type of OA and ALI card, in some possible implementations.
  • the OTS can interface with all existing physical and data-link layer domains (e.g., ATM, IP router, Frame relay, TDM, and SONET/SDH/STM systems) so that legacy router and ATM systems can connect to the OTS.
  • the OTS solution also provides the new demand services, e.g., audio/video on demand, with cost-effective bandwidth and efficient bandwidth utilization.
  • the OTS 200 can be configured, e.g., for metro and long haul configurations.
  • the OTS can be deployed in up to four-fiber rings, up to four-fiber OADMs, or four-fiber point-to-point connections.
  • Each OTS can be set to add/drop any wavelength, with a maximum of sixty-four channels of local connections.
  • FIG. 3 illustrates an OTS hardware architecture in accordance with the present invention.
  • the all-optical switch fabric 210 may include eight 8x8 switch elements, the group of eight being indicated collectively as 211. Each of the eight switch elements is responsible for switching an optical signal from each of eight sources to any one of eight outputs.
  • selected outputs of the TP ingress cards 240 and OA ingress cards 230 are optically coupled by the switching fabric cards 210 to selected inputs of the TP egress cards 245 and/or OA egress cards 235.
  • the optical coupling between cards and the fabric occurs via an optical backplane, which may comprise optical fibers.
  • the cards are optically coupled to the optical backplane when they are inserted into their slots in the OTS bay, such that the cards can be easily removed and replaced, e.g., using MTP™-type connectors (Fiber Connections, Inc.).
  • each line card may connect to an RJ-45 connector when inserted into its slot.
  • each TP ingress and OA ingress card has appropriate optical outputs for providing optical coupling to inputs of the switch fabric via the optical backplane.
  • each TP egress and OA egress card has appropriate optical inputs for providing optical coupling to outputs of the switch fabric via the optical backplane.
  • the switching fabric is controlled to optically couple selected inputs and outputs of the switch fabric card, thereby providing selective optical coupling between outputs of the TP ingress and OA ingress cards, and the inputs of the TP egress and OA egress cards.
  • the optical signals carried by the outputs of TP ingress and OA ingress cards can be selectively switched (optically coupled) to the inputs of the TP egress and OA egress cards.
  • the transport ingress module 240 includes four cards 302, 304, 306 and 308, each of which includes a wavelength division demultiplexer (WDD), an example of which is the WDD 341, for recovering the OSC, which may be provided as an out-of-band signal with the eight multiplexed data signals (λ's).
  • An optical amplifier (OA), an example of which is the OA 342, amplifies the optical transport signal multiplex, and a demux, an example of which is the demux 343, separates out each individual wavelength (optical transport signal) in the multiplex.
  • Each individual wavelength is provided to the switch fabric 210 via the optical backplane, then switched by one of the modules 211 thereat.
  • the outputs of the switch fabric 210 are provided to the optical backplane, then received by either a mux, an example of which is the mux 346, of one of the transport egress cards 320, 322, 324 or 326, or an 8x8 switch of one of the OA egress cards 235.
  • the multiplexer output is amplified at the associated OA, and the input OSC is multiplexed with data signals via the WDM.
  • the multiplexer output at the WDM can then be routed to another OTS via an optical link in the network.
  • each received signal is amplified and then split at 1x2 dividers/splitters to provide corresponding outputs either to the faceplate of the OA egress cards for compliant wavelengths, or to the ALI cards via the optical backplane for non-compliant wavelengths. Note that only example light paths are shown in FIG. 3, and that for clarity, all possible light paths are not depicted.
  • the ALI cards perform wavelength conversion for interfacing with access networks that use optical signals that are non-compliant with the OTS.
  • the ALI card receives non-compliant wavelength signals, converts them to electrical signals, multiplexes them, and generates a compliant wavelength signal.
  • Two optical signals that are output from the ALI card 220 are shown as inputs to one of the OA In cards 230 to be transmitted by the optical network, and two optical signals that are output from one of the OA Eg cards 235 are provided as inputs to the ALI cards 220.
  • the OSC recovered at the TP ingress cards, namely OSC OUT, is provided to the Optical Signaling Module (OSM).
  • the OSM generates a signaling packet that contains signaling and route information, and passes it on to the Node Manager.
  • the OSM is discussed further below, particularly in conjunction with FIG. 9. If the OSC is intended for use by another OTS, it is regenerated by the OSM for communication to another OTS and transmitted via, e.g., OSC IN. Or, if the OSC is intended for use only by the present OTS, there is no need to relay it to a further node.
  • OSC IN could also represent a communication that originated from the present OTS and is intended for receipt, e.g., by another node.
  • typically, only one of the nodes acts as a gateway to the shared NMS.
  • the other nodes ofthe group communicate with the NMS via the gateway node and communication by the other nodes with their gateway node is typically also accomplished via the OSC.
  • FIG. 4 illustrates a control architecture for an OTS in accordance with the present invention.
  • the OTS implements the lower two tiers of the above-described three-tier control architecture, typically without a traditional electrical backplane or shelf controller.
  • the OTS has a distributed architecture, which results in maximum system reliability and stability.
  • the OTS does not use a parallel backplane bus such as Compact PCI or VME bus because they represent a single point of failure risk, and too much demand on one shared element is a performance risk.
  • the invention preferably provides a distributed architecture wherein each line card of the OTS is outfitted with at least one embedded controller, referred to as an LCM, on at least one daughter board, with the daughter boards communicating with the node's single Node Manager via a LAN technology such as 100 BaseT Ethernet and Core Embedded Control Software.
  • the LCM may use Ethernet layer 2 (L2) datagrams for communication with the Node Manager, with the Node Manager being the highest-level processor within an individual OTS.
  • the Node Manager and all OTS line cards plug into a 100 BaseT port on one or more hubs via RJ-45 connectors to allow electronic signaling between LCMs and the Node Manager via an internal LAN at the OTS.
  • two twenty-four port hubs are provided to control two shelves of line cards in an OTS bay, and the different hubs are connected by crossover cables.
  • FIG. 4 depicts LCMs 410 and associated line cards 420 as connected to hubs 415 and 418, which may be 24-port 100 BaseT hubs.
  • the line cards may perform various functions as discussed, including Gigabit Ethernet interface (a type of ALI card), SONET interface (a type of ALI card), TP ingress, TP egress, optical access ingress, optical access egress, switching fabric, optical signaling, and optical performance monitoring.
  • the primary Node Manager 250 can be provided with a backup Node Manager 450 for redundancy.
  • Each Node Manager has access to the non-volatile data on the LCMs, which helps in reconstructing the state of a failed Node Manager.
  • the backup Node Manager gets copies of the primary Node Manager's non-volatile store, and listens to all traffic (e.g., messages from the LCMs and the primary Node Manager) on all hubs in the OTS to determine if the primary has failed.
  • Various schemes may be employed for determining if the primary Node Manager is not functioning properly, e.g., by determining whether the primary Node Manager 250 responds to a message from an LCM within a specified amount of time (one such watchdog is sketched below).
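This sketch assumes an invented request-identifier correlation and an arbitrary two-second deadline; the patent leaves the specific detection scheme open.

```python
# Sketch of one failure-detection scheme the text suggests: the backup Node
# Manager watches LCM requests on the shared hubs and declares the primary
# failed if no matching response appears within a deadline. The deadline
# and the request-id correlation are assumptions.

import time

RESPONSE_DEADLINE_S = 2.0  # assumed bound on the primary's response time

class PrimaryWatchdog:
    def __init__(self) -> None:
        self.pending: dict[str, float] = {}  # request id -> time first seen

    def saw_lcm_request(self, request_id: str) -> None:
        """Backup overhears an LCM request addressed to the primary."""
        self.pending.setdefault(request_id, time.monotonic())

    def saw_primary_response(self, request_id: str) -> None:
        """Backup overhears the primary's response; clear the pending entry."""
        self.pending.pop(request_id, None)

    def primary_failed(self) -> bool:
        now = time.monotonic()
        return any(now - t > RESPONSE_DEADLINE_S for t in self.pending.values())
```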
  • the hubs 415 and 418 are connected to one another via a crossover 417 and additional hubs may also be connected in this manner. See also FIG. 8.
  • every shelf connects to a 100 BaseT hub.
  • This use of an Ethernet backplane provides both hot-swappability of line cards (i.e., removal and insertion of line cards into the OTS bay when optical and/or electrical connections are active), and totally redundant connections between the line cards and both Node Managers.
  • if the node is a gateway, its primary Node Manager communicates with the NMS, e.g., via a protocol such as SNMP, using 100 BaseT ports 416, 419.
  • selectable 10/100 BaseT may be used.
  • RJ-45 connectors on the faceplate ofthe Node Manager circuit pack may be used for this purpose.
  • FIG. 5 illustrates a single Node Manager architecture 250 in accordance with the present invention. An OTS with primary and backup Node Managers would have two of the architectures 250.
  • the Node Manager executes all application software at the OTS, including network management, signaling, routing, and fault protection functions, as well as other features.
  • each Node Manager circuit pack has a 100 BaseT network connection to a backplane hub that becomes the shared medium for each LCM in the OTS. Additionally, for a gateway OTS node, another 100 BaseT interface to a faceplate is provided for external network access.
  • the Node Manager Core Embedded Software performs a variety of functions, including: i) issuing commands to the LCMs, ii) configuring the LCMs with software, parameter thresholds or other data, iii) reporting alarms, faults or other events to the NMS, and iv) aggregating the information from the LCMs into a node-wide view that is made available to applications software at the Node Manager.
• This node-wide view, as well as the complete software for each LCM controller, is stored in flash memory 530.
• the node- or switch-wide view may provide information regarding the status of each component of the switch, and may include, e.g., performance information, configuration information, software provisioning information, switch fabric connection status, presence of alarms, and so forth. Since the node's state and the LCM software are stored locally to the node, the Node Manager can rapidly restore a swapped line card to the needed configuration without requiring a remote software download, e.g., from the NMS.
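A minimal sketch of that restore path, assuming a slot-indexed cache of card state in the Node Manager's flash and a hypothetical lcm_send_config() helper:

```c
#include <stddef.h>
#include <stdint.h>

/* One cached entry of the node-wide view (layout is an assumption). */
struct card_state {
    uint8_t card_type;        /* e.g., TP ingress, OA egress, ALI, ...   */
    uint8_t config[512];      /* last provisioned settings (opaque blob) */
    size_t  config_len;
};

/* Node-wide view kept in the Node Manager's local flash, indexed
 * by slot number (a hypothetical arrangement). */
extern struct card_state node_view[32];

/* Provided elsewhere: sends a configuration message to an LCM. */
extern int lcm_send_config(int slot, const uint8_t *cfg, size_t len);

/* Called when a line card is hot-swapped back into a slot: replay the
 * cached configuration rather than fetching it remotely from the NMS. */
int restore_swapped_card(int slot)
{
    const struct card_state *s = &node_view[slot];
    return lcm_send_config(slot, s->config, s->config_len);
}
```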
  • the Node Manager is also responsible for node-to-node communications processing. All signaling messages bound for a specific OTS are sent to the Node Manager by that OTS's optical signaling module.
• the OSM, which has an associated LCM, receives the OSC wavelength from the Transport Ingress module. The incoming OSC signal is converted from optical to electrical, and received as packets by the OSM. The packets are sent to the Node Manager for proper signaling setup within the system.
  • out-going signaling messages are packetized and converted into an optical signal of, e.g., 1310 nm or 1510 nm, by the OSM, and sent to the Transport Egress module for transmission to the next-hop OTS.
  • the Node Manager configures the networking capabilities of the OSM, e.g., by providing the OSM with appropriate software for implementing a desired network communication protocol.
  • the Node Manager may receive remote software downloads from the NMS to provision itself and the LCMs.
  • the Node Manager distributes each LCM's software via the OTS's internal LAN, which is preferably a shared medium LAN.
  • Each LCM may be provisioned with only the software it needs for managing the associated line card type.
  • each LCM may be provisioned with multi-purpose software for handling any type of line card, where the appropriate software and/or control algorithms are invoked after an LCM identifies the line card type it is controlling (e.g., based on the LCM querying its line card or identifying its slot location in the bay).
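The sketch below illustrates such a discovery-time dispatch; query_card_type() and the handler names are assumptions, standing in for however the LCM actually reads the card's EEPROM or slot encoding:

```c
/* Line card types a multi-purpose LCM load can drive; the enum is
 * illustrative, though the card names come from the text. */
enum card_type { CARD_TP_IN, CARD_TP_EG, CARD_OA_IN, CARD_OA_EG,
                 CARD_SWITCH_FABRIC, CARD_OSM, CARD_OPM, CARD_ALI };

/* Provided elsewhere: reads the type from the card's EEPROM over I2C,
 * or decodes the slot position (both mechanisms are assumptions). */
extern enum card_type query_card_type(void);

extern void run_amplifier_control(void);   /* TP/OA cards  */
extern void run_fabric_control(void);      /* switch cards */
extern void run_framer_control(void);      /* ALI cards    */

/* Discovery phase of LCM initialization: one software load selects
 * its control algorithms from the attached card's identity. */
void lcm_configure_for_card(void)
{
    switch (query_card_type()) {
    case CARD_TP_IN: case CARD_TP_EG:
    case CARD_OA_IN: case CARD_OA_EG: run_amplifier_control(); break;
    case CARD_SWITCH_FABRIC:          run_fabric_control();    break;
    case CARD_ALI:                    run_framer_control();    break;
    default: /* OSM, OPM: card-specific handlers would go here */ break;
    }
}
```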
• the Node Manager uses a main processor 505, such as the 200 MHz MPC8255 or MPC8260 (Motorola PowerPC microprocessor, available from Motorola Corp., Schaumburg, Illinois), with an optional plug-in module 510 for a higher power plug-in processor 512, which may be a RISC CPU such as the 400 MHz MPC755.
  • These processors 505, 510 simultaneously support Fast Ethernet, 155 Mbps ATM and 256 HDLC channels.
  • the invention is not limited to use with any particular model of microprocessor.
• While the plug-in module 510 is optional, it is intended to provide a longer useful life for the Node Manager circuit pack by allowing the processor to be upgraded without changing the rest of the circuit pack.
  • the Node Manager architecture is intended to be flexible in order to meet a variety of needs, such as being a gateway and/or OTS controller.
• the architecture is typically provided with a communications module front end that has two Ethernet interfaces: 1) the FCC2 channel 520, which is a 100 BaseT channel that services the internal 100 BaseT Ethernet hub on the backplane 522, and, for gateway nodes, 2) the FCC3 channel 525, which is a 100 BaseT port that services the NMS interface to the outside.
• the flash memory 530 may be 128 MB organized in an x16 array, such that it appears as the least significant sixteen data bits on the bus 528. See the section entitled "Flash Memory Architecture" for further information regarding the flash memory 530.
  • the bus 528 may be an address and data bus, such as Motorola's PowerPC 60x.
  • the SDRAM 535 may be 256 MB organized by sixty-four data bits.
  • An EPROM 532 may store start up instructions that are loaded into the processor 505 or 512 via the bus 528 during an initialization or reset ofthe Node Manager.
• a PCMCIA Flash disk 537 also communicates with the bus 528, and is used for persistent storage, e.g., for storing long term trend data and the like from monitored parameters of the line cards.
  • a warning light may be used so that the Flash disk is not inadvertently removed while data is being written to it.
• the non-volatile memory resources, such as the Flash disk, are designed so that they cannot be removed while the Node Manager card is installed on the OTS backplane.
• SDRAM 540 (e.g., having 4 MB) is provided on the local bus 545, and is used to buffer packets received on the communications module front-end of the main processor 505.
  • the local bus 545 may carry eighteen address bits and thirty-two data bits.
• where the core microprocessor can be disabled, such as is possible with Motorola's PowerPC 603e core inside the MPC8260, the plug-in processor 512 can be installed on the bus 528.
• Such a plug-in processor 512 can be further assisted with an L2 backside cache 514, e.g., having 256 KB. It is expected that a plug-in processor can be used to more than double the performance of the Node Manager.
  • the plug-in processor 512 may be any future type of RISC processor that operates on the 60x bus.
  • the processor 505 yields the bus to, and may also align its peripherals to, the more powerful plug-in processor 512.
• the plug-in processor is also useful, e.g., for the specific situation where the OTS has had line cards added to it and the main processor 505 is therefore no longer able to manage its LCMs at a rate compatible with the desired performance characteristics of the optical networking system.
  • a serial port 523 for debugging may also be created.
• the Node Manager provides the NMS interface and local node management, as well as signaling, routing and fault protection functions (all using the Node Manager's application software); provides real-time LCM provisioning; receives monitored parameters and alarms/faults from each LCM; aggregates monitored parameters and alarms/faults from each line card into a node-wide view; processes node-to-node communication messages; provides remote software download capability; distributes new software to all LCMs; is expandable to utilize a more powerful CPU (through plug-in processor 512), such as of RISC design; is built on a Real-Time Operating System (RTOS); provides intra-OTS networking support (e.g., LAN connectivity to LCMs); and provides node-to-node networking support.
  • FIG. 6 illustrates a Line Card Manager architecture 600 in accordance with the present invention.
  • the LCM modules may be provided as daughter boards/plug-in modules that plug into the respective line cards to control each line card in the OTS.
  • the LCMs offload local processing tasks from the Node Manager and provide continued line card support without any interruptions in the event the Node Manager fails (assuming no backup is available, or the backup has also failed), or the communication path to the Node Manager is not available. That is, even if the control path is lost, the user data paths are still active.
  • the line card state and data are stored until the Node Manager is back in service.
• the line cards which an LCM 600 may control include any of the following: switch fabric, TP_In, TP_Eg, OA_In, OA_Eg, OSM, OPM, or ALI cards (acronyms defined in Glossary).
• the LCM daughter board is built around an embedded controller/processor 605, and contains both digital and analog control and monitoring hardware. LCMs typically communicate with the Node Manager via the OTS internal LAN. The LCM receives commands from the Node Manager, such as for configuring the line cards, and executes the commands via digital and analog control signals that are applied to the associated line card. The LCM gathers from its line card digital and analog feedback and monitored parameter values, and may periodically send this information to the Node Manager, e.g., if requested by the Node Manager. The LCM also passes events such as faults/alarms and alerts to the Node Manager as they occur. These values and all provisioning data are kept in an in-memory snapshot of the line card status.
• the LCM stores this snapshot and a copy of the software that is currently running the LCM in its non-volatile (e.g., flash) memory 610 to allow rapid rebooting of the LCM.
• when the LCM powers up, it loads the software from the non-volatile memory 610 into SDRAM 625, and then begins to execute. This avoids the need for the LCM to download the software from the Node Manager via the OTS internal LAN each time it starts up, which saves time and avoids unnecessary traffic on the internal OTS LAN.
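A minimal C sketch of that power-up path follows, under assumed flash and SDRAM addresses and a hypothetical image header:

```c
#include <stdint.h>
#include <string.h>

#define FLASH_IMAGE_BASE ((const uint8_t *)0xFF000000u)  /* assumed address */
#define SDRAM_LOAD_BASE  ((uint8_t *)0x00100000u)        /* assumed address */

struct image_header {        /* hypothetical header stored in flash */
    uint32_t length;         /* bytes of code and data to copy      */
    uint32_t entry_offset;   /* entry point within the image        */
};

void lcm_boot_from_flash(void)
{
    const struct image_header *hdr =
        (const struct image_header *)FLASH_IMAGE_BASE;

    /* Copy the stored image from non-volatile memory into SDRAM,
     * avoiding a download over the intra-OTS LAN at every start-up. */
    memcpy(SDRAM_LOAD_BASE, FLASH_IMAGE_BASE + sizeof *hdr, hdr->length);

    /* Transfer control to the freshly loaded software. */
    void (*entry)(void) = (void (*)(void))(SDRAM_LOAD_BASE + hdr->entry_offset);
    entry();
}
```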
  • the software logic for all line cards is preferably contained in one discrete software load which has the ability to configure itself based on the identity of the attached line card as disclosed during the discovery phase of LCM initialization.
  • the type of line card may be stored on an EEPROM on the line card.
• the LCM queries the EEPROM through the I²C bus to obtain the identifier.
• See the section entitled "Flash Memory Architecture" for further information regarding the flash memory 610.
• the LCM can also receive new software from the Node Manager via the OTS internal LAN and store it in the flash memory 610. It is desirable to have sufficient non-volatile memory at the LCM to store two copies of the software, i.e., a current copy and a backup copy. In this way, a new software version, e.g., one that provides new features, could be stored at the LCM and tested to see if it works properly. If not, the backup copy (rollback version) of the previous software version could be used.
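One way such a two-copy update/rollback could be organized is sketched below; the bank layout and helper functions are assumptions, and the active-bank flag is kept in non-volatile storage so it survives the trial reboot:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum bank { BANK_A = 0, BANK_B = 1 };   /* two flash copies (assumption) */

/* Provided elsewhere (all assumptions). */
extern int  flash_write_bank(enum bank b, const uint8_t *img, size_t len);
extern void reboot_from_bank(enum bank b);          /* does not return   */
extern bool candidate_passed_selftest(void);
extern enum bank read_active_bank(void);            /* flag kept in flash */
extern void write_active_bank(enum bank b);

/* Write the new version into the non-active bank and trial-boot it;
 * the previous version stays intact as the rollback copy. */
int lcm_install_update(const uint8_t *img, size_t len)
{
    enum bank active    = read_active_bank();
    enum bank candidate = (active == BANK_A) ? BANK_B : BANK_A;

    if (flash_write_bank(candidate, img, len) != 0)
        return -1;                   /* keep running the current copy */

    reboot_from_bank(candidate);     /* trial run of the new software */
    return 0;                        /* not reached */
}

/* After the trial boot: commit the new version, or roll back. */
void lcm_after_trial(enum bank candidate)
{
    if (candidate_passed_selftest())
        write_active_bank(candidate);            /* commit   */
    else
        reboot_from_bank(read_active_bank());    /* rollback */
}
```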
• the Node Manager delegates most of the workload for monitoring and controlling the individual line cards to each line card's local LCM.
  • the controller 605 is the 200 MHz Motorola MPC8255 or MPC8260. However, the invention is not limited to use with any particular model of microprocessor.
  • the controller 605 may have a built-in communications processor front-end, which includes an Ethernet controller (FCC2) 615 that connects to the Node Manager via the internal switch LAN. In the embodiment shown, this connection is made via the line card using an RJ-45 connector. Other variations are possible.
• the flash memory 610 may be 128 MB organized in x16 mode, such that it appears as the least significant sixteen data bits on the bus 620.
  • the SDRAM 625 may be 64 MB organized by sixty-four data bits.
• An A/D converter 635, such as the AD7891-1 (Analog Devices, Inc., Norwood, Mass.), includes a 16-channel analog multiplexer feeding a 12-bit A/D converter.
• a D/A converter 622, which may be an array of four "quad" D/A converters, such as MAX536's (Maxim Integrated Products, Inc., Sunnyvale, Calif.), provides sixteen analog outputs to a connector 640, such as a 240-pin Berg Mega-Array connector (Berg Electronics Connector Systems Ltd, Herts, UK).
• the LCMs and line cards preferably adhere to a standard footprint connect scheme so that it is known which pins of the connector are to be driven or read. Essentially, a telemetry connection is established between the LCM and the line card via the connector 640.
• Since the LCM can be easily removed from its line card rather than being designed into the line card, it can be easily swapped with an LCM having enhanced capabilities, e.g., processor speed and memory, for future upgrades.
• the LCM daughter board removably connects to the associated line card via a connector 640.
  • a serial port 645 for debugging may be added.
  • a serial port 645 may be constructed from port D (SMC1).
  • Port A 636 receives a latch signal.
• a serial bus known as a Serial Peripheral Interface (SPI) 606 is specialized for A/D and D/A devices, and is generated by the controller 605.
  • the SPI 606 provides an interface that allows a line card to communicate directly with the controller 605.
  • the SPI 606 may carry analog signals to the line card via the D/A 622, or receive analog signals from the line card via the A/D 635.
• the FPGA 602 provides a 40-bit status read-only register for reading in signals from the line card, and a 32-bit read/write control register for reading/writing of control signals from/to the line card.
• the FPGA 602 also receives an 8-bit line card ID tag that identifies the location of the line card within the OTS (i.e., slot, shelf and bay), since certain slots are typically reserved for certain line card types.
  • the slot locations are digitally encoded for this purpose.
• the type of line card could be identified directly regardless of the slot, shelf and bay, e.g., by using a serial number or other identifier stored on the line card and accessible to the LCM, e.g., via an I²C bus 604.
  • This bus enables the communication of data between the controller 605 and the connector 640.
  • the bus 604 may be part of a GPIO that receives information from a line card, including the bay, shelf and slot, that identifies the line card's position at the OTS.
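A sketch of decoding such a position tag follows; the text says only that the 8-bit tag encodes bay, shelf, and slot, so the 2/2/4 bit split here is an assumption:

```c
#include <stdint.h>

struct card_position { uint8_t bay, shelf, slot; };

/* Decode the 8-bit position tag read through the FPGA 602 / GPIO
 * path. The 2/2/4 bit allocation is an assumption. */
struct card_position decode_position_tag(uint8_t tag)
{
    struct card_position p;
    p.bay   = (tag >> 6) & 0x03;   /* bits 7..6 */
    p.shelf = (tag >> 4) & 0x03;   /* bits 5..4 */
    p.slot  =  tag       & 0x0F;   /* bits 3..0 */
    return p;
}
```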
• the controller 605 may receive a hard reset signal from the Node Manager, e.g., via the Ethernet controller (FCC2) 615, which clears all registers and performs a cold boot of the system software on the LCM, and a soft reset signal, which performs a warm boot that does not interfere with register contents.
  • the soft reset is preferred for preserving customer cross connect settings.
  • the LCM is preferably not accessible directly from the customer LAN/WAN interfaces.
• An EPROM 612, e.g., having 8 KB, may store instructions that are loaded into the processor 605 via the bus 620 during an initialization or reset of the LCM.
• the microcontroller 605 typically integrates the following functions: 603e core CPU (with its non-multiplexed 32-bit address bus and bi-directional 64-bit data bus), a number of timers (including watchdog timers), chip selects, interrupt controller, DMA controllers, SDRAM controls, and asynchronous serial ports.
  • the second fast communication channel (FCC2) 100 BaseT Ethernet controller is also integrated into the Communications Processor Module functions ofthe controller 605.
• the microcontroller may be configured for 66 MHz bus operation, 133 MHz CPM operation, and 200 MHz 603e core processor operation.
• the line card manager module provides local control for each line card, executes commands received from the Node Manager, provides digital and/or analog control and monitoring of the line card, sends monitored parameters and alarms/faults of the line card to the Node Manager, provides an embedded controller with sufficient processing power to support an RTOS and multi-tasking, and provides intra-OTS networking support.
  • FIG. 7 illustrates an OTS configuration in accordance with the present invention.
  • the OTS 700 includes an optical backplane 730 that uses, e.g., optical fibers to couple optical signals to the different optical circuit cards (line cards).
• Preferably, specific locations/slots of the chassis are reserved for specific line card types according to the required optical inputs and outputs of the line card.
  • the optical backplane 730 includes optical connections to optical links ofthe optical network, and, optionally, to links of one or more access networks.
• although only one line card of each type is shown, as noted previously, more than one line card of each type is typically provided in an OTS configuration.
• Each of the optical circuit cards (specifically, the LCMs of the cards) also communicates via a LAN with the Node Manager to enable the control and monitoring of the line cards.
  • optical inputs and outputs of each card type are as follows:
• ALI cards - inputs from an access network link and OA egress cards; outputs to an access network link and OA ingress cards;
• OA ingress cards - inputs from an access network link and ALI cards; outputs to switching fabric cards and OPM cards;
• OA egress cards - inputs from switching fabric cards; outputs to ALI cards, OPM cards, and an access network link;
• TP ingress cards - inputs from an optical network link; outputs to switching fabric cards and OPM cards;
• TP egress cards - inputs from switching fabric cards; outputs to an optical network link and OPM cards;
• Switch fabric cards - inputs from OA ingress cards and TP ingress cards; outputs to OA egress cards and TP egress cards;
• OSM - inputs from TP ingress cards; outputs to TP egress cards; and
• OPM - inputs from TP ingress cards, TP egress cards, OA ingress cards, and OA egress cards (may monitor additional cards also).
  • FIG. 8 illustrates backplane Ethernet hubs for an OTS in accordance with the present invention.
• the OTS may use standard Ethernet hub assemblies, such as 24-port hubs 830 and 840, to form the basis of inter-processor communication (i.e., between the Node Manager and the LCMs).
• Each hub assembly 830, 840 may have, e.g., twenty-four or more ports, whereas the corresponding shelf backplanes (815, 825, 835, 845, respectively) typically have, e.g., 6-8 ports.
  • a number of connectors, two examples of which are denoted at 820, are provided to enable each line card to connect to a hub.
  • the connectors may be RJ-45 connectors.
  • each connector 820 is connected individually to a hub.
  • the connectors for shelf 1 (815) and shelf 3 (835) may connect to hub 830, while the connectors for shelf 2 (825) and shelf 4 (845) may connect to hub 840.
• a crossover cable 842, which may be 100 BaseT media, may connect the two hubs such that they are part of a common LAN.
• a single hub may be used that is sized large enough to connect to each line card in the OTS bay.
• In this arrangement, the backup Node Manager 750 shadows the primary Node Manager 250 by listening to all traffic on the internal OTS backplane hubs (the shared media LAN) to determine when the primary Node Manager ceases to operate. When such a determination is made, the backup Node Manager 750 takes over for the primary Node Manager 250.

7. Optical Signaling Module
  • FIG. 9 illustrates the operation of a control architecture and OSM in accordance with the present invention.
  • the OSM provides an IP signaling network between switches for the interchange of signaling, routing and control messages.
• a gateway node 900 can interact with other networks, and includes an intra-product (internal to the OTS) LAN 905, which enables communication between the Node Manager and the LCMs.
  • An example non-gateway node 950 similarly includes an intra-product LAN 955, which enables communication between its Node Manager and the LCMs, such as LCM 966 and the associated line card 965, ..., LCM 968 and the associated line card 967, and LCM 971 and the associated line card 970, which is an OSM.
  • the OSC wavelength from the Transport Ingress module is extracted and fed into the optical signaling module (OSM).
  • the network topology is such that the node A 900 receives the OSC first, then forwards it to node B 950.
  • the extracted OSC wavelength from the OSM 920 is provided to the OSM 970.
  • the incoming OSC wavelength from node A 900 is converted from optical to electrical and packetized by the OSM 970, and the packets are sent to the Node Manager 960 for proper signaling setup within the system.
• On the output side of node B 950, outgoing signaling messages are packetized and converted into an optical signal by OSM 970 and sent to the Transport Egress module for transmission to the next-hop OTS.
• Note that FIG. 9 is logical, and that the OSC typically propagates from TP card to TP card, where it is added to TP_Eg by the outgoing OSM and extracted from TP_In by the inbound OSM.
  • FIG. 9 shows the inter-operation ofthe Node Manager, LCMs, and the OSM in the OTS.
  • the interconnection of the NMS 901 with the OTS/node 900 via routers 904 and 906 is also shown.
• the node 900 communicates with the NMS 901 via a POP gateway LAN 902, and with an NMS platform 908 via an NMS LAN 909 and the routers 904 and 906.
  • an electrical signaling channel enables a gateway node to communicate with the NMS.
• Each Node Manager at each OTS typically has three distinct network interfaces: 1) a 100 BaseT interface to the intra-OTS LAN, 2) a 100 BaseT interface to remote NMS platforms, and 3) an out-of-band optical signaling channel (OSC) for node-to-node communications.
• OTSs that act as gateways to the NMS, such as node A 900, may use the 100 BaseT interface, while non-gateway nodes, such as node B 950, need not have this capability.
  • the service provider's LAN is separated from the OTS LAN for more efficient traffic handling.
  • Layer 3 (L3) IP routing over the OSC provides nodes without gateway connectivity access to nodes that have such Gateway capability.
• L3 here refers to the third layer of the OSI model, i.e., the network layer.
  • an NMS connects to application software on the Node Manager through the Node Manager NMS agent.
  • an "S" (services) message interface provides an abstraction layer for connecting Node Manager application software to a collection of Core Embedded Control software services, on the Node Manager, that serves to aggregate information sent to, or received from, the LCMs.
  • a "D" (driver) message interface connects the aggregating software ofthe Node Manager to the LCMs.
  • FIG. 10 illustrates an optical switch fabric module architecture 210 in accordance with the present invention.
• the OSF module 210 may be designed using 8x8 MEMS modules/chips 1010 as switching elements. The switching is done in the optical domain, and no O/E/O conversions are involved. All inputs to a switching element carry one wavelength (i.e., one optical signal as opposed to a multiplex of optical signals), thus enabling wavelength level switching. Moreover, each optical output of every switching element goes through a variable optical attenuator (VOA) 1050, which may be part of the switch fabric card, to equalize the power across all the wavelengths being subsequently multiplexed into one fiber.
  • the switch fabric 210 is designed in a modular and scalable fashion so that it can be easily configured from a small-scale system to a large-scale system depending on the system configuration requirements.
  • the switch fabric 210 may receive optical inputs from an input module 1070 such as a transport ingress card and/or an optical access ingress card.
  • the switch fabric provides the corresponding optical outputs to designated ports of an output module 1080, such as a transport egress card and/or an optical access egress card. Note that, for clarity of depiction in FIG. 10, only example light paths are shown.
• the optical switch module provides wavelength-level switching, individually controllable signal attenuation of each output, interconnection to other modules via the optical fiber backplane, power level control management for ensuring that the power of the signal that is output between switches is acceptable, and path loss equalization for ensuring that all channels have the same power.
  • the optical switch module may also use an inherently very low cross-talk switch fabric technology such as MEMS, typically with a 2-D architecture, have a modular architecture for scalability with 8x8 switch modules, and provide digital control ofthe MEMS fabric with electrostatic actuation.
• the optical transport module (or "TP" module) is a multiplexed multi-wavelength (per optical fiber) optical interface between OTSs in an optical network.
  • this transport module supports in-band control signals, which are within the EDFA window of amplification, e.g., 1525-1570 nm, as well as out-of-band control signals.
  • the OTS may support a 1510 nm channel interface.
  • the OTS uses two primary types of transport modules: Transport Ingress 240 (FIG. 11) and Transport Egress 245 (FIG. 12).
• the optical transport module provides demultiplexing of the OSC signal (ingress module), multiplexing of the OSC signal (egress module), optical amplification (ingress and egress modules), which may use low noise optical amplification and gain flattening techniques, demultiplexing of the multi-wavelength transport signal (ingress module), and multiplexing of the individual wavelength signals (egress module).
  • the optical transport module may also provide dynamic suppression of optical power transients ofthe multi-wavelength signal.
• This suppression may be independent of the number of the surviving signals (i.e., the signals at the transport ingress module that survive at the transport egress module - some signals may be egressed due to drop multiplexing), and independent of the number of the added signals (i.e., the signals added at the transport egress module that are not present at the transport ingress module - these signals may be added using add multiplexing).
  • the optical transport module may also provide dynamic power equalization of individual signals, wavelength connection to the optical switch fabric via the optical backplane, and pump control.
  • FIG. 11 shows the architecture for the Transport Ingress module 240.
  • the module includes a demultiplexer 1105 to recover the OSC, an EDFA pre-amplifier 1110, an EDFA power amplifier 1115, a demultiplexer 1120 to demultiplex the eight wavelengths from the input port, and pump lasers 1122 and 1124 (e.g., operating at 980 nm).
  • a filter 1107 filters the OSC before it is provided to the OSM.
  • a coupler 1108 couples a tapped pre-amplified optical signal to the OPM, and to a PIN diode 1109 to provide a first feedback signal.
• the PIN diode outputs a current that represents the power of the optical signal.
• the OPM may measure the power of the optical signal (as well as other characteristics such as wavelength registration), typically with more accuracy than the PIN diode.
• the tap used allows monitoring of the multi-wavelength signal and may be a narrowband coupler with a low coupling ratio to avoid depleting too much signal power out of the main transmission path.
• a coupler 1126 couples a tapped amplified optical signal to the OPM, and to a PIN diode 1127 to provide a second feedback signal.
  • the pump laser 1122 is responsive to a pump laser driver 1130 and a TEC driver 1132.
  • the high-power pump laser 1124 is responsive to a pump laser driver 1140 and a TEC driver 1142.
  • Both pump laser drivers 1130 and 1140 are responsive to an optical transient and amplified spontaneous emission noise suppression function 1150, which in turn is responsive to the feedback signals from the PIN diodes 1109 and 1127, and control signals from the LCM 1170.
  • a DC conversion and filtering function may be used to provide local DC power.
  • the LCM 1170 provides circuit parameters and control by providing control bits and receiving status bits, performs A/D and D/A data conversions as required, and communicates with the associated Node Manager via an Ethernet or other LAN.
  • the LCM 1170 may provide control signals, e.g., for pump laser current control, laser on/off, laser current remote control, TEC on/off, and TEC remote current control.
  • the LCM 1170 may receive status data regarding, e.g., pump laser current, backface photocurrent, pump laser temperature, and TEC current.
• FIG. 12 shows the architecture of the Transport Egress module 245, which includes a multiplexer 1205 to multiplex the eight wavelengths from the switch fabric, an EDFA Pre-amplifier 1210, an EDFA Power amplifier 1215, a multiplexer 1220 to multiplex the eight wavelengths and the OSC, and pump lasers 1222 and 1224 (e.g., operating at 980 nm).
• the transport egress module 245 also includes a coupler 1208 that couples a tapped pre-amplified optical signal to the OPM module, and to a PIN diode 1209 to provide a first feedback signal, e.g., of the optical signal power.
  • a coupler 1226 couples a tapped amplified optical signal to the OPM module, and to a PIN diode 1227 to provide a second feedback signal.
  • the pump laser 1222 is responsive to a pump laser driver 1230 and a TEC driver 1232.
  • the high-power pump laser 1224 is responsive to a pump laser driver 1240 and a TEC driver 1242.
  • Both pump laser drivers 1230 and 1240 are responsive to an optical transient and amplified spontaneous emission noise suppression function 1250, which in turn is responsive to feedback signals from the PIN diodes 1209 and 1227, and the LCM 1270.
  • a DC conversion and filtering function may be used to provide local DC power.
• the LCM 1270 operates in a similar manner as discussed in connection with the LCM 1170 of the TP ingress module.

10. Optical Access Modules
• the optical access module 230 provides an OTS with a single wavelength interface to access networks that use wavelengths that are compliant with the optical network of the OTSs, such as ITU-grid compliant wavelengths. Therefore, third party existing or future ITU-grid wavelength compliant systems (e.g., GbE router, ATM switch, and Fibre Channel equipment) can connect to the OTS.
  • the optical access modules are generally of two types: Optical Access Ingress 230 (FIG. 13) for ingressing (inputting) one or more signals from an access network, and Optical Access Egress 235 (FIG. 14) for egressing (outputting) one or more signals to an access network.
• the ITU grid specifies the minimum spacing and the actual values of the individual wavelengths in a WDM system.
• features of the optical access modules include: optical amplification, connection to the optical switch fabric to route the signal for its wavelength provisioning, ITU-grid wavelength based configuration, reconfiguration at run-time, direct connectivity for ITU-grid based wavelength signals, local wavelength switching, and direct wavelength transport capability.
• FIG. 13 shows the architecture of the Optical Access Ingress module 230, which includes EDFAs (EDFA-1,...,EDFA-8) 1350, 2x1 switches 1310 and an 8x8 optical (e.g., MEMS) switch 1360.
• each 2x1 switch receives a compliant wavelength (λ) from the faceplate and from the output of an ALI card via the optical backplane.
  • eight compliant wavelengths from the outputs of four ALI cards are received via the optical backplane.
• the LCM 1370 provides a control signal to each switch to output one of the two optical inputs to an associated EDFA.
  • the LCM 1370 operates in a similar manner as discussed in connection with the TP ingress and egress modules.
• Taps 1390 are provided for each of the signals input to the switch 1360 to provide monitoring points to the OPM via the optical backplane.
• taps 1395 are provided for each of the output signals from the switch 1360 to obtain additional monitoring points for the OPM via the optical backplane.
• each wavelength passes through the optical tap 1390 and a 1x2 optical splitter that provides outputs to: (a) an 8x1 optical coupler to provide a signal to the OPM via the optical backplane, and (b) a PIN diode for loss of signal detection by the LCM 1370.
  • the OPM is used to measure the OSNR and for wavelength registration.
• the wavelengths at the taps 1395 are provided to an 8x1 optical coupler to provide a signal to the OPM via the optical backplane.
  • the optical taps, optical splitters and 8x1 optical coupler are passive devices.
• FIG. 14 shows the architecture of the Optical Access Egress module 235.
• the module 235 includes EDFAs (EDFA-1,...,EDFA-8) 1450, 1x2 switches 1470 and an 8x8 optical (e.g., MEMS) switch 1420.
  • the optical switch 1420 receives eight optical inputs from a switch fabric module 210.
  • Taps 1410 and 1490 provide monitoring points for each ofthe inputs and outputs, respectively, ofthe switch 1420 to the OPM via the optical backplane.
  • the optical signals from the switch fabric are monitored for performance and loss of signal detection as discussed in connection with the Optical Access Ingress module 230.
  • the LCM 1472 provides control signals to the switches 1470 for outputting eight compliant wavelengths to the faceplate, and eight compliant wavelengths to the input of four ALI cards via the optical backplane.
• the LCM 1472 operates in a similar manner as discussed previously.

11. Access Line Interface Modules
  • This O/E/O convergent module is a multi-port single wavelength interface between the switching system and legacy access networks using non-compliant wavelengths, e.g., around 1300 nm.
  • the ALI module/card may be provided as either a GbE interface module 220a (FIG. 15) or SONET OC-n module.
  • FIG. 16 shows the ALI module configured as an OC-12 module 220b
  • FIG. 17 shows the ALI module configured as an OC-48 module 220c
  • FIG. 18 shows the ALI module configured as an OC-192 module 220d.
  • Other OC-n speeds may also be supported.
  • the solid lines denote transport data flow
• the dashed lines denote control data flow.
• Referring to FIG. 15, the GbE module 220a provides dual data paths, each of which accepts four GbE signals and multiplexes them to a single OC-48 signal. In the other direction, the module accepts an OC-48 signal and demultiplexes it into four GbE signals in each of the two paths.
  • the GbE module 220a includes SONET framers 1510 and 1520 that handle aggregation and grooming from each GbE port.
  • the SONET framers may use the Model S4083 or Yukon chips from Advanced Micro Circuits Corporation (AMCC) of Andover, Massachusetts.
  • the module 220a aggregates two or more GbE lines into each SONET framer 1510, 1520, which support OC-48 and OC-192 data rates.
• the module 220a also performs wavelength conversion to one of the ITU-grid wavelengths. For each of the modules 220a-220d, the desired ITU-grid wavelength is configured at initial path signaling setup.
• a variety of scheduling algorithms may be used when the aggregate bandwidth of the ALI inputs is greater than that of the ALI output. Such algorithms are typically performed by FPGAs 1540 and 1542. For example, one may use round robin scheduling, where the same bandwidth is allocated to each of the GbE interfaces, or weighted round robin scheduling, where relatively more bandwidth is allocated to specified GbE interfaces that have a higher priority.
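The following C sketch renders the weighted round-robin choice in software (in the actual design it would run in the FPGAs 1540 and 1542); plain round robin is the special case where every weight is one, and the helper names are assumptions:

```c
#define NUM_PORTS 4   /* four GbE inputs per data path */

/* Provided elsewhere (assumptions). */
extern int  port_has_frame(int port);
extern void forward_frame_to_framer(int port);  /* toward the SONET framer */

/* One weighted round-robin cycle: port i may send up to weight[i]
 * frames per visit, so higher-priority ports get more bandwidth. */
void wrr_cycle(const int weight[NUM_PORTS])
{
    for (int port = 0; port < NUM_PORTS; port++) {
        for (int credit = 0; credit < weight[port]; credit++) {
            if (!port_has_frame(port))
                break;                    /* port idle: move on */
            forward_frame_to_framer(port);
        }
    }
}
```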
  • the MAC/PHY chips 1530, 1532, 1534, 1536 communicate with GbE transceivers, shown collectively at 1525, which in turn provide O-E and E-O conversion.
  • the MAC refers to processing that is related to how the medium (the optical fiber) is accessed.
  • the MAC processing performed by the chips may include frame formatting, token handling, addressing, CRC calculations, and error recovery mechanisms.
  • the Physical Layer Protocol, or PHY, processing may include data encoding or decoding procedures, clocking requirements, framing, and other functions.
  • the chips may be AMCC's Model S2060.
  • the module 220a also includes FPGAs 1540, 1542 which are involved in signal processing, as well as a control FPGA 1544.
  • the FPGAs 1540, 1542 may be the Model XCV300 from Xilinx Corp., San Jose, Calif.
  • Optical transceivers (TRx) 1550 and 1552 perform O-E and E-O conversions.
  • the MAC/PHY chips 1530-1536 receive input signals from the GbE transceivers 1525, and provide them to the associated FPGA 1540 or 1542, which in turn provides the data in an appropriate format for the SONET framers 1510 and 1520, respectively.
  • the SONET framers 1510 or 1520 output SONET-compliant signals to the transceivers 1550 and 1552, respectively, for subsequent E-O conversion and communication to the OA In cards 230 via the optical backplane.
  • SONET optical signals are received from the optical access egress cards 235 at the transceivers 1550 and 1552, where O-E conversion is performed, the results of which are provided to the SONET framers 1510 or 1520 for de-framing.
  • the de-framed data is provided to the FPGAs 1540 and 1542, which provide the data in an appropriate format for the MAC/PHY chips 1530-1536.
  • the MAC/PHY chips include FIFOs for storing the data prior to forwarding it to the GbE transceivers 1525.
  • the control FPGA 1544 communicates with the ALI card's associated LCM, and also provides control signals to the transceivers 1550 and 1552, FPGAs 1540 and 1542, SONET framers 1510 and 1520, and MAC/PHY chips 1530-1536.
  • the FPGA 1544 may be the Model XCV150 from Xilinx Corp.
• the ALI modules may include module types 220a-220c, having sixteen physical ports (eight input and eight output) of GbE, OC-12, or OC-48, and module type 220d, having four physical ports (two input and two output) of OC-192.
  • the ALI modules may support OC-12 to OC-192 bandwidths (or faster, e.g., OC-768), provide wavelength conversion, e.g., from the 1250-1600 nm range, to ITU-compliant grid, support shaping and re-timing through O-E-O conversion, provide optical signal generation and amplification, and may use a wavelength channel sharing technique. See FIG. 28 for additional related information.
• FIGs 16, 17 and 18 show the architecture of the OC-12, OC-48, and OC-192 access line interface cards, respectively. See also FIG. 29 for additional related information.
  • FIG. 16 shows an OC-12 module 220b, which aggregates four or more OC-12 lines into each SONET framer 1610 or 1620, which support OC-48 data rates.
  • Quad PHY functions 1630 and 1640 each receive four signals from OC-12 interfaces via transceivers, shown collectively at 1625, and provide them to corresponding SONET framers 1610 and 1620, respectively.
  • the SONET Framers may use AMCC's Model S4082 or Missouri chips.
  • the Quad PHY functions may each include four of AMCC's Model S3024 chips.
  • the SONET framers 1610 and 1620 provide the data in frames. Since four OC-12 signals are combined, a speed of OC-48 is achieved.
  • the framed data is then provided to optical transceivers 1650 and 1652 for E-O conversion, and communication to the optical access ingress cards 230 via the optical backplane.
  • the SONET framers 1610 and 1620 may also communicate with adjacent ALI cards via an electrical backplane to receive additional input signals, e.g., to provide a capability for switch protection mechanisms.
  • the electrical backplane may comprise a parallel bus that allows ALI cards in adjacent bays to communicate with one another.
  • the electrical backplane may also have a component that provides power to each of the cards in the OTS bay.
  • optical signals are received by the transceivers 1650 and 1652 from the OA Eg cards and provided to the SONET framers 1610 and 1620 following O-E conversion.
  • the SONET framers 1610 and 1620 provide the signals in a format that is appropriate for the Quad PHY chips 1630 and 1640.
• the control FPGA 1644 communicates with the ALI card's associated LCM, and also provides control signals to the transceivers 1650 and 1652, SONET framers 1610 and 1620, and Quad PHY chips 1630 and 1640.
• FIG. 17 shows an OC-48 module 220c, which aggregates two or more OC-48 lines into each SONET framer 1710 and 1720, which support OC-192 data rates.
  • PHY chips 1730, 1732, 1734 and 1736 each receive two signals from OC-48 interfaces via transceivers 1725 and provide them to corresponding SONET framers 1710 and 1720, respectively.
  • the SONET framers 1710 and 1720 provide the signals in frames. Since four OC-48 signals are combined, a speed of OC-192 is achieved.
  • the signals are then provided to optical transceivers 1750 and 1752 for E-O conversion, and for communication to optical access ingress cards 230 via the optical backplane.
  • the SONET framers 1710 and 1720 may also communicate with adjacent ALI cards.
• optical signals are received by the optical transceivers 1750 and 1752 from optical access egress cards and provided to the SONET framers 1710 and 1720 following O-E conversion at the transceivers 1750, 1752.
  • the SONET framers 1710 and 1720 provide the signals in a format that is appropriate for the OC-48 interfaces.
  • the formatted optical signals are provided to the OC-48 interfaces via the PHY chips 1730-1736.
  • dedicated ports may be provided, which obviate MAC processing.
  • the FPGA 1744 communicates with the ALI card's associated LCM, and also provides control signals to the transceivers 1750 and 1752, SONET framers 1710 and 1720, and PHY chips 1730-1736.
  • FIG. 18 shows an OC-192 module 220d, which provides one OC-192 line into each SONET framer 1810, 1820, which support OC-192 data rates.
  • PHY chips 1830 and 1832 each receive a signal from OC-192 interfaces via transceivers 1825 and provide it to corresponding SONET framers 1810 and 1820, respectively, which provide the signals in frames.
  • the signals are then provided to optical transceivers 1850 and 1852 for E-O conversion, and communicated to OA In cards 230 via the optical backplane.
  • the SONET framers 1810 and 1820 may also communicate with adjacent ALI cards.
  • optical signals are received by the optical transceivers 1850 and 1852 from the OA_Eg cards and provided to the SONET framers 1810 and 1820 following O-E conversion.
  • the SONET framers 1810 and 1820 provide the signals in a format that is appropriate for the OC-192 interfaces.
  • the formatted signals are provided to the OC-192 interfaces via the PHY chips 1830 and 1832.
  • the FPGA 1844 communicates with the ALI card's associated LCM, and also provides control signals to the transceivers 1850 and 1852, SONET framers 1810 and 1820, and PHY chips 1830 and 1832.
• the Optical Performance Monitoring (OPM) module 260 is used for several activities. For example, it monitors the power level of a multi-wavelength signal, the power level of a single wavelength signal, and the optical signal-to-noise ratio (OSNR) of each wavelength. It also measures wavelength registration. Each incoming wavelength power variation should be less than 5 dB and each out-going wavelength power variation should be less than 1 dB.
  • the OPM acts as an optical spectrum analyzer.
• the OPM may sample customer traffic and determine whether the expected signal levels are present.
  • the OPM monitoring is in addition to the LCM monitoring of a line card, and generally provides higher resolution readings.
  • the OPM is connected through the optical backplane, e.g., using optical fibers, to strategic monitoring points on the line cards.
  • the OPM switches from point to point to sample and take measurements. Splitters, couplers and other appropriate hardware are used to access the optical signals on the line cards.
• the OPM module and signal processing unit 260 communicates with an LCM 1920, and receives monitoring data from all the line card monitoring points via a 1xN optical switch 1930 through the optical backplane of the OTS.
  • a faceplate optical jumper 1912 allows the OPM module and signal processing unit 260 and the optical switch 1930 to communicate.
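The point-to-point sampling described above amounts to a scan loop like the C sketch below; the number of monitoring points, the settling delay, and the helper names are assumptions:

```c
#define NUM_MONITOR_POINTS 64   /* assumed N of the 1xN switch 1930 */

struct opm_reading { double power_dbm, osnr_db, wavelength_nm; };

/* Provided elsewhere (assumptions). */
extern void select_monitor_point(int point);   /* drives the 1xN switch */
extern void settle_delay_ms(int ms);
extern struct opm_reading opm_measure(void);   /* spectrum sweep        */
extern void report_to_lcm(int point, struct opm_reading r);

/* Step the optical switch through every tap, let the path settle,
 * then record and report power, OSNR, and wavelength registration. */
void opm_scan_all_points(void)
{
    for (int point = 0; point < NUM_MONITOR_POINTS; point++) {
        select_monitor_point(point);
        settle_delay_ms(10);               /* illustrative settling time */
        report_to_lcm(point, opm_measure());
    }
}
```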
• a DC conversion and filtering function may be used to provide local DC power.
  • the LCM 1920 (like all other LCMs of a node) communicates with the Node Manager via the intra-node LAN.
  • the OPM supports protection switching, fault isolation, and bundling, and measures optical power, OSNR of all wavelengths (by sweeping), and wavelength registration.
  • the OPM which preferably has a high sensitivity and large dynamic range, may monitor each wavelength, collect data relevant to optical devices on the different line cards, and communicate with the NMS (via the LCM and Node Manager).
• the OPM is preferably built with a small form factor.

13.
  • the OTS is designed to be flexible, particularly as a result of its modular system design that facilitates expandability.
  • the OTS is based on a distributed architecture where each line card has an embedded controller.
  • the embedded controller performs the initial configuration, boots up the line card, and is capable of reconfiguring each line card without any performance impact on the whole system.
  • FIG. 20 illustrates a physical architecture of an OTS chassis or bay (receiving apparatus) in an OXC configuration 2000 in accordance with the present invention.
• a total of twenty-two circuit packs/line cards are provided in receiving locations or slots of the bay, with two of those twenty-two circuit packs being OTS Node Manager cards.
  • An OTS is typically designed to provide a certain number of slots per shelf in its bay.
• FIG. 21 shows a fully configured OTS 2100 in an OXC/OADM configuration.
  • FIG. 22 shows a fully configured ALI card bay 2200.
• Optical cables in an OTS are typically connected through the optical backplane to provide simple and comprehensive optical cable connectivity of all of the optical modules.
  • the electrical backplane handles power distribution, physical board connection, and supports all physical realizations with full NEBS level 3 compliance. Note that since "hot" plugging of cards into an OTS is often desirable, it may be necessary to equip such cards with transient suppression on their power supply inputs to prevent the propagation of powering-up transients on the electrical backplane's power distribution lines.
• locations or slots in the OTS bay may be reserved for specific types of line cards since the required optical coupling of a line card depends on its function, and it is desirable to minimize the complexity of the optical fiber connections in the optical backplane.
• Each of the optical circuit cards also has a connection to an electrical backplane that forms the LAN for LCM-Node Manager communications. This connection is uniform for each card and may use an RJ-45 connector, which is an 8-wire connector used on network interface cards.
  • the OTS is flexible in that it can accommodate a mix of cards, including Optical Access and Transport line cards.
  • largely generic equipment can be provided at various nodes in a network and then a particular network configuration can be remotely configured as the specific need arises. This simplifies network maintenance and provides great flexibility in reconfiguring the network.
• the OTS may operate as a pure transport optical switch if all of its cards are transport cards (FIG. 20), e.g., eight transport ingress (TP_In) cards and eight transport egress (TP_Eg) cards.
  • each TP In card has one input port/fiber
  • each TP_Eg card has one output port/fiber.
• each port/fiber supports eight wavelength-division multiplexed λ's, along with the OSC.
  • the OTS may operate as an Add/Drop terminal if it is configured with ALI, OA, and TP cards (FIGs 21 and 22).
  • a wide range of configurations is possible depending on the mix of compliant and non-compliant wavelengths supported.
  • a typical configuration might include sixteen ALI cards for conversion of non- compliant wavelengths, four OA In cards, four OA Eg cards, four TP_In cards, and four TP_Eg cards.
• Since the ALI cards provide wavelength conversion in this embodiment, no wavelength conversion need be performed within the optical fabric. However, wavelength conversion within the optical fabric is also a possibility as the switch fabric technology develops.
  • the OTS is scalable since line cards may be added to the spare slots in the bay at a later time, e.g., when bandwidth requirements ofthe network increase.
• multiple OTS bays can be connected together to further expand the bandwidth-handling capabilities of the node and/or to connect bays having different types of line cards. This connection may be realized using a connection like the ALI card-to-OA card connection via the optical backplane.
• when configured as an OXC or OADM, the OTS provides ALI, transport/switching, and management functions. Since the OADM can be equipped with transport cards (TP_In and TP_Eg), it performs all of the functions listed, while the dedicated OXC configuration performs the switching/transport and management functions, but not the ALI functions.
  • the Node Manager or NMS may control the OTS to configure it in the OXC or OADM modes, or to set up routing for light paths in the network.
  • each OTS can be used in a different configuration based on its position within an optical network.
• in the OXC configuration, the input transport module, the switch fabric and the output transport module are used.
  • FIG. 23 shows the modules used for the OXC configuration.
  • the OTS 200a includes the TP In modules 240 and the TP_Eg modules 245.
  • Each TP_In card may receive one fiber that includes, e.g., eight multiplexed data channels and the OSC.
  • each TP Eg card outputs eight data channels in a multiplex and the OSC on an associated fiber.
  • FIG. 24 shows the modules used for the OADM configuration when the incoming optical signals are compliant, e.g., with the ITU grid.
  • the access line modules are not needed since the wavelengths are input directly from the access network to the OA_In cards.
  • the OTS 200b includes the TP In modules 240, the TP_Eg modules 245, the OA In modules 230, and the OA Eg modules 235.
  • the OA In and OA Eg cards are typically provided in pairs to provide bi-directional signaling.
  • FIGs 25 and 26 show the OTS configurations when non-compliant wavelengths are used.
  • the non-compliant wavelengths may include, e.g., eight OC-12 wavelengths and eight OC-48 wavelengths.
  • the OTS 200c uses the ALI modules 220 for converting the non-compliant wavelengths to compliant wavelengths, e.g., using any known wavelength conversion technique.
  • the OA In modules 230 receive the compliant wavelengths from the ALIs 220 and provide them to the switch fabric 210.
  • the switched signals are then provided to the TP_Eg modules 245 for transport on optical fibers in the optical network. Note that, typically, bidirectional signaling is provided to/from the access network via the ALI cards. Thus, the processes of FIGs 25 and 26 may occur at the same time via one or more ALI cards.
  • the OTS 200d includes the TP In module 240 for receiving the optical signal via the optical network, the OA Eg modules 235 for receiving the optical signals from the switch 210, and the ALI modules 220 for converting the compliant wavelengths to non-compliant wavelengths for use by the access network.
  • the non-compliant wavelengths may be provided as, e.g., eight OC-12 wavelengths and eight OC-48 wavelengths.
• the ALI modules both provide inputs to the OA_In modules 230, and receive outputs from the OA_Eg modules 235.
• any concurrent combination of the following is possible: (a) inputting OTS-compliant signals from one or more access networks to the OA_In modules, (b) inputting non-OTS-compliant signals from one or more access networks to the ALI modules, (c) outputting signals, which are both OTS- and access-network compliant, from the OA_Eg modules to one or more access networks, and (d) outputting signals, which are OTS-compliant but non-compliant with an access network, to the ALI modules.
• a primary service enabled by the present invention is a transparent circuit-switched light path. Compared to conventional services, these flows are distinguished by the large quantity of bandwidth provided, and a setup time measured in seconds.
  • FIG. 27 shows a simple example of wavelength adding, dropping, and cross-connection.
• light paths are terminated at the OADMs 2710, 2730, 2750 and 2760 (at edge nodes of the network 2700), and switched through the OXCs 2720 and 2740 (at internal nodes of the network 2700).
  • the same wavelength carrying the light path is used on all links comprising the light path, but the wavelength can be reused on different links.
• λ1 can be used in light paths 2770 and 2780;
• λ2 can be used in light path 2775; and
• λ3 can be used in light paths 2785 and 2790.
  • this transparent data transfer service is equivalent to a dedicated line for SONET services, and nearly equivalent to a dedicated line for GbE services.
• Since the OTS operation is independent of data rate and protocol, it does not offer a Quality of Service in terms of bit error rate or delay.
  • the OTS may monitor optical signal levels to ensure that the optical path signal has not degraded.
• the OTS may perform dynamic power equalization of the optical signals, and dynamic suppression of optical power transients of the multi-wavelength signal, independently of the number of the surviving signals and independently of the number of the added signals.
  • the OTS may thus measure an Optical Quality of Service (OQoS) based on optical signal-to-noise ratio (OSNR), and wavelength registration.
• for non-compliant wavelengths, an optical O-E-O translation is performed from the non-compliant wavelength to a compliant wavelength.
  • the signal shaping and timing may be performed on the ALI cards using on-off keying with Non-Return-to-Zero signaling.
• eight compliant waveforms are supported based on the ITU-specified grid, with 200 GHz or 1.6 nm spacing, shown in Table 4. These are eight wavelengths from the ITU grid.
  • the received signal is optically amplified and switched to the destination.
  • signals are converted to electrical form and are groomed. If the current assignment has several lower rate SONET input streams, e.g., OC-12, going to the same destination, the ALI can groom them into one higher rate stream, e.g., OC-48. After being switched to the destination port, the stream is multiplexed by a TP module onto a fiber with other wavelengths for transmission. Moreover, for non-compliant wavelengths, the OTS performs a wavelength conversion to an ITU wavelength, and the stream is then handled as a compliant stream. Conversion of optical signals from legacy networks to ITU-compliant wavelengths listed in Table 4 may be supported.
  • FIG. 28 illustrates Gigabit Ethernet networks accessing a managed optical network in accordance with the present invention.
  • the GbE interface supports the fiber media GbE option, where the media access control and multiplexing are implemented in the electrical domain. Therefore, the flow is somewhat different from SONET.
  • the GbE packetized data streams are received as Ethernet packets, multiplexed into a SONET frame, modulated (initial timing and shaping), and converted to a compliant wavelength. After the compliant wavelengths are formed, they are handled as compliant wavelength streams as described above.
• GbE1 2802, GbE2 2804, GbE3 2806, GbE4 2808, GbE5 2840, GbE6 2842, GbE7 2844 and GbE8 2846 are separate LANs.
• each of the active ports is going to a different destination, so dedicated wavelengths are assigned. If two or more GbE ports have the same destination switch, they may be multiplexed onto the same wavelength.
• each of four GbE ports is transmitted to the same destination (i.e., OADM B 2830) but to separate GbE LANs (GbE1 is transmitted to GbE5, GbE2 is transmitted to GbE6, etc.).
• the client can attach as many devices to the GbE as desired, but their packets are all routed to the same destination. In this case, the processing flow proceeds as follows. First, the OADM A 2810 receives GbE packets on GbE1 2802, GbE2 2804, GbE3 2806, and GbE4 2808.
  • the OADM A 2810 performs O-E conversion and multiplexes the packets into SONET frames at the ALI/OA function 2812.
• OADM A 2810 performs the E-O conversion at the assigned λ, also at the ALI/OA function 2812.
  • the resulting optical signal is switched through the switch fabric (SW) 2814 to the transport module (egress portion) 2816, and enters the network 2820.
  • the optical signal is switched through the optical network 2820 to the destination switch at OADM B 2830.
  • the optical signal is received at the transport module (ingress portion) 2832, and switched through the switch fabric 2834 to the OA_Eg/ALI function 2836.
  • the OADM B 2830 extracts the GbE packets from the SONET frame at the OA/ALI function 2836.
  • the OADM B 2830 demultiplexes the packets in hardware at the OA/ALI function 2836 to determine the destination GbE port and transmits the packet on that port. Since the ALI 2812 in the OADM A 2810 may receive packets on different ports at the same time, the ALI buffers one of the packets for transmission after the other. However, appropriate hardware can be selected for the ALI such that the queuing delays incurred are negligible and the performance appears to be like a dedicated line. Note that, in this example, all GbE ports are connected to the same ALI.
  • the service provider can configure the traffic routing within the GbE networks to ensure that traffic going to the same destination is routed to the same input GbE port on the optical switch. Multiplexing GbE networks attached to different ALIs is also possible. Refer also to FIG. 15 and the related discussion.
  • the QoS in terms of traditional measures is not directly relevant to the optical network. Instead, the client (network operator) may control these performance metrics. For example, if the client expects that the GbE ports will have a relatively modest utilization, the client may choose to assign four ports to a single OC-48 λ operating at 2.4 Gbps (assuming they all have the same destination port). In the worst case, the λ channel may be oversubscribed, but for the most part, its performance should be acceptable.
  • weighted fair queuing may be used that allows the client to specify the weights given to each stream. In this way, the client can control the relative fraction of bandwidth allocated to each stream.
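A minimal sketch of the weight-to-bandwidth arithmetic implied here; the function name and the four-stream example are illustrative assumptions, not from the specification:

```c
/* Sketch: each stream's bandwidth share under weighted fair queuing
 * is its client-specified weight divided by the sum of all weights. */
#include <stdio.h>

double wfq_share(const double *weights, int n_streams, int i)
{
    double total = 0.0;
    for (int k = 0; k < n_streams; k++)
        total += weights[k];
    return (total > 0.0) ? weights[i] / total : 0.0;
}

int main(void)
{
    /* e.g., four GbE streams sharing one OC-48 lambda at 2.4 Gbps */
    double w[4] = {4.0, 2.0, 1.0, 1.0};
    for (int i = 0; i < 4; i++)
        printf("stream %d gets %.0f%% of 2.4 Gbps = %.2f Gbps\n",
               i, 100.0 * wfq_share(w, 4, i), 2.4 * wfq_share(w, 4, i));
    return 0;
}
```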
  • the client may be operating a mix of CBR, VBR, ABR, and UBR services as inputs to the OADM module.
  • the switching system does not distinguish the different cell types. It simply forwards the ATM cells as they are received, and outputs them on the port as designated during setup.
  • FIG. 29 shows an example of interconnectivity ofthe optical network with OC-12 legacy networks. Other OC-n networks may be handled similarly. Refer also to FIGs 16-18 and the related discussions.
  • the example shows four OC-12 networks 2902, 2904, 2906 and 2908, connected to the optical network 2920 through the OC-12 ALI card 2912. Similarly, four OC-12 networks 2940, 2942, 2944 and 2946 are connected to the OC-12 ALI card 2936 at the OADM B 2930.
  • the processing flow proceeds as follows. First, the OADM A 2910 receives packets on OC-12 1 (2902), OC-12 2 (2904), OC-12 3 (2906), and OC-12 4 (2908).
  • the OADM A 2910 multiplexes the packets into SONET frames at OC-48 at the ALI/OA module 2912 using TDM.
  • OC-n uses only the OA portion, not the ALI portion.
  • the ALI is used for wavelength conversion, through an O-E-O process, then the OA is used for handling the newly-compliant signals.
  • the resulting optical signal is switched through the switch fabric (SW) 2914 to the transport module (egress portion) (TP) 2916, and enters the network 2920.
  • the optical signal is switched through the optical network 2920 to the destination switch at OADM B 2930.
  • the optical signal is received at the transport module (ingress portion) 2932, and switched through the switch fabric (SW) 2934 to the OA/ALI function 2936.
  • the OADM B 2930 extracts the packets from the SONET frame at the OA/ALI function 2936.
  • the OADM B 2930 demultiplexes the packet in hardware at the OA/ALI function 2936 to determine the destination port, and transmits the packet on that port.
16. Routing and Wavelength Assignment
  • the routing block 3120 of FIG. 31 refers to a Routing and Wavelength Assignment (RWA) function that may be provided as software running on the NMS for selecting a path in the optical network between endpoints, and assigning the associated wavelengths for the path.
  • the same wavelength is used on each link in the path, i.e., there is wavelength continuity on each link.
  • OSPF (Open Shortest Path First) is particularly suitable for RWA since it is available at low risk, e.g., easily extended to support traffic engineering and wavelength assignment; scalable, e.g., able to support large networks using one or two levels of hierarchy; less complex than other candidate techniques; and widely commercially accepted.
  • Several organizations have investigated the enhancement of OSPF to support optical networks, and several alternative approaches have been formulated. The major variation among these approaches involves the information that should be distributed in the link state advertisement (LSA) messages. As a minimum, it is necessary to distribute the total number of active wavelengths on each link, the number of allocated wavelengths, the number of pre-emptable wavelengths, and the risk groups throughout the network.
  • information may be distributed on the association of fibers and wavelengths such that nodes can derive wavelength availability. In this way, wavelength assignments may be made intelligently as part of the routing process.
  • the overhead incurred can be controlled by "re-advertising" only when significant changes occur, where the threshold for identifying significant changes is a tunable parameter.
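For illustration, the per-link state described above might be modeled as follows, together with the tunable "significant change" test that gates re-advertising; all field and function names are assumptions, not the specification's:

```c
/* Sketch of the per-link state flooded in extended OSPF LSAs, per the
 * text above. Names and the change criterion are illustrative. */
#include <stdlib.h>
#include <stdbool.h>

struct optical_link_state {
    unsigned int total_wavelengths;       /* active wavelengths on link */
    unsigned int allocated_wavelengths;   /* currently in use           */
    unsigned int preemptable_wavelengths; /* carrying pre-emptable load */
    unsigned int risk_group_id;           /* shared-risk group of link  */
};

/* Re-advertise only when the allocated count has moved by more than a
 * configured threshold since the last advertisement. */
bool needs_readvertise(const struct optical_link_state *last_advertised,
                       const struct optical_link_state *current,
                       unsigned int change_threshold)
{
    unsigned int delta =
        (unsigned int)abs((int)current->allocated_wavelengths -
                          (int)last_advertised->allocated_wavelengths);
    return delta > change_threshold;
}
```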
  • the optical network may support some special requirements.
  • the client may request paths that are disjoint from a set of specified paths.
  • the client provides a list of circuit identifiers and requests that the new path be disjoint from each of these paths.
  • the routing algorithm must specifically exclude the links/switches comprising these paths in setting up the new path.
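A minimal sketch of such an exclusion step, assuming a simple adjacency-matrix topology; all names are hypothetical:

```c
/* Sketch: honoring a disjoint-path request by pruning every link and
 * intermediate switch used by the reference circuits before running
 * the normal route computation. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_NODES 256

struct topology {
    bool link_usable[MAX_NODES][MAX_NODES];
    bool node_usable[MAX_NODES];
};

struct path {
    int    hops[MAX_NODES];  /* node IDs along the existing circuit */
    size_t n_hops;
};

void exclude_path(struct topology *topo, const struct path *existing)
{
    /* Remove every link of the existing path, in both directions. */
    for (size_t i = 0; i + 1 < existing->n_hops; i++) {
        int a = existing->hops[i], b = existing->hops[i + 1];
        topo->link_usable[a][b] = false;
        topo->link_usable[b][a] = false;
    }
    /* Intermediate switches are excluded too; endpoints stay usable. */
    for (size_t i = 1; i + 1 < existing->n_hops; i++)
        topo->node_usable[existing->hops[i]] = false;
}
```

A shortest-path computation run on the pruned topology then cannot reuse any link or switch of the specified circuits.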
  • Flash memory is used on all controllers for persistent storage.
  • the Node Manager flash memory may have 64 Mbytes while LCM flash memories may have 16 Mbytes.
  • the Intel 28F128J3A flash chip, containing 16 Mbytes, may be used as a building block. Designing flash memory into both controllers obviates the need for ROM on both controllers. Both controllers boot from their flash memory. Should either controller outgrow its flash storage, the driver can be modified to apply compression techniques to avoid hardware modifications.
  • the flash memory on all controllers may be divided into fixed partitions for performance.
  • the Node Manager may have five partitions, including (1) current version Node Manager software, (2) previous version (rollback) Node Manager software, (3) LCM software, (4) Core Embedded software data storage, and (5) application software/data storage.
  • the LCM may have 3 partitions, including (1) LCM software, (2) previous version (rollback) LCM software, and (3) Core Embedded software data storage.
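The two partition layouts above might be modeled as follows; the enum and struct names, and any sizes or offsets, are illustrative assumptions:

```c
/* Sketch of the fixed flash partition tables for the two controllers. */
#include <stddef.h>

enum nm_partition {
    NM_PART_CURRENT_SW,   /* current Node Manager software       */
    NM_PART_ROLLBACK_SW,  /* previous (rollback) version         */
    NM_PART_LCM_SW,       /* LCM software image                  */
    NM_PART_CORE_DATA,    /* Core Embedded software data storage */
    NM_PART_APP_DATA,     /* application software/data storage   */
    NM_PART_COUNT
};

enum lcm_partition {
    LCM_PART_CURRENT_SW,  /* LCM software                        */
    LCM_PART_ROLLBACK_SW, /* previous (rollback) version         */
    LCM_PART_CORE_DATA,   /* Core Embedded software data storage */
    LCM_PART_COUNT
};

struct partition_entry {
    size_t offset;        /* byte offset into the flash device   */
    size_t size;          /* fixed partition size in bytes       */
};
```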
  • the flash memory on both the Node Manager and LCM may use a special device driver for read and write access since the flash memory has access controls to prevent accidental erasure or reprogramming.
  • For write access, the flash driver requires a partition ID, a pointer to the data, and a byte count.
  • the driver first checks that the size of the partition is greater than or equal to the number of bytes to be written, and returns a negative integer value if the partition is too small to hold the data in the buffer.
  • the driver checks that the specified partition is valid and, if the partition is not valid, returns a different negative integer.
  • the driver then writes a header containing a timestamp, checksum, and user data byte count into the named partition.
  • the driver then writes the specified number of bytes starting from the given pointer into the named partition.
  • the flash driver returns a positive integer value indicating the number of user data bytes written to the partition. If the operation fails, the driver returns a negative integer value indicating the reason for failure (e.g., device failure).
  • For read access, the flash driver requires a partition ID, a pointer to a read data buffer, and the size of the data buffer. The driver checks that the size of the read buffer is greater than or equal to the size of the data stored in the partition (the size field is zero if nothing has been stored there). The driver returns a negative integer value if the buffer is too small to hold the data in the partition. The driver then does a checksum validation of the flash contents. If checksum validation fails, the driver returns a different negative integer. If the checksum validation is successful, the driver copies the partition contents into the provided buffer and returns a positive integer value indicating the number of bytes read. If the operation fails, the driver returns a negative integer value indicating the reason for failure (e.g., device failure).
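Taken together, the write and read behavior described above suggests a driver interface along these lines; the function names, error codes, and exact header layout are assumptions for illustration, not the actual driver:

```c
/* Sketch of the flash driver entry points: negative returns encode a
 * failure reason, positive returns give the user-data byte count. */
#include <stdint.h>
#include <stddef.h>
#include <time.h>

#define FLASH_ERR_TOO_SMALL  (-1)  /* buffer/partition size mismatch */
#define FLASH_ERR_BAD_PART   (-2)  /* invalid partition ID           */
#define FLASH_ERR_CHECKSUM   (-3)  /* stored checksum does not match */
#define FLASH_ERR_DEVICE     (-4)  /* underlying device failure      */

struct flash_header {              /* written ahead of the user data */
    time_t   timestamp;
    uint32_t checksum;
    uint32_t byte_count;
};

/* Writes `len` bytes from `data` into the named partition.
 * Returns the number of user-data bytes written, or a negative code. */
int flash_write(int partition_id, const void *data, size_t len);

/* Copies a partition's contents into `buf` (of size `buf_len`) after
 * validating the stored checksum.
 * Returns the number of bytes read, or a negative code. */
int flash_read(int partition_id, void *buf, size_t buf_len);
```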
18. Hierarchical Optical Network Structure
  • the all-optical network architecture is based on an open, hierarchical structure to provide interoperability with other systems and accommodate a large number of client systems.
  • FIG. 30 depicts the hierarchical structure of the all-optical network architecture for a simple case with three networks, network A 3010, network B 3040 and network C 3070.
  • a network is managed by a three-tiered control architecture: i) at the highest level, a leaf NMS manages the multiple OTSs of its network; ii) at the middle level, each OTS is managed individually by its associated Node Manager; and iii) at the lowest level, each line card of a node (except the Node Manager) is managed by an associated Line Card Manager.
  • the nodes such as nodes 3012, 3014, 3042 and 3072 depict the optical switching hardware (the OTSs).
  • network A 3010 and network B 3040 communicate with one another via OTSs 3012 and 3042.
  • network A 3010 and network C 3070 communicate with one another via OTSs 3014 and 3072.
  • each network has its own NMS.
  • network A 3010 has an NMS 3015
  • network B 3040 has an NMS 3045
  • network C 3070 has an NMS 3075.
  • the NMS 3015 for Network A 3010 may be the root NMS, such that the NMSs 3045 and 3075 for Networks B and C, respectively, are subservient to it.
  • Each NMS includes software that runs separately from the network it controls, as well as NMS agent software that runs on each Node Manager of the NMS's network.
  • the NMS agent software allows each NMS to communicate with the Node Managers of each of its network's nodes.
  • each NMS may use a database server to store persistent data, e.g., longer-life data such as configuration and connection information.
  • the database server may use LDAP and Oracle® database software to store longer-life data such as configuration and connection information.
  • LDAP is an open industry standard solution that makes use of TCP/IP, thus enabling wide deployment. Additionally, an LDAP server can be accessed using a web-based client, which is built into many browsers, including the Microsoft Explorer® and Netscape Navigator® browsers. The data can be stored in a separate database for each instance of a network, or multiple networks can share a common database server, depending on the size of the network or networks. As an example, separate databases can be provided for each of networks A, B and C, where each database contains information for the associated network, such as connection, configuration, fault, and performance information. In addition, the root NMS (e.g., NMS 3015) can be provided with a summary view of the status and performance data for Networks B and C. The hierarchical NMS structure is incorporated into the control architecture as needed.
  • the functionality provided by the OTS and NMS, as well as the external network interfaces are shown in FIG. 31.
  • the path restoration 3115 and network management 3105 functionalities are implemented in the NMS, while the routing 3120, signaling 3135 (including user-network signaling 3136 and internal signaling 3137, internal to the network), agent/proxy 3110, and protection 3145 are real-time functionalities implemented in the Node Manager.
  • External interfaces to the optical network system include: (1) a client system 3140 requesting services, such as a light path, from the optical network via the UNI protocol, (2) a service provider/carrier NMS 3130 used for the exchange of management information, and (3) a hardware interface 3150 for transfer of data.
  • An interface to a local GUI 3125 is also provided.
  • the client system 3140 may be resident on the service provider's hardware. However, if the service provider does not support UNI, then manual (e.g., voice or email) requests can be supported.
  • Light path (i.e., optical circuit) setup may be provided, e.g., using a signaled light path, a provisioned light path, and proxy signaling.
  • a signaled light path is analogous to an ATM switched virtual circuit, such that a service provider acts as UNI requestor and sends a "create" message to initiate service, and the Optical Network Controller (ONC) invokes NNI signaling to create a switched lightpath.
  • a provisioned lightpath is analogous to an ATM permanent virtual circuit (PVC), such that a service provider via the NMS requests a lightpath be created (where UNI signaling is not used), and the NMS commands the switches directly to establish a lightpath.
  • the NMS can also use the services of a proxy signaling agent to signal for the establishment of a lightpath.
  • the service provider/carrier NMS interface 3130 enables the service provider operator to have an integrated view of the network using a single display.
  • This interface, which may be defined using CORBA, for instance, may also be used for other management functions, such as fault isolation.
  • the local GUI interface 3125 allows local management of the optical network by providing a local administrator/network operator with a complete on-screen view of topology, performance, connection, fault and configuration management capabilities and status for the optical network.
  • the control plane protocol interface between the service provider control plane and the optical network control plane may be based on an "overlay model" (not to be confused with an overlay network used by the NMS to interface with the nodes), where the optical paths are viewed by the service provider system as fibers between service provider system endpoints.
  • the routing algorithm employed by the optical network is separate from the routing algorithms employed by the higher layer user network.
  • the internal optical network routing algorithm, internal signaling protocols, protection algorithms, and management protocols are discussed in further detail below.
  • the all-optical network based on the OTS may be modified from the "overlay model" architecture to the "peer model" architecture, where the user device is aware of the optical network routing algorithm.
  • the optical network and user network routing algorithms are integrated in the "peer model” architecture.
  • the NNI (Network-Network Interface) may be specified by extending the UNI protocol (ATM Forum 3.1 Signaling Protocol) by specifying additional message fields, states, and transitions.
  • UNI is a protocol by which an external network accesses an edge OTS ofthe optical network.
  • the NNI may include a path Type-Length-Value (TLV) field in its signaling messages.
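For illustration, a generic Type-Length-Value field of the kind referred to here might be declared as follows; the type code and payload interpretation are assumptions, not defined by the specification:

```c
/* Sketch of a generic TLV field such as the path TLV an NNI message
 * could carry. Names and semantics are illustrative. */
#include <stdint.h>

struct nni_tlv {
    uint16_t type;    /* e.g., a hypothetical NNI_TLV_PATH code        */
    uint16_t length;  /* number of bytes in value[]                    */
    uint8_t  value[]; /* flexible array member for the payload, e.g.,
                         the ordered list of OTS node IDs in the path  */
};
```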
  • the primary function of the signaling network is to provide connectivity among the Node Managers of the different OTSs.
  • An IP network may be used that is capable of supporting both signaling as well as network management traffic.
  • TCP may be used as the transport protocol.
  • UDP may be used, depending upon the specific application.
  • FIG. 32 depicts an example of a signaling network having three OTSs, OTS A (3210), OTS B (3220), and OTS C (3230), an NMS 3240 that communicates with OTS B 3220 (and all other OTSs via OTS B) via an Ethernet 3245, a path requester 3215 and path head 3216 that communicate with the OTS A 3210 via an Ethernet 3217, and a path tail 3235 that communicates with the OTS C 3230 via an Ethernet 3232.
  • the path requester 3215, path head 3216 and path tail 3235 denote client equipment that is external to the all-optical network.
  • the internal signaling network may use the OSC within the optical network, in which case the facilities are entirely within the optical network and dedicated to the signaling and management of the optical network. The OSC is not directly available to external client elements.
  • Each Node Manager may have its own Ethernet for local communication with the client equipment. Also, a gateway node may have an additional Ethernet link for communication with the NMS manager if they are co-located.
  • the signaling network has its own routing protocol for transmission of messages between OTSs as well as within an NMS. Moreover, for fail-safe operation, the signaling network may be provided with its own NMS that monitors the status and performance of the signaling network, e.g., to take corrective actions in response to fault conditions, and generate performance data for the signaling network.
21. Protection/Restoration Flow
  • the all-optical network may provide a service recovery feature in response to failure conditions. Both line and path protection may be provided such that recovery can be performed within a very short period of time comparable to SONET (<50 ms). In cases where recovery time requirements are less stringent, path restoration under the control of the NMS may provide a more suitable capability.
  • client-managed protection may be provided by allowing the client to request disjoint paths, in which case the protection mechanisms utilized by the client are transparent to the optical network.
  • the recovery capability may include 1:1 line protection by having four optical fibers between OTSs - a primary and a backup in each direction.
  • if a link or node fails, all paths in the affected link are re-routed (over pre-defined links) as a whole (e.g., on a line basis) rather than by individual path (e.g., on a path basis). While this is less bandwidth-efficient, it is simpler to implement than path protection and is equivalent to SONET layer services.
  • the re-routing is predefined via Network Management in a switch table such that when a failure occurs, the re-routing can be performed in real time (<50 ms per hop).
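A minimal sketch of such a pre-defined switch table and its constant-time lookup; all names are illustrative assumptions. The point of pre-computing the backup is that the failure handler reduces to a single table lookup, which is what makes per-hop re-routing this fast feasible:

```c
/* Sketch: line-protection switch table. On a link failure the whole
 * line (all paths on it) is switched to the pre-defined backup. */
struct protection_entry {
    int primary_out_port;  /* egress port of the protected line */
    int backup_out_port;   /* pre-computed backup egress port   */
};

#define MAX_LINES 64
static struct protection_entry protection_table[MAX_LINES];

/* Invoked from the failure handler. */
int select_egress_port(int line_id, int line_failed)
{
    const struct protection_entry *e = &protection_table[line_id];
    return line_failed ? e->backup_out_port : e->primary_out_port;
}
```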
  • Path protection re-routes each individual circuit when a failure occurs. Protection paths may be dedicated and carry a duplicate data stream (1+1), dedicated and carry a pre-emptable low-priority data stream (1:1), or shared (1:N).
  • FIGs 33(a)-(c) compare line and path protection where two light paths, shown as λ1 and λ2, have been set up.
  • FIG. 33(a) shows the normal case, where two signaling paths are available between nodes "1" and "6" (i.e., path 1-2-4-5-6 and path 1-2-3-5-6).
  • λ1 traverses nodes 1-2-3-5-6 in traveling toward its final destination
  • λ2 traverses nodes 1-2-3 in traveling toward its final destination.
  • FIG. 33(b) shows the case where line protection is used.
  • link 2-3 fails.
  • with line protection, all channels affected by the failure are re-routed over nodes 2-4-5-3.
  • λ1 is routed from node 5 to node 3, and then back over 3-5-6, which is inefficient since λ1 travels twice between nodes "3" and "5", thereby reducing the availability of the 3-5 path for backup traffic.
  • FIG. 33(c) shows the case where path protection is used.
  • with path protection, the light paths λ1 and λ2 are each routed separately in an optimum way, which eliminates the inefficiency of line protection.
  • λ1 is routed on nodes 1-2-4-5-6
  • λ2 is routed on nodes 1-2-4-5-3.
  • the backup fiber (here, the fiber between nodes 2-4-5) need not be used under normal conditions (FIG. 33(a)).
  • pre-emptable traffic (e.g., lower-priority traffic) may be allowed to use the backup fiber until a failure occurs. Once a failure occurs, the pre-emptable traffic is removed from the backup fiber, which is then used for transport of higher-priority traffic.
  • the client having the lower-priority traffic is preferably notified of the preemption.
  • Protection and restoration in large complex mesh networks may also be provided. Protection features defined by the ODSI, OIF, and IETF standards bodies can also be included as they become available.
  • Protection services can also include having redundant hardware at the OTSs, such as for the Node Manager and other line cards.
  • the redundancy of the hardware, which may range from full redundancy to single-string operation, can be configured to meet the needs of the service provider.
  • the hardware can be equipped with a comprehensive performance monitoring and analysis capability so that, when a failure occurs, a switchover to the redundant backup component is quickly made without manual intervention. In case of major node failures, traffic can be re-routed around the failed node using line protection.
22. Network Management System
  • the Network Management System is a comprehensive suite of management applications that is compatible with the TMN model, and may support TMN layers 1 to 3. Interfaces to layer 4, service layer management, may also be provided so that customer Operational Support Systems (OSSs) as well as third party solutions can be deployed in that space.
  • the overall architecture of the NMS is depicted in FIG. 34.
  • the Element Management Layer 3404 corresponds to layer 2 of the TMN model, while the Network Management Layer 3402 components correspond to layer 3 of the TMN model.
  • the functions shown are achieved by software running on the NMS and NMS agents at the Node Managers.
  • a common network management interface 3420 at the Network Management Layer provides an interface between: (a) applications 3405 (such as a GUI), customer services 3410, and other NMSs/OSSs 3415, and (b) a configuration manager 3425, connection manager 3430, topology manager 3440, fault manager 3445, and performance manager 3450, which may share common resources/services 3435, such as a database server with an appropriate database interface.
  • the database server or servers may store information for the managers 3425, 3430, 3445 and 3450.
  • the interface 3420 may provide a rich set of client interfaces that include RMI, EJB and CORBA, which allow the carrier to integrate the NMS with their systems to perform end-to-end provisioning and unify event information. Third-party services and business layer applications can also be easily integrated into the NMS via this interface.
  • the interface 3420 may be compatible with industry standards where possible.
  • the GUI 3405 is an integrated set of user interfaces that may be built using Java or other similar object-oriented technology to provide an easy-to-use customer interface, as well as portability.
  • the customer can select a manager from a menu of available GUI views, or drill down to a new level by obtaining a more detailed set of views.
  • the customer services may include, e.g., protection and restoration, prioritized light paths, and other services that are typically sold to customers of the network by the network operator.
  • the "other NMSs" 3415 refer to NMSs that are subservient to a root NMS in a hierarchical optical network structure or an NMS hierarchy.
  • the OSSs are switching systems other than the OTS system described herein.
  • the configuration manager 3425 provides a switch-level view of the NMS, and may provide functions including provisioning of the Node Managers and LCMs, status and control, and installation and upgrade support.
  • the configuration manager 3425 may also enable the user, e.g., via the GUI 3405, to graphically identify the state of the system, boards, and lower-level devices, and to provide a point-and-click configuration to quickly configure ports and place them in service.
  • the configuration manager may collect switch information such as IP address and switch type, as well as card-specific information such as serial number and firmware/software revision.
  • the connection manager 3430 provides a way to view existing light path connections between OTSs, including connections within the OTS itself, and to create such connections.
  • the connection manager 3430 supports simple cross connects as well as end-to-end connections traversing the entire network.
  • the user is able to dictate the exact path of a light path by manually specifying the ports and cross connects to use at an OTS. Alternatively, the user may specify only the endpoints and let the connection manager set up the connection automatically.
  • the endpoints of a connection are OA ports, and the intermediate ports are TP ports.
  • the user may also select a wavelength for the connection.
  • the types of connections supported include Permanent Optical Circuit (POC), Switched Optical Circuit (SOC), as well as Smart Permanent Optical Circuit (SPOC). SOC and SPOC connections are routed by the network element routing and signaling planes. SOC connections are available for viewing only.
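For illustration, the connection types and a request to the connection manager might be modeled as follows; all names are assumptions:

```c
/* Sketch of the three supported connection types and a request. */
enum connection_type {
    CONN_POC,   /* Permanent Optical Circuit: NMS-provisioned          */
    CONN_SOC,   /* Switched Optical Circuit: signaled; view-only here  */
    CONN_SPOC   /* Smart Permanent Optical Circuit: NMS-requested,
                   routed by the element routing/signaling planes      */
};

struct connection_request {
    enum connection_type type;
    int src_oa_port;      /* endpoints of a connection are OA ports    */
    int dst_oa_port;
    int wavelength_id;    /* optional user-selected wavelength;
                             negative means "let the manager choose"   */
    int explicit_path;    /* nonzero: user dictates ports/cross
                             connects; zero: route automatically       */
};
```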
  • the topology manager 3440 provides an NMS topological view of the network, which allows the user to quickly determine, e.g., via the GUI 3405, all resources in the network, including links and OTSs, and how they are currently physically connected. The user can use this map to obtain more detailed views of specific portions of the network, or of an individual OTS, and even access a view of an OTS's front panel. For instance, the user can use the topological view to assist in making end-to-end connections, where each OTS or subnet in the path of a connection can be specified. Moreover, while the topology manager 3440 provides the initial view, the connection manager 3430 is called upon to set up the actual connection.
  • the fault manager 3445 collects faults/alarms from the OTSs as well as other SNMP-compliant devices, and may include functions such as alarm surveillance, fault localization, correction, and trouble administration. Furthermore, the fault manager 3445 can be implemented such that the faults are presented to the user in an easy-to-understand way, e.g., via the GUI 3405, and the user is able to sort the faults by various methods such as device origination, time, severity, etc. Moreover, the faults can be aggregated by applying rules that are predefined by the network administrator, or customer-defined.
  • the performance manager 3450 performs processing related to the performance of the elements/OTSs, as well as the network as a whole. Specific functionalities may include performance quality assurance, performance monitoring, performance management control, and performance analysis. An emphasis may be on optical connections, including the QoS and reliability of the connection.
  • the performance manager 3450 allows the user to monitor the performance of a selected port or channel on an OTS. In particular, the performance manager may display data in real time, or from archived data.
  • the managers 3425, 3430, 3445 and 3450 may provide specific functionality and share information, e.g., via Jini, using an associated Jini server. Moreover, the managers may store associated data in one or more database servers, which can be configured in a redundant mode for high availability.
  • a common network management interface 3455 at the Element Management Layer provides an interface between: (a) the configuration manager 3425, connection manager 3430, fault manager 3445 and performance manager 3450, and (b) an agent adapter function 3460 and an "other adapter" function 3465.
  • the agent adapter 3460 may communicate with the OTSs in the optical network 3462 using SNMP and IP, in which case corresponding SNMP agents and IP agents are provided at the OTSs.
  • the SNMP agent at the OTSs may also interface with other NMS applications.
  • SNMP is an industry standard interface that allows integration with other NMS tools.
  • the interface from the NMS to the OTS in the optical network 3462 may also use a proprietary interface, which allows greater flexibility and efficiency than SNMP alone.
  • the other adapter function 3465 refers to other types of optical switches other than the OTSs described herein that the NMS may manage.
  • the NMS provides a comprehensive capability to manage an OTS or a network of OTSs.
  • a user-friendly interface allows intuitive control ofthe element/OTS or network.
  • a rich set of northbound interfaces allows interoperability and integration with OSS systems.
  • the NMS may be an open architecture system that is based on standardized Management Information Bases (MIBs).
  • ODSI has defined a comprehensive MIB for the UNI.
  • additional MIBs are required, e.g., for NNI signaling and optical network enhancements to OSPF routing.
  • the NMS of the present invention can support the standard MIBs as they become available, while using proprietary MIBs in areas where the standards are not available.
  • the NMS may be implemented in Java (or similar object oriented) technology, which allows the management applications to easily communicate and share data, and tends to enable faster software development, a friendlier (i.e., easier to use) user interface, robustness, self-healing, and portability.
  • Java tools such as Jini, Jiro, Enterprise Java Beans (EJB), and Remote Method Invocation (RMI) may be used.
  • RMI, introduced in JDK 1.1, is a Java technology that allows the programmer to develop distributed Java objects much as if they were local Java objects. It does this by keeping the definition of behavior separate from the implementation of that behavior. In other words, the definition is coded using a Java interface while the implementation on the remote server is coded in a class. This provides a network infrastructure to access/develop remote objects.
  • the EJB specification defines an architecture for a transactional, distributed object system based on components. It defines an API that ensures portability across vendors. This allows an organization to build its own components or purchase components.
  • These server-side components are enterprise beans, and are distributed objects that are hosted in EJB containers and provide remote services for clients distributed throughout the network.
  • Jini, which uses RMI technology, is an infrastructure for providing services in a network, as well as creating spontaneous interactions between programs that use these services. Services can be added or removed from the network in a robust way. Clients are able to rely upon the availability of these services. The client program downloads a Java object from the server and uses this object to talk to the server. This allows the client to talk to the server even though it does not know the details of the server. Jini allows the building of flexible, dynamic and robust systems, while allowing the components to be built independently. A key to Jini is the Lookup Service, which allows a client to locate the service it needs.
  • Jiro is a Java implementation of the Federated Management Architecture.
  • a federation could be a group of services at one location, i.e., a management domain. Jiro provides technologies useful in building an interoperable and automated distributed management solution. It is built using Jini technology with enhancements added for a distributed management solution, thereby complementing Jini. Some examples of the benefits of using Jiro over Jini include security services and direct support for SNMP.
  • FIG. 35 illustrates an NMS hierarchy in accordance with the present invention.
  • scalability may be achieved via the NMS hierarchical architecture, allowing networks ranging from a few OTSs to hundreds of OTSs to remain manageable while using only the processing power of the necessary number of managing NMSs.
  • each NMS instance in an NMS hierarchy (which we may also refer to as "manager"), manages a subset of OTSs (with the "root” NMS managing, at least indirectly through its child NMSs, all the OTSs managed by the hierarchy).
  • NMS 1 (3510) manages NMS 1.1 (3520) and NMS 1.2 (3525).
  • NMS 1.1 (3520) manages NMS 1.1.1 (3530), which in turn manages a first network 3540, and NMS 1.1.2 (3532), which in turn manages a second network 3542.
  • NMS 1.2 (3525) manages NMS 1.2.1 (3534), which in turn manages a third network 3544, and NMS 1.2.2 (3536), which in turn manages a fourth network 3546.
  • Each instance of the NMS in the hierarchy may be implemented as shown in FIG. 34, including having one or more database servers for use by the managers of the different functional areas.
  • the number of OTSs that an NMS instance can manage depends on factors such as the performance and memory of the instance's underlying processor, and the stability of the network configuration.
  • the hierarchy of NMS instances can be determined using various techniques. In the event of failure of a manager, another manager can quickly recover the NMS functionality. The user can see an aggregated view ofthe entire network or some part ofthe network without regard to the number of managers being deployed.
  • the NMSs form a hierarchy dynamically, through an election process, such that a management structure can be quickly reconstituted in case of failure of some of the NMSs. Furthermore, the NMSs provide the capability to configure each OTS and dynamically modify the connectivity of OTSs in the network. The NMS also enables the network operators to generate on-the-fly statistical metrics for evaluating network performance.
23. Node Manager Software
  • the control software at the OTS includes the Node Manager software and the Line Card Manager software.
  • the Node Manager software 3600 includes Applications layer software 3610 and Core Embedded System Services layer 3630 software running on top of an operating system such as VxWorks (Wind River Systems, Inc., Alameda, Calif).
  • the LCM software has Core Embedded System Services device drivers for the target peripheral hardware such as the GbE and OC-n SONET interfaces.
  • the Applications layer 3610 enables various functions, such as signaling and routing functions, as well as node-to-node communications. For example, assume it is desired to restore service within 50 msec for a customer using a SONET service. The routing and signaling functions are used to quickly communicate from one node to another when an alarm has been reported, such as "the link between Chicago and New York is down.” So, the Applications software 3610 enables the nodes to communicate with each other for selecting a new route that does not use the faulty link. Generally, to minimize the amount of processing by the Applications software 3610, information that is used there is abstracted as much as possible by the Core Embedded Software 3641 and the System Services 3630.
  • the Applications layer 3610 may include applications such as a Protection/Fault Manager 3612, UNI Signaling 3614, NNI Signaling 3615, Command Line Interface (CLI) 3616, NMS Database Client 3617, Routing 3618, and NMS agent 3620, each of which is described in further detail below.
  • the System Services layer software 3630 may include services such as Resource Manager 3631, Event Manager 3632, Software Version Manager 3633, Configuration Manager 3634, Logger 3635, Watchdog 3636, Flash Memory Interface 3637, and Application "S" Message Manager 3638, each of which is described in further detail below.
  • the Node Manager's Core Embedded Control Software 3641 is provided below an "S" interface and the System Services software 3630.
23.1 Node Manager Core Embedded Software
  • the Node Manager Core Embedded software 3641 is provided between the "S” interface 3640 and the "D” interface 3690.
  • the "D" (drivers) message interface 3690 is for messages exchanged between the LCMs and the Node Manager via the OTS's internal LAN, while the "S" (services) message interface 3640 is for messages exchanged between the application software and the Core Embedded software on the Node Manager.
  • the Node Manager "D” message manager 3646 receives "D” messages such as raw Ethernet packets from the LCM and forwards them to the appropriate process.
  • the Node Manager "S" Message Manager 3642 serves a similar general function: providing inter-process communication for messages exchanged between the application software and the Core Embedded software on the Node Manager.
  • the Node Manager's Core Embedded software further includes a Node Configuration Manager 3644, which is a master task for spawning other tasks, shown collectively at 3660, at the Node Manager, and may therefore have a large, complex body of code.
  • This manager is responsible for managing the other Node Manager processes, and knows how to configure the system, such as configuring around an anomaly such as a line card removal or insertion.
  • this manager 3644 determines how many of the tasks 3662, 3664, 3666, 3668, 3670, 3672, 3674, 3676 and 3678 need to be started to achieve a particular configuration.
  • the tasks at the Node Manager Core Embedded software are line card tasks/processes for handling the different line card types. These include a TP_IN task 3662, an OA_IN task 3664, an OPM task 3666, a clock task 3668, a TP_EG task 3670, an OA_EG task 3672, an OSF task 3674, an ALI task 3676 and an OSM task 3678.
  • the "-1" notation denotes one of multiple tasks that are running for corresponding multiple line cards of that type when present at the OTS.
  • TP_IN-1 represents a task running for a first TP_IN card. Additional tasks for other TP_IN cards are not shown specifically, but could be denoted TP_IN-2, TP_IN-3, and so forth.
  • Managers, shown collectively at 3650, manage resources and system services for the line card tasks. These managers include a Database Manager 3652, an Alarms Manager 3654, and an Optical Cross Connect (OXC) Manager 3656.
  • the Database Manager 3652 may manage a database of nonvolatile information at the Node Manager, such as data for provisioning the LCMs.
  • This data may include, e.g., alarm/fault thresholds that are to be used by the LCMs in determining whether to declare a fault if one of the monitored parameters of the line cards crosses the threshold.
  • the Database Manager 3652 manages a collection of information that needs to be saved if the OTS fails/goes down - similar to a hard disk.
  • when the OTS is powered up, or when a line card is inserted into a slot in the OTS bay, the associated LCM generates a discovery packet for the Node Manager to inform it that the line card is up and exists. This enables the line cards to be hot-swappable, that is, they can be pulled from and reinserted into the slots at any time.
  • the Node Manager uses the Database Manager 3652 to contact the database to extract non-volatile data that is needed to provision that line card, and communicates the data to the LCM via the OTS's LAN.
  • the Node Manager's database may be provided using the non-volatile memory resources discussed in connection with FIG. 5.
  • the Alarms Manager 3654 receives alarm/fault reports from the LCMs (e.g., via any of the tasks 3660) when the LCMs determine that a fault condition exists on the associated line card. For example, the LCM may report a fault to the Alarms Manager 3654 if it determines that a monitored parameter such as laser current consumption has crossed a minimum or maximum threshold level. In turn, the Alarms Manager 3654 may set an alarm if the fault or other anomaly persists for a given amount of time or based on some other criteria, such as whether some other fault or alarm condition is present, or the status of one or more other monitored parameters. Furthermore, the presence of multiple alarms may be analyzed to determine if they have a common root cause. Generally, the Alarms Manager 3654 abstracts the fault and/or alarm information to try to extract a story line as to what caused the alarm, and passes this story up to the higher-level Event Manager 3632 via the "S" interface 3640.
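A minimal sketch of the persistence criterion described above, under which a fault reported by an LCM is escalated to an alarm only after a configurable hold time; the structure and function names are assumptions:

```c
/* Sketch: promote a persistent fault to an alarm. */
#include <time.h>
#include <stdbool.h>

struct fault_record {
    int    fault_id;
    time_t first_seen;    /* when the LCM first reported the fault */
    bool   alarm_raised;
};

/* Called on each repeated fault report; returns true when the fault
 * has persisted long enough to be escalated to the Event Manager. */
bool should_raise_alarm(struct fault_record *f, time_t now,
                        double hold_seconds)
{
    if (f->alarm_raised)
        return false;                   /* already escalated */
    if (difftime(now, f->first_seen) >= hold_seconds) {
        f->alarm_raised = true;
        return true;
    }
    return false;
}
```

The other criteria mentioned above (co-occurring faults, the status of other monitored parameters, common-root-cause analysis across alarms) would layer additional checks on top of this basic time filter.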
  • the Event Manager 3632 distributes the alarm event to any of the software components that have registered to receive such an event. A corrective action can then be implemented locally at the OTS, or at the network level.
  • the OXC Manager 3656 makes sense of how to use the different line cards to make one seamless connection for the customer. For example, using a GUI at the NMS, the customer may request a light path connection from Los Angeles to San Francisco. The NMS decides which OTSs to route the light path through, and informs each OTS via the OSC of the next-hop OTS in the light path. The OTS then establishes a light path, e.g., by using the OXC Manager 3656 to configure an ALI line card, TP_IN line card, OA_EG line card, a wavelength, and several other parameters that have to be configured for one cross connect.
  • the OXC Manager 3656 may configure the OTS such that port 1 on TP_IN is connected to port 2 on TP_OUT.
  • the OXC Manager 3656 disassembles the elements of a cross connection and disseminates the relevant information at a low level to the involved line cards via their LCMs.
23.2 System Services
23.2.1 Resource Manager
  • the Resource Manager 3631 performs functions such as maintaining information on resources such as wavelengths and the state of the cross-connects of the OTS, and providing cross-connect setup and teardown capability. In particular, the Resource Manager performs the interaction with the switch hardware during path creation, modification, and termination.
  • the context diagram of the Resource Manager is shown in FIG. 43.
  • the legend 4330 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing.
  • the Resource Manager is responsible for setting up network devices upon receiving requests from the NMS Agent (in case of provisioning) or the Signaling component (for a signaled setup).
  • the Resource Manager provides an API that enables other components 4320 to obtain current connection data. Also, the Resource Manager obtains configuration data via an API provided by the Configuration Manager.
  • the associated parameters are stored in flash memory 4310, e.g., via the Flash interface 3637, which may be DOS file based.
  • the Resource Manager retrieves the parameters from flash memory via the Flash Interface and restores them automatically.
  • the associated parameters may be stored in RAM at the Node Manager. Upon reset, these lightpaths must be re-established based on user requests, or other switches could re-establish them.
  • the Resource Manager component also logs all relevant events via the Logger component 3635.
23.2.2 Event Manager
  • the Event Manager 3632 receives events from the Core Embedded system software 3641 and distributes those events to high-level components (e.g., other software components/functions at the System Services 3630 and Applications 3610). It is also used for communication between high-level components in cases where the communication is one-way (as opposed to request/response).
  • FIG. 44 depicts its context diagram.
  • the Event Manager sends events to components based on their registrations/subscriptions to the events. That is, in an important aspect of the push model of the present invention, components can subscribe/unsubscribe to certain events of interest to them. Any application that wants to accept events registers with the Event Manager 3632 as an event listener. Moreover, there is anonymous delivery of events so that specific destinations for the events do not have to be named. For example, when something fails in the hardware, an alarm is sent to whichever application has registered for that type of alarm.
  • the sender of the alarm does not have to know who is interested in particular events, and the receivers of the events only receive the types of events in which they are interested.
  • the OTS software architecture thus uses a push model since information is pushed from a lower layer to a higher layer in near real-time.
  • the Event Manager may be used as a middleman between two components for message transfer. For example, a component A, which wants to send a message X to another component B, sends it to the Event Manager. Component B must subscribe to the message X in order to receive it from the Event Manager.
  • the event library software (EventLib) may include the following routines:
  • EventRegister() - register for an event, to get an event message when the event occurs; EventUnRegister() - un-register for an event; and EventPost() - post an event.
  • High-level applications (e.g., signaling, routing, protection, and NMS agent components) register for events that are posted by Core Embedded components such as device drivers.
  • High-level components register/un-register for events by calling EventRegister()/EventUnRegister().
  • Core Embedded components use EventPost() to post events.
  • the Event Dispatcher may be implemented via POSIX message queues for handling event registration, un-registration, and delivery. It creates a message queue, ed_dispQ, when it starts. Two priority levels, high and low, are supported by ed_dispQ.
  • When a component registers for an event by calling EventRegister(), a registration event is sent to ed_dispQ as a high-priority event.
  • the Event Dispatcher registers the component for that event when it receives the registration event. If the registration is successful, an acknowledgment event is sent back to the registering component. A component should consider the registration failed if it does not receive an acknowledgment within a short period of time. It is up to the component to re-register for the event.
  • a component may register for an event multiple times with the same or different message queues. If the message queue is the same, a later registration will overwrite the earlier registration. If the message queues are different, multiple registrations for the same event will co-exist, and events will be delivered to all message queues when they are posted.
  • event registration may be permanent or temporary. Permanent registrations are in effect until cancelled by EventUnRegister().
  • EventUnRegister() sends an un-register event (a high-priority event) to ed_dispQ for the Event Dispatcher to un-register the component for that event. Temporary registrations are cancelled when their lease time expires.
  • a component may prematurely cancel a temporary registration by calling EventUnRegister(). If the un-registration is successful, an acknowledgment event is delivered to the message queue of the component. When a component uses EventPost() to post an event, the posted event is placed in ed_dispQ, too.
  • An event is either a high priority or a low priority event.
  • the Event Dispatcher delivers an event by moving the event from ed_dispQ to the message queues of registered components. So, a component must create a POSIX message queue before registering for an event and send the message queue name to the Event Dispatcher when it registers for that event. Moreover, a component may create a blocking or non-blocking message queue. If the message queue is non-blocking, the component may set up a signal handler to get notification when an event is placed in its message queue.
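A minimal sketch of these mechanics using POSIX message queues, as the text specifies; the message layout and helper names are assumptions, not the actual implementation:

```c
/* Sketch: a subscriber creates its own queue, and the dispatcher
 * copies events from ed_dispQ into each subscriber's queue. */
#include <mqueue.h>
#include <fcntl.h>
#include <sys/stat.h>

#define EVENT_PRIO_LOW   0
#define EVENT_PRIO_HIGH  1   /* registrations use high priority */

struct event_msg {
    int  event_type;
    char subscriber_q[64];   /* queue name supplied at registration */
    char payload[128];
};

/* Subscriber side: create a (here non-blocking) receive queue before
 * registering, as the text requires. */
mqd_t component_create_queue(const char *name)
{
    struct mq_attr attr = { .mq_maxmsg = 16,
                            .mq_msgsize = sizeof(struct event_msg) };
    return mq_open(name, O_CREAT | O_RDONLY | O_NONBLOCK,
                   S_IRUSR | S_IWUSR, &attr);
}

/* Dispatcher side: move one posted event into a subscriber's queue. */
int dispatcher_deliver(const struct event_msg *ev)
{
    mqd_t q = mq_open(ev->subscriber_q, O_WRONLY);
    if (q == (mqd_t)-1)
        return -1;
    int rc = mq_send(q, (const char *)ev, sizeof(*ev), EVENT_PRIO_LOW);
    mq_close(q);
    return rc;
}
```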
23.2.3 Software Version Manager
  • the Software Version Manager (SVM) 3633 is responsible for installing, reverting, backing up, and executing software in the Node Manager and LCMs. Its context diagram is depicted in FIG. 45.
  • the SVM maintains and updates software on both the Node Manager and the LCMs by keeping track ofthe versions of software that are used, and whether a newer version is available.
  • different versions of Node Manager software and LCM software can be downloaded remotely from the NMS to the Node Manager from time to time as new software features are developed, software bugs are fixed, and so forth.
  • the Node Manager distributes the LCM software to the LCMs.
  • the SVM keeps a record of which version of software is currently being used by the Node Manager and LCMs.
  • the SVM installs new software by loading the software onto flash memory, e.g., at the Node Manager.
  • the SVM performs backing up by copying the current software and saving it in another area of the flash memory.
  • the SVM performs the reverting operation by copying the backup software to the current software.
  • the SVM performs the execution operation by rebooting the Node Manager or the LCMs.
  • the SVM receives an install command from the NMS agent that contains the address, path and filename ofthe code to be installed.
  • the SVM may perform a File Transfer Protocol (FTP) operation to store the code into its memory. Then, it uses the DOS Flash interface services 3637 to store the code into the flash memory.
  • the SVM receives the backup command from the NMS agent.
  • the SVM uses the DOS Flash interface to copy the current version ofthe code to a backup version.
  • the SVM receives the revert command from the NMS agent and uses the DOS Flash interface to copy the backup version ofthe software to the current version.
  • the Node Manager software is executed by rebooting the Node Manager card.
  • the installation, reverting, backing up and executing operations can also be performed on the software residing on the line cards.
  • the software/firmware is first "FTPed" down to the Node Manager's flash memory. Then, the new firmware is downloaded to the line card. This new code is stored in the line card's flash memory. The new code is executed by rebooting the line card.
23.2.4 Configuration Manager
  • the Configuration Manager 3634 maintains the status of all OTS hardware and software components. Its context diagram is shown in FIG. 46.
  • the legend 4610 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing.
  • the Configuration Manager obtains the desired configuration parameters from the database/server (or possibly a configuration file) at the NMS.
  • the LCMs are responsible for monitoring the status ofthe line cards. When a line card becomes active, it immediately generates a Discovery message that the LCM for each optical card forwards to the Event Manager 3632 that is running on the Node Manager.
  • the Configuration Manager receives these messages by subscribing to them at the Event Manager. It then compares the stored configuration versus the reported configuration. If there is a difference, the Configuration Manager sets the configuration according to the stored data by sending a message to the LCM via the Event Manager and S-Interface. It also reports an error and stores the desired configuration in the Node Manager's flash memory. When the system is subsequently re-booted, the operation is identical, except the desired configuration is stored in flash memory.
  • the LCMs are configured to periodically report the status of their optical line cards. Also, when a device fails or has other anomalous behavior, an event message such as a fault or alarm is generated.
  • the Configuration Manager receives these messages via the Event Manager, and issues an event message to other components. Moreover, while not necessary, the Configuration Manager may poll the LCMs to determine the line card status if it is desired to determine the status immediately.
  • the Configuration Manager may request that the database/server client gets the information (configuration parameters) via the database/server, which resides in the NMS host system. After configuring the devices, the Configuration Manager posts an event to the Event Manager so that other components (e.g., NMS Agent and the Resource Manager) can get the desired status ofthe devices.
  • the desired configuration can be changed via CLI or NMS command.
  • after the Configuration Manager receives a request from the NMS or CLI to change a device configuration, it sends an "S" message down to the LCMs to satisfy the request. Upon receiving the acknowledgment that the request was carried out successfully, the Configuration Manager sends an acknowledgment message to the requester, stores the new configuration into the database service, logs a message to the Logger, and posts an event via the Event Manager.
  • the NMS/CLI can send queries to the Configuration Manager regarding the network devices' configurations.
  • the Configuration Manager retrieves the information from the database and forwards it to the NMS/CLI.
  • the NMS/CLI can also send a message to the Configuration Manager to change the reporting frequency or schedule of the device/line card.
23.2.5 Logger
  • the Logger 3635 sends log messages to listening components such as debugging tasks, displays, printers, and files. These devices may be directly connected to the Node Manager or connected via a socket interface.
  • the Logger's context diagram is shown in FIG. 47.
  • the Logger is controlled via the CLI, which may be implemented as either a local service or remote service via Telnet.
  • the control may specify device(s) to receive the Logging messages (e.g., displays, files, printers - local or remote), and the level of logging detail to be captured (e.g., event, error event, parameter set).
23.2.6 Watchdog
  • the Watchdog component 3638 monitors the state ("health") of other (software) components in the Node Manager by verifying that the components are working.
23.2.7 Flash Memory Interface
  • a Disk Operating System (DOS) file interface may be used to provide an interface 3637 to the flash memory on the Node Manager for all persistent configuration and connection data. Its context diagram is depicted in FIG. 48. The legend 4820 indicates that the components communicate using an API and TCP. The Resource Manager 3631 and Configuration Manager 3634 access the Flash Memory 4810 as if it were a DOS File System. Details of buffering and actual writing to flash are vendor-specific.
23.3.1 Protection/Fault Manager
  • the primary function of the Protection/Fault Manager component is to respond to alarms by isolating fault conditions and initiating service restoration.
  • the Protection/Fault Manager isolates failures and restores service, e.g., by providing alternate link or path routing to maintain a connection in the event of node or link failures.
  • the Protection/Fault Manager interfaces with the Logger 3635, WatchDog 3636, Resource Manager 3631, Configuration Manager 3634, Event Manager 3632, NMS Agent 3620, NNI Signaling 3615 and Other Switches/OTSs 3710.
  • the legend 3720 indicates the nature of the communications between the components.
  • the Protection/Fault Manager subscribes to the Event Manager to receive events related to the failure of links or network devices.
  • when the Protection/Fault Manager receives a failure event and isolates the cause of the alarm, it determines the restorative action and interacts with the appropriate application software to implement it. If there is a problem isolating or restoring service, the problem is handed over to the NMS for resolution. Some service providers may elect to perform their own protection by requesting two disjoint paths. With this capability, the service provider may implement 1+1 or 1:1 protection as desired. When a failure occurs, the service provider can perform the switchover without any assistance from the optical network. However, the optical network is responsible for isolating and repairing the failure.
  • the Protection/Fault component uses the Event Manager to log major events via the Logger component, updates its MIB, and provides its status to the Watchdog component. It also updates the Protection parameters in the shared memory.

23.3.2 UNI Signaling
  • the Signaling components include the User-Network Interface (UNI) signaling and the internal Network-Network Interface (NNI) signaling.
  • the primary purpose of signaling is to establish a lightpath between two endpoints. In addition to path setup, it also performs endpoint Registration and provides a Directory service such that users can determine the available endpoints.
  • the UNI signaling context diagram is depicted in FIG. 38.
  • the UNI uses both message passing and APIs provided by other components to communicate with other components.
  • the legend 3830 indicates whether the communications between the components use an API and TCP, or message passing.
  • the UNI component provides a TCP/IP interface with User devices 3810, e.g., devices that access the optical network via an OTS. If the User Device does not support signaling, an NMS proxy signaling agent 3820 resident on an external platform performs this signaling. When a valid "create lightpath" request is received, the UNI invokes the NNI to establish the path.
  • users may query, modify or delete a lightpath.
  • the UNI Signaling component 3614 obtains current configuration and connection data from the Configuration and Resource Managers, respectively. It logs major events via the Logger component, updates its MIB used by the SNMP Agent, and provides a hook to the WatchDog component to enable the WatchDog to keep track of its status.
  • the NNI signaling component 3615 performs the internal signaling between switches in the optical network, e.g., using MPLS signaling.
  • the legend 3910 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing.
  • requests for service to establish a lightpath between two endpoints may be received over the UNI from an external device or a proxy signaling agent.
  • UNI signaling validates the request and forwards it, with source and destination endpoints, to the NNI signaling function for setup.
  • Source-based routing may be used, in which case NNI must first request a route from the Routing component 3618.
  • the Routing component 3618 returns the selected wavelength and set of switches/OTSs that define the route.
  • the NNI signaling component requests the Resource Manager 3631 to allocate the local hardware components implementing the path, and forwards a create message to the next switch in the path using TCP/IP over the OSC.
  • Each OTS has its local Resource Manager allocate hardware resources to the light path. When the path is completed, each OTS returns an acknowledgment message along the reverse path confirming the successful setup and that the local hardware will be configured. If the attempt failed due to unavailability of resources, the resources that had been allocated along the path are de-allocated. In order for components other than the UNI (e.g., Routing) to learn whether the path setup attempt was successful, the NNI distributes (posts) a result event using the Event Manager 3632.
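The allocate-forward-acknowledge behavior above, including the de-allocation along the partial path on failure, can be illustrated with a brief sketch. The route is modeled simply as an ordered list of per-OTS resource managers, and allocate/release are hypothetical names.

    def setup_lightpath(route, wavelength):
        """Allocate 'wavelength' at each OTS along 'route', hop by hop."""
        allocated = []
        for resource_manager in route:
            if resource_manager.allocate(wavelength):
                allocated.append(resource_manager)
            else:
                # Setup failed: de-allocate the resources already reserved
                # along the partial path (traversed in reverse order).
                for rm in reversed(allocated):
                    rm.release(wavelength)
                return False
        # In the real system, acknowledgments flow back along the reverse
        # path and the NNI then posts a result event via the Event Manager.
        return True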
  • the NNI Signaling component 3615 obtains current configuration data from the Configuration Manager 3634, and connection data from the Resource Manager 3631. It also logs major events via the Logger component 3635, updates its MIB used by the SNMP Agent, and provides a hook to the WatchDog component 3636 to enable the WatchDog to keep track of its status.

23.3.4 Command Line Interface
  • the CLI task 3616, an interface that is separate from the GUI interface, provides a command-line interface for an operator via a keyboard/display to control or monitor OTSs.
  • the functions of the CLI 3616 include setting parameters at bootup, entering a set/get for any parameter in the Applications and System Services software, and configuring the Logger.
  • the TL-1 craft interface definition describes the command and control capabilities that are available at the "S" interface. Table 5 lists example command types that may be supported.
  • Rtrv-crs | Type | retrieves cross-connect information
  • Rtrv-eqpt | Address-id | retrieves the equipage (configuration) of the
  • Rtrv-node | N/A | retrieves OTS node parameters
  • Rtrv-pmm | Slot, port, wavelength | retrieves the performance monitor measurements
  • an NMS database client 3617 may reside at the Node Manager to provide an interface to one or more database servers at the NMS.
  • One possibility is to use LDAP servers.
  • Its context diagram is depicted in FIG. 40.
  • the database/server client 3617 interacts with the NMS's database server, and with the Configuration Manager 3634.
  • the database client contacts the server for configuration data.
  • the client Upon receiving a response from the server, the client forwards the data to the Configuration Manager.
  • the legend 4020 indicates whether the communications between the components use the Event Manager, or an API and TCP.
  • the database client may be used relatively infrequently. For example, it may be used to resolve problems when the stored configuration is not consistent with that obtained via the LCM's discovery process.
  • the client keeps the addresses of both servers. If the primary server does not function, after waiting for a pre-determined period, the client forwards the request to the backup server.
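A minimal sketch of that primary/backup failover behavior follows, assuming a hypothetical query() call that raises TimeoutError when a server fails to answer within the pre-determined waiting period.

    def fetch_config(primary_server, backup_server, request, timeout_s=5.0):
        for server in (primary_server, backup_server):
            try:
                # Wait up to timeout_s before giving up on this server.
                return server.query(request, timeout=timeout_s)
            except TimeoutError:
                continue  # primary did not respond; try the backup server
        raise RuntimeError("neither primary nor backup database server responded")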
  • the Configuration Manager posts an event to the Event Manager.
  • the Event Manager forwards the event to the NMS Agent, which in turn forwards the event to the NMS application.
  • the NMS application recognizes the event and contacts the server to update its table.
  • the Routing Component 3618 computes end-to-end paths in response to a request from the NNI component.
  • the context diagram, FIG. 41, depicts its interfaces with the other components.
  • the legend 4110 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing.
  • the Routing Component, which may implement the OSPF routing algorithm with optical network extensions, is invoked by the NNI Signaling component at the path source during setup. Routing parameters are input via the SNMP Agent.
  • Routing is closely related to the Protection/Fault Manager. As part of the protection features, the Routing component may select paths that are disjoint (either link disjoint or node and link disjoint as specified by signaling) from an existing path.
  • the Routing component exchanges Link State Advertisement messages with other switches. With the information received in these messages, the Routing component in each switch maintains a complete view of the network such that it can compute a path.
  • the embedded NMS Agent 3620 provides the interface between NMS applications 4210 (e.g., configuration, connection, topology, fault/alarm, and performance) and the Applications resident on the Node Manager.
  • the NMS agent may use SNMP and a proprietary method.
  • FIG. 42 shows the context diagram of the NMS Agent.
  • the legend 4220 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing.
  • the NMS Agent operates using a "pull model" - all of the SNMP data is stored locally with the relevant component (e.g., UNI, NNI, Routing, Protection). When the NMS Agent must respond to a Get request, it pulls the information from its source.
  • the NMS Agent receives requests from an NMS application and validates the request against its MIB tables. If the request is not validated, it sends an error message back to the NMS. Otherwise, it sends the request using a message passing service to the appropriate component, such as the Signaling, Configuration Manager, or Resource Manager components.
  • the NMS agent may subscribe to events from the Event Manager.
  • the events of interest include the "change" events posted by the Resource Manager, Configuration Manager and the UNI and NNI components, as well as messages from the LCMs.
  • upon receiving events from the Event Manager or unsolicited messages from other components (e.g., Signaling), the NMS Agent updates its MIB and, when necessary, sends the messages to the NMS application using a trap.
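The pull model and MIB validation described above might be sketched as follows; the mib table mapping OIDs to their owning components, and the read() accessor, are hypothetical.

    class NmsAgent:
        def __init__(self, mib, components):
            self.mib = mib                 # {oid: name of the owning component}
            self.components = components  # {name: component object}

        def handle_get(self, oid):
            owner_name = self.mib.get(oid)
            if owner_name is None:
                # Request fails MIB validation: return an error to the NMS.
                return {"error": f"no such OID: {oid}"}
            # Pull model: fetch the value from its source (UNI, NNI, Routing, ...)
            # rather than from a cache held by the agent itself.
            return {"oid": oid, "value": self.components[owner_name].read(oid)}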
  • FIG. 49 illustrates a Line Card Manager software architecture in accordance with the present invention.
  • the LCM software 4900 is provided below the "D" interface 3690, and generally includes a Core Embedded control layer to provide the data telemetry and I/O capability on each of the physical interfaces, and an associated operating system that provides the protocols (e.g., TCP/IP) and timer features necessary to support real-time communications.
  • the LCM software 4900 which may run on top of an operating system such as VxWorks, includes an LCM "D" Message Manager 4970 for sending messages to, and receiving messages from, the Node Manager "D" Message Manager 3646 via the "D" interface 3690.
  • This manager 4970 is an inter-process communication module which has a queue on it for queuing messages to the Node Manager.
  • An LCM Configuration Manager 4972 is a master process for spawning and initializing all other LCM tasks, and performs functions such as waking up the LCM board, configuring the LCM when the system/line card comes up, and receiving voltages and power.
  • the LCM line card tasks 4973 include tasks for handling a number of line cards, including a TP_IN handler or task 4976, an OA_IN handler 4978, an OPM handler 4980, a clock (CLK) handler 4982, a TP_EG handler 4984, an OA_EG handler 4986, an OSF handler 4988, an ALI handler 4990, and an OSM handler 4992.
  • the line card handlers can be thought of as being XORed such that when the identity of the pack (line card) is discovered, only the corresponding pack handler is used.
  • the LCM software 4900 is generic in that it has software that can handle any type of line card, so there is no need to provide a separate software load for each LCM according to a certain line card type. This simplifies the implementation and maintenance of the OTS. Alternatively, it is possible to provide each LCM with only the software for a specific type of line card.
  • Each of the active line card handlers can declare faults based on monitored parameters that they receive from the respective line card. Such faults may occur, e.g., when a monitored parameter is out of a pre-set, normal range.
  • the line card handlers may signal to the customer that fault conditions are present and should be examined in further detail, by using the Node Manager and NMS.
  • the line card handlers use push technology in that they push event information up to the next layer, e.g., the Node Manager, as appropriate. This may occur, for example, when a fault requires attention by the Node Manager or the NMS. For example, a fault may be pushed up to the Alarms Manager 3654 at the Node Manager Core Embedded Software, where an alarm is set and pushed up to the Event Manager 3632 for distribution to the software components that have registered to receive that type of alarm. Thus, a lower layer initiates the communication to the higher layer.
  • the clock handler 4982 handles a synchronizing clock signal that is propagated via the electrical backplane (LAN) from the Node Manager to each LCM. This is necessary, for example, for the line cards that handle SONET signals and thereby need a very accurate clock for multiplexing and demultiplexing.
  • the LCM performs telemetry by constantly collecting data from the associated line card and storing it in non-volatile memory, e.g., using tables.
  • only specific information is sent to the Node Manager, such as information related to a threshold crossing by a monitored parameter of the line card, or a request, e.g., by the NMS through the Node Manager, to read something from the line card.
  • a transparent control architecture is provided since the Node Manager can obtain fresh readings from the LCM memory at any time.
  • the Node Manager may keep a history log of the data it receives from the LCM.
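A compact sketch of the telemetry loop in the bullets above: every sample is recorded locally, while only threshold crossings are pushed up to the Node Manager. The parameter names, threshold table, and push callback are illustrative assumptions.

    def poll_line_card(read_parameters, thresholds, history, push_to_node_manager):
        sample = read_parameters()      # e.g. {"laser_bias_ma": 42.0, ...}
        history.append(sample)          # stored locally in the LCM's tables
        for name, value in sample.items():
            low, high = thresholds[name]
            if not (low <= value <= high):
                # Threshold crossing: push a fault indication up to the
                # Node Manager; routine samples stay in local memory.
                push_to_node_manager({"param": name, "value": value,
                                      "limits": (low, high)})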
Node Manager Message Interfaces

  • As mentioned, the Node Manager supports two message interfaces, namely the "D" Message Interface, which is for messages exchanged between the LCMs and the Node Manager, and the "S" Message Interface, which is for messages exchanged between the application software and the Core Embedded system services software on the Node Manager.

25.1 "D" Message Interface Operation
  • the "D" message interface allows the Node Manager to provision and control the line cards, retrieve status on demand and receive alarms as the conditions occur. Moreover, advantageously, upgraded LCMs can be connected in the future to the line cards using the same interface. This provides great flexibility in allowing baseline LCMs to be fielded while enhanced LCMs are developed. Moreover, the interface allows the LCMs and Node Manager to use different operating systems.
  • the Core Embedded Node Manager software builds an in-memory image of all provisioned data and all current transmission-specific monitored parameters.
  • the Node Manager periodically polls each line card for its monitored data and copies this data to the in-memory image in SDRAM.
  • the in-memory image is modified for each alarm indication and clearing of an alarm, and is periodically saved to flash memory to allow rapid restoration of the OTS in the event of a system reboot, selected line card reboot or selected line card swap.
  • the in-core memory image is organized by type of line card, instance of line card and instances of interfaces or ports on the type of line card.
  • each LCM has a local in-memory image of provisioning information and monitored parameters specific to that board type and instance.
  • the "D” message interface uses a data link layer protocol (Layer 2) that is carried by the OTS's internal LAN.
  • the line cards and Node Manager may connect to this LAN to communicate "D" messages using RJ-45 connectors, which are standard serial data interfaces.
  • a "D" Message interface dispatcher may run as a VxWorks task on the LCM.
  • the LCM is able to support this dispatcher as an independent process since the LCM processor is powerful enough to run a multi-tasking operating system.
  • the data link layer protocol which may use raw Ethernet frames (including a destination field, source field, type field and check bits), avoids the overhead of higher-level protocol processing that is not warranted inside the OTS.
  • a sniffer connected to the OTS system's internal LAN captures and displays all messages on the LAN.
  • a sniffer is a program and/or device that monitors data traveling over a network. The messages should be very easy to comprehend.
  • all messages are contained in one standard Ethernet frame payload to avoid message fragmenting on transmission, and reassembly upon receipt.
  • this protocol is easy to debug, and aids in system debugging.
  • this scheme avoids the problem of assigning a network address to each line card. Instead, each line card is addressed using its built-in Ethernet address.
  • the Node Manager discovers all line cards as they boot, and adds each line card's address to an address table.
  • This use of discovery messages combined with periodic audit messages obviates the need for equipage leads (i.e., electrical leads/contacts that allow monitoring of circuits or other equipment) in the electrical backplane, and the need for monitoring of such leads by the Node Manager.
  • an LCM informs the Node Manager of its presence by sending it a Discovery message. Audit messages are initiated by the Node Manager to determine what line cards are present at the OTS.
  • DISCOVERY message: used by the LCM to inform the Node Manager of its presence in the OTS when the line card reboots.
  • the Node Manager responds with a Discovery Acknowledge message.
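To make the raw-Ethernet scheme concrete, here is a hedged sketch of how a discovery message might be packed into a single standard frame (destination, source, type, payload). The EtherType value (0x88B5, an IEEE local-experimental EtherType) and the payload layout are assumptions, not the patent's actual encoding.

    import struct

    ASSUMED_ETHERTYPE = 0x88B5  # IEEE local-experimental EtherType (illustrative)

    def build_discovery_frame(lcm_mac: bytes, node_manager_mac: bytes,
                              instruction_code: int, slot: int) -> bytes:
        assert len(lcm_mac) == 6 and len(node_manager_mac) == 6
        # Destination, source, and type fields; the check bits (FCS) are
        # appended by the Ethernet hardware itself.
        header = node_manager_mac + lcm_mac + struct.pack("!H", ASSUMED_ETHERTYPE)
        payload = struct.pack("!BB", instruction_code, slot)
        # Pad to the 46-byte Ethernet minimum so the whole message always
        # fits in one standard frame (no fragmentation or reassembly).
        return header + payload.ljust(46, b"\x00")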
  • Tables 6-11 define example "D" message interface packets. Note that some of the messages, such as the "discovery" and "attention" messages, are examples of anonymous push technology since they are communications that are initiated by a lower layer in the control hierarchy to a higher layer.

Table 6: Instruction Codes in LCM to Node Manager Packets
  • Pack type | pack type, version, serial number
  • ADC measures | last measured values of analog inputs
  • Source address | 3 | MAC address of OTS Node Manager
  • the "S" message interface ofthe Node Manager provides the application layer software with access to the information collected and aggregated at the "D" message interface.
  • Information is available on the Core Embedded software side (control plane) of the "S" message interface by line card type and instance for both read and write access.
  • An example of read access is "Get all monitored parameters for a particular line card instance.”
  • An example of write access is "Set all control parameters for a specific line card instance.” Performance can be increased by not supporting Gets and Sets on individual parameters.
  • these messages may register/deregister an application task for one or more alarms from all instances of a line card type, provide alarm notification, get all monitored parameters for a specific line card, or set all control parameters for a specific line card.
  • the "S” message interface is an abstraction layer: it abstracts away, from the application software's perspective, the details by which the lower-level Node Manager software collected and aggregated information. While providing an abstract interface, the "S” Message Interface still provides the application layer software with access to the aggregated information and control obtained from the hardware via the "D" Message Interface, and from the Node Manager state machines. Moreover, the "S” interface defines how the TL-1 craft interface is encoded/decoded by the Node Manager. The TL-1 craft interface definition describes the command and control capabilities that are available at the "S” interface. See section 23.3.4, entitled “Command Line Interface.” The application software using the "S” Message interface may run as, e.g., one or more VxWorks tasks.
  • the Core Embedded software may run as a separate VxWorks task also.
  • the "S" Message Interface may be implemented using message queues, which insulates both sides ofthe interface from a hung or rebooting task on the opposite side ofthe interface.
  • this division of the Node Manager software into independent tasks is possible because the Node Manager is powerful enough to run a multi-tasking operating system. Therefore, the present inventive control architecture utilizes the presence of a multi-tasking operating system at all three of its levels: LCM, Node Manager, and NMS.
  • Wavelength capacity: 64 wavelength channels
  • Fiber wavelength density: 8 wavelengths
  • Data rate: totally transparent
  • Wavelength spacing: 200 GHz (ITU grid)
  • Optical bandwidth (channels): C and L bands
  • Wavelength protection: selectable on a per-lightpath basis
  • Optical line interface cards: GbE, OC-n/STM-n; (i) 16-port (8 input & 8 output) OC-12 line card; (ii) 16-port (8 input & 8 output) OC-48 line card
  • SDRAM: 256 MB, upgradable to 512 MB
  • Flash memory: 64 Mbytes
  • Ethernet port: 100BaseT with auto-sensing
  • Ethernet hubs: OEM assembly, 10 ports, 1 per shelf
  • SDRAM: 64 MB, upgradable to 128 MB
  • Ethernet port: 100BaseT with auto-sensing
  • the OTS system's chassis is designed in a modular fashion for a high density circuit pack. Two stacks of sub-rack systems may be used.
  • Protection and restoration features are categorized in terms of network managed: (i) path protection; (ii) line protection; and (iii) path restoration.
  • Path protection is the provisioning of pre-determined redundant end-to-end paths that can be switched into operation whenever a problem is detected with a working end-to-end path.
  • Line protection is the provisioning of predetermined links that can be utilized to dynamically re-route traffic around a failed link.
  • Path restoration is the re-signaling or re-institution of a connection which ceases to function.
  • the hierarchy comprises three tiers or levels: a network management system (NMS) 280, Node Managers (NM) 250, and line card managers (LCM) 410.
  • the LCMs 410 control and monitor local resources, such as lasers and optical light paths, on line cards and the switch fabric.
  • one LCM 410 controls one line card (including switch fabric).
  • Each LCM performs basic line-card monitoring functions and communicates the results of the monitoring to its respective NM 250.
  • the LCMs 410 also receive instructions from the NMs 250 to control line card resources such as input or output signal multiplexers.
  • Each NM 250 interfaces with all the LCMs 410 within a given OTS and is responsible for node level functions such as signaling and routing. For example, whenever a light path, trail or circuit is created between OTSs, the NM 250 of each OTS performs the necessary signaling, routing and switch configuration to set up a link involving that OTS along the trail. As such, the NM 250 may send configuration instructions to a particular optical access ingress card, switch fabric, and a particular optical access egress card in order to establish the required optical cross-connection.
  • the NM 250 also receives fault messages from the LCMs 410 under its supervision so that alarm conditions can be detected, isolated, and reported to the NMS 280.

27.1.1.1 Path Protection
  • there are at least two types of path protection mechanisms: 1+1 protection and 1:1 (or 1:N) protection.
  • the "+” refers to a redundant backup or alternate path, which carries a duplicate ofthe protected traffic.
  • the ":” refers to a reserved path which may carry no or low priority traffic that may be pre-empted by the protected traffic.
  • an ingress OTS splits the bearer channel or user data signal into two optical signals which are transmitted over two paths, referred to as the working path and protection path, to the destination endpoint.
  • the working path signal is initially forwarded to the user or external network access device.
  • if a failure occurs on the working path, the protection path signal is forwarded to the user or external network access device instead.
  • This approach relies on the signaling and routing capabilities of the OTS to establish disjoint paths and the LCMs to monitor the received signal and effect switchover, as described in greater detail below.
  • the NM and NMS can isolate the failure and initiate repair actions. Since the LCM on the egress switch controls the switchover, recovery can be performed well within 50 ms, which is considerably faster than standard SONET performance requirements.
  • 1:1 path protection is supported using the same hardware features but different control software.
  • in 1:1 protection, two lightpaths are set up with one path supporting high priority data traffic and the other supporting lower priority, pre-emptable traffic. When a failure occurs on the lightpath supporting the high priority traffic, the low priority traffic is pre-empted and the high priority traffic is re-routed over the low priority lightpath.
  • while 1:1 protection provides the capability to use the protection path for low priority traffic, it will take longer to switch over since both the ingress and egress OTSs are involved and the NMS must co-ordinate the switchover. However, it is expected that service can be restored within one second.
  • the network-managed path protection is carried out through bridging between line cards. This implementation is shown for line cards supporting OC-n devices. In the alternative, network-managed path protection may be implemented using the bridging capability of the switch fabric. This implementation is shown for line cards supporting Gigabit Ethernet (GbE).
  • Line protection generally reduces bandwidth requirements since protection paths may be shared among links.
  • links A and B may share part of the same protection path since it is very unlikely both links will fail at the same time. However, this comes at the expense of less rapid recovery than 1+1 protection.
  • protection spans (i.e., sets of links, not an end-to-end path) are preferably stored for recovery of all failed links and switches. Since the backup paths are pre-stored, the recovery time is expected to be less than one second. However, the recovery time is likely to exceed the recovery time provided by 1+1 path protection because signaling between switches is required to perform the switchover.
  • the service provider could implement a network with a mix of protected and unprotected links. Lightpath requests requiring protection could be routed over the protected links while lightpaths not needing protection would be routed over the unprotected links.
  • the OTS may also include line backup capability.
  • a service provider may use a diversely routed four-fiber interface between switches. It is unlikely that both fiber pairs will fail at the same time when diversely routed. When there is a fiber cut in the working fiber pair, the OTS will automatically switch to the protection pair of fibers.
  • path based restoration schemes can be used for re-instituting signaled (or switched) lightpaths.
  • the OTS switch, possibly with the help of the NMS, will re-route switched lightpaths when failures occur.
  • Provisioned (or permanent) lightpaths can be restored administratively under operator control or using the services of an NMS, which stores provisioned light paths.
  • FIG. 51 depicts this concept with the traffic between SONET systems (network access devices) 5140A and 5140B being transmitted over a working path C-D-E (solid line 5110) and over a protection path C-B-E (dashed line 5120).
  • SONET systems 5140A and 5140B are examples of network access devices since they allow an access network, such as one using SONET, to access the optical network via an OTS.
  • this feature employs a single bi-directional OC-n interface 5220 between the OTS 5225 and the user or external network access equipment 5140 using two ALI cards 5210 and 5212, respectively referred to as the primary ("P") and secondary ("S") cards.
  • FIG. 52 shows the configuration when the OTS functions as an ingress node (for traffic flowing from right to left in the drawing) as well as an egress node (for traffic flowing from left to right in the drawing).
  • both OTS switches C and E are set up in this way.
  • the primary and secondary ALI cards 5210 and 5212 are interconnected via an inter-card electrical channel 5214 which allows the working and protection communication paths to bridge the two ALI cards 5210 and 5212, as described in greater detail below.
  • the Node Manager of the OTS responsible for establishing the protected path (e.g., Switch C)
  • the ALI cards 5210 and 5212 are connected to separate OA cards (not shown in FIG. 52) and should use different λ's so that separate 8x8 MEMs in the switch fabric are used.
  • the routing function ofthe OTS provides disjoint paths between endpoints 5140A and 5140B for the working and protection traffic flows.
  • the network access equipment 5140A or 5140B (up to 4 OC-12 devices) is connected only to the primary ALI card 5210 on both the ingress and egress OTSs.
  • since each ALI supports two OC-48 lightpaths, a pair of ALI cards can provide protection switching to two sets of OC-12 user equipment.
  • on the ingress OTS, the Node Manager instructs the primary ALI card 5210 to duplicate select traffic flows for transmission through the inter-card electrical channel 5214. However, the Node Manager does not instruct the secondary ALI card 5212 to transmit these duplicate flows over the low priority path until such time as a failure is detected in the high priority path. Similarly, on the egress OTS (when traffic flows from left to right in FIG. 54), the Node Manager thereof instructs the primary ALI card to select the duplicate traffic flows (arriving from the secondary ALI card) for delivery to the user equipment when the failure is detected in the high priority path.

27.2.2 Failure Scenario
  • because the primary and secondary paths are routed over a disjoint set of nodes and links, the propagation delay will be different for each path. Thus, when the switchover occurs, there may be a loss of data or redundant data passed to the user device.
  • the primary and secondary paths are preferably set up using a constraint-based routing in order to set a limit on permitted propagation delay.
  • Failure detection for 1:1 path protection is similar to that of 1+1 protection, but involves the services of the Node Managers on the ingress and egress OTS switches as well as the NMS.
  • the line card manager thereof sends a fault message to the local Node Manager.
  • the local Node Manager signals the line card manager of the secondary ALI card 5212 to select and transmit over the network the traffic flows received from the inter-card channel 5214 rather than user equipment 5140A2 (see FIG. 54). The same actions are expected to occur on the egress switch.
  • the Node Managers associated with the ingress and egress switches will also send an alarm to the NMS, which will signal the alarm to the Node Managers of the ingress and egress OTSs.
  • FIG. 55 shows the architecture of an OC-48 ALI card 5510 in greater detail.
  • the OC-48 ALI card multiplexes/de-multiplexes eight OC-12 signals 5512 into/from two OC-48 signals 5514.
  • Note that other versions of the optical line card are also possible, such as an OC-192 ALI card described above, and have similar although not identical architectures.
  • the line card manager 5516 is a daughterboard of the ALI card which provides a processor for executing line card manager software, as described in greater detail above.
  • the OC-48 ALI card 5510 converts the physical characteristics of optical signals used by external network access equipment into optical signals supported by the OTS network. This necessitates conversion of optical signals into the electrical domain and vice versa.
  • Transceivers (TRx) 5515 and 5517 are used for this purpose.
  • the OC-n ALI cards may provide the SONET framers 5520 using a pair of AMCC SONET Missouri™ microchips 5520 (part no. S4802).
  • the OC-48 SerDes provides serializing and deserializing. With serializing, one packet is fully transmitted prior to the next being sent such that cell interleaving is precluded.
  • the datapath for 1+1 or 1:1 protection is illustrated in FIG. 56, which shows the datapath on ingress, and in FIG. 57, which shows the datapath on egress. (External OC-12 fiber is not shown.)
  • in 1+1 path protection (FIG. 56), the OC-12 data 5512 is connected to the Rx ports 5610P of a SONET framing chip 5520P on the primary ("P") ALI card 5510P.
  • the OC-12 input ports of the secondary ("S") ALI card 5510S are not connected to the network access equipment and hence the Rx ports 5610S on the SONET framing chip 5520S receive or carry no data.
  • the OC-12 data 5512 associated with the high priority path 5310 (FIG. 53) is connected to the Rx ports 5610P of the primary SONET framing chip 5520P.
  • the OC-12 data associated with the low priority path 5312 is connected to the Rx ports 5610S of the secondary SONET framing chip 5520S.
  • the primary SONET framing chip 5520P interleaves the data into an OC-48 signal 5514P and sends it out an OC-48 Tx interface 5620P. Also, the primary SONET framing chip 5520P forwards the OC-12 data out of a protection port 5630 to the secondary SONET framing chip 5520S.
  • the data links are shown by horizontal lines connecting the two framing chips 5520P and 5520S. Each bundle of the four horizontal lines corresponds to 16 data, 2 control, and 1 differential clock signals routed over the inter-card electrical channel.
  • the inter-card electrical channel is preferably carried over the OTS electrical backplane (schematically represented by dashed line 5640).
  • the OTS electrical backplane is preferably constructed to include a parallel bus between selected pairs of adjacent slots in the OTS chassis or bay that are intended to house ALI cards, thereby facilitating the inter-card electrical channel used for protection switching applications.
  • Each SONET framing chip 5520 has a data duplication capability and a data input selector 5650 which is utilized to select the appropriate OC-12 input 5512.
  • the primary SONET framing chip 5520P selects the left inputs to the selectors 5650P whereas the secondary SONET framing chip 5520S selects the right inputs to the selectors 5650S. This way the data transmitted on the primary and secondary OC-48 signals 5514P and 5514S is identical.
  • the primary SONET framing chip 5520P selects the left inputs to the selectors 5650P and the secondary SONET framing chip 5520S also selects the left inputs to the selectors 5650S, thereby enabling both high priority and low priority traffic to be transmitted over the optical network.
  • the secondary SONET framing chip 5520S is instructed to select the right inputs to the selectors 5650S thereby activating the switchover.
  • the primary and secondary OC-48 signals 5514P and 5514S are received and demultiplexed into four OC-12 signals 5512 by the SONET framing chips.
  • if the line card manager 5516 detects that the quality of the primary OC-48 signal 5514P falls below a certain threshold, then the right inputs of selectors 5750P are chosen.
  • the behavior of the selectors 5650P, 5650S, 5750P and 5750S is programmed via a CPU interface 5530 (FIG. 55) available on the SONET framing chips 5520.
  • the line card manager 5516 can thus control the framing chips.
  • the quality of the OC-48 signals 5514 can be read via the CPU interface 5530 as well.
  • the software interface between the line card manager and the rest of the ALI card is implemented via the SPI serial bus 5532 connecting the LCM 5516 and a control FPGA 5534 (see more particularly Section 4 and FIG. 6).
  • the control FPGA 5534 actually interfaces to all the ICs on the card.
  • All of the registers of the SONET framing chips are readable and/or writeable via the CPU interface 5530.
  • to monitor the state of the OC-12 and OC-48 inputs 5512 and 5514, the SONET framing chip provides LOC (loss of clock), LOS (loss of signal), OOF (out of frame), and LOF (loss of frame) registers. Whenever appropriate, the SONET framing chip maintains counters or state change indication registers for the above bytes. The chip can also be programmed to generate an alarm output based on a predefined error condition.
  • the input signal structure of the SONET framing chip 5520 is defined via a CONFIG register.
  • the selector/cross-connect functionality of the chip is controlled via a MUXSEL register.
  • the chip will send an alarm signal if an LAIS_GEN register is set.
  • REF_CLK_SEL and REF_CLK_FREQ registers are provided. The above registers provide the information about the quality of the input data and control the data flow needed to implement APS (automatic protection switching).
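A short sketch of how the LCM might use these registers through the CPU interface to drive APS; the read_register/write_register accessors and the MUXSEL value are illustrative assumptions layered on the register names given above.

    ALARM_REGISTERS = ("LOC", "LOS", "OOF", "LOF")

    def asserted_alarms(read_register):
        """Return the alarm registers currently flagging the monitored input."""
        return [reg for reg in ALARM_REGISTERS if read_register(reg) != 0]

    def switch_to_protection(write_register):
        # Reprogram the selector/cross-connect to take the protection-side
        # inputs (the specific MUXSEL encoding is hypothetical).
        write_register("MUXSEL", 1)

    # Example APS decision: if any alarm is asserted, perform the switchover.
    # if asserted_alarms(chip.read): switch_to_protection(chip.write)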
  • FIG. 58 depicts the concept for a uni-directional flow from A to B.
  • the ingress switch fabric 5815A has the capability to bridge an OA_In channel 5834A onto two TP_eg cards 5832A and 5832B thus creating a working flow and a protection flow 5816 within the optical network to the destination. These flows take disjoint paths through the optical network such that a single failure does not affect both of them.
  • the Node Manager on the egress switch 5820 monitors the received signal on each path. When a loss of signal is detected on the working path 5814, the Node Manager has the protection TP_in 5832D cross-connected to the OA_Eg 5834B (the cross-connect is denoted by dashed line 5838).
  • 1:1 path protection uses the same concept shown in FIG. 58. However, the switch fabric 5815A does not split the OA_In channel 5834A until after a fault is detected on the working path 5814. At that point the switch fabric 5815A bridges the OA_In channel 5834A onto the TP_eg card 5832B, thereby pre-empting any low priority traffic carried over the protection path 5816.
  • the protection TP_in card 5832D is also cross-connected to the OA_Eg card 5834B.
  • FIG. 59A shows the logical software architecture of a reference hierarchical network management system (NMS) 5910 which comprises multiple NMS managers (generically denoted by reference no. 5912 with specific instances at a given level being given an alphabetic suffix from "A" to "C”).
  • each NMS manager 5912 is responsible for administering or supervising various portions or aggregations of a communications network 5914.
  • the NMS managers and nodes in network 5914 communicate with one another through a traffic management messaging network, not shown, which may be in-band or out-of-band relative to the bearer traffic.
  • the NMS managers 5912 are logically arranged in a tree structure, thus forming a hierarchy comprising a plurality of levels. At each level other than the bottom or leaf level, an NMS manager 5912 administers or supervises one or more dependent or child NMS managers. Similarly, at each level other than the top or root level, each NMS manager has a parent or supervising NMS manager. There may be none, one or more intermediate levels in the hierarchy (only one intermediate level is shown). At the bottom-most or leaf level, the NMS managers 5912C are responsible for supervising distinct groups of network nodes which are divided into logical sub-networks such as sub-networks 1-4 shown in FIG. 59A.
  • each intermediate-level NMS manager 5912B has "n" children, such as M1.1.1 to M1.1.n for M1.1.
  • Each of the "n” values shown may, in fact, represent a different numeric value.
  • the NMS manager 5912A supervises an aggregation of all nodes in network 5914.
  • the main advantage of this structure is that it provides a distributed and scalable approach to network management.
  • because each NMS manager communicates with its local family group, the communications complexity will be less than the case where each NMS manager communicates with every other manager.
  • each NMS manager performs similar functions such as configuration management, connection management, topology management, fault management, and performance management.
  • the data objects or events which each NMS manager processes or reacts to will differ depending on its position or level in the hierarchy, which denotes the functional role the manager is expected to carry out.
  • NMS manager M1.1.1 may receive multiple "cross-connect up" event messages from multiple nodes or exchanges within sub-network 1. Assuming the cross-connects define a path spanning sub-network 1, M1.1.1 aggregates such connection state information and transmits a "sub-network connection" event up to its parent manager M1.1.
  • FIG. 59A should therefore be understood to represent a role/responsibility hierarchy.
  • the NMS managers 5912 can be implemented in a variety of ways. Since the NMS managers at different levels of the hierarchy carry out different operating tasks, the program or software code for managers at different levels need not be identical. However, managers situated on the same level of the hierarchy provide the same functionality and so are preferably identical to one another.
  • the term "Segmented NMS” is used herein to refer to an NMS manager implemented in the foregoing manner. However, it is preferable to implement every NMS manager irrespective of its level in the hierarchy using one software program or code which provides the functionality required to operate at every position and level in the responsibility hierarchy. This eliminates the need to deal with, update and manage multiple bodies of code.
  • the term "Holistic NMS" is employed to refer to an NMS manager implemented in this manner. In such an implementation, each instance ofthe Holistic NMS has to
  • FIG. 59A depicts a software architecture, irrespective of the underlying hardware platforms. If desired, each NMS manager (whether implemented as a Holistic NMS or Segmented NMS) can execute on a physically distinct hardware platform. This provides the greatest fault-tolerance capability but is also the most expensive solution. Alternatively, one or more NMS manager instances (i.e., software processes or execution threads) can execute on a common hardware platform. For example, FIG. 59B shows:
  • NMS managers M1.1.1, M1.1, and M1 executing on hardware platform 5918A;
  • NMS manager M1.1.2 executing on hardware platform 5918B;
  • NMS managers M1.2.1 and M1.2 executing on hardware platform 5918C; and
  • NMS manager M1.2.2 executing on hardware platform 5918D.
  • FIG. 59C shows NMS managers M1.1.1, M1.1, and M1 executing on hardware platform 5918A, NMS manager M1.1.2 executing on hardware platform 5918B, and NMS manager M1.2.2 executing on hardware platform 5918D. In FIG. 59C, a Holistic NMS 5916A, which provides multi-level functionality, assumes the dual roles of M1.1.1 and M1.1. (In the degenerate case, one instance of a Holistic NMS can theoretically assume the role of all NMS managers within the hierarchy, but as will be seen this would defeat the purpose of the invention and so is not preferred.)
  • the role an NMS manager is expected to fulfill can be established or initiated using a variety of schemes, including configuration and self-discovery.
  • in the configuration scheme, such information can be hard-coded, or the operator can be prompted for such information through a human interface as known in the art.
  • the root NMS manager can, for example, message all the other managers with their role indication.
  • each NMS manager can be associated with an IP network address that implies the manager's role in the hierarchy.
  • IP network address x.y.z1 implies that the manager is in the third level of the hierarchy.
  • the manager sends out "hello" messages to all other NMS elements which return their network addresses.
  • the just-activated manager could determine, for example, that an NMS manager associated with address x.y.z2 is a common child of that parent, i.e., a sibling.
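Under the stated assumption that the address encodes the position (x.y.z1 placing a manager at the third level), the level and sibling relationships can be derived mechanically, as in this small sketch using the document's M1.1.1-style labels:

    def hierarchy_level(manager_id: str) -> int:
        # "1.1.2" has three labels, so the manager sits at level 3.
        return len(manager_id.split("."))

    def are_siblings(a: str, b: str) -> bool:
        # Two managers are siblings when they share every label but the last,
        # i.e. they have a common parent ("1.1.1" and "1.1.2" share "1.1").
        return a != b and a.rsplit(".", 1)[0] == b.rsplit(".", 1)[0]

    print(hierarchy_level("1.1.2"))        # -> 3
    print(are_siblings("1.1.1", "1.1.2"))  # -> True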
  • the NMS managers which are typically first activated are the leaf-level NMS managers. After the initial discovery process is completed, the NMS managers will be able to determine who their siblings are. For example, in FIG. 59B, NMS manager M1.1.1 can determine that it is a sibling to M1.1.2, and M1.2.1 can determine that it is a sibling of M1.2.2.
  • the leaf-level NMS managers can then spawn or launch the code of parent NMS managers (as shown in FIG. 59B) or assume their roles (as shown in FIG. 59C), as needed, in order to complete the hierarchy. (The former process is applicable for Segmented NMS's while both processes are applicable for Holistic NMS's.)
  • M1.1.1 and M1.1.2 can exchange a set of messages to elect which one of them should spawn the parent M1.1.
  • Different election schemes are presented below.
  • M1.1.1 is elected and spawns M1.1.
  • M1.2.1 spawns M1.2.
  • the discovery and election process is recursively carried out until the root NMS Manager M1 is initiated.
  • NMS managers which are siblings communicate state information with one another, as shown in FIG. 59A, but do not directly communicate with NMS managers belonging to other sibling groups. However, as between siblings within the same group only one of them has the responsibility for aggregating state information and passing it up to the parent NMS manager. This is possible because each NMS manager within a sibling group maintains state information for all the elements supervised by all its siblings. This can be accomplished in a variety of ways, including:
  • each NMS manager incorporates an event service to which its siblings can subscribe in order to receive notice of various events.
  • the OTS optical network described in greater detail above and below employs the event subscription technique as the primary state synchronization method with archiving as a backup mechanism.
  • having every NMS manager communicate with its parent is also possible, but the former approach is preferred because it offers the potential to reduce network management traffic. For instance, if the hardware/software architecture of FIG. 59B is followed, communication between NMS managers and their parents is limited to local communication within the same hardware platform.
  • every NMS manager is able to communicate with its children, if any, or the network nodes.
  • each NMS manager shown in the reference hierarchy of FIG. 59A is active in that it communicates pre-aggregated state information to its children. For example, consider a severely malfunctioning node, A, in sub-network 1. As the line cards of the node begin to fail, it will transmit many alarm messages about failed components to NMS manager M1.1.1. M1.1.1 correlates these alarms until it determines that node A is non-operational. M1.1.1 then generates a summarized alarm which indicates that "node A is non-operational".
  • the summarized alarm is transmitted up the NMS hierarchy to M1, which in turn communicates the summarized alarm to its children, such as M1.n.
  • M1.n communicates the alarm to all its children, M1.2.1 ... M1.2.n. In this manner, all NMS managers become aware of the problem in sub-network 1.
  • a heartbeat process is preferably employed within each sibling group as the discovery mechanism.
  • each NMS manager periodically transmits "hello" messages over the traffic management network to all of its siblings, and expects to receive a hello message from each sibling within a specified time period.
  • This provides a k:k-1 discovery mechanism (k being the number of elements in a sibling group), meaning that every manager in a sibling group communicates its status with every other manager in a sibling group.
  • the non-reception of a hello message when such a message is expected signifies that the NMS manager at the other end of the link has ceased to operate.
  • the NMS manager that first discovers a non-operating manager alerts all of its siblings. In other words, the discovery of a non-responding NMS is flooded amongst the sibling group.
  • the discovery mechanism can alternatively be implemented through the use of sequenced 'keep alive' messages, or through the use of explicit acknowledgements. In such cases the non-reception of a keep-alive message when such a message is expected, or the non-communication of an acknowledgement message, would signify that the NMS manager at the other end of the link has ceased to operate.
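The hello-based variant of this mechanism can be sketched as a small monitor; the fifteen-second timeout and the structure of the sibling table are illustrative choices, not values from the patent.

    import time

    class HeartbeatMonitor:
        def __init__(self, siblings, timeout_s=15.0):
            now = time.monotonic()
            self.last_hello = {sib: now for sib in siblings}
            self.timeout_s = timeout_s

        def on_hello(self, sibling):
            # Record the periodic "hello" received from a sibling.
            self.last_hello[sibling] = time.monotonic()

        def non_responding(self):
            # Non-reception of an expected hello marks a sibling as dead;
            # the discovering manager would then flood this to the others.
            now = time.monotonic()
            return [s for s, t in self.last_hello.items()
                    if now - t > self.timeout_s]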
  • FIG. 59D shows an example where manager M1.1.1 dies.
  • manager M1.1.2 assumes the responsibility for sub-network 1 previously managed by M1.1.1.
  • M1.1.2 also assumes the responsibility for aggregating information to the parent NMS manager M1.1 since M1.1.1 previously had that responsibility.
  • the NMS manager assuming responsibility for a non-operational sibling can do so using a "split" model or an "aggregated" model.
  • in the "split" model, M1.1.2 clones itself and spawns a new instantiation (i.e., new execution thread) of its software code on the same hardware platform.
  • in the "aggregated" model, M1.1.2 itself assumes the role/responsibility of M1.1.1, thus modifying its role indicator. Both techniques are applicable whether M1.1.2 is implemented as a Holistic NMS or a Segmented NMS.
  • the election process is preferably carried out by having each NMS manager compute a ranking according to a predefined election scheme and flooding its siblings with such data. Each NMS manager will thus also receive ranking data from its siblings. Each NMS manager within a sibling group assumes that it is the winner unless it receives notice that one of its siblings has a higher rank. In the unlikely event of a tie, a predefined tie breaking mechanism can be employed such as determining the winner based on an IP address associated with each NMS manager.
  • a variety of election schemes may be used for selecting a replacement manager or for self-discovery purposes as described above. Such schemes include, but are not limited to: (a) pre-configuration; (b) administrative weight; (c) load bearing capability; and (d) network size.
  • the pre-configuration scheme basically sets out ahead of time which NMS manager will take over for a non-functioning manager. This could be implemented in the form of a pre-configured table.
  • the administrative weight scheme assigns each manager an administrative weight based on the power or speed of its underlying hardware platform.
  • the NMS manager having or associated with the highest (or lowest) weight wins.
  • in the load bearing capability scheme, each NMS manager assesses its own busyness, e.g., based on current or historical processor utilization, speed of execution capability and other such parameters, the particulars of which may vary widely from embodiment to embodiment.
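Whatever scheme supplies the ranks, the flooded-ranking election with a deterministic tie-break (here, the IP address, as suggested above) can be sketched as follows; the rank values are invented for the example.

    def elect_winner(rankings):
        """rankings: {manager_ip: rank}. Every sibling runs this on the same
        flooded data, so all reach the same conclusion independently."""
        # Highest rank wins; a tie falls through to the IP address.
        return max(rankings, key=lambda ip: (rankings[ip], ip))

    print(elect_winner({"10.0.0.1": 7, "10.0.0.2": 9}))  # -> 10.0.0.2
    print(elect_winner({"10.0.0.1": 7, "10.0.0.2": 7}))  # tie -> 10.0.0.2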
  • an OTS network has a control hierarchy which comprises three tiers or levels: a Network Management System (NMS) 280, Node Managers (NM) 250, and Line Card Managers (LCM) 410. As shown in this drawing, each entity is a separate software process executing over a distinct hardware platform.
  • the LCMs 410 control and monitor local resources, such as lasers and optical light paths, on line cards and the optical switch fabric. Generally speaking, there is one LCM 410 for each line card or optical switch fabric module. There are typically multiple line cards per OTS, and more than one card of each type may be provided. Each LCM communicates the results of its line card monitoring to its respective NM 250. The LCMs 410 also receive instructions from the NMs 250 to control local resources such as input or output signal multiplexers.
  • Each NM 250 interfaces with all the LCMs 410 within a given OTS and is responsible for switch level functions such as signaling, routing, and fault protection. For example, whenever a light path is created between OTSs, the NM 250 of each OTS performs the necessary signaling, routing and switch configuration to set up a cross-connect involving each OTS along the path. As such, the NM 250 may send configuration instructions, for example, to a particular optical access ingress card, optical switch fabric, and a particular transport egress card in order to establish a required optical cross-connection. The NM 250 also receives fault messages from the LCMs 410 under its supervision so that alarm conditions can be detected, isolated, and reported to the NMS 280.
  • FIGs. 30, 31, 34 and the accompanying text in Sections 18, 19, 20 and 22 are focused on describing NMS functionality in the OTS network.
  • the OTS system preferably implements:
  • State information synchronization amongst NMS manager siblings is based on the principle of flooding using an event service.
  • the general model of an event service is shown in FIG. 59F.
  • a software component 5920 (process or module) "publishes" events.
  • Software components 5924 "subscribe" to events and receive notice thereof.
  • the Event Manager ofthe Node Manager is described in Section 23.2.2 and its FIG. 44.
  • Events are organized by topics, and each topic can itself be comprised of a hierarchy of sub-topics, as shown for instance in FIG. 59G. For instance, the following topics may be defined as shown in Table A:
  Topic | Description | Scope
  x-connect | any cross-connect event at the OTS, such as "cross-connect up" and "cross-connect down" | between node elements and leaf-level NMS manager
  connection | connection event at the OTS, such as cross-connect events and protection switching events
  link | any sub-network link event, such as "link up" and "link down" | between leaf-level NMS manager and its parent
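A minimal sketch of this publish/subscribe model with hierarchical topics follows; the topic names mirror Table A, while the delivery rule (a subscriber to a topic also receives events published under its sub-topics) is an illustrative assumption.

    class EventService:
        def __init__(self):
            self.subscribers = {}   # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self.subscribers.setdefault(topic, []).append(callback)

        def publish(self, topic, event):
            # Deliver to exact-topic subscribers and to subscribers of each
            # ancestor topic, so "connection" also sees "connection.x-connect".
            parts = topic.split(".")
            for i in range(len(parts), 0, -1):
                for cb in self.subscribers.get(".".join(parts[:i]), []):
                    cb(topic, event)

    svc = EventService()
    svc.subscribe("connection", lambda t, e: print("got", t, e))
    svc.publish("connection.x-connect", {"state": "up"})
    # -> got connection.x-connect {'state': 'up'}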
  • FIG. 59H shows the software architecture of each OTS switch (which comprises LCM software 4900 and NM software 3600) from the perspective of an event manager 3632 present within the NM.
  • the low level software 3641 of the NM, which is situated between the "D" and "S" interfaces (see more particularly FIG. 36 and Section 23.1), passes events to the NM event manager 3632, which distributes events to other NM components 3612, 3614, 3615, 3618, 3631, 3633, 3634, and 3666 according to subscription. For example, suppose a new cross-connect is configured for a signaled light path.
  • the NM receives a path "set up" message via the inter-node signaling network (described more particularly by FIG. 9 and Section 7).
  • the message is processed by NNI signaling 3615, which requests the resource manager 3631 to allocate ports and possibly wavelengths on ingress & egress line cards.
  • the resource manager 3631 then employs the "S" interface to instruct the low level drivers (e.g., OXC manager 3656 in FIG. 36) to interface with the line cards and switch fabric through the "D" interface to create the cross-connect.
  • the low-level software 3641, utilizing the "S" interface, sends a "cross-connect up" event to the event manager 3632 which publishes the event to the relevant subscribers.
  • the NMS agent 3620 on the NM analyzes events and forwards messages relating to configuration, connection, fault and performance to the corresponding managers associated with an NMS Instance (see FIG. 34).
  • the NMS agent 3620 thus forms a part of the element management layer (3404) in the TMN model.
  • the preferred software architecture of an NMS manager 5912C for OTS networks is shown in greater detail in FIG. 59I.
  • a proxy agent 5960 is instantiated for each OTS/NM supervised by the NMS manager.
  • the proxy agent 5960 is present because in the preferred embodiment the NMS is written in Java and the NM is written in another language, and so the proxy agent provides an interface with each OTS/NM 250.
  • the proxy agent 5960 also collects and translates messages such as traps and alarms received from the corresponding NMS Agent 3620, converts them to events, and publishes them through an NMS Event Service 5965.
  • the NMS Event Service 5965 distributes events to the relevant components within the NMS manager.
  • the relevant components in sibling NMS managers also subscribe to the Event Service 5965.
  • a fault manager 3445 within M1.1.n subscribes to fault events published by the Event Service of M1.1.1, and vice versa.
  • An NMS manager is capable of properly registering with its sibling's Event Service once the self-discovery process has terminated and role indication is confirmed. In this way, NMS managers that are siblings of one another can synchronize state information pertaining to the network elements collectively supervised by a sibling group.
  • the Event Service 5965 is also preferably used as the mechanism for one NMS manager to alert its siblings when it has detected a non-operational sibling.
  • the event service model is recursively followed up the hierarchy, albeit at higher layers the proxy agent 5960 is not employed. So, for example, a connection manager in M1.n of FIG. 59A subscribes to connection events published by the Event Service of M1.1, and vice versa.
  • each NMS Manager also includes a database service 5966 as shown in FIG. 59I.
  • the database service 5966 employs a database interface service 5968 to store information in a remote database 5969.
  • the database service 5966 stores state information from the various management components ofthe NMS Manager in the remote database 5969.
  • the elected NMS manager can retrieve saved state information associated with a non-functioning NMS manager from the remote database.
  • ODSI: Optical Domain Service/System Interconnect
  • OEO: Optical-to-Electrical-to-Optical (conversion)
  • OEM: Original Equipment Manufacturer
  • the present invention provides an all-optical switch that can be selectively configured to provide cross-connection and add/drop multiplexing functions in an optical network.
  • Optical circuit cards are provided in a chassis, and connected by an optical backplane.
  • a local area network enables control and monitoring of the cards by a local node manager.
  • the all-optical switch can be used in an optical network that allows external access networks to access the optical network, including access networks that use Gigabit Ethernet and SONET OC-n optical signaling. Wavelength conversion is provided for wavelengths that are not compliant with the all-optical network.
  • the invention also provides a Node Manager control mechanism that is local to each switch for configuring and monitoring the switch, as well as distributed Line Card Manager control mechanisms for monitoring and controlling each circuit card.
  • a common, centralized Network Management System control mechanism is also provided for configuring and monitoring the switches in the network.
  • the present invention provides a hierarchical and distributed control architecture for managing an optical communications network.
  • the architecture includes a line card manager level for managing individual line cards at an optical switch, a node manager level for managing multiple line cards at the optical switch, and a network management system level for managing multiple node managers in a network.
  • Control architecture functionalities include signaling, routing, protection switching and network management.
  • the network management functionality includes a topology manager, a performance manager, a connection manager, a fault detection manager, and a configuration manager.
  • the line cards include various optical circuit cards at the switches, such as access line interface cards, transport ingress cards, transport egress cards, optical access ingress cards, optical access egress cards, switching fabric cards, optical signaling cards, and optical performance monitoring cards.
  • the control hierarchy uses a push model to enable important information to be communicated to higher layers in the control hierarchy.
  • the Node Manager includes an Event Manager that enables other software components to post, register for, and receive events. These events may be set based on activities at the Node Manager or the line cards, and may denote, e.g., a change in the configuration of the optical switch or an alarm condition at a line card.
  • the present invention provides a line card manager architecture for use at an optical switch in an optical communications network.
  • a line card manager with a dedicated processor is provided for each line card.
  • the line cards include access line interface cards, transport ingress cards, transport egress cards, optical access ingress cards, optical access egress cards, switching fabric cards, optical performance measurement cards, and optical signaling cards.
  • the line card manager includes an interface for receiving monitored parameters from a line card, a memory for storing the values, and a controller for processing the values to determine if an event such as a fault should be set, e.g., when a monitored parameter is out of range, as indicated by a threshold crossing. The event is reported to a node manager, which manages the different line card managers.
  • a local area network may be used at the optical switch to allow the LCMs and Node Manager to exchange messages.
  • the line card manager may apply control signals to the line card on its own or at the direction of the node manager.
  • the line card manager is preferably a removable plug-in module or daughter board of the line card to allow easy upgrades and maintenance.
  • the present invention provides a Line Card Manager (LCM) and Node Manager architecture for use at an optical switch in an optical communications network.
  • LCMs are provided for managing different line cards at the switch, while a Node Manager at the switch manages the LCMs.
  • the Node Manager and LCMs exchange messages using a message-passing interface.
  • the messages may include, e.g., the following:
  • read messages that enable the node manager to retrieve monitored parameter values that the line card manager receives from its line card
  • write messages that enable the node manager to write provisioning data to the line card manager
  • alarm messages that allow the line card managers to report alarm/fault conditions to the node manager
  • an audit message that enables the node manager to verify a presence of the line card at the optical switch
  • discovery messages that allow a line card manager to announce its presence to the node manager, e.g., after rebooting.
  • the present invention provides a node manager architecture for use at an optical switch in an optical communications network.
  • the node manager manages line cards at the optical switch, such as access line interface cards, transport ingress cards, transport egress cards, optical access ingress cards, optical access egress cards, optical signaling cards, optical performance monitoring cards and switching fabric cards.
  • the node manager includes an interface, such as a local area network interface, for communicating with the line card managers that manage the line cards.
  • the node manager provides the line card managers with software, and may request information such as monitored parameters of the line cards.
  • the node manager may also receive fault and other information reported by the line card managers using push technology.
  • the node manager may have an additional interface, which uses optical or electrical signaling, to communicate with a network management system that manages a number of node managers in the network.
  • the node manager has processing resources that enable applications and system services.
  • the hierarchical structure of the NMS has been shown to be a balanced tree.
  • the tree can be unbalanced in alternative embodiments.

Abstract

An optical transport switching system having an all-optical switch (200) can be selectively configured to provide cross-connection (240, 245) and add/drop multiplexing (230, 235) functions. In a further aspect, a hierarchical and distributed control architecture (280, 250, 410) for managing an optical communications network is provided. In a further aspect, a Line Card Manager (LCM) (410) and Node Manager (250) architecture are used in an optical communications network, where the node manager (250) manages line cards (420) at the optical switch. In a further aspect, a hierarchical network management system (NMS) is provided in which a plurality of NMS managers (5910), each responsible for different portions or aggregations of a communications network, are logically arranged in a tree structure. In a further aspect, a protection switching capability is achieved in an optical communications network having at least one optical switch (5225) connected to a network access device (5140).

Description

NODE ARCHITECTURE AND MANAGEMENT SYSTEM FOR OPTICAL NETWORKS
FIELD OF THE INVENTION
The present invention relates to an optical communications network and, more particularly, to an all-optical switching system for use at nodes in such a network, and to a control architecture for such a network. In another aspect, the invention relates to a control architecture for line cards that are located at optical switches in an optical communications network. In another aspect, the invention relates to an interface architecture for an optical switch in an optical communications network. In another aspect, the invention relates to a node/optical switch architecture in an optical communications network. In another aspect, the invention relates to the field of network management systems and more specifically to fault-tolerant network management systems that supervise and/or control communication networks.
BACKGROUND OF THE INVENTION
Fiber optic and laser technology have enabled the communication of data at ever-higher rates. The use of optical signals has been particularly suitable over long haul links. Moreover, recently there has been a push to integrate optical signals into metro core and other networks. Conventional approaches use switches (i.e., network nodes) that receive an optical signal, convert it to the electrical domain for switching, then convert the switched electrical signal back to the optical domain for communication to the next switch. Moreover, while some all-optical switches have been developed that avoid the need for O-E-O conversion, they possess many limitations, such as being time-consuming and difficult to configure or reconfigure.
Additionally, conventional network infrastructures have other disadvantages, such as inflexibility, inefficient bandwidth utilization, fixed bandwidth connections, and hardware that provides overlapping functions.
Furthermore, a network management system (NMS) typically interfaces with the individual nodes or exchanges of a data communications network through an overlay network, e.g., an out-of-band data transmission infrastructure dedicated to handling network management traffic. Through such an interface the NMS provides a variety of functions required to effectively manage the network from a system-wide perspective. These functionalities, as conceptualized for instance by the M Series Recommendations of the ITU-T Telecommunication Management Network (TMN) standards, include system-wide issues such as fault management, configuration management, accounting, security and performance management.
For example, in a connection-oriented network such as an ATM network or a switched optical network as hereinafter described, configuration management functionality could include the ability to establish or provision a permanent virtual circuit or light path using a graphical user interface (GUI) provided by the NMS. In such cases the NMS may be capable of computing the route across the communications network for the bearer channel path and, by interfacing with the nodes, configuring and establishing the individual cross-connects on each node in the bearer channel path.
Furthermore, because the NMS interfaces with each node through the overlay network, the nodes can inform the NMS about a failed bearer channel link. The NMS can then take corrective action such as automatically re-routing any bearer channel paths associated with the failed link. This is an example of fault management functionality provided by the NMS.
Fault tolerance is an important issue for service providers, particularly since one of the business parameters service providers often negotiate with their customers is network availability or permissible "down" time. Towards this end many schemes have been proposed in the art for: performance measurement and load balancing to minimize potential problems; centralized path restoration mechanisms; path and/or line protection switching; and, most particularly, equipment redundancy.
However, one aspect of network availability that may be overlooked is the fault-tolerant capability of the NMS itself. This is particularly so where the network management system features a hierarchical or multi-layered structure where substantial information aggregation occurs. This is often necessary in a large, complex network in order to handle adequately the vast amount of telemetric-like data that may originate from network elements. However, such hierarchical structures can considerably multiply the number of NMS elements or agents and lengthen the chain of command or communication from a root element of the NMS to the network nodes. The failure of one such NMS element could substantially affect the viability of the entire network management system. Accordingly, the invention seeks to provide a fault-tolerant NMS, and more particularly a fault-tolerant NMS attuned to the complexities introduced by a hierarchical structure.
Furthermore, network service providers typically offer their customers varying types of service, often based on different quality of service guarantees. One parameter that may be negotiated between the service provider and its customers is service availability or permitted outage. Customers with demanding service requirements often require the service provider to offer path protection or restoration features. Path protection is the provisioning of a predetermined redundant or pre-emptable communication path(s) that can be switched into operation whenever a problem is detected with a connection over a first communication path. Path restoration is the ability to automatically re-establish or re-signal a connection using a different path through the network.
The advent of all-optical network systems, which establish switched light paths without any optical-to-electrical conversion, such as the novel system described below, has increased the importance of network protection and restoration features. There are many reasons for this. First, optical networks are intended for deployment in network backbones and as such will likely carry massive levels of communications traffic, the loss of which can be disruptive to many organizations. Second, optical switches are likely to employ state-of-the-art optical devices such as switching fabrics. The long-term performance characteristics and longevity of such devices are not yet fully understood. Third, the new all-optical networks are likely to embody complex mesh topologies, unlike the linear and ring topologies of current SONET networks. In view of these and other factors, a robust set of protection and restoration features is desired. In particular, it is desired to be able to provide protection switching that can be initiated very rapidly in order to minimize signal disruption. For those customers who do not wish to subscribe to such a premium service, it is also desired to provide protection/restoration services that may not recover as quickly but use fewer resources.
SUMMARY OF THE INVENTION
In a first aspect, it is an object of the invention to provide an all-optical switch that can be selectively configured to provide cross-connection and add/drop multiplexing functions. It is an object of the invention to provide an all-optical switch for use in an optical network that interfaces with access networks, including Gigabit Ethernet over fiber networks, SONET OC-n networks, and others. It is an object of the invention to provide wavelength conversion at such an interface for wavelengths of such access networks that are not compliant with the optical network. The invention utilizes particular decompositions of a node's optical switching hardware in achieving these and other aspects of the invention. A node of an optical network in accordance with the present invention is also referred to as an optical transport switching system (or simply an "OTS").
In one aspect of the invention, a node includes an optical access ingress subsystem, an optical access egress subsystem, a transport ingress subsystem, a transport egress subsystem, and an optical switch subsystem. The optical switch subsystem selectively provides optical coupling between (a) the transport egress subsystem and at least one of (a)(1) the optical access ingress subsystem and (a)(2) the transport ingress subsystem, and between (b) the transport ingress subsystem and at least one of (b)(1) the optical access egress subsystem and (b)(2) the transport egress subsystem. In another aspect of the invention, a node includes a chassis having a plurality of receiving locations, where at least some of the receiving locations are adapted to receive optical circuit cards. Moreover, an optical backplane is associated with the chassis for providing selective optical coupling between the optical circuit cards, and a local area network enables control and monitoring of the optical circuit cards by a central controller.
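The coupling rules recited above are dense; the following minimal Java sketch restates them as a predicate. All names here (SwitchCouplingRules, Subsystem, isValidCoupling) are hypothetical illustrations by the editor, not part of the claimed design.

```java
// Illustrative restatement of the optical switch subsystem's coupling rules.
// All names are hypothetical; this is not the patented implementation.
public final class SwitchCouplingRules {

    enum Subsystem { OA_INGRESS, OA_EGRESS, TP_INGRESS, TP_EGRESS }

    /**
     * Rule (a): the transport egress subsystem may be coupled to the optical
     * access ingress subsystem and/or the transport ingress subsystem.
     * Rule (b): the transport ingress subsystem may be coupled to the optical
     * access egress subsystem and/or the transport egress subsystem.
     */
    static boolean isValidCoupling(Subsystem from, Subsystem to) {
        switch (to) {
            case TP_EGRESS:
                return from == Subsystem.OA_INGRESS || from == Subsystem.TP_INGRESS;
            case OA_EGRESS:
                return from == Subsystem.TP_INGRESS;
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidCoupling(Subsystem.OA_INGRESS, Subsystem.TP_EGRESS)); // true
        System.out.println(isValidCoupling(Subsystem.OA_INGRESS, Subsystem.OA_EGRESS)); // false
    }
}
```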
In a second aspect, it is an object of the invention to provide a multi-tiered adaptive control architecture for managing an optical communications network.
It is an object of the invention to provide a control architecture that supports signaling, including signaling at the edge of a network, such as between an edge switch acting as an add/drop multiplexer and an IP router or ATM function of a network that is accessing the optical network, and signaling between switches that are internal to the optical network. It is an object of the invention to provide a control architecture that supports routing, such as for setting up the connectivity of light paths in the network.
It is an object of the invention to provide a control architecture that supports protection switching for responding immediately to detected faults such as damaged fiber paths.
It is an object of the invention to implement control architecture functionalities in a hierarchical and distributed manner. The control architecture hierarchy should include a line card manager level for managing individual line cards in an optical switch/node, a node manager level for managing multiple line cards in an optical switch/node, and a network management system level for managing multiple optical switches/nodes in a network.
It is an object of the invention to provide a control architecture that is adaptive in that it detects and reacts to problems at the optical switches or elsewhere in the network. For example, if a laser at one of the switches fails, the control architecture will switch to another laser if available. Or, if a card fails and there are light path connections going through that card, the control architecture moves the connections to another card.
It is an object of the invention to provide a control architecture that follows a "push model" that facilitates handling of problems at the lowest possible layer in the hierarchy. Upper layers can rely on the fact that if a lower layer determines it cannot handle a problem, it will notify the next higher layer, e.g., via an event such as a fault or alarm. A given layer may also periodically provide performance data to the next higher layer based on a request by the next higher layer, or according to a predetermined schedule. It is an object of the invention to provide a control architecture that provides at least three levels of control that operate with different response times, such that software at the line card level very closely tracks the hardware and can therefore take immediate action, e.g., within microseconds; software at the node manager level sits a little further away from the hardware and therefore takes more time to act, e.g., milliseconds; and software at the network management level takes even more time to act, e.g., seconds. The increased response time at each higher layer is due to the time spent in aggregating data from the lower layer or layers. For example, a lower layer such as a line card layer may observe laser current consumption over a period of time. When the current consumption eventually exceeds a threshold, the line card layer informs the node manager layer, which reacts accordingly, e.g., by sending an alarm to the network management system.
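As an illustration of the push model just described, the following Java sketch shows a line-card-level monitor that acts on every sample locally and pushes an alarm upward only on a threshold crossing. The names (LaserCurrentMonitor, onSample) and the 80 mA threshold are hypothetical; the patent describes the behavior, not an implementation.

```java
import java.util.function.Consumer;

// Illustrative push-model sketch: the line card layer tracks a monitored
// parameter and notifies the next higher layer only on a threshold crossing.
public final class LaserCurrentMonitor {

    private final double thresholdMilliamps;
    private final Consumer<String> nodeManagerAlarmSink; // the next higher layer
    private boolean alarmRaised = false;

    LaserCurrentMonitor(double thresholdMilliamps, Consumer<String> nodeManagerAlarmSink) {
        this.thresholdMilliamps = thresholdMilliamps;
        this.nodeManagerAlarmSink = nodeManagerAlarmSink;
    }

    /** Called on every sample; most samples are handled locally in microseconds. */
    void onSample(double milliamps) {
        if (milliamps > thresholdMilliamps) {
            if (!alarmRaised) {
                alarmRaised = true; // push the crossing once, not every sample
                nodeManagerAlarmSink.accept("LASER_CURRENT_HIGH: " + milliamps + " mA");
            }
        } else {
            alarmRaised = false;
        }
    }

    public static void main(String[] args) {
        LaserCurrentMonitor m = new LaserCurrentMonitor(80.0,
                alarm -> System.out.println("node manager notified: " + alarm));
        m.onSample(75.0); // in range: nothing pushed upward
        m.onSample(85.0); // threshold crossing: pushed to the node manager layer
    }
}
```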
It is an object of the invention to provide a control architecture that supports a network management function, which includes: (a) a topology manager for keeping track of the arrangement and physical connectivity of switches in the network, (b) a performance manager for monitoring the performance of the switches, such as a selected port or channel on the switch, and the network as a whole, (c) a connection manager for setting up new light path connections between switches in the network by specifying ports and cross-connects to use at a switch, (d) a fault detection component for detecting and responding to faults, such as hardware faults, and (e) a configuration component for performing configuration tasks, such as determining a switch IP address and switch type, as well as line card-specific information such as serial number and firmware/software versions being used.
In one aspect of the invention, a control architecture is provided for an optical network having a plurality of optical switches. The control architecture includes, for each optical switch, respective line card managers for managing respective line cards associated therewith, and a node manager for managing the line card managers. The control architecture further includes a centralized network management system for managing the node managers of the optical switches. Moreover, at each optical switch, the node manager includes an event manager for enabling software components running at the node manager to register for, receive, and/or post events.
In a third aspect, it is an object of the invention to provide a line card manager architecture for managing line cards at an optical switch in an optical communications network. It is an object of the invention to provide a line card manager architecture that provides local control of each line card.
It is an object of the invention to provide a line card manager architecture that executes commands received from a node manager level in a multi-tiered control hierarchy that includes a line card manager level for managing individual line cards in an optical switch/node, a node manager level for managing multiple line cards in an optical switch/node, and a network management system level for managing multiple optical switches/nodes in a network.
It is an object of the invention to provide a line card manager architecture that provides digital and analog control and monitoring of line cards. It is an object of the invention to provide a line card manager architecture that sends monitored parameters and events such as faults to the node manager of the associated optical switch. For example, the line card manager layer may observe laser current consumption over a period of time. When the current consumption eventually exceeds a threshold, the line card layer informs the node manager layer, which reacts accordingly, e.g., by sending an alarm to the network management system.
It is an object of the invention to provide a line card manager architecture that communicates with its node manager via a message-passing interface.
It is an object of the invention to provide a line card manager architecture that provides a dedicated processor for each of multiple line cards at an optical switch.
It is an object of the invention to provide a line card manager architecture that uses a processor that is capable of multi-tasking multiple sense-and-control processes for a line card.
It is an object of the invention to provide a generic line card manager architecture that can manage multiple types of line cards.
It is an object of the invention to provide a line card manager architecture that stores software for managing a line card in non-volatile memory.
It is an object of the invention to provide a line card manager architecture that automatically identifies a line card type. In one aspect of the invention, a management architecture is provided for use at an optical switch in an optical network. The architecture includes a line card manager for managing an associated line card at the optical switch. In particular, the line card manager includes: (a) a first interface for receiving monitored parameter values from the line card, (b) processing resources for setting an event regarding the line card when criteria for setting the event are met by the monitored parameter values, and (c) a second interface for communicating the event to a node manager at the optical switch that manages the line card manager.
In another aspect of the invention, a management architecture for use at an optical switch in an optical network includes a number of respective line card managers for managing respective line cards at the optical switch. Each line card manager includes: (a) a first interface for receiving monitored parameter values from the respective line card, (b) processing resources for setting an event regarding the respective line card when criteria for setting the event are met by the monitored parameter values, and (c) a second interface for communicating the event to a node manager at the optical switch. Moreover, the node manager manages each of the respective line card managers.
In a fourth aspect, it is an object of the invention to provide an interface architecture for enabling a node manager and line cards to communicate at an optical switch in an optical communications network.
It is an object of the invention to provide a message-passing interface that uses a predetermined set of messages to enable quick and reliable communication between a node manager and line card managers at an optical switch.
It is an object of the invention to provide messages in a node manager-line card manager interface that include read messages that enable the node manager to retrieve monitored parameter values that the line card manager receives from its line card, write messages that enable the node manager to write provisioning data to the line card manager, alarm messages that allow the line card managers to report alarm conditions to the node manager, an audit message that enables the node manager to verify a presence of the line card at the optical switch, and discovery messages that allow a line card manager to announce its presence to the node manager, e.g., after rebooting.
In one aspect of the invention, a management architecture for use at an optical switch in an optical network includes a line card manager for managing an associated line card, where the line card manager includes a message-passing interface for communicating with a node manager at the optical switch that manages the line card manager.
In another aspect of the invention, a management architecture for use at an optical switch in an optical communication network includes a node manager for managing a number of respective line card managers, where each respective line card manager manages an associated respective line card at the optical switch. The node manager includes a message-passing interface for communicating with the line card managers.
In another aspect of the invention, an interface is provided for use at an optical switch that includes a node manager for managing a number of respective line card managers, where each respective line card manager manages an associated respective line card at the optical switch. The interface includes a message-passing interface for enabling the node manager to exchange messages with each of the line card managers. Moreover, the messages include: (a) line card manager-to-node manager messages for enabling the line card managers to report events regarding the respective line cards to the node manager, and (b) node manager-to-line card manager messages for enabling the node manager to provide commands to the line card managers.
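A compact Java sketch of this message set may help fix ideas. The encoding below (MessageType, LcmMessage, dispatch) is hypothetical; the patent specifies the message categories, not their representation.

```java
// Illustrative sketch of the node manager / line card manager message set.
// The types and encoding are hypothetical, not taken from the patent.
public final class LcmMessaging {

    enum MessageType {
        READ,      // node manager retrieves monitored parameter values
        WRITE,     // node manager writes provisioning data to the LCM
        ALARM,     // LCM reports an alarm/fault condition to the node manager
        AUDIT,     // node manager verifies the line card's presence
        DISCOVERY  // LCM announces its presence, e.g., after rebooting
    }

    static final class LcmMessage {
        final MessageType type;
        final int slot;        // which line card slot the message concerns
        final String payload;

        LcmMessage(MessageType type, int slot, String payload) {
            this.type = type;
            this.slot = slot;
            this.payload = payload;
        }
    }

    /** A node-manager-side dispatcher for messages arriving from LCMs. */
    static void dispatch(LcmMessage msg) {
        switch (msg.type) {
            case ALARM:
                System.out.println("fault from slot " + msg.slot + ": " + msg.payload);
                break;
            case DISCOVERY:
                System.out.println("slot " + msg.slot + " announced itself: " + msg.payload);
                break;
            default:
                System.out.println(msg.type + " response from slot " + msg.slot);
        }
    }

    public static void main(String[] args) {
        dispatch(new LcmMessage(MessageType.DISCOVERY, 4, "card type TP ingress"));
        dispatch(new LcmMessage(MessageType.ALARM, 4, "loss of signal"));
    }
}
```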
In a fifth aspect, it is an object of the invention to provide a node manager architecture for managing an optical switch in an optical communications network. It is an object of the invention to provide a node manager architecture that has an interface for communicating with multiple line card managers at an optical switch. It is an object of the invention to provide a node manager architecture that has processing resources for enabling applications in an optical network, such as protection switching, node-to-node signaling, command line interface, routing, database client, and an agent to a network management system that manages the node manager.
It is an object of the invention to provide a node manager architecture that has processing resources for enabling system services in an optical network, such as resource/connection management, event management, software version management, flash memory interface, and a node watchdog capability. It is an object of the invention to provide a node manager architecture that manages multiple types of line cards at an optical switch, including transport ingress cards, transport egress cards, optical access ingress cards, optical access egress cards, optical signaling cards, performance monitoring cards, and access line interface cards. In one aspect of the invention, a management architecture for use at an optical switch in an optical communication network includes a node manager for managing a number of line card managers at the optical switch. Each line card manager manages an associated line card at the optical switch. The node manager includes: (a) an interface for communicating with the line card managers, and (b) processing resources for enabling at least one of applications and system services.
In a sixth aspect, the invention provides a hierarchical network management system in which a plurality of NMS managers, each responsible for different portions or aggregations of a communications network, are logically arranged in a tree structure. The NMS managers are further organized into various sub-groups. The NMS managers within each sub-group monitor the status of one another in order to detect when one of them is no longer operational. If this happens, the remaining operational NMS managers of the sub-group collectively elect one of them to assume the responsibility of the non-operational NMS manager. The NMS is thus "self-healing" in the sense that one NMS manager can dynamically, without operator intervention, assume the responsibilities of another NMS manager. Preferably, the NMS managers within a given sub-group are duplicate copies of one another, i.e., provide the same functionality. To effect this, it is preferred to group together NMS managers that are siblings, i.e., situated at the same level in the hierarchy and having a common parent. Furthermore, the NMS managers within a sub-group preferably maintain, or have access to, state information pertaining to all portions or aggregations of the communications network under the collective administration of all the NMS managers within the sub-group. This allows the elected, replacement NMS manager to assume quickly and readily the responsibility for the non-operational NMS manager, including information aggregation functions. According to one aspect of the invention, a method for managing a network is provided. The method includes organizing a plurality of network management system (NMS) managers in a hierarchy. The hierarchy has at least a root level and a leaf level, wherein each non-leaf-level NMS manager supervises at least one child NMS manager and each leaf-level NMS manager supervises one or more network nodes. When a determination is made that a given NMS manager has ceased to operate, another NMS manager within the hierarchy is elected to assume the responsibility of the non-operating NMS manager.
In the embodiments described below, each NMS manager receives and stores state information pertaining to the network nodes supervised by sibling NMS managers, thereby synchronizing network state information amongst siblings. An event service is the preferred mechanism for carrying this out. However, in each group of sibling NMS managers, only one NMS manager within the group aggregates state information pertaining to all nodes supervised by the group to the common parent NMS manager. In order to determine the existence of a non-operating NMS manager, a heartbeat process is preferably established between at least two NMS manager siblings. In the preferred heartbeat process, each NMS manager transmits a "hello" message to every other NMS manager in the same sibling group.
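The heartbeat and election behavior can be sketched as follows. The patent prescribes the periodic "hello" exchange and the election of a replacement, but not a particular election rule; the lowest-surviving-identifier rule below, and all names (SiblingGroup, onHello), are assumptions made purely for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative heartbeat/election sketch for a group of sibling NMS managers.
// The lowest-id election rule is an assumption, not taken from the patent.
public final class SiblingGroup {

    private final String selfId;
    private final long timeoutMillis;
    private final Map<String, Long> lastHello = new HashMap<>(); // sibling -> last "hello"

    SiblingGroup(String selfId, long timeoutMillis) {
        this.selfId = selfId;
        this.timeoutMillis = timeoutMillis;
    }

    /** Each sibling periodically broadcasts "hello"; receipt is recorded here. */
    void onHello(String siblingId, long nowMillis) {
        lastHello.put(siblingId, nowMillis);
    }

    /** Returns a sibling whose heartbeat has lapsed, or null if all are alive. */
    String detectFailedSibling(long nowMillis) {
        for (Map.Entry<String, Long> e : lastHello.entrySet())
            if (nowMillis - e.getValue() > timeoutMillis) return e.getKey();
        return null;
    }

    /** Election sketch: the surviving sibling with the lowest id takes over. */
    boolean shouldAssumeResponsibility(String failedId, long nowMillis) {
        String lowest = selfId;
        for (Map.Entry<String, Long> e : lastHello.entrySet()) {
            boolean alive = nowMillis - e.getValue() <= timeoutMillis;
            if (alive && !e.getKey().equals(failedId) && e.getKey().compareTo(lowest) < 0)
                lowest = e.getKey();
        }
        return lowest.equals(selfId);
    }

    public static void main(String[] args) {
        SiblingGroup m = new SiblingGroup("M1.1.1", 5_000);
        m.onHello("M1.1.2", 9_000);
        m.onHello("M1.1.3", 2_000); // then goes quiet
        String failed = m.detectFailedSibling(10_000);
        System.out.println(failed + " failed; assume its role: "
                + m.shouldAssumeResponsibility(failed, 10_000));
    }
}
```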
In a seventh aspect, the invention provides protection-switching capability for an all-optical communications network. In one aspect of the invention, an optical communications network is provided having at least one optical switch connected to a network access device. The optical switch includes a first line card disposed along a first communications path over which a first optical signal is transmitted. The first line card is connected to the network access device. A second line card is disposed along a second communications path over which a second optical signal is transmitted. An inter-card communication channel is provided for bridging the second communications path to the first line card.
To provide a completely redundant protection path, i.e., "1+1 protection", the first and second optical signals preferably carry the same data. In this case the first optical signal may be referred to as the "working path" signal and the second signal may be referred to as the "protection path" signal. The optical switch further includes a first line card manager associated with the first line card for monitoring the quality of the working path signal and delivering the protection path signal to the network access device when the working path signal degrades below a specified threshold. In this manner, any protection switching can be rapidly carried out, as only the first line card is involved in the switchover.
To provide for greater utilization of bandwidth, wherein a protection path may carry pre-emptable traffic, i.e., "1:1 protection", the first optical signal carries high priority traffic and the second optical signal carries low priority traffic. To effect protection switching, the optical switch preferably includes a Node Manager for controlling the first and second line card managers. A first line card manager is associated with the first line card for monitoring the quality of the first optical signal. In the event of poor quality, the first line card manager notifies the Node Manager, which consequently instructs the second line card manager to transit the high priority traffic using the second optical signal and second path. The Node Manager also communicates the fault to a network management system, which informs the egress switch. Similar actions are initiated at the egress switch so as to provide traffic carried over the second optical signal to user equipment connected to a line card disposed along the first communication path.
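The two protection modes lend themselves to a small sketch. In 1+1 protection the decision is purely local to the first line card, since the protection signal is already bridged to it; the Java below illustrates that local switchover. Names and the quality scale are hypothetical.

```java
// Illustrative 1+1 protection switchover, local to the first line card
// (hypothetical names; quality is a normalized 0..1 signal-quality figure).
public final class OnePlusOneProtection {

    enum Path { WORKING, PROTECTION }

    private final double minAcceptableQuality;
    private Path selected = Path.WORKING;

    OnePlusOneProtection(double minAcceptableQuality) {
        this.minAcceptableQuality = minAcceptableQuality;
    }

    /** Called by the first line card manager with the monitored working-path quality. */
    Path onWorkingPathQuality(double quality) {
        if (quality < minAcceptableQuality && selected == Path.WORKING) {
            selected = Path.PROTECTION; // deliver the bridged protection signal instead
            System.out.println("working path degraded to " + quality + "; switched to protection");
        }
        return selected;
    }

    public static void main(String[] args) {
        OnePlusOneProtection lcm = new OnePlusOneProtection(0.9);
        lcm.onWorkingPathQuality(0.97); // stays on the working path
        lcm.onWorkingPathQuality(0.42); // local switchover, no Node Manager involvement
    }
}
```

In the 1:1 case, by contrast, the switchover involves the Node Manager and both line card managers, and the fault is further communicated to the network management system so that the egress switch can take the mirror-image actions.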
According to another aspect of the invention, an optical communications network is provided, comprising: an ingress optical switch having a first ingress line card and first and second egress line cards connected to a first switch fabric, wherein the first switch fabric bridges an ingress optical signal onto the first and second egress line cards, thereby providing first and second copies of the optical signal; transit optical switches for transiting the first and second copies of the optical signal across the network over first and second optical paths; and an egress optical switch having second and third ingress line cards and a third egress line card connected to a second switch fabric, wherein said second and third ingress line cards respectively receive the first and second optical signal copies and the second switch fabric cross-connects only one of the optical signal copies to the third egress line card.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is illustrated in the figures of the accompanying drawings, which are meant to be exemplary, and not limiting, in which like references are intended to refer to like or corresponding structures, and in which:
FIG. 1 illustrates an all-optical network architecture in accordance with the present invention;
FIG. 2 illustrates a logical node architecture in accordance with the present invention;
FIG. 3 illustrates an optical transport switching system hardware architecture in accordance with the present invention;
FIG. 4 illustrates a control architecture for an OTS in accordance with the present invention;
FIG. 5 illustrates a single Node Manager architecture in accordance with the present invention;
FIG. 6 illustrates a Line Card Manager architecture in accordance with the present invention;
FIG. 7 illustrates an OTS configuration in accordance with the present invention;
FIG. 8 illustrates backplane Ethernet hubs for an OTS in accordance with the present invention;
FIG. 9 illustrates the operation of a control architecture and Optical Signaling Module in accordance with the present invention;
FIG. 10 illustrates an optical switch fabric module in accordance with the present invention;
FIG. 11 illustrates a Transport Ingress Module in accordance with the present invention;
FIG. 12 illustrates a Transport Egress Module in accordance with the present invention;
FIG. 13 illustrates an Optical Access Ingress module in accordance with the present invention;
FIG. 14 illustrates an Optical Access Egress module in accordance with the present invention;
FIG. 15 illustrates a Gigabit Ethernet Access Line Interface module in accordance with the present invention;
FIG. 16 illustrates a SONET OC-12 Access Line Interface module in accordance with the present invention;
FIG. 17 illustrates a SONET OC-48 Access Line Interface module in accordance with the present invention;
FIG. 18 illustrates a SONET OC-192 Access Line Interface module in accordance with the present invention;
FIG. 19 illustrates an Optical Performance Monitoring module in accordance with the present invention;
FIG. 20 illustrates a physical architecture of an OTS chassis in an OXC configuration in accordance with the present invention;
FIG. 21 illustrates a physical architecture of an OTS chassis in an OXC/OADM configuration in accordance with the present invention;
FIG. 22 illustrates a physical architecture of an OTS chassis in an ALI configuration in accordance with the present invention;
FIG. 23 illustrates a full wavelength cross-connect configuration in accordance with the present invention;
FIG. 24 illustrates an optical add/drop multiplexer configuration with compliant wavelengths in accordance with the present invention;
FIG. 25 illustrates an optical add multiplexer configuration in accordance with the present invention;
FIG. 26 illustrates an optical drop multiplexer configuration in accordance with the present invention;
FIG. 27 illustrates an example data flow through optical switches, including add/drop multiplexers and wavelength cross-connects, in accordance with the present invention;
FIG. 28 illustrates Gigabit Ethernet networks accessing a managed optical network in accordance with the present invention;
FIG. 29 illustrates SONET networks accessing a managed optical network in accordance with the present invention;
FIG. 30 illustrates a hierarchical optical network structure in accordance with the present invention;
FIG. 31 illustrates a system functional architecture in accordance with the present invention;
FIG. 32 illustrates network signaling in accordance with the present invention;
FIGs. 33(a)-(c) illustrate a normal data flow, a data flow with line protection, and a data flow with path protection, respectively, in accordance with the present invention;
FIG. 34 illustrates a high-level Network Management System functional architecture in accordance with the present invention;
FIG. 35 illustrates a Network Management System hierarchy in accordance with the present invention;
FIG. 36 illustrates a Node Manager software architecture in accordance with the present invention;
FIG. 37 illustrates a Protection/Fault Manager context diagram in accordance with the present invention;
FIG. 38 illustrates a UNI Signaling context diagram in accordance with the present invention;
FIG. 39 illustrates an NNI Signaling context diagram in accordance with the present invention;
FIG. 40 illustrates an NMS Database/Server Client context diagram in accordance with the present invention;
FIG. 41 illustrates a Routing context diagram in accordance with the present invention;
FIG. 42 illustrates an NMS Agent context diagram in accordance with the present invention;
FIG. 43 illustrates a Resource Manager context diagram in accordance with the present invention;
FIG. 44 illustrates an Event Manager context diagram in accordance with the present invention;
FIG. 45 illustrates a Software Version Manager context diagram in accordance with the present invention;
FIG. 46 illustrates a Configuration Manager context diagram in accordance with the present invention;
FIG. 47 illustrates a Logger context diagram in accordance with the present invention;
FIG. 48 illustrates a Flash Interface context diagram in accordance with the present invention;
FIG. 49 illustrates a Line Card Manager software process diagram in accordance with the present invention;
FIG. 50 is a schematic diagram of the control architecture of an optical switch according to an embodiment of the invention;
FIG. 51 shows a working and a protection path configured over a reference optical network for the purposes of network managed protection;
FIG. 52 shows the architecture, in relevant part, of an optical switch in accordance with another embodiment of the invention for establishing the working and protection paths of FIG. 51;
FIG. 53 shows a high priority path and a low-priority, pre-emptable path configured over a reference optical network for the purposes of network managed protection;
FIG. 54 shows the architecture, in relevant part, of an optical switch in accordance with one embodiment of the invention for establishing the high and low priority paths of FIG. 53;
FIG. 55 is a system block diagram of a line card which supports the protection features shown in FIGs. 53-54;
FIG. 56 is a datapath diagram showing how SONET framing components of two line cards (one being illustrated in FIG. 55) are connected in order to support the protection features shown in FIGs. 53-54 on ingress;
FIG. 57 is a datapath diagram showing how SONET framing components of two line cards (one being illustrated in FIG. 55) are connected in order to support the protection features shown in FIGs. 53-54 on egress;
FIG. 58 is a schematic diagram showing how a protection and a working path may be created over an optical network using switch fabric bridging, in accordance with another embodiment of the invention;
FIG. 59A illustrates a responsibility hierarchy for managers of a multi-tiered network management system (NMS) in accordance with an embodiment of the invention;
FIG. 59B illustrates a hardware and software architecture for implementing the multi-tiered NMS shown in FIG. 59A;
FIG. 59C illustrates an alternative hardware and software architecture for implementing the multi-tiered NMS shown in FIG. 59A;
FIG. 59D illustrates a revised responsibility hierarchy for the multi-tiered NMS shown in FIG. 59A when one of the NMS managers thereof ceases to function;
FIG. 59E illustrates a control hierarchy employed in an optical switching network;
FIG. 59F illustrates a model of an event service;
FIG. 59G illustrates an event topic tree;
FIG. 59H illustrates software components employed in an optical network switch; and
FIG. 59I illustrates a software architecture for an NMS manager in accordance with an embodiment of the invention geared towards optical switching networks.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Embodiments of the present invention will now be described in detail with reference to the accompanying figures. A Glossary is provided at the end of the following description, wherein certain terms and acronyms are defined. In sections 1-26 of the detailed description, a novel optical switching network is described. Section 27 in particular discusses protection and restoration features. Section 28 in particular discusses a generic embodiment of a hierarchical network management system (NMS) which is applicable to a wide variety of network types. Section 29 in particular discusses an implementation of the generic embodiment in relation to the novel optical switching network, which, when configured as a large complex network producing vast amounts of telemetric data, is particularly well-suited to benefit from the increased reliability provided by the present invention.
1. OTS Overview
The present invention provides for an all-optical configurable switch (i.e., network node or OTS) that can operate as an optical cross-connect (OXC) (also referred to as a wavelength cross-connect, or WXC), which switches individual wavelengths, and/or an optical add/drop multiplexer (OADM). The switch is typically utilized with a Network Management System, also discussed herein.
As an all-optical switching system, the switch of the present invention operates independently of bit rates and protocols. Typically, the all-optical switching between inputs and outputs of the OTS is achieved through the use of Micro-Electro-Mechanical System (MEMS) technology. Moreover, this optical switch offers an on-demand λ switching capability to support, e.g., either SONET ring based or mesh configurations. The OTS also provides the capability to achieve an optimized network architecture since multiple topologies, such as ring and mesh, can be supported. Thus, the service provider can tailor its network design to best meet its traffic requirements. The OTS also enables flexible access interconnection supporting SONET circuits, Gigabit Ethernet (GbE) (IEEE 802.3z), conversion from non-ITU compliant optical wavelengths, and ITU-compliant wavelength connectivity. With these interfaces, the service provider is able to support a broad variety of protocols and data rates and ultimately provide IP services directly over DWDM without SONET equipment. The OTS further enables a scalable equipment architecture, provided by a small form factor and modular design, such that the service provider can minimize its floor space and power requirements and thereby incrementally expand its network within the same footprint.
FIG. 1 illustrates an all-optical "metro core" network architecture that utilizes the present invention in accordance with a variety of configurations. OTS equipment is shown within the optical network boundary 105, and is designed for deployment both at the edges of a metro core network (when operating as an OADM), and internally to a metro core network (when operating as an OXC). For example, the OTSs at the edge of the network include OADMs 106, 108, 110 and 112, and the OTSs internal to the network include WXCs 115, 117, 119, 121 and 123. Each OTS is a node of the network.
External devices such as SONET and GbE equipment may be connected directly to the optical network 105 via the edge OTSs. For example, SONET equipment 130 and 134, and GbE equipment 132 connect to the network 105 via the OADM 106. GbE 136 and SONET 138 equipment connect to the network via OADM 108. GbE 140 and SONET equipment 142 and 144 connect to the network via OADM 110. SONET equipment 146 connects to the network via OADM 112. The network architecture may also support other network protocols as indicated, such as IP, MPLS, ATM, and Fibre Channel operating over the SONET and GbE interfaces.
FIG. 2 depicts a logical node architecture 200, which includes an Optical Switch Fabric 210, Access Line Interface 220, Optical Access Ingress 230, Optical Access Egress 235, Transport Ingress 240, and Transport Egress 245 functions. These functions (described below) are implemented on respective line cards, also referred to as optical circuit cards or circuit packs or packages, that are receivable or deployed in a common chassis. Moreover, multiple line cards of the same type may be used at an OTS to provide scalability as the bandwidth needs of the OTS grow over time.
A Node Manager 250 and Optical Performance Monitoring (OPM) module 260 may also be implemented on respective line cards in the chassis. Node Manager 250 typically communicates with the rest of the OTS 200 through a 100 BaseT Ethernet internal LAN distributed to every line card and module and terminated by the Line Card Manager module 270 residing on every line card. Alternatively, a selectable 10/100 BaseT connection may be used. OPM 260 is responsible for monitoring optical hardware of OTS 200, and typically communicates its findings to the Node Manager 250 via the internal LAN and the OPM's LCM. The Node Manager may process this performance information to determine whether the hardware is functioning properly. In particular, based on the OPM information, the Node Manager may apply control signals to the line cards, switch over to backup components on the line cards or to backup line cards, set alarms for the NMS, or take other appropriate action.
Each of the line cards, including the OPM 260 and the line cards that carry the optical signals in the network (shown within the dashed line 265), is controlled by a respective LCM 270. The Node Manager 250 may control the line cards, and receive data from the line cards, via the LCMs 270.
Being interfaced to all other cards of the OTS 200 via the internal LAN and LCMs, the Node Manager 250 is responsible for the overall management and operation of the OTS 200, including signaling, routing, and fault protection. The responsibility for telemetry of all control and status information is delegated to the LCMs. There are also certain local functions that are completely abstracted away from the Node Manager and handled solely by the LCMs, such as laser failsafe protection. Whenever a light path is created between OTSs, the Node Manager 250 of each OTS performs the necessary signaling, routing and switch configuration to set up the path. The Node Manager 250 also continuously monitors switch and network status such that fault conditions can be detected, isolated, and repaired. The OPM 260 may be used in this regard to detect a loss of signal or poor quality signal, or to measure signal parameters such as power, at any of the line cards using appropriate optical taps and processing circuitry. Three levels of fault recovery may be supported: (1) Component Switchover - replacement of failed switch components with backups; (2) Line Protection - rerouting of all light paths around a failed link; and (3) Path Protection - rerouting of individual light paths affected by a link or node failure. Component Switchover is preferably implemented within microseconds, while Line Protection is preferably implemented within milliseconds of failure, and Path Restoration may take several seconds.
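The three recovery levels and their target time scales can be summarized compactly; the enum below is an illustrative restatement of the list above, with hypothetical names.

```java
// Illustrative summary of the three supported fault-recovery levels.
public final class FaultRecovery {

    enum Level {
        COMPONENT_SWITCHOVER("replace failed switch components with backups", "microseconds"),
        LINE_PROTECTION("reroute all light paths around a failed link", "milliseconds"),
        PATH_PROTECTION("reroute individual light paths affected by a failure", "seconds");

        final String action;
        final String targetTimeScale;

        Level(String action, String targetTimeScale) {
            this.action = action;
            this.targetTimeScale = targetTimeScale;
        }
    }

    public static void main(String[] args) {
        for (Level level : Level.values())
            System.out.printf("%-21s %-52s ~%s%n", level, level.action, level.targetTimeScale);
    }
}
```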
The all-optical switch fabric 210 is preferably implemented using MEMS technology. However, other optical switching components may be used, such as lithium niobate modules, liquid crystals, bubbles and thermo-optical switching technologies. MEMS devices have arrays of tiny mirrors that are aimed in response to an electrostatic control signal. By aiming the mirrors, any optical signal from an input fiber (e.g., of a transport ingress or optical access ingress line card) can be routed to any output fiber (e.g., of a transport egress or optical access egress line card).
The Optical Access Network 205 may support various voice and data services, including switched services such as telephony, ISDN, interactive video, Internet access, videoconferencing and business services, as well as multicast services such as video. Service provider equipment in the Optical Access Network 205 can access the OTS 200 in two primary ways. Specifically, if the service provider equipment operates with wavelengths that are supported by the OTS 200 of the optical network, such as selected OC-n ITU-compliant wavelengths, it can directly interface with the Optical Access (OA) ingress module 230 and egress module 235. Alternatively, if the service provider equipment is using a non-compliant wavelength, e.g., in the 1310 nm range, or GbE (or 10 GbE), then it accesses the OTS 200 via an ALI card 220. Advantageously, since a GbE network can be directly bridged to the OTS without a SONET Add/Drop Multiplexer (ADM) and a SONET/SDH terminal, this relatively more expensive equipment is not required, so service provider costs are reduced. That is, typically, legacy electronic infrastructure equipment is required to connect with a SONET terminator and add/drop multiplexer (ADM). In contrast, these functions are integrated in the OTS of the present invention, resulting in good cost benefits and a simpler network design. In other words, because the GbE physical layer is a substitute for the SONET physical layer, and because there is no reason to stack two physical layers, the SONET equipment would be redundant. Table 1 summarizes the access card interface parameters associated with each type of OA and ALI card, in some possible implementations.
Table 1

Protocol              Data Rate   Card Type   External Ports                    Internal Ports
ITU-Compliant SONET   OC-12       OA          8 OC-12                           8 OC-12
                      OC-48       OA          8 OC-48                           8 OC-48
                      OC-192      OA          8 OC-192                          8 OC-192
Non-Compliant SONET   OC-12       ALI         8 OC-12 Input, 8 OC-12 Output     2 OC-48 Output, 2 OC-48 Input
                      OC-48       ALI         8 OC-48 Input, 8 OC-48 Output     2 OC-192 Output, 2 OC-192 Input
                      OC-192      ALI         2 OC-192 Input, 2 OC-192 Output   2 OC-192 Output, 2 OC-192 Input
Gigabit Ethernet      1 Gbps      ALI         8 GbE Input, 8 GbE Output         2 OC-48 Output, 2 OC-48 Input
The OTS can interface with all existing physical and data-link layer domains (e.g., ATM, IP router, Frame Relay, TDM, and SONET/SDH/STM systems) so that legacy router and ATM systems can connect to the OTS. The OTS solution also supports new on-demand services, e.g., audio/video on demand, with cost-effective bandwidth and efficient bandwidth utilization.
The OTS 200 can be configured, e.g., for metro and long haul configurations. In one possible implementation, the OTS can be deployed in up to four-fiber rings, up to four-fiber OADMs, or four-fiber point-to-point connections. Each OTS can be set to add/drop any wavelength, with a maximum of sixty-four channels of local connections.
2. Hardware Architecture
FIG. 3 illustrates an OTS hardware architecture in accordance with the present invention. The all-optical switch fabric 210 may include eight 8x8 switch elements, the group of eight being indicated collectively as 211. Each of the eight switch elements is responsible for switching an optical signal from each of eight sources to any one of eight outputs.
Generally, selected outputs of the TP ingress cards 240 and OA ingress cards 230 are optically coupled by the switching fabric cards 210 to selected inputs of the TP egress cards 245 and/or OA egress cards 235. The optical coupling between cards and the fabric occurs via an optical backplane, which may comprise optical fibers. Preferably, the cards are optically coupled to the optical backplane when they are inserted into their slots in the OTS bay such that the cards can be easily removed and replaced. For example, MTP™-type connectors (Fiber Connections, Inc.) may be used. This allows easy troubleshooting and upgrading of cards. Moreover, each line card may connect to an RJ-45 connector when inserted into its slot.
Moreover, each TP ingress and OA ingress card has appropriate optical outputs for providing optical coupling to inputs of the switch fabric via the optical backplane. Similarly, each TP egress and OA egress card has appropriate optical inputs for providing optical coupling to outputs of the switch fabric via the optical backplane. With appropriate control signals, the switching fabric is controlled to optically couple selected inputs and outputs of the switch fabric card, thereby providing selective optical coupling between outputs of the TP ingress and OA ingress cards, and the inputs of the TP egress and OA egress cards. As a result, the optical signals carried by the outputs of TP ingress and OA ingress cards can be selectively switched (optically coupled) to the inputs of the TP egress and OA egress cards.
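The selective coupling described above can be pictured, per 8x8 switch element, as a small cross-connect map. The sketch below is illustrative only (the real control signal is an electrostatic command that aims a MEMS mirror), and its names are hypothetical.

```java
import java.util.Arrays;

// Illustrative cross-connect control for one 8x8 switch element.
public final class SwitchElement8x8 {

    private static final int PORTS = 8;
    private final int[] outForIn = new int[PORTS]; // input port -> output port; -1 = unconnected

    SwitchElement8x8() {
        Arrays.fill(outForIn, -1);
    }

    /** Couples input port 'in' to output port 'out', one signal per output. */
    void connect(int in, int out) {
        if (in < 0 || in >= PORTS || out < 0 || out >= PORTS)
            throw new IllegalArgumentException("ports must be in 0.." + (PORTS - 1));
        for (int existing : outForIn)
            if (existing == out)
                throw new IllegalStateException("output " + out + " is already driven");
        outForIn[in] = out; // in hardware: aim the mirror for this input at this output
    }

    void disconnect(int in) {
        outForIn[in] = -1;
    }

    public static void main(String[] args) {
        SwitchElement8x8 element = new SwitchElement8x8();
        element.connect(2, 5); // e.g., a TP ingress wavelength to a TP egress input
        System.out.println("input 2 -> output " + element.outForIn[2]);
    }
}
```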
In the example configuration shown in FIG. 3, the transport ingress module 240 includes four cards 302, 304, 306 and 308, each of which includes a wavelength division demultiplexer (WDD), an example of which is the WDD 341, for recovering the OSC, which may be provided as an out-of-band signal with the eight multiplexed data signals (λ's).
An optical amplifier (OA), an example of which is the OA 342, amplifies the optical transport signal multiplex, and a demux, an example of which is the demux 343, separates out each individual wavelength (optical transport signal) in the multiplex. Each individual wavelength is provided to the switch fabric 210 via the optical backplane, then switched by one of the modules 211 thereat. The outputs of the switch fabric 210 are provided to the optical backplane, then received by either a mux, an example of which is the mux 346, of one of the transport egress cards 320, 322, 324 or 326, or an 8x8 switch of one of the OA egress cards 235. At each of the TP egress cards, the multiplexer output is amplified at the associated OA, and the input OSC is multiplexed with the data signals via the WDM. The multiplexer output at the WDM can then be routed to another OTS via an optical link in the network. At the OA egress cards 235, each received signal is amplified and then split at 1x2 dividers/splitters to provide corresponding outputs either to the faceplate of the OA egress cards for compliant wavelengths, or to the ALI cards via the optical backplane for non-compliant wavelengths. Note that only example light paths are shown in FIG. 3, and that for clarity, all possible light paths are not depicted.
The ALI cards perform wavelength conversion for interfacing with access networks that use optical signals that are non-compliant with the OTS. As an example, the ALI card receives non-compliant wavelength signals, converts them to electrical signals, multiplexes them, and generates a compliant wavelength signal. Two optical signals that are output from the ALI card 220 are shown as inputs to one of the OA In cards 230 to be transmitted by the optical network, and two optical signals that are output from one of the OA Eg cards 235 are provided as inputs to the ALI cards 220. N total inputs and outputs (e.g., N=4, two inputs and two outputs) may be input to, or output from, the ALI cards 220.
The OSC recovered at the TP ingress cards, namely OSC_OUT, is processed by the Optical Signaling Module (OSM) of the OTS using an O-E conversion. The OSM generates a signaling packet that contains signaling and route information, and passes it on to the Node Manager. The OSM is discussed further below, particularly in conjunction with FIG. 9. If the OSC is intended for use by another OTS, it is regenerated by the OSM for communication to another OTS and transmitted via, e.g., OSC_IN. Or, if the OSC is intended for use only by the present OTS, there is no need to relay it to a further node. Alternatively, OSC_IN could also represent a communication that originated from the present OTS and is intended for receipt, e.g., by another node. For a group of nodes operating under the control of an NMS, typically only one of the nodes acts as a gateway to the shared NMS. The other nodes of the group communicate with the NMS via the gateway node, and communication by the other nodes with their gateway node is typically also accomplished via the OSC.
FIG. 4 illustrates a control architecture for an OTS in accordance with the present invention. The OTS implements the lower two tiers of the above-described three-tier control architecture, typically without a traditional electrical backplane or shelf controller. Moreover, the OTS has a distributed architecture, which maximizes system reliability and stability. The OTS does not use a parallel backplane bus such as Compact PCI or VME because such a bus is both a single point of failure and a shared element whose contention poses a performance risk. Instead, the invention preferably provides a distributed architecture wherein each line card of the OTS is outfitted with at least one embedded controller, referred to as an LCM, on at least one daughter board, with the daughter boards communicating with the node's single Node Manager via a LAN technology such as 100 BaseT Ethernet and Core Embedded Control Software. In particular, the LCM may use Ethernet layer 2 (L2) datagrams for communication with the Node Manager, with the Node Manager being the highest-level processor within an individual OTS. The Node Manager and all OTS line cards plug into a 100 BaseT port on one or more hubs via RJ-45 connectors to allow electronic signaling between the LCMs and the Node Manager via an internal LAN at the OTS. In a particular embodiment, two twenty-four port hubs are provided to control two shelves of line cards in an OTS bay, and the different hubs are connected by crossover cables. For example, FIG. 4 depicts LCMs 410 and associated line cards 420 as connected to hubs 415 and 418, which may be 24-port 100 BaseT hubs. The line cards may perform various functions as discussed, including Gigabit Ethernet interface (a type of ALI card), SONET interface (a type of ALI card), TP ingress, TP egress, optical access ingress, optical access egress, switching fabric, optical signaling, and optical performance monitoring.
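The patent does not give the LCM-to-Node Manager frame handling in code; the following is a minimal sketch of sending one raw L2 datagram over the internal 100 BaseT LAN, assuming a Linux-style AF_PACKET socket as a stand-in for the product's embedded network stack, and an experimental EtherType value (0x88B5) chosen purely for illustration.

```c
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define OTS_ETHERTYPE 0x88B5  /* assumed: IEEE "local experimental" EtherType */

/* Send one L2 datagram from an LCM to the Node Manager. With SOCK_DGRAM,
 * the kernel builds the Ethernet header from the sockaddr_ll fields. */
int send_lcm_datagram(const char *ifname, const unsigned char dst_mac[6],
                      const void *payload, size_t len)
{
    int fd = socket(AF_PACKET, SOCK_DGRAM, htons(OTS_ETHERTYPE));
    if (fd < 0)
        return -1;

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family   = AF_PACKET;
    addr.sll_protocol = htons(OTS_ETHERTYPE);
    addr.sll_ifindex  = (int)if_nametoindex(ifname);
    addr.sll_halen    = 6;
    memcpy(addr.sll_addr, dst_mac, 6);

    ssize_t n = sendto(fd, payload, len, 0,
                       (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
    return n < 0 ? -1 : 0;
}
```

Using L2 datagrams on a dedicated hub avoids any dependence on IP configuration inside the node, which suits the shared-medium backplane LAN described above.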
Moreover, while only one Node Manager is required, the primary Node Manager 250 can be provided with a backup Node Manager 450 for redundancy. Each Node Manager has access to the non-volatile data on the LCMs, which helps in reconstructing the state of a failed Node Manager. The backup Node Manager receives copies of the primary Node Manager's non-volatile store, and listens to all traffic (e.g., messages from the LCMs and the primary Node Manager) on all hubs in the OTS to determine if the primary has failed. Various schemes may be employed for determining whether the primary Node Manager is functioning properly, e.g., by determining whether the primary Node Manager 250 responds to a message from an LCM within a specified amount of time.
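A minimal sketch of such a timeout-based failure detector follows, assuming the backup can snoop both request and reply frames on the shared hubs; the 500 ms threshold and the helper names are illustrative assumptions, not values from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

#define PRIMARY_TIMEOUT_MS 500  /* assumed response threshold */

typedef struct {
    uint64_t last_lcm_request_ms;   /* newest LCM-to-primary message seen */
    uint64_t last_primary_reply_ms; /* newest primary-to-LCM reply seen   */
} snoop_state_t;

/* Called for every frame the backup snoops on the shared-media hubs. */
void on_snooped_frame(snoop_state_t *s, bool from_primary, uint64_t now_ms)
{
    if (from_primary)
        s->last_primary_reply_ms = now_ms;
    else
        s->last_lcm_request_ms = now_ms;
}

/* Periodic check: an outstanding LCM request with no primary reply for
 * longer than the threshold is treated as a primary failure. */
bool primary_has_failed(const snoop_state_t *s, uint64_t now_ms)
{
    bool request_pending = s->last_lcm_request_ms > s->last_primary_reply_ms;
    return request_pending &&
           (now_ms - s->last_lcm_request_ms) > PRIMARY_TIMEOUT_MS;
}
```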
In particular, the hubs 415 and 418 are connected to one another via a crossover 417, and additional hubs may also be connected in this manner. See also FIG. 8. In terms of the OTS bay, every shelf connects to a 100 BaseT hub. This use of an Ethernet backplane provides both hot-swappability of line cards (i.e., removal and insertion of line cards into the OTS bay while optical and/or electrical connections are active), and totally redundant connections between the line cards and both Node Managers. Moreover, if the node is a gateway, its primary Node Manager communicates with the NMS, e.g., via a protocol such as SNMP, using 100 BaseT ports 416, 419.
Alternatively, selectable 10/100 BaseT may be used. RJ-45 connectors on the faceplate of the Node Manager circuit pack may be used for this purpose.
The Node Manager and Line Card Manager are described further, below.
3. Node Manager Module
FIG. 5 illustrates a single Node Manager architecture 250 in accordance with the present invention. An OTS with primary and backup Node Managers would have two of the architectures 250.
The Node Manager executes all application software at the OTS, including network management, signaling, routing, and fault protection functions, as well as other features.
As discussed above, each Node Manager circuit pack has a 100 BaseT network connection to a backplane hub that becomes the shared medium for each LCM in the OTS. Additionally, for a gateway OTS node, another 100 BaseT interface to a faceplate is provided for external network access. The Node Manager Core Embedded Software performs a variety of functions, including: i) issuing commands to the LCMs, ii) configuring the LCMs with software, parameter thresholds or other data, iii) reporting alarms, faults or other events to the NMS, and iv) aggregating the information from the LCMs into a node-wide view that is made available to applications software at the Node Manager. This node-wide view, as well as the complete software for each LCM controller, is stored in flash memory 530. The node- or switch-wide view may provide information regarding the status of each component of the switch, and may include, e.g., performance information, configuration information, software provisioning information, switch fabric connection status, presence of alarms, and so forth. Since the node's state and the LCM software are stored locally to the node, the Node Manager can rapidly restore a swapped line card to the needed configuration without requiring a remote software download, e.g., from the NMS.
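As a rough illustration of what such a node-wide view might look like in memory, the following C structures collect the categories the text lists; the field names, sizes, and card count are assumptions for illustration, not the patent's layout.

```c
#include <stdint.h>

#define MAX_LINE_CARDS 24   /* assumed: one OTS bay in the example configuration */

typedef struct {
    uint8_t  card_type;        /* TP_In, TP_Eg, OA_In, OA_Eg, OSF, OSM, ... */
    uint8_t  slot, shelf, bay; /* physical position of the card             */
    uint8_t  sw_version[3];    /* software provisioned on the LCM           */
    uint8_t  alarm_flags;      /* bitmask of active alarms/faults           */
    uint16_t perf_counters[8]; /* sampled monitored parameters              */
    uint32_t fabric_conn_map;  /* switch fabric connection status           */
} card_view_t;

typedef struct {
    uint32_t    sequence;      /* bumped on every aggregation pass          */
    card_view_t cards[MAX_LINE_CARDS];
} node_wide_view_t;
```

Persisting a record of this kind in the Node Manager's flash is what allows a swapped line card to be restored without a remote download, as the passage above notes.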
The Node Manager is also responsible for node-to-node communications processing. All signaling messages bound for a specific OTS are sent to the Node Manager by that OTS's optical signaling module. The OSM, which has an associated LCM, receives the OSC wavelength from the Transport Ingress module. The incoming OSC signal is converted from optical to electrical, and received as packets by the OSM. The packets are sent to the Node Manager for proper signaling setup within the system. On the output side, outgoing signaling messages are packetized and converted into an optical signal of, e.g., 1310 nm or 1510 nm, by the OSM, and sent to the Transport Egress module for transmission to the next-hop OTS. The Node Manager configures the networking capabilities of the OSM, e.g., by providing the OSM with appropriate software for implementing a desired network communication protocol.
The Node Manager may receive remote software downloads from the NMS to provision itself and the LCMs. The Node Manager distributes each LCM's software via the OTS's internal LAN, which is preferably a shared medium LAN. Each LCM may be provisioned with only the software it needs for managing the associated line card type. Or, each LCM may be provisioned with multi-purpose software for handling any type of line card, where the appropriate software and/or control algorithms are invoked after an LCM identifies the line card type it is controlling (e.g., based on the LCM querying its line card or identifying its slot location in the bay).
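The second option, a self-configuring multi-purpose load, might dispatch on the discovered card type roughly as in the sketch below; the enumeration and handler names are hypothetical, since the patent states only that the load configures itself from the card's identity.

```c
#include <stdio.h>

typedef enum { CARD_TP_IN, CARD_TP_EG, CARD_OA_IN, CARD_OA_EG,
               CARD_OSF, CARD_OSM, CARD_OPM, CARD_ALI } card_type_t;

/* Placeholder control loops for the broad card families. */
static void run_transport_control(void) { puts("transport control loop"); }
static void run_fabric_control(void)    { puts("fabric control loop");    }
static void run_ali_control(void)       { puts("ALI control loop");       }

/* Invoked once the LCM has identified its line card during discovery. */
void lcm_configure(card_type_t t)
{
    switch (t) {
    case CARD_TP_IN: case CARD_TP_EG:
    case CARD_OA_IN: case CARD_OA_EG: run_transport_control(); break;
    case CARD_OSF:                    run_fabric_control();    break;
    case CARD_ALI:                    run_ali_control();       break;
    default:                          puts("generic control"); break;
    }
}
```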
In one possible implementation, the Node Manager uses a main processor 505, such as the 200 MHz MPC8255 or MPC8260 (Motorola PowerPC microprocessor, available from Motorola Corp., Schaumburg, Illinois), with an optional plug-in module 510 for a higher-power plug-in processor 512, which may be a RISC CPU such as the 400 MHz MPC755. These processors simultaneously support Fast Ethernet, 155 Mbps ATM and 256 HDLC channels. However, the invention is not limited to use with any particular model of microprocessor. Moreover, while the plug-in module 510 is optional, it is intended to provide for a longer useful life for the Node Manager circuit pack by allowing the processor to be upgraded without changing the rest of the circuit pack.
The Node Manager architecture is intended to be flexible in order to meet a variety of needs, such as being a gateway and/or OTS controller. The architecture is typically provided with a communications module front end that has two Ethernet interfaces: 1) the FCC2 channel 520, which is a 100 BaseT port to service the internal 100 BaseT Ethernet hub on the backplane 522, and, for gateway nodes, 2) the FCC3 channel 525, which is a 100 BaseT port to service the NMS interface to the outside. The flash memory 530 may be 128 MB organized in a x16 array, such that it appears as the least significant sixteen data bits on the bus 528. See the section entitled "Flash Memory Architecture" for further information regarding the flash memory 530.
The bus 528 may be an address and data bus, such as Motorola's PowerPC 60x. The SDRAM 535 may be 256 MB organized by sixty-four data bits. An EPROM 532 may store start-up instructions that are loaded into the processor 505 or 512 via the bus 528 during an initialization or reset of the Node Manager. A PCMCIA Flash disk 537 also communicates with the bus 528, and is used for persistent storage, e.g., for storing long-term trend data and the like from monitored parameters of the line cards. A warning light may be used so that the Flash disk is not inadvertently removed while data is being written to it. Preferably, to prevent tampering, the non-volatile memory resources, such as the Flash disk, are designed so that they cannot be removed while the Node Manager card is installed on the OTS backplane.
Additionally, there is an SDRAM 540 (e.g., having 4 MB) on the local bus 545, which is used to buffer packets received on the communications module front end of the main processor 505. The local bus 545 may carry eighteen address bits and thirty-two data bits.
Flexibility is promoted if the core microprocessor 505 (such as is possible with Motorola's PowerPC 603e core inside the MPC8260) can be disabled, and the plug-in processor 512 installed on the bus 528. Such a plug-in processor 512 can be further assisted with an L2 backside cache 514, e.g., having 256 KB. It is expected that a plug-in processor can more than double the performance of the Node Manager. As an example, the plug-in processor 512 may be any future type of RISC processor that operates on the 60x bus. The processor 505 yields the bus to, and may also align its peripherals to, the more powerful plug-in processor 512. In addition to providing a general-purpose path for upgradability of the Node Manager, the plug-in processor is also useful, e.g., for the specific situation where the OTS has had line cards added to it and the main processor 505 is therefore no longer able to manage its LCMs at a rate compatible with the desired performance characteristics of the optical networking system. A serial port 523 for debugging may also be created.
In summary, the Node Manager provides NMS interface and local node management, as well as providing signaling, routing and fault protection functions (all using the Node Manager's application software), provides real-time LCM provisioning, receives monitored parameters and alarms/faults from each LCM, aggregates monitored parameters and alarms/faults from each line card into a node-wide view, processes node-to-node communication messages, provides remote software download capability, distributes new software to all LCMs, is expandable to utilize a more powerful CPU (through plug-in processor 512), such as of RISC design, is built on a Real-Time Operating System (RTOS), provides intra-OTS networking support (e.g., LAN connectivity to LCMs), and provides node-to-node networking support.
4. Line Card Manager Module
FIG. 6 illustrates a Line Card Manager architecture 600 in accordance with the present invention. As discussed above, the LCM modules may be provided as daughter boards/plug-in modules that plug into the respective line cards to control each line card in the OTS. The LCMs offload local processing tasks from the Node Manager and provide continued line card support without any interruptions in the event the Node Manager fails (assuming no backup is available, or the backup has also failed), or the communication path to the Node Manager is not available. That is, even if the control path is lost, the user data paths are still active. The line card state and data are stored until the Node Manager is back in service. This is made possible by the loosely coupled distributed architecture, which allows the LCM to act independently of the Node Manager whenever necessitated by failure of the Node Manager. The parameters which keep the line card active are kept locally on the LCM, thus allowing the line card to act independently of the Node Manager for a time. The Node Manager can be replaced while the OTS continues to function.
The line cards which an LCM 600 may control include any of the following: switch fabric, TP_In, TP_Eg, OA_In, OA_Eg, OSM, OPM, or ALI cards (acronyms defined in Glossary). The LCM daughter board is built around an embedded controller/processor 605, and contains both digital and analog control and monitoring hardware. LCMs typically communicate with the Node Manager via the OTS internal LAN. The LCM receives commands from the Node Manager, such as for configuring the line cards, and executes the commands via digital and analog control signals that are applied to the associated line card. The LCM gathers digital and analog feedback and monitored parameter values from its line card, and may periodically send this information to the Node Manager, e.g., if requested by the Node Manager. The LCM also passes events such as faults/alarms and alerts to the Node Manager as they occur. These values and all provisioning data are kept in an in-memory snapshot of the line card status.
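A minimal sketch of one such monitor/report cycle follows; the hardware and LAN helper functions, the channel count, and the alarm bit mask are all assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define ALARM_MASK 0x00FF0000u  /* assumed positions of fault/alarm bits */

typedef struct {
    uint16_t analog[16];   /* A/D channels sampled from the line card */
    uint32_t status_bits;  /* digital status read from the FPGA       */
    uint32_t alarm_bits;   /* latched faults/alarms                   */
} lc_snapshot_t;

/* Assumed hardware/LAN primitives, not defined by the patent. */
extern uint16_t adc_read(int chan);
extern uint32_t fpga_read_status(void);
extern void     lan_send_event(uint32_t alarms);
extern bool     lan_poll_requested(void);
extern void     lan_send_snapshot(const lc_snapshot_t *s);

void lcm_monitor_cycle(lc_snapshot_t *snap)
{
    for (int ch = 0; ch < 16; ch++)
        snap->analog[ch] = adc_read(ch);

    uint32_t status = fpga_read_status();
    uint32_t new_alarms = status & ~snap->status_bits & ALARM_MASK;
    snap->status_bits = status;

    if (new_alarms) {                 /* events are pushed as they occur */
        snap->alarm_bits |= new_alarms;
        lan_send_event(new_alarms);
    }
    if (lan_poll_requested())         /* bulk data only when requested   */
        lan_send_snapshot(snap);
}
```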
Preferably, the LCM stores this snapshot and a copy of the software that is currently running the LCM in its non-volatile (e.g., flash) memory 610 to allow rapid rebooting of the LCM. Specifically, when the LCM powers up, it loads the software from the non-volatile memory 610 into SDRAM 625, and then begins to execute. This avoids the need for the LCM to download the software from the Node Manager via the OTS internal LAN each time it starts up, which saves time and avoids unnecessary traffic on the internal OTS LAN. The software logic for all line cards is preferably contained in one discrete software load which has the ability to configure itself based on the identity of the attached line card as disclosed during the discovery phase of LCM initialization. The type of line card may be stored on an EEPROM on the line card. The LCM queries the EEPROM through the I2C bus to obtain the identifier.
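A sketch of that discovery-phase query might look as follows, assuming a conventional 24Cxx-style EEPROM at I2C address 0x50 with the type byte at offset 0; the patent specifies only that the identifier is read from an EEPROM over the I2C bus, so the address and offset are assumptions.

```c
#include <stdint.h>

#define EEPROM_I2C_ADDR  0x50  /* typical 24Cxx address, assumed  */
#define TYPE_BYTE_OFFSET 0x00  /* assumed location of the ID byte */

/* Assumed low-level I2C primitives supplied by the LCM's board support. */
extern int i2c_write(uint8_t addr, const uint8_t *buf, int len);
extern int i2c_read(uint8_t addr, uint8_t *buf, int len);

int read_line_card_type(uint8_t *type_out)
{
    uint8_t offset = TYPE_BYTE_OFFSET;
    if (i2c_write(EEPROM_I2C_ADDR, &offset, 1) < 0) return -1; /* set pointer */
    if (i2c_read(EEPROM_I2C_ADDR, type_out, 1) < 0) return -1; /* read ID     */
    return 0;
}
```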
See the section entitled "Flash Memory Architecture" for further information regarding the flash memory 610.
The LCM can also receive new software from the Node Manager via the OTS internal LAN and store it in the flash memory 610. It is desirable to have sufficient non-volatile memory at the LCM to store two copies of the software, i.e., a current copy and a backup copy. In this way, a new software version, e.g., one that provides new features, can be stored at the LCM and tested to see if it works properly. If not, the backup copy (rollback version) of the previous software version can be used. The Node Manager delegates most of the workload for monitoring and controlling the individual line cards to each line card's local LCM. This reduces the central point of failure threat posed by a centralized architecture, increasing the probability that the optical network can keep functioning even if levels of control above the LCM (i.e., the Node Manager or NMS) were to suffer a failure. Distributed architectures also scale better since, as each line card is added, at least one dedicated processor daughter board (i.e., the LCM) is added to control it. In one possible implementation, the controller 605 is the 200 MHz Motorola MPC8255 or MPC8260. However, the invention is not limited to use with any particular model of microprocessor. The controller 605 may have a built-in communications processor front end, which includes an Ethernet controller (FCC2) 615 that connects to the Node Manager via the internal switch LAN. In the embodiment shown, this connection is made via the line card using an RJ-45 connector. Other variations are possible.
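One common way to realize such a current-plus-rollback store is a two-bank scheme like the sketch below; the bank metadata and the trial/promote logic are illustrative assumptions, since the patent asks only for a current copy and a backup copy.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t version;
    bool     trial;      /* true until the image proves itself */
    bool     valid;
} bank_meta_t;

typedef struct {
    bank_meta_t bank[2];
    int         active;  /* bank booted on next reset */
} sw_store_t;

/* Stage a newly downloaded image into the inactive bank. */
void stage_new_image(sw_store_t *s, uint32_t version)
{
    int spare = 1 - s->active;
    s->bank[spare] = (bank_meta_t){ .version = version,
                                    .trial = true, .valid = true };
    s->active = spare;               /* boot the new image on trial */
}

/* After boot: confirm the trial image, or fall back to the old bank. */
void resolve_trial(sw_store_t *s, bool self_test_passed)
{
    bank_meta_t *cur = &s->bank[s->active];
    if (!cur->trial) return;
    if (self_test_passed) {
        cur->trial = false;          /* promote to current copy     */
    } else {
        cur->valid = false;          /* discard the bad image       */
        s->active = 1 - s->active;   /* roll back to backup copy    */
    }
}
```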
The flash memory 610 may be 128 MB organized in x16 mode, such that it appears as the least significant sixteen data bits on the bus 620, which may be Motorola's 60x bus. The SDRAM 625 may be 64 MB organized by sixty-four data bits. An A/D converter 635, such as the AD7891-1 (Analog Devices, Inc., Norwood, Mass.), includes a 16-channel analog multiplexer into a 12-bit A/D converter. A D/A converter 622, which may be an array of four "quad" D/A converters, such as MAX536's (Maxim Integrated Products, Inc., Sunnyvale, Calif.), provides sixteen analog outputs to a connector 640, such as a 240-pin Berg Mega-Array connector (Berg Electronics Connector Systems Ltd, Herts, UK). The LCMs and line cards preferably adhere to a standard footprint connection scheme so that it is known which pins of the connector are to be driven or read. Essentially, a telemetry connection is established between the LCM and the line card via the connector 640.
Advantageously, since the LCM can be easily removed from its line card instead of being designed into the line card, it can be easily swapped for an LCM with enhanced capabilities, e.g., greater processor speed and memory, for future upgrades.
The LCM daughter board removably connects to the associated line card via a connector 640. A serial port 645 for debugging may be added. For the MPC8255 or MPC8260, such a serial port 645 may be constructed from port D (SMC1). There is typically a 4 MB SDRAM 650 on the Local Bus 655, which is used to buffer packets received on the communications module front end of the controller 605. Port A 636 receives a latch signal. A serial bus known as a Serial Peripheral Interface (SPI) 606 is specialized for A/D and D/A devices, and is generated by the controller 605. It is a three-wire SPI for transmitted data, received data, and clock data that may be used with the more complicated line cards that have many registers and inputs/outputs. Examples of such more complicated line cards may be the OC-n and GbE ALI cards and the switching fabric line cards. Essentially, the SPI 606 provides an interface that allows a line card to communicate directly with the controller 605. The SPI 606 may carry analog signals to the line card via the D/A 622, or receive analog signals from the line card via the A/D 635. The FPGA 602 provides a 40-bit status read-only register for reading in signals from the line card, and a 32-bit read/write control register for reading/writing of control signals from/to the line card. These registers may be addressed via a GPIO on the connector 640. The FPGA 602 also receives an 8-bit line card ID tag that identifies the location of the line card within the OTS (i.e., slot, shelf and bay), since certain slots are typically reserved for certain line card types. The slot locations are digitally encoded for this purpose. Alternatively, or in addition, the type of line card could be identified directly regardless of the slot, shelf and bay, e.g., by using a serial number or other identifier stored on the line card and accessible to the LCM, e.g., via an I2C bus 604. This bus enables the communication of data between the controller 605 and the connector 640. In particular, the bus 604 may be part of a GPIO that receives information from a line card, including the bay, shelf and slot, that identifies the line card's position at the OTS.
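As a small illustration, decoding the 8-bit location tag might look like the following; the 2/2/4 bit split between bay, shelf and slot is purely an assumed layout, since the patent does not specify one.

```c
#include <stdint.h>

typedef struct { uint8_t bay, shelf, slot; } card_location_t;

card_location_t decode_location_tag(uint8_t tag)
{
    card_location_t loc;
    loc.bay   = (tag >> 6) & 0x03;  /* assumed: top 2 bits   */
    loc.shelf = (tag >> 4) & 0x03;  /* assumed: next 2 bits  */
    loc.slot  =  tag       & 0x0F;  /* assumed: low 4 bits   */
    return loc;
}
```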
The controller 605 may receive a hard reset signal from the Node Manager, e.g., via the Ethernet controller (FCC2) 615, which clears all registers and performs a cold boot of the system software on the LCM, and a soft reset signal, which performs a warm boot that does not interfere with register contents. The soft reset is preferred for preserving customer cross-connect settings.
To fulfill the mission of the Node Manager as an abstraction/aggregation of the LCM primitives, the LCM is preferably not accessible directly from the customer LAN/WAN interfaces. An EPROM 612, e.g., having 8 KB, may store instructions that are loaded into the processor 605 via the bus 620 during an initialization or reset of the LCM.
The microcontroller 605 typically integrates the following functions: a 603e core CPU (with its non-multiplexed 32-bit address bus and bi-directional 64-bit data bus), a number of timers (including watchdog timers), chip selects, an interrupt controller, DMA controllers, SDRAM controls, and asynchronous serial ports. The second fast communication channel (FCC2) 100 BaseT Ethernet controller is also integrated into the Communications Processor Module functions of the controller 605. The microcontroller may be configured for 66 MHz bus operation, 133 MHz CPM operation, and 200 MHz 603e core processor operation. In summary, the line card manager module provides local control for each line card, executes commands received from the Node Manager, provides digital and/or analog control and monitoring of the line card, sends monitored parameters and alarms/faults of the line card to the Node Manager, provides an embedded controller with sufficient processing power to support an RTOS and multi-tasking, and provides intra-OTS networking support.
5. OTS Configuration
FIG. 7 illustrates an OTS configuration in accordance with the present invention. The OTS 700 includes an optical backplane 730 that uses, e.g., optical fibers to couple optical signals to the different optical circuit cards (line cards). Preferably, specific locations/slots of the chassis are reserved for specific line card types according to the required optical inputs and outputs of the line card. Moreover, the optical backplane 730 includes optical connections to optical links of the optical network, and, optionally, to links of one or more access networks.
Furthermore, while one of each line card type is shown, as noted previously, more than one line card of each type is typically provided in an OTS configuration.
Each of the optical circuit cards (specifically, the LCMs of the cards) also communicates via a LAN with the Node Manager to enable the control and monitoring of the line cards.
The optical inputs and outputs of each card type are as follows:
ALI - inputs from an access network link and OA egress cards; outputs to an access network link and OA ingress cards;
OA ingress cards - inputs from an access network link and ALI cards; outputs to switching fabric cards and OPM cards;
OA egress cards - inputs from switching fabric cards; outputs to ALI cards, OPM cards, and an access network link;
TP ingress cards - inputs from an optical network link; outputs to switching fabric cards and OPM cards;
TP egress cards - inputs from switching fabric cards; outputs to an optical network link and OPM cards;
Switch fabric cards - inputs from OA ingress cards and TP ingress cards; outputs to OA egress cards and TP egress cards;
OSM - inputs from TP ingress cards; outputs to TP egress cards; and
OPM - inputs from TP ingress cards, TP egress cards, OA ingress cards, and OA egress cards (may monitor additional cards also).
6. Interconnected Backplane Ethernet Hubs
FIG. 8 illustrates backplane Ethernet hubs for an OTS in accordance with the present invention. The OTS may use standard Ethernet hub assemblies, such as 24-port hubs 830 and 840, to form the basis of inter-processor communication (i.e., between the Node Manager and the LCMs). Each hub assembly 830, 840 may have, e.g., twenty-four or more ports, whereas the corresponding shelf backplanes (815, 825, 835, 845, respectively) typically have, e.g., 6-8 ports. A number of connectors, two examples of which are denoted at 820, are provided to enable each line card to connect to a hub. The connectors may be RJ-45 connectors. The dashed lines denote a conceptual electrical connection from the connectors 820 to one or more of the hubs. Typically, each connector 820 is connected individually to a hub. For example, the connectors for shelf 1 (815) and shelf 3 (835) may connect to hub 830, while the connectors for shelf 2 (825) and shelf 4 (845) may connect to hub 840. Moreover, a crossover cable 842, which may be a cable such as 100 BaseT media, may connect the two hubs such that they are part of a common LAN. Other variations are possible. For example, a single hub may be used that is sized large enough to connect to each line card in the OTS bay. In this arrangement, the backup Node Manager 750 shadows the primary
Node Manager 250 by listening to all traffic on the internal OTS backplane hubs (the shared media LAN), to determine when the primary Node Manager ceases to operate. When such a determination is made, the backup Node Manager 750 takes over for the primary Node Manager 250.
7. Optical Signaling Module
FIG. 9 illustrates the operation of a control architecture and OSM in accordance with the present invention. The OSM provides an IP signaling network between switches for the interchange of signaling, routing and control messages. A gateway node 900 can interact with other networks, and includes an intra-product (internal to the OTS) LAN 905, which enables communication between the Node
Manager 910 and the LCMs, such as LCM 916 and the associated line card 915, ..., LCM 918 and the associated line card 917, and LCM 921 and the associated line card 920, which is an OSM. An example non-gateway node 950 similarly includes an intra-product LAN 955, which enables communication between its Node Manager and the LCMs, such as LCM 966 and the associated line card 965, ..., LCM 968 and the associated line card 967, and LCM 971 and the associated line card 970, which is an OSM.
The OSC wavelength from the Transport Ingress module is extracted and fed into the optical signaling module (OSM). For example, assume the network topology is such that the node A 900 receives the OSC first, then forwards it to node B 950. In this case, the extracted OSC wavelength from the OSM 920 is provided to the OSM 970. The incoming OSC wavelength from node A 900 is converted from optical to electrical and packetized by the OSM 970, and the packets are sent to the Node Manager 960 for proper signaling setup within the system. On the output side of Node B 950, outgoing signaling messages are packetized and converted into an optical signal by OSM 970 and sent to the
Transport Egress module for the next-hop OTS. Note that the OSC connection shown in FIG. 9 is logical, and that the OSC typically propagates from TP card to TP card, where it is added to TP_Eg by the outgoing OSM and extracted from TP_In by the inbound OSM.
FIG. 9 shows the inter-operation of the Node Manager, LCMs, and the OSM in the OTS. The interconnection of the NMS 901 with the OTS/node 900 via routers 904 and 906 is also shown. In particular, the node 900 communicates with the NMS 901 via a POP gateway LAN 902, an NMS platform 908 via an NMS LAN 909, and the routers 904 and 906. Thus, in addition to the OSC, which enables the NMS to provide optical signals to each node, an electrical signaling channel enables a gateway node to communicate with the NMS.
Each Node Manager at each OTS typically has three distinct network interfaces: 1) a 100 BaseT interface to the intra-OTS LAN, 2) a 100 BaseT interface to remote NMS platforms, and 3) an out-of-band optical signaling channel (OSC) for node-to-node communications. OTSs that act as gateways to the NMS, such as node A 900, may use the 100 BaseT interface, while non-gateway nodes, such as node B 950, need not have this capability. Advantageously, the service provider's LAN is separated from the OTS LAN for more efficient traffic handling. Layer 3 (L3) IP routing over the OSC provides nodes without gateway connectivity access to nodes that have such gateway capability. L3 here refers to the third layer of the OSI model, i.e., the network layer. Moreover, there are three different levels of messaging-related software on the OTS Node Manager. First, an NMS connects to application software on the Node Manager through the Node Manager NMS agent. Second, an "S" (services) message interface provides an abstraction layer for connecting Node Manager application software to a collection of Core Embedded Control software services, on the Node Manager, that serves to aggregate information sent to, or received from, the LCMs. Third, a "D" (driver) message interface connects the aggregating software of the Node Manager to the LCMs.
8. Optical Switch Fabric Module
FIG. 10 illustrates an optical switch fabric module architecture 210 in accordance with the present invention. The OSF module 210 may be designed using 8x8 MEMS modules/chips 1010 as switching elements. The switching is done in the optical domain, and no O/E/O conversions are involved. All inputs to a switching element carry one wavelength (i.e., one optical signal as opposed to a multiplex of optical signals), thus enabling wavelength-level switching. Moreover, each optical output of every switching element goes through a variable optical attenuator (VOA) 1050, which may be part of the switch fabric card, to equalize the power across all the wavelengths being subsequently multiplexed into one fiber. The switch fabric 210 is designed in a modular and scalable fashion so that it can be easily configured from a small-scale system to a large-scale system depending on the system configuration requirements.
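Putting the crosspoint and VOA together, a control step for one output might look like the sketch below; the driver functions and the proportional adjustment are assumptions, as the patent specifies only that each output's VOA is used to equalize power across the multiplexed wavelengths.

```c
#include <stdint.h>

/* Assumed low-level drivers for the MEMS element, OPM taps, and VOAs. */
extern void   mems_connect(int in_port, int out_port);
extern double opm_read_power_dbm(int out_port);
extern void   voa_set_atten_db(int out_port, double atten_db);

#define TARGET_DBM -3.0   /* assumed common per-channel target power */
#define STEP_GAIN   0.5   /* assumed damping for the adjustment loop */

static double atten[8];   /* current attenuation per output, in dB */

void equalize_output(int in_port, int out_port)
{
    mems_connect(in_port, out_port);

    /* Excess power above the target is absorbed by raising attenuation. */
    double err = opm_read_power_dbm(out_port) - TARGET_DBM;
    atten[out_port] += STEP_GAIN * err;
    if (atten[out_port] < 0) atten[out_port] = 0;  /* a VOA cannot amplify */
    voa_set_atten_db(out_port, atten[out_port]);
}
```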
The switch fabric 210 may receive optical inputs from an input module 1070 such as a transport ingress card and/or an optical access ingress card. The switch fabric provides the corresponding optical outputs to designated ports of an output module 1080, such as a transport egress card and/or an optical access egress card. Note that, for clarity of depiction in FIG. 10, only example light paths are shown.
In summary, the optical switch module provides wavelength-level switching, individually controllable signal attenuation of each output, interconnection to other modules via the optical fiber backplane, power level control management for ensuring that the power of the signal that is output between switches is acceptable, and path loss equalization for ensuring that all channels have the same power. The optical switch module may also use an inherently very low cross-talk switch fabric technology such as MEMS, typically with a 2-D architecture, have a modular architecture for scalability with 8x8 switch modules, and provide digital control of the MEMS fabric with electrostatic actuation.
9. Optical Transport Modules
The optical transport module (or "TP" module) is a multiplexed multi-wavelength (per optical fiber) optical interface between OTSs in an optical network. For configuration and network management, this transport module supports in-band control signals, which are within the EDFA window of amplification, e.g., 1525-1570 nm, as well as out-of-band control signals. For the out-of-band channel, the OTS may support a 1510 nm channel interface. The OTS uses two primary types of transport modules: Transport Ingress 240 (FIG. 11) and Transport Egress 245 (FIG. 12).
In summary, the optical transport module provides demultiplexing of the OSC signal (ingress module), multiplexing of the OSC signal (egress module), optical amplification (ingress and egress modules), which may use low-noise optical amplification and gain flattening techniques, demultiplexing of the multi-wavelength transport signal (ingress module), and multiplexing of the individual wavelength signals (egress module). The optical transport module may also provide dynamic suppression of optical power transients of the multi-wavelength signal. This suppression may be independent of the number of surviving signals (i.e., the signals at the transport ingress module that survive at the transport egress module - some signals may be egressed due to drop multiplexing), and independent of the number of added signals (i.e., the signals added at the transport egress module that are not present at the transport ingress module - these signals may be added using add multiplexing). The optical transport module may also provide dynamic power equalization of individual signals, wavelength connection to the optical switch fabric via the optical backplane, and pump control.
FIG. 11 shows the architecture for the Transport Ingress module 240. The module includes a demultiplexer 1105 to recover the OSC, an EDFA pre-amplifier 1110, an EDFA power amplifier 1115, a demultiplexer 1120 to demultiplex the eight wavelengths from the input port, and pump lasers 1122 and 1124 (e.g., operating at 980 nm).
Additionally, a filter 1107 filters the OSC before it is provided to the OSM. A coupler 1108 couples a tapped pre-amplified optical signal to the OPM, and to a PIN diode 1109 to provide a first feedback signal. In particular, the PIN diode outputs a current that represents the power of the optical signal. The OPM may measure the power of the optical signal (as well as other characteristics such as wavelength registration), typically with more accuracy than the PIN diode. The tap used allows monitoring of the multi-wavelength signal and may be a narrowband coupler with a low coupling ratio to avoid depleting too much signal power out of the main transmission path. Similarly, a coupler 1126 couples a tapped amplified optical signal to the OPM, and to a PIN diode 1127 to provide a second feedback signal. Moreover, the pump laser 1122 is responsive to a pump laser driver 1130 and a TEC driver 1132. Similarly, the high-power pump laser 1124 is responsive to a pump laser driver 1140 and a TEC driver 1142. Both pump laser drivers 1130 and 1140 are responsive to an optical transient and amplified spontaneous emission noise suppression function 1150, which in turn is responsive to the feedback signals from the PIN diodes 1109 and 1127, and control signals from the LCM 1170. A DC conversion and filtering function may be used to provide local DC power. The LCM 1170 provides circuit parameters and control by providing control bits and receiving status bits, performs A/D and D/A data conversions as required, and communicates with the associated Node Manager via an Ethernet or other LAN.
In particular, the LCM 1170 may provide control signals, e.g., for pump laser current control, laser on/off, laser current remote control, TEC on/off, and TEC remote current control. The LCM 1170 may receive status data regarding, e.g., pump laser current, backface photocurrent, pump laser temperature, and TEC current.
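A minimal sketch of one step of such a transient-suppression feedback loop follows, holding the EDFA gain near a target by trimming pump current from the two PIN readings; the target gain, loop gain, and current limits are illustrative assumptions, since the patent identifies only the feedback sources (PIN diodes 1109 and 1127) and the controlled element (the pump laser drivers).

```c
/* Assumed telemetry and actuator primitives. */
extern double pin_in_power_dbm(void);   /* PIN diode before the EDFA */
extern double pin_out_power_dbm(void);  /* PIN diode after the EDFA  */
extern void   pump_set_current_ma(double ma);

#define TARGET_GAIN_DB 20.0   /* assumed EDFA gain setpoint           */
#define KP              2.0   /* assumed proportional gain, mA per dB */
#define PUMP_MIN_MA    10.0
#define PUMP_MAX_MA   300.0

static double pump_ma = 100.0; /* assumed starting bias */

void suppress_transient_step(void)
{
    double gain_db = pin_out_power_dbm() - pin_in_power_dbm();
    pump_ma += KP * (TARGET_GAIN_DB - gain_db);  /* raise pump if gain sags */
    if (pump_ma < PUMP_MIN_MA) pump_ma = PUMP_MIN_MA;
    if (pump_ma > PUMP_MAX_MA) pump_ma = PUMP_MAX_MA;
    pump_set_current_ma(pump_ma);
}
```

Holding gain rather than output power roughly constant is one way the amplified surviving channels can remain unaffected when channels are added or dropped, consistent with the transient suppression described above.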
FIG. 12 shows the architecture of the Transport Egress module 245, which includes a multiplexer 1205 to multiplex the eight wavelengths from the switch fabric, an EDFA pre-amplifier 1210, an EDFA power amplifier 1215, a multiplexer 1220 to multiplex the eight wavelengths and the OSC, and pump lasers 1222 and 1224 (e.g., operating at 980 nm).
Analogous to the transport ingress module 240, the transport egress module 245 also includes a coupler 1208 that couples a tapped pre-amplified optical signal to the OPM module, and to a PIN diode 1209 to provide a first feedback signal, e.g., of the optical signal power. Similarly, a coupler 1226 couples a tapped amplified optical signal to the OPM module, and to a PIN diode 1227 to provide a second feedback signal. Moreover, the pump laser 1222 is responsive to a pump laser driver 1230 and a TEC driver 1232. Similarly, the high-power pump laser 1224 is responsive to a pump laser driver 1240 and a TEC driver 1242. Both pump laser drivers 1230 and 1240 are responsive to an optical transient and amplified spontaneous emission noise suppression function 1250, which in turn is responsive to feedback signals from the PIN diodes 1209 and 1227, and the LCM 1270. A DC conversion and filtering function may be used to provide local DC power.
The LCM 1270 operates in a similar manner as discussed in connection with the LCM 1170 of the TP ingress module.
10. Optical Access Modules
The optical access module 230 provides an OTS with a single wavelength interface to access networks that use wavelengths that are compliant with the optical network of the OTSs, such as ITU-grid compliant wavelengths. Therefore, third-party existing or future ITU-grid wavelength compliant systems (e.g., GbE router, ATM switch, and Fibre Channel equipment) can connect to the OTS. The optical access modules are generally of two types: Optical Access Ingress 230 (FIG. 13) for ingressing (inputting) one or more signals from an access network, and Optical Access Egress 235 (FIG. 14) for egressing (outputting) one or more signals to an access network. The ITU grid specifies the minimum spacing and the actual wavelengths of the individual wavelengths in a WDM system.
Various functions and features provided by the optical access modules include: optical amplification, connection to the optical switch fabric to route the signal for its wavelength provisioning, ITU-Grid wavelength based configuration, reconfiguration at run-time, direct connectivity for ITU-grid based wavelength signals, local wavelength switching, and direct wavelength transport capability.
FIG. 13 shows the architecture of the Optical Access Ingress module 230, which includes EDFAs (EDFA-1,...,EDFA-8) 1350, 2x1 switches 1310, and an 8x8 optical (e.g., MEMS) switch 1360.
In particular, each 2x1 switch receives a compliant wavelength (λ) from the faceplate and from the output of an ALI card via the optical backplane. In a particular example, eight compliant wavelengths from the outputs of four ALI cards are received via the optical backplane. The LCM 1370 provides a control signal to each switch to output one of the two optical inputs to an associated EDFA.
The LCM 1370 operates in a similar manner as discussed in connection with the TP ingress and egress modules.
Taps 1390 are provided for each of the signals input to the switch 1360 to provide monitoring points to the OPM via the optical backplane. Similarly, taps 1395 are provided for each of the output signals from the switch 1360 to obtain additional monitoring points for the OPM via the optical backplane.
In particular, the performance of the optical signals is monitored, and a loss of signal detected. Each wavelength passes through the optical tap 1390 and a 1x2 optical splitter that provides outputs to: (a) an 8x1 optical coupler to provide a signal to the OPM via the optical backplane, and (b) a PIN diode for loss of signal detection by the LCM 1370. The OPM is used to measure the OSNR and for wavelength registration. The wavelengths at the taps 1395 are provided to an 8x1 optical coupler to provide a signal to the OPM via the optical backplane. The optical taps, optical splitters and 8x1 optical coupler are passive devices.
FIG. 14 shows the architecture of the Optical Access Egress module 235. The module 235 includes EDFAs (EDFA-1,...,EDFA-8) 1450, 1x2 switches 1470, and an 8x8 optical (e.g., MEMS) switch 1420.
In particular, the optical switch 1420 receives eight optical inputs from a switch fabric module 210. Taps 1410 and 1490 provide monitoring points for each of the inputs and outputs, respectively, of the switch 1420 to the OPM via the optical backplane. The optical signals from the switch fabric are monitored for performance and loss of signal detection as discussed in connection with the Optical Access Ingress module 230. The LCM 1472 provides control signals to the switches 1470 for outputting eight compliant wavelengths to the faceplate, and eight compliant wavelengths to the input of four ALI cards via the optical backplane. The LCM 1472 operates in a similar manner as discussed previously.
11. Access Line Interface Modules
This O/E/O convergent module is a multi-port single wavelength interface between the switching system and legacy access networks using non-compliant wavelengths, e.g., around 1300 nm. The ALI module/card may be provided as either a GbE interface module 220a (FIG. 15) or a SONET OC-n module. For example, FIG. 16 shows the ALI module configured as an OC-12 module 220b, FIG. 17 shows the ALI module configured as an OC-48 module 220c, and FIG. 18 shows the ALI module configured as an OC-192 module 220d. Other OC-n speeds may also be supported. In FIGs 15-18, the solid lines denote transport data flow, and the dashed lines denote control data flow. Referring to FIG. 15, the GbE module 220a provides dual data paths, each of which accepts four GbE signals and multiplexes them to a single OC-48 signal. In the other direction, the module accepts an OC-48 signal and demultiplexes it into four GbE signals in each of the two paths.
The GbE module 220a includes SONET framers 1510 and 1520 that handle aggregation and grooming from each GbE port. The SONET framers may use the Model S4083 or Yukon chips from Advanced Micro Circuits Corporation (AMCC) of Andover, Massachusetts. The module 220a aggregates two or more GbE lines into each SONET framer 1510, 1520, which support OC-48 and OC-192 data rates. The module 220a also performs wavelength conversion to one of the ITU-grid wavelengths. For each of the modules 220a-220d, the desired ITU-grid wavelength is configured at initial path signaling setup.
For scheduling the use of OA bandwidth to support multiple legacy access networks, a variety of scheduling algorithms may be used when the aggregate bandwidth of the ALI inputs is greater than that of the ALI output. Such algorithms are typically performed by FPGAs 1540 and 1542, as illustrated in the sketch below. For example, one may use round robin scheduling, where the same bandwidth is allocated to each of the GbE interfaces, or weighted round robin scheduling, where relatively more bandwidth is allocated to specified GbE interfaces that have a higher priority. The MAC/PHY chips 1530, 1532, 1534, 1536 communicate with GbE transceivers, shown collectively at 1525, which in turn provide O-E and E-O conversion. MAC, or Media Access Control, refers to processing that is related to how the medium (the optical fiber) is accessed. The MAC processing performed by the chips may include frame formatting, token handling, addressing, CRC calculations, and error recovery mechanisms. The Physical Layer Protocol, or PHY, processing may include data encoding or decoding procedures, clocking requirements, framing, and other functions. The chips may be AMCC's Model S2060. The module 220a also includes FPGAs 1540, 1542, which are involved in signal processing, as well as a control FPGA 1544. The FPGAs 1540, 1542 may be the Model XCV300 from Xilinx Corp., San Jose, Calif. Optical transceivers (TRx) 1550 and 1552 perform O-E and E-O conversions. In an ingress mode, where optical signals from an access network are ingressed into an OTS via an ALI card, the MAC/PHY chips 1530-1536 receive input signals from the GbE transceivers 1525, and provide them to the associated FPGA 1540 or 1542, which in turn provides the data in an appropriate format for the SONET framers 1510 and 1520, respectively. The SONET framers 1510 or 1520 output SONET-compliant signals to the transceivers 1550 and 1552, respectively, for subsequent E-O conversion and communication to the OA_In cards 230 via the optical backplane.
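As a concrete illustration of the weighted round robin option, an arbiter over four GbE ingress queues sharing one OC-48 output might look like the following; the weights and the credit scheme are assumptions for illustration, as the patent names the algorithms but not their implementation.

```c
#include <stdbool.h>

#define NPORTS 4

/* Assumed queue primitives supplied by the surrounding datapath. */
extern bool gbe_queue_nonempty(int port);
extern void forward_frame(int port);  /* move one frame toward the framer */

static const int weight[NPORTS] = {4, 2, 1, 1}; /* assumed priorities */
static int credit[NPORTS];

/* One arbitration round: each port may send up to `weight` frames,
 * so higher-weight ports receive proportionally more bandwidth. */
void wrr_round(void)
{
    for (int p = 0; p < NPORTS; p++)
        credit[p] = weight[p];

    bool progress = true;
    while (progress) {
        progress = false;
        for (int p = 0; p < NPORTS; p++) {
            if (credit[p] > 0 && gbe_queue_nonempty(p)) {
                forward_frame(p);
                credit[p]--;
                progress = true;
            }
        }
    }
}
```

Setting all weights equal reduces this to the plain round robin case mentioned above.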
In an egress mode, where optical signals are egressed from the all-optical network to an access network via the OTS, SONET optical signals are received from the optical access egress cards 235 at the transceivers 1550 and 1552, where O-E conversion is performed, the results of which are provided to the SONET framers 1510 or 1520 for de-framing. The de-framed data is provided to the FPGAs 1540 and 1542, which provide the data in an appropriate format for the MAC/PHY chips 1530-1536. The MAC/PHY chips include FIFOs for storing the data prior to forwarding it to the GbE transceivers 1525.
The control FPGA 1544 communicates with the ALI card's associated LCM, and also provides control signals to the transceivers 1550 and 1552, FPGAs 1540 and 1542, SONET framers 1510 and 1520, and MAC/PHY chips 1530-1536. The FPGA 1544 may be the Model XCV150 from Xilinx Corp.
In summary, the ALI modules may include module types 220a-220c, having sixteen physical ports (eight input and eight output) of GbE, OC-12, or OC-48, and module type 220d, having four physical ports (two input and two output) of OC-192. Module 220d has four physical ports on either end. The ALI modules may support OC-12 to OC-192 bandwidths (or faster, e.g., OC-768), provide wavelength conversion, e.g., from the 1250-1600 nm range, to the ITU-compliant grid, support shaping and re-timing through O-E-O conversion, provide optical signal generation and amplification, and may use a wavelength channel sharing technique. See FIG. 28 for additional related information.
FIGs 16, 17 and 18 show the architecture of the OC-12, OC-48, and OC-192 access line interface cards, respectively. See also FIG. 29 for additional related information.
FIG. 16 shows an OC-12 module 220b, which aggregates four or more OC-12 lines into each SONET framer 1610 or 1620, which support OC-48 data rates. In an optical ingress mode, Quad PHY functions 1630 and 1640 each receive four signals from OC-12 interfaces via transceivers, shown collectively at 1625, and provide them to corresponding SONET framers 1610 and 1620, respectively. The SONET framers may use AMCC's Model S4082 or Missouri chips. The Quad PHY functions may each include four of AMCC's Model S3024 chips. The SONET framers 1610 and 1620 provide the data in frames. Since four OC-12 signals are combined, a speed of OC-48 is achieved. The framed data is then provided to optical transceivers 1650 and 1652 for E-O conversion, and communication to the optical access ingress cards 230 via the optical backplane. The SONET framers 1610 and 1620 may also communicate with adjacent ALI cards via an electrical backplane to receive additional input signals, e.g., to provide a capability for switch protection mechanisms. The electrical backplane may comprise a parallel bus that allows ALI cards in adjacent bays to communicate with one another. The electrical backplane may also have a component that provides power to each of the cards in the OTS bay. In an optical egress mode, optical signals are received by the transceivers 1650 and 1652 from the OA_Eg cards and provided to the SONET framers 1610 and 1620 following O-E conversion. The SONET framers 1610 and 1620 provide the signals in a format that is appropriate for the Quad PHY chips 1630 and 1640. The control FPGA 1644 communicates with the ALI card's associated
LCM, and also provides control signals to the transceivers 1650 and 1652, SONET framers 1610 and 1620, and Quad PHY chips 1630 and 1640.
FIG. 17 shows an OC-48 module 220c, which aggregates two or more OC-48 lines into each SONET framer 1710 and 1720, which support OC-192 data rates. In an optical ingress mode, PHY chips 1730, 1732, 1734 and 1736 each receive two signals from OC-48 interfaces via transceivers 1725 and provide them to corresponding SONET framers 1710 and 1720, respectively. The SONET framers 1710 and 1720 provide the signals in frames. Since four OC-48 signals are combined, a speed of OC-192 is achieved. The signals are then provided to optical transceivers 1750 and 1752 for E-O conversion, and for communication to optical access ingress cards 230 via the optical backplane. The SONET framers 1710 and 1720 may also communicate with adjacent ALI cards.
In an optical egress mode, optical signals are received by the optical transceivers 1750 and 1752 from optical access egress cards and provided to the SONET framers 1710 and 1720 following O-E conversion at the transceivers 1750, 1752. The SONET framers 1710 and 1720 provide the signals in a format that is appropriate for the OC-48 interfaces. The formatted signals are provided to the OC-48 interfaces via the PHY chips 1730-1736. Moreover, dedicated ports may be provided, which obviate MAC processing. The FPGA 1744 communicates with the ALI card's associated LCM, and also provides control signals to the transceivers 1750 and 1752, SONET framers 1710 and 1720, and PHY chips 1730-1736.
FIG. 18 shows an OC-192 module 220d, which provides one OC-192 line into each SONET framer 1810, 1820, which support OC-192 data rates. In an optical ingress mode, PHY chips 1830 and 1832 each receive a signal from OC-192 interfaces via transceivers 1825 and provide it to corresponding SONET framers 1810 and 1820, respectively, which provide the signals in frames. The signals are then provided to optical transceivers 1850 and 1852 for E-O conversion, and communicated to OA_In cards 230 via the optical backplane. The SONET framers 1810 and 1820 may also communicate with adjacent ALI cards.
In an optical egress mode, optical signals are received by the optical transceivers 1850 and 1852 from the OA_Eg cards and provided to the SONET framers 1810 and 1820 following O-E conversion. The SONET framers 1810 and 1820 provide the signals in a format that is appropriate for the OC-192 interfaces. The formatted signals are provided to the OC-192 interfaces via the PHY chips 1830 and 1832.
The FPGA 1844 communicates with the ALI card's associated LCM, and also provides control signals to the transceivers 1850 and 1852, SONET framers 1810 and 1820, and PHY chips 1830 and 1832.
12. Optical Performance Monitoring Module
Referring to FIG. 19, the Optical Performance Monitoring (OPM) module 260 is used for several activities. For example, it monitors the power level of a multi-wavelength signal, the power level of a single wavelength signal, and the optical signal-to-noise ratio (OSNR) of each wavelength. It also measures wavelength registration. Each incoming wavelength power variation should be less than 5 dB and each outgoing wavelength power variation should be less than 1 dB.
In particular, the OPM acts as an optical spectrum analyzer. The OPM may sample customer traffic and determine whether the expected signal levels are present. Moreover, the OPM monitoring is in addition to the LCM monitoring of a line card, and generally provides higher resolution readings. The OPM is connected through the optical backplane, e.g., using optical fibers, to strategic monitoring points on the line cards. The OPM switches from point to point to sample and take measurements. Splitters, couplers and other appropriate hardware are used to access the optical signals on the line cards.
The OPM module and signal processing unit 260 communicates with an LCM 1920, and receives monitoring data from all the line card monitoring points from a 1xN optical switch 1930 via the optical backplane of the OTS. A faceplate optical jumper 1912 allows the OPM module and signal processing unit 260 and the optical switch 1930 to communicate. A DC conversion and filtering function may be used to provide local DC power.
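One plausible shape for the OPM's point-to-point sweep through the 1xN monitor switch is sketched below; the point count, settling delay, and helper functions are assumptions, since the patent describes only the sampling behavior.

```c
#define N_MONITOR_POINTS 32   /* assumed size of the 1xN monitor switch */

/* Assumed drivers for the monitor switch and the OPM's measurement core. */
extern void   monitor_switch_select(int point);
extern void   settle_delay_ms(int ms);
extern double opm_measure_power_dbm(void);
extern double opm_measure_osnr_db(void);
extern void   report_to_lcm(int point, double p_dbm, double osnr_db);

void opm_sweep(void)
{
    for (int pt = 0; pt < N_MONITOR_POINTS; pt++) {
        monitor_switch_select(pt);    /* route one tap to the OPM        */
        settle_delay_ms(5);           /* assumed optical settling time   */
        double p    = opm_measure_power_dbm();
        double osnr = opm_measure_osnr_db();
        report_to_lcm(pt, p, osnr);   /* LCM forwards to the Node Mgr.   */
    }
}
```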
The LCM 1920 (like all other LCMs of a node) communicates with the Node Manager via the intra-node LAN. In summary, the OPM supports protection switching, fault isolation, and bundling, and measures optical power, the OSNR of all wavelengths (by sweeping), and wavelength registration. Moreover, the OPM, which preferably has a high sensitivity and large dynamic range, may monitor each wavelength, collect data relevant to optical devices on the different line cards, and communicate with the NMS (via the LCM and Node Manager). The OPM is preferably built with a small form factor.
13. OTS Chassis Configurations
The OTS is designed to be flexible, particularly as a result of its modular system design that facilitates expandability. The OTS is based on a distributed architecture where each line card has an embedded controller. The embedded controller performs the initial configuration, boots up the line card, and is capable of reconfiguring each line card without any performance impact on the whole system.
FIG. 20 illustrates a physical architecture of an OTS chassis or bay (receiving apparatus) in an OXC configuration 2000 in accordance with the present invention. There may be two OTS Node Manager circuit packs in each OTS node, namely a primary and a backup. Each of these circuit packs corresponds to Node Manager 250 of FIG. 2. In the example configuration of FIG. 20, a total of twenty-two circuit packs/line cards are provided in receiving locations or slots of the bay, with two of those twenty-two circuit packs being OTS Node Manager cards. An OTS is typically designed to provide a certain number of slots per shelf in its bay. Based upon the number of shelves, provision is made for up to a certain number of total circuit packs for the bay, such as, for example, twenty-four circuit packs in a bay, to allow for different configurations of OTS to be constructed. Communication to or from the bay is via the OTS Node Manager. FIG. 21 shows a fully configured OTS 2100 in an OXC/OADM configuration. FIG. 22 shows a fully configured ALI card bay 2200.
Optical cables in an OTS are typically connected through the optical backplane to provide a simple and comprehensive optical cable connectivity of all of the optical modules. In addition to providing for the LAN, the electrical backplane handles power distribution, physical board connection, and supports all physical realizations with full NEBS level 3 compliance. Note that since "hot" plugging of cards into an OTS is often desirable, it may be necessary to equip such cards with transient suppression on their power supply inputs to prevent the propagation of powering-up transients on the electrical backplane's power distribution lines. In one approach to managing the complexity of the optical backplane, locations or slots in the OTS bay may be reserved for specific types of line cards, since the required optical coupling of a line card depends on its function, and it is desirable to minimize the complexity of the optical fiber connections in the optical backplane. Each of the optical circuit cards also has a connection to an electrical backplane that forms the LAN for LCM-Node Manager communications. This connection is uniform for each card and may use an RJ-45 connector, which is an 8-wire connector used on network interface cards.
The OTS is flexible in that it can accommodate a mix of cards, including Optical Access and Transport line cards. Thus, largely generic equipment can be provided at various nodes in a network and then a particular network configuration can be remotely configured as the specific need arises. This simplifies network maintenance and provides great flexibility in reconfiguring the network. For example, the OTS may operate as a pure transport optical switch if all of its cards are transport cards (FIG. 20), e.g., eight transport ingress (TP_In) cards and eight transport egress (TP_Eg) cards. Moreover, each TP_In card has one input port/fiber and each TP_Eg card has one output port/fiber. In a particular implementation, each port/fiber supports eight wavelength-division multiplexed λ's, along with the OSC.
The OTS may operate as an Add/Drop terminal if it is configured with ALI, OA, and TP cards (FIGs 21 and 22). A wide range of configurations is possible depending on the mix of compliant and non-compliant wavelengths supported. For example, a typical configuration might include sixteen ALI cards for conversion of non-compliant wavelengths, four OA_In cards, four OA_Eg cards, four TP_In cards, and four TP_Eg cards. Note that since the ALI cards provide wavelength conversion in this embodiment, no wavelength conversion need be performed within the optical fabric. However, wavelength conversion within the optical fabric is also a possibility as the switch fabric technology develops.
Moreover, the OTS is scalable since line cards may be added to the spare slots in the bay at a later time, e.g., when bandwidth requirements of the network increase. Furthermore, multiple OTS bays can be connected together to further expand the bandwidth-handling capabilities of the node and/or to connect bays having different types of line cards. This connection may be realized using a connection like the ALI card-to-OA card connection via the optical backplane. Having now discussed the different types of modules/line cards and the OTS chassis configurations, some features of the OTS when configured as an OXC or OADM are summarized in Table 2 in terms of Access Line Interface, Transport/Switching, and Management functions. Since the OADM can be equipped with transport cards (TP_In and TP_Eg), it performs all of the functions listed, while the dedicated OXC configuration performs the switching/transport and management functions, but not the ALI functions.
For example, the Node Manager or NMS may control the OTS to configure it in the OXC or OADM modes, or to set up routing for light paths in the network.
Table 2

    Product Feature                                             OADM   OXC
    Access Line Interface
      Adding/dropping of wavelengths                              X
      Grooming of optical signals                                 X
      Non-ITU-compliant wavelength conversion                     X
      Optical Signal Generation/Modulation (Timing/Shaping)       X
    Switching/Transport
      Multiplexing and demultiplexing of multi-wavelength
        signals into individual wavelengths                       X     X
      Cross-connection of the individual wavelengths              X     X
      Amplification of optical signals                            X     X
      Protection switching of wavelengths                         X     X
      Dynamic power equalization of the optical signals           X     X
      Dynamic suppression of optical power transients of the
        multi-wavelength signal                                   X     X
    Management
      Performance monitoring of wavelengths                       X     X
      Operations and maintenance capabilities to support TMN      X     X

14. System Configurations
In an important aspect of the invention, each OTS can be used in a different configuration based on its position within an optical network. In the optical cross-connect (OXC) configuration, the input transport module, the switch fabric and the output transport module are used. FIG. 23 shows the modules used for the OXC configuration. In particular, the OTS 200a includes the TP_In modules 240 and the TP_Eg modules 245. Each TP_In card may receive one fiber that carries, e.g., eight multiplexed data channels and the OSC. Similarly, each TP_Eg card outputs eight multiplexed data channels and the OSC on an associated fiber.
FIG. 24 shows the modules used for the OADM configuration when the incoming optical signals are compliant, e.g., with the ITU grid. In this case, the access line modules are not needed since the wavelengths are input directly from the access network to the OA_In cards. Here, the OTS 200b includes the TP_In modules 240, the TP_Eg modules 245, the OA_In modules 230, and the OA_Eg modules 235. Note that the OA_In and OA_Eg cards are typically provided in pairs to provide bi-directional signaling. FIGs 25 and 26 show the OTS configurations when non-compliant wavelengths are used. The non-compliant wavelengths may include, e.g., eight OC-12 wavelengths and eight OC-48 wavelengths. In FIG. 25, in an add-only multiplexing configuration, the OTS 200c uses the ALI modules 220 for converting the non-compliant wavelengths to compliant wavelengths, e.g., using any known wavelength conversion technique. The OA_In modules 230 receive the compliant wavelengths from the ALIs 220 and provide them to the switch fabric 210. The switched signals are then provided to the TP_Eg modules 245 for transport on optical fibers in the optical network. Note that, typically, bidirectional signaling is provided to/from the access network via the ALI cards. Thus, the processes of FIGs 25 and 26 may occur at the same time via one or more ALI cards.
In FIG. 26, in a drop-only multiplexing configuration, the OTS 200d includes the TP_In module 240 for receiving the optical signal via the optical network, the OA_Eg modules 235 for receiving the optical signals from the switch 210, and the ALI modules 220 for converting the compliant wavelengths to non-compliant wavelengths for use by the access network. The non-compliant wavelengths may be provided as, e.g., eight OC-12 wavelengths and eight OC-48 wavelengths.
For concurrent add and drop multiplexing of non-compliant signals, the ALI modules both provide inputs to the OA_In modules 230 and receive outputs from the OA_Eg modules 235. Similarly, any concurrent combination of the following is possible: (a) inputting OTS-compliant signals from one or more access networks to the OA_In modules, (b) inputting non-OTS-compliant signals from one or more access networks to the ALI modules, (c) outputting signals, which are both OTS- and access-network compliant, from the OA_Eg modules to one or more access networks, and (d) outputting signals, which are OTS-compliant but non-compliant with an access network, to the ALI modules.
15. Transparent Data Transfer
A primary service enabled by the present invention is a transparent circuit-switched light path. Compared to conventional services, these flows are distinguished by the large amount of bandwidth provided and by a setup time measured in seconds.
FIG. 27 shows a simple example of wavelength adding, dropping, and cross-connection. Generally, in an example network 2700, light paths are terminated at the OADMs 2710, 2730, 2750 and 2760 (at edge nodes of the network 2700), and switched through the OXCs 2720 and 2740 (at internal nodes of the network 2700). When no wavelength conversion is performed in the OXCs, the same wavelength carrying the light path is used on all links comprising the light path, but the wavelength can be reused on different links. For example, λ1 can be used in light paths 2770 and 2780, λ2 can be used in light path 2775, and λ3 can be used in light paths 2785 and 2790. From a user perspective, this transparent data transfer service is equivalent to a dedicated line for SONET services, and nearly equivalent to a dedicated line for GbE services. Since the OTS operation is independent of data rate and protocol, it does not offer a Quality of Service in terms of bit error rate or delay. However, the OTS may monitor optical signal levels to ensure that the optical path signal has not degraded. Also, the OTS may perform dynamic power equalization of the optical signals, and dynamic suppression of optical power transients of the multi-wavelength signal, independently of the number of the surviving signals and independently of the number of the added signals. The OTS may thus measure an Optical Quality of Service (OQoS) based on optical signal-to-noise ratio (OSNR) and wavelength registration. Table 3 provides a summary of transparent data transfer functions performed by the OTS for each type of interface. The simplest case is the receipt of a compliant OC-12/48 signal by the Optical Access module.
Table 3
    Line Interface               Functions
    Compliant Optical (SONET)    Channel Multiplexing/Demultiplexing
                                 Signal Amplification
                                 Switching/Cross-Connection
    GbE                          Packet Multiplexing/Demultiplexing
                                 Aggregation/Grooming
                                 SONET Framing
                                 Modulation/Demodulation
                                 O-E Conversion on input
                                 E-O Conversion on output
                                 Channel Multiplexing/Demultiplexing
                                 Signal Amplification
                                 Switching/Cross-Connection
    Non-compliant Optical        O-E-O translation from non-compliant
    Waveforms                      wavelength (e.g., 1310 nm)
                                 Aggregation/Grooming
                                 Retiming/Reshaping
                                 Channel Multiplexing/Demultiplexing
                                 Signal Amplification
                                 Switching/Cross-Connection
The signal shaping and timing may be performed on the ALI cards using on-off keying with Non-Return-to-Zero signaling.
In one possible embodiment, eight compliant waveforms are supported, based on eight wavelengths from the ITU-specified grid with 200 GHz (i.e., 1.6 nm) spacing, as shown in Table 4.
Table 4
Wavelength # Wavelength registration
1 1549.318 nm
2 1550.921 nm
3 1552.527 nm
4 1554.137 nm
5 1555.750 nm
6 1557.366 nm
7 1558.986 nm
8 1560.609 nm
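The 1.6 nm figure follows from the 200 GHz spacing, since near 1550 nm a frequency step Δf maps to a wavelength step Δλ ≈ λ²Δf/c ≈ (1550 nm)² × 200 GHz / c ≈ 1.6 nm. By way of illustration only, the following C sketch regenerates values close to those of Table 4 from such a grid; the 193.5 THz anchor frequency is inferred from the listed wavelengths and is not a value specified herein.

    /* Illustrative only: regenerate a Table-4-like 200 GHz grid.
     * The 193.5 THz anchor is an assumption inferred from the table. */
    #include <stdio.h>

    int main(void)
    {
        const double c  = 299792458.0;  /* speed of light, m/s */
        const double f0 = 193.5e12;     /* assumed anchor frequency, Hz */
        const double df = 200e9;        /* 200 GHz channel spacing */

        for (int n = 0; n < 8; n++) {
            double f = f0 - n * df;     /* frequency descends ... */
            printf("%d  %.3f nm\n", n + 1, c / f * 1e9); /* ... wavelength ascends */
        }
        return 0;
    }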
For compliant wavelengths received on the OA modules, the received signal is optically amplified and switched to the destination.
For non-compliant wavelengths, signals are converted to electrical form and are groomed. If the current assignment has several lower-rate SONET input streams, e.g., OC-12, going to the same destination, the ALI can groom them into one higher-rate stream, e.g., OC-48. After being switched to the destination port, the stream is multiplexed by a TP module onto a fiber with other wavelengths for transmission. Moreover, for non-compliant wavelengths, the OTS performs a wavelength conversion to an ITU wavelength, and the stream is then handled as a compliant stream. Conversion of optical signals from legacy networks to the ITU-compliant wavelengths listed in Table 4 may be supported.
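By way of illustration only, the following sketch shows the 4:1 byte-interleave idea behind grooming four OC-12 tributaries into one OC-48 stream; real SONET multiplexing also processes transport overhead and pointer bytes, which are omitted here.

    /* Simplified 4:1 TDM byte interleave of four OC-12 tributaries into
     * one OC-48 stream; SONET overhead processing is omitted. */
    #include <stddef.h>

    void groom_oc12_to_oc48(const unsigned char *trib[4], size_t trib_len,
                            unsigned char *oc48)
    {
        for (size_t i = 0; i < trib_len; i++)
            for (int t = 0; t < 4; t++)
                oc48[4 * i + t] = trib[t][i]; /* one byte per tributary in turn */
    }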
FIG. 28 illustrates Gigabit Ethernet networks accessing a managed optical network in accordance with the present invention. The GbE interface supports the fiber media GbE option, where the media access control and multiplexing are implemented in the electrical domain. Therefore, the flow is somewhat different from SONET. The GbE packetized data streams are received as Ethernet packets, multiplexed into a SONET frame, modulated (initial timing and shaping), and converted to a compliant wavelength. After the compliant wavelengths are formed, they are handled as compliant wavelength streams as described above.
The following example clarifies how Ethernet packets are handled. GbE1 2802, GbE2 2804, GbE3 2806, GbE4 2808, GbE5 2840, GbE6 2842, GbE7 2844 and GbE8 2846 are separate LANs. Typically, each of the active ports goes to a different destination, so dedicated wavelengths are assigned. If two or more GbE ports have the same destination switch, they may be multiplexed onto the same wavelength. In this example, each of the four GbE ports transmits to the same destination (i.e., OADM B 2830) but to a separate GbE LAN (GbE1 is transmitted to GbE5, GbE2 is transmitted to GbE6, etc.). The client can attach as many devices to the GbE as desired, but their packets are all routed to the same destination. In this case, the processing flow proceeds as follows.
First, the OADM A 2810 receives GbE packets on GbE1 2802, GbE2 2804, GbE3 2806, and GbE4 2808. The OADM A 2810 performs O-E conversion and multiplexes the packets into SONET frames at the ALI/OA function 2812. OADM A 2810 performs the E-O conversion at the assigned λ, also at the ALI/OA function 2812. The resulting optical signal is switched through the switch fabric (SW) 2814 to the transport module (egress portion) 2816, and enters the network 2820. The optical signal is switched through the optical network 2820 to the destination switch at OADM B 2830. At the OADM B 2830, the optical signal is received at the transport module (ingress portion) 2832, and switched through the switch fabric 2834 to the OA_Eg/ALI function 2836. The OADM B 2830 extracts the GbE packets from the SONET frame at the OA/ALI function 2836. Finally, the OADM B 2830 demultiplexes the packets in hardware at the OA/ALI function 2836 to determine the destination GbE port and transmits each packet on that port. Since the ALI 2812 in the OADM A 2810 may receive packets on different ports at the same time, the ALI buffers one of the packets for transmission after the other. However, appropriate hardware can be selected for the ALI such that the queuing delays incurred are negligible and the performance appears to be like a dedicated line. Note that, in this example, all GbE ports are connected to the same ALI.
However, by bridging the Ethernets, the service provider can configure the traffic routing within the GbE networks to ensure that traffic going to the same destination is routed to the same input GbE port on the optical switch. Multiplexing GbE networks attached to different ALIs is also possible. Refer also to FIG. 15 and the related discussion.
The QoS in terms of traditional measures is not directly relevant to the optical network. Instead, the client (network operator) may control these performance metrics. For example, if the client expects that the GbE ports will have a relatively modest utilization, the client may choose to assign four ports to a single OC-48 λ operating at 2.4 Gbps (assuming they all have the same destination port). In the worst case, the λ channel may be oversubscribed, since four fully loaded GbE ports would offer roughly 4 Gbps to a 2.4 Gbps channel, but for the most part its performance should be acceptable.
However, some QoS features can be provided on the GbE ALI cards. For example, instead of giving all GbE streams equal priority using round-robin scheduling, weighted fair queuing may be used that allows the client to specify the weights given to each stream. In this way, the client can control the relative fraction of bandwidth allocated to each stream, as sketched below.
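By way of illustration only, the following sketch shows one way such weighted scheduling could be realized, using a credit-based weighted round robin; the stream and queue types are hypothetical, and no specific scheduler implementation is mandated by this description.

    /* Credit-based weighted round robin across GbE streams: each stream
     * is served up to its weight per round, so bandwidth is shared in
     * proportion to the client-specified weights. Types are hypothetical. */
    #define NUM_STREAMS 4

    struct gbe_stream {
        int weight;                 /* client-specified share */
        int credit;                 /* service still owed this round */
        int (*send_next)(void);     /* transmit one queued packet; 0 if queue empty */
    };

    void wrr_round(struct gbe_stream s[NUM_STREAMS])
    {
        int served;

        for (int i = 0; i < NUM_STREAMS; i++)
            s[i].credit = s[i].weight;      /* refresh credits each round */

        do {
            served = 0;
            for (int i = 0; i < NUM_STREAMS; i++) {
                if (s[i].credit > 0 && s[i].send_next()) {
                    s[i].credit--;          /* one packet charged to this stream */
                    served = 1;
                }
            }
        } while (served);
    }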
Similarly, for ATM, the client may be operating a mix of CBR, VBR, ABR, and UBR services as inputs to the OADM module. However, the switching system does not distinguish the different cell types. It simply forwards the ATM cells as they are received, and outputs them on the port as designated during setup.
FIG. 29 shows an example of interconnectivity of the optical network with OC-12 legacy networks. Other OC-n networks may be handled similarly. Refer also to FIGs 16-18 and the related discussions. The example shows four OC-12 networks 2902, 2904, 2906 and 2908, connected to the optical network 2920 through the OC-12 ALI card 2912. Similarly, four OC-12 networks 2940, 2942, 2944 and 2946 are connected to the OC-12 ALI card 2936 at the OADM B 2930. In the example, the processing flow proceeds as follows.
First, the OADM A 2910 receives packets on OC-12 1 (2902), OC-12 2 (2904), OC-12 3 (2906), and OC-12 4 (2908). The OADM A 2910 multiplexes the packets into SONET frames at OC-48 at the ALI/OA module 2912 using TDM. For compliant wavelengths, OC-n uses only the OA portion, not the ALI portion. For non-compliant wavelengths, the ALI is used for wavelength conversion, through an O-E-O process, and then the OA is used for handling the newly-compliant signals. The resulting optical signal is switched through the switch fabric (SW) 2914 to the transport module (egress portion) (TP) 2916, and enters the network 2920. The optical signal is switched through the optical network 2920 to the destination switch at OADM B 2930. At the OADM B 2930, the optical signal is received at the transport module (ingress portion) 2932, and switched through the switch fabric (SW) 2934 to the OA/ALI function 2936. The OADM B 2930 extracts the packets from the SONET frame at the OA/ALI function 2936. The OADM B 2930 demultiplexes the packets in hardware at the OA/ALI function 2936 to determine the destination port, and transmits each packet on that port.

16. Routing and Wavelength Assignment
The routing block 3120 of FIG. 31 refers to a Routing and Wavelength Assignment (RWA) function that may be provided as software running on the NMS for selecting a path in the optical network between endpoints, and assigning the associated wavelengths for the path. For implementations where the OTS does not provide wavelength conversion, the same wavelength is used on each link in the path, i.e., there is wavelength continuity on each link.
A "Light Wave OSPF" approach to RWA, which is an adaptive, source-based approach built on Open Shortest Path First (OSPF) routing as enhanced for circuit-switched optical networks, may be used. Developed originally for (electrical) packet networks, OSPF is a link state algorithm that uses link state advertisement (LSA) messages to distribute the state of each link throughout the network. Knowing the state of each link in the network, each node can compute the best path, e.g., based on OSPF criteria, to any other node. The source node, which may be the Node Manager associated with the path tail, computes the path based on the OSPF information. OSPF is particularly suitable for RWA since it is available at low risk, e.g., easily extended to support traffic engineering and wavelength assignment; scalable, e.g., able to support large networks using one or two levels of hierarchy; less complex than other candidate techniques; and widely commercially accepted. Several organizations have investigated the enhancement of OSPF to support optical networks, and several alternative approaches have been formulated. The major variation among these approaches involves the information that should be distributed in the LSA messages. As a minimum, it is necessary to distribute the total number of active wavelengths on each link, the number of allocated wavelengths, the number of pre-emptable wavelengths, and the risk groups throughout the networks. In addition, information may be distributed on the association of fibers and wavelengths such that nodes can derive wavelength availability. In this way, wavelength assignments may be made intelligently as part of the routing process. The overhead incurred can be controlled by "re-advertising" only when significant changes occur, where the threshold for identifying significant changes is a tunable parameter (see the sketch below).
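By way of illustration only, the per-link state enumerated above and the tunable re-advertising threshold might be represented as follows; the structure fields and the form of the threshold test are assumptions, as the description does not fix an encoding.

    /* Minimum per-link state distributed in the extended LSAs, plus a
     * "significant change" gate on re-advertising. Field names and the
     * threshold form are illustrative. */
    #include <stdlib.h>

    struct lambda_link_state {
        int total_wavelengths;        /* active wavelengths on the link */
        int allocated_wavelengths;    /* currently assigned */
        int preemptable_wavelengths;  /* assigned but pre-emptable */
        unsigned risk_group;          /* shared-risk group identifier */
    };

    /* Re-advertise only when allocation has moved by more than a tunable
     * fraction of the link's capacity since the last advertisement. */
    int needs_readvertise(const struct lambda_link_state *last_sent,
                          const struct lambda_link_state *now,
                          double threshold)
    {
        int delta = abs(now->allocated_wavelengths -
                        last_sent->allocated_wavelengths);
        return delta > threshold * now->total_wavelengths;
    }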
Furthermore, the optical network may support some special requirements. For example, in the ODSI Signaling Control Specification, the client may request paths that are disjoint from a set of specified paths. In the Create Request, the client provides a list of circuit identifiers and requests that the new path be disjoint from each of the corresponding paths. When the source node determines the new path, the routing algorithm must specifically exclude the links/switches comprising these paths in setting up the new path, as sketched below.
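By way of illustration only, one way the routing algorithm could honor such a disjointness constraint is to price the excluded links out of the topology before the shortest-path computation runs; the types below are hypothetical.

    /* Mark links belonging to the specified existing paths as unusable
     * before computing the new path. Types are hypothetical. */
    #define MAX_LINKS     1024
    #define LINK_EXCLUDED 1.0e18    /* effectively infinite metric */

    struct topology {
        double cost[MAX_LINKS];     /* link metric seen by path computation */
    };

    void exclude_links(struct topology *topo,
                       const int *links, int num_links)
    {
        for (int i = 0; i < num_links; i++)
            topo->cost[links[i]] = LINK_EXCLUDED;
        /* The shortest-path computation then runs on the filtered costs,
         * so the new path cannot reuse any excluded link. */
    }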
It is expected that the light paths will be set up and remain active for an extended period of time. As a result, the incremental assignment of wavelengths may result in some inefficiency. Therefore, it may improve performance to perform periodic re-assignments.
17. Flash Memory Architecture
Flash memory is used on all controllers for persistent storage. In particular, the Node Manager flash memory may have 64 Mbytes while LCM flash memories may have 16 Mbytes. The Intel 28F128J3A flash chip, containing 16 Mbytes, may be used as a building block. Designing flash memory into both controllers obviates the need for ROM on either controller. Both controllers boot from their flash memory. Should either controller outgrow its flash storage, the driver can be modified to apply compression techniques to avoid hardware modifications. The flash memory on all controllers may be divided into fixed partitions for performance. The Node Manager may have five partitions, including (1) current version Node Manager software, (2) previous version (rollback) Node Manager software, (3) LCM software, (4) Core Embedded software data storage, and (5) application software/data storage.
The LCM may have three partitions, including (1) LCM software, (2) previous version (rollback) LCM software, and (3) Core Embedded software data storage.
The flash memory on both the Node Manager and LCM may use a special device driver for read and write access since the flash memory has access controls to prevent accidental erasure or reprogramming.
For write access, the flash driver requires a partition ID, a pointer to the data, and a byte count. The driver first checks that the size of the partition is greater than or equal to the size of the data to be written, and returns a negative integer value if the partition is too small to hold the data in the buffer. The driver then checks that the specified partition is valid and, if the partition is not valid, returns a different negative integer. The driver then writes a header containing a timestamp, checksum, and user data byte count into the named partition. The driver then writes the specified number of bytes starting from the given pointer into the named partition. The flash driver returns a positive integer value indicating the number of user data bytes written to the partition. If the operation fails, the driver returns a negative integer value indicating the reason for failure (e.g., device failure).
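By way of illustration only, a write entry point with the behavior just described might look as follows in C; the helper routines, error codes, and header layout are hypothetical, since only the driver's observable behavior is specified above. The read path, described next, mirrors this structure.

    /* Sketch of the flash driver write path. Helper names, error codes,
     * and the header layout are illustrative assumptions. */
    #include <stdint.h>

    #define FLASH_ERR_TOO_SMALL (-1)   /* partition smaller than the data */
    #define FLASH_ERR_BAD_PART  (-2)   /* invalid partition ID */
    #define FLASH_ERR_DEVICE    (-3)   /* device failure */

    struct flash_header {
        uint32_t timestamp;
        uint32_t checksum;
        uint32_t byte_count;           /* user data bytes that follow */
    };

    /* Hypothetical lower-level helpers (not named in the description): */
    uint32_t partition_size(int id);
    int      partition_valid(int id);
    uint32_t checksum32(const void *p, uint32_t n);
    uint32_t current_time(void);
    int      partition_program(int id, const struct flash_header *hdr,
                               const void *data, uint32_t count);

    int flash_write(int partition_id, const void *data, uint32_t count)
    {
        /* Checks follow the sequence described in the text. */
        if (partition_size(partition_id) < count + sizeof(struct flash_header))
            return FLASH_ERR_TOO_SMALL;
        if (!partition_valid(partition_id))
            return FLASH_ERR_BAD_PART;

        struct flash_header hdr = {
            .timestamp  = current_time(),
            .checksum   = checksum32(data, count),
            .byte_count = count,
        };
        if (partition_program(partition_id, &hdr, data, count) != 0)
            return FLASH_ERR_DEVICE;
        return (int)count;             /* user data bytes written */
    }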
For read access, the flash driver requires a partition ID, a pointer to a read data buffer, and the size of the data buffer. The driver checks that the size of the read buffer is greater than or equal to the size of the data stored in the partition (the size field is zero if nothing has been stored there). The driver returns a negative integer value if the buffer is too small to hold the data in the partition. The driver then does a checksum validation of the flash contents. If checksum validation fails, the driver returns a different negative integer. If the checksum validation is successful, the driver copies the partition contents into the provided buffer and returns a positive integer value indicating the number of bytes read. If the operation fails, the driver returns a negative integer value indicating the reason for failure (e.g., device failure).

18. Hierarchical Optical Network Structure
The all-optical network architecture is based on an open, hierarchical structure to provide interoperability with other systems and accommodate a large number of client systems. FIG. 30 depicts the hierarchical structure of the all-optical network architecture for a simple case with three networks, network A 3010, network B 3040 and network C 3070. Typically, a network is managed by a three-tiered control architecture: i) at the highest level, a leaf NMS manages the multiple OTSs of its network, ii) at the middle level, each OTS is managed individually by its associated Node Manager, and iii) at the lowest level, each line card of a node (except the Node Manager) is managed by an associated Line Card Manager.
The nodes, such as nodes 3012, 3014, 3042 and 3072, depict the optical switching hardware (the OTSs). Moreover, network A 3010 and network B 3040 communicate with one another via OTSs 3012 and 3042, and network A 3010 and network C 3070 communicate with one another via OTSs 3014 and 3072. In this example, each network has its own NMS. For example, network A 3010 has an NMS 3015, network B 3040 has an NMS 3045, and network C 3070 has an NMS 3075.
When multiple NMSs are present, one is selected as a master or root NMS. For example, the NMS 3015 for Network A 3010 may be the root NMS, such that the NMSs 3045 and 3075 for Networks B and C, respectively, are subservient to it.
Each NMS includes software that runs separate and apart from the network it controls, as well as NMS agent software that runs on each Node Manager of the NMS's network. The NMS agent software allows each NMS to communicate with the Node Managers of each of its network's nodes. Moreover, each NMS may use a database server to store persistent data, e.g., longer-life data such as configuration and connection information. The database server may use LDAP and Oracle® database software for this purpose.
LDAP is an open industry standard solution that makes use of TCP/IP, thus enabling wide deployment. Additionally, an LDAP server can be accessed using a web-based client, which is built into many browsers, including the Microsoft Explorer® and Netscape Navigator® browsers. The data can be stored in a separate database for each instance of a network, or multiple networks can share a common database server, depending on the size of the network or networks. As an example, separate databases can be provided for each of networks A, B and C, where each database contains information for the associated network, such as connection, configuration, fault, and performance information. In addition, the root NMS (e.g., NMS 3015) can be provided with a summary view of the status and performance data for Networks B and C. The hierarchical NMS structure is incorporated into the control architecture as needed.
19. System Functional Architecture
The functionality provided by the OTS and the NMS, as well as the external network interfaces, is shown in FIG. 31. As indicated by the legend 3102, the path restoration 3115 and network management 3105 functionalities are implemented in the NMS, while the routing 3120, signaling 3135 (including user-network signaling 3136 and internal signaling 3137, internal to the network), agent/proxy 3110, and protection 3145 are real-time functionalities implemented in the Node Manager.
External interfaces to the optical network system include: (1) a client system 3140 requesting services, such as a light path, from the optical network via the UNI protocol, (2) a service provider/carrier NMS 3130 used for the exchange of management information, and (3) a hardware interface 3150 for transfer of data. An interface to a local GUI 3125 is also provided.
The client system 3140 may be resident on the service provider's hardware. However, if the service provider does not support UNI, then manual (e.g., voice or email) requests can be supported. Light path (i.e., optical circuit) setup may be provided, e.g., using a signaled light path, a provisioned light path, and proxy signaling. In particular, a signaled light path is analogous to an ATM switched virtual circuit, such that a service provider acts as UNI requestor and sends a "create" message to initiate service, and the Optical Network Controller (ONC) invokes NNI signaling to create a switched lightpath. A provisioned lightpath is analogous to an ATM permanent virtual circuit (PVC), such that a service provider via the NMS requests a lightpath be created (where UNI signaling is not used), and the NMS commands the switches directly to establish a lightpath. The NMS can also use the services of a proxy signaling agent to signal for the establishment of a lightpath.
The service provider/carrier NMS interface 3130 enables the service provider operator to have an integrated view ofthe network using a single display. This interface, which may be defined using CORBA, for instance, may also be used for other management functions, such as fault isolation. The local GUI interface 3125 allows local management ofthe optical network by providing a local administrator/network operator with a complete on-screen view of topology, performance, connection, fault and configuration management capabilities and status for the optical network. The control plane protocol interface between the service provider control plane and the optical network control plane may be based on an "overlay model" (not to be confused with an overlay network used by the NMS to interface with the nodes), where the optical paths are viewed by the service provider system as fibers between service provider system endpoints. In this model, all ofthe complexities ofthe optical network are hidden from the user devices. Thus, the routing algorithm employed by the optical network is separate from the routing algorithms employed by the higher layer user network. The internal optical network routing algorithm, internal signaling protocols, protection algorithms, and management protocols are discussed in further detail below. The all-optical network based on the OTS may be modified from the "overlay model" architecture to the "peer model" architecture, where the user device is aware ofthe optical network routing algorithm and the user level. The optical network and user network routing algorithms are integrated in the "peer model" architecture. 20. Internal Network Signaling 20.1 Protocol Description The Internal Signaling function 3137 of FIG. 31 uses a Network-Network
The service provider/carrier NMS interface 3130 enables the service provider operator to have an integrated view of the network using a single display. This interface, which may be defined using CORBA, for instance, may also be used for other management functions, such as fault isolation. The local GUI interface 3125 allows local management of the optical network by providing a local administrator/network operator with a complete on-screen view of topology, performance, connection, fault and configuration management capabilities and status for the optical network. The control plane protocol interface between the service provider control plane and the optical network control plane may be based on an "overlay model" (not to be confused with an overlay network used by the NMS to interface with the nodes), where the optical paths are viewed by the service provider system as fibers between service provider system endpoints. In this model, all of the complexities of the optical network are hidden from the user devices. Thus, the routing algorithm employed by the optical network is separate from the routing algorithms employed by the higher layer user network. The internal optical network routing algorithm, internal signaling protocols, protection algorithms, and management protocols are discussed in further detail below. The all-optical network based on the OTS may be modified from the "overlay model" architecture to the "peer model" architecture, where the user device is aware of the optical network routing algorithm. In the "peer model" architecture, the optical network and user network routing algorithms are integrated.

20. Internal Network Signaling
20.1 Protocol Description

The Internal Signaling function 3137 of FIG. 31 uses a Network-Network Interface (NNI) protocol for internal network signaling or for signaling between private networks. The NNI may be specified by extending the UNI protocol (ATM Forum 3.1 Signaling Protocol) by specifying additional message fields, states, and transitions. UNI is a protocol by which an external network accesses an edge OTS of the optical network. For example, the NNI may include a path Type-Length-Value field in its "create" message. It may also have to support a crankback feature in case the setup fails. The major requirements for the NNI are listed below.

    Capability                    Description
    Create light path             Normal and crankback
    Modify light path             Change bandwidth parameters
    Disjoint light path           Establish light path disjoint from specified
                                    existing light paths
    Destroy light path            Teardown channel
    Failure Recovery              Link or node failure
    Traffic Pre-emption           Terminate low priority traffic in case of failure
    Backup                        Establish pre-defined backup links
    NMS Interface                 Set MIB variables
    External Network Interface    Backbone Network Interconnection
20.2 Signaling Subnetwork (OSC)
The primary function of the signaling network is to provide connectivity among the Node Managers of the different OTSs. An IP network may be used that is capable of supporting both signaling and network management traffic. For signaling messages, TCP may be used as the transport protocol. For network management, either TCP or UDP may be used, depending upon the specific application.
FIG. 32 depicts an example of a signaling network having three OTSs, OTS A (3210), OTS B (3220), and OTS C (3230), an NMS 3240 that communicates with OTS B 3220 (and all other OTSs via OTS B) via an Ethernet 3245, a path requester 3215 and path head 3216 that communicate with the OTS A 3210 via an Ethernet 3217, and a path tail 3235 that communicates with the OTS C 3230 via an Ethernet 3232. The path requester 3215, path head 3216 and path tail 3235 denote client equipment that is external to the all-optical network. The internal signaling network may use the OSC within the optical network, in which case the facilities are entirely within the optical network and dedicated to the signaling and management of the optical network. The OSC is not directly available to external client elements.
Each Node Manager may have its own Ethernet for local communication with the client equipment. Also, a gateway node may have an additional Ethernet link for communication with the NMS manager if they are co-located. The signaling network has its own routing protocol for transmission of messages between OTSs as well as within an NMS. Moreover, for fail-safe operation, the signaling network may be provided with its own NMS that monitors the status and performance of the signaling network, e.g., to take corrective actions in response to fault conditions, and generate performance data for the signaling network.

21. Protection/Restoration Flow
Referring to the Path Restoration function 3115 and Protection function 3145 of FIG. 31, the all-optical network may provide a service recovery feature in response to failure conditions. Both line and path protection may be provided such that recovery can be performed within a very short period of time comparable to SONET (<50 ms). In cases where recovery time requirements are less stringent, path restoration under the control ofthe NMS may provide a more suitable capability.
Moreover, for SONET clients, client-managed protection may be provided by allowing the client to request disjoint paths, in which case the protection mechanisms utilized by the client are transparent to the optical network.
The recovery capability may include 1:1 line protection by having four optical fibers between OTSs - a primary and a backup in each direction. When a link or node fails, all paths in the affected link are re-routed (over pre-defined links) as a whole (e.g., on a line basis) rather than by individual path (e.g., on a path basis). While this is less bandwidth efficient, it is simpler to implement than path protection and is equivalent to SONET layer services. The re-routing is predefined via Network Management in a switch table such that when a failure occurs, the re-routing can be performed in real time (< 50 ms per hop), as sketched below.
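By way of illustration only, the predefined switch table might take the following form, which makes the fail-time action a table lookup rather than a route computation; the structure layout is a hypothetical sketch, not a mandated format.

    /* Pre-defined line-protection table: Network Management fills it in
     * ahead of time, so the failure handler only performs a lookup. */
    #define MAX_LINES 64

    struct protection_entry {
        int primary_port;   /* egress port in normal operation */
        int backup_port;    /* pre-defined egress port after a failure */
        int on_backup;      /* nonzero once protection has switched */
    };

    static struct protection_entry switch_table[MAX_LINES];

    /* Called from the failure handler; returns the port to switch to. */
    int protect_line(int line_id)
    {
        switch_table[line_id].on_backup = 1;
        return switch_table[line_id].backup_port;
    }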
Path protection re-routes each individual circuit when a failure occurs. Protection paths may be dedicated and carry a duplicate data stream (1+1), dedicated and carry a pre-emptable low priority data stream (1:1), or shared (1:N).
FIGs 33(a)-(c) compare line and path protection where two light paths, shown as λ1 and λ2, have been set up. FIG. 33(a) shows the normal case, where two signaling paths are available between nodes "1" and "6" (i.e., path 1-2-4-5-6 and path 1-2-3-5-6). λ1 traverses nodes 1-2-3-5-6 in travelling toward its final destination, while λ2 traverses nodes 1-2-3 in travelling toward its final destination.
FIG. 33(b) shows the case where line protection is used. In particular, consider the case where link 2-3 fails. With line protection, all channels affected by the failure are re-routed over nodes 2-4-5-3. In particular, λ1 is routed from node 5 to node 3, and then back along 3-5-6, which is inefficient since λ1 travels twice between nodes "3" and "5", thereby reducing the availability of the 3-5 path for backup traffic.
FIG. 33(c) shows the case where path protection is used. With path protection, the light paths λ1 and λ2 are each routed separately in an optimum way, which eliminates the inefficiency of line protection. In particular, λ1 is routed on nodes 1-2-4-5-6, and λ2 is routed on nodes 1-2-4-5-3.
Moreover, the backup fiber (here, the fiber between nodes 2-4-5) need not be used under normal conditions (FIG. 33(a)). However, pre-emptable traffic, e.g., lower priority traffic, may be allowed to use the backup fiber until a failure occurs. Once a failure occurs, the pre-emptable traffic is removed from the backup fiber, which is then used for transport of higher-priority traffic. The client having the lower-priority traffic is preferably notified of the preemption.
Protection and restoration in large complex mesh networks may also be provided. Protection features defined by the ODSI, OIF, and IETF standards bodies can also be included as they become available.
Protection services can also include having redundant hardware at the OTSs, such as for the Node Manager and other line cards. The redundancy of the hardware, which may range from full redundancy to single string operation, can be configured to meet the needs of the service provider. Moreover, the hardware can be equipped with a comprehensive performance monitoring and analysis capability so that, when a failure occurs, a switchover to the redundant, backup component is quickly made without manual intervention. In case of major node failures, traffic can be re-routed around the failed node using line protection.

22. Network Management System Software
The Network Management System is a comprehensive suite of management applications that is compatible with the TMN model, and may support TMN layers 1 to 3. Interfaces to layer 4, service layer management, may also be provided so that customer Operational Support Systems (OSSs) as well as third party solutions can be deployed in that space.
The overall architecture of the NMS is depicted in FIG. 34. The Element Management Layer 3404 corresponds to layer 2 of the TMN model, while the Network Management Layer 3402 components correspond to layer 3 of the TMN model. The functions shown are achieved by software running on the NMS and NMS agents at the Node Managers.
A common network management interface 3420 at the Network Management Layer provides an interface between: (a) applications 3405 (such as a GUI), customer services 3410, and other NMSs/OSSs 3415, and (b) a configuration manager 3425, connection manager 3430, fault manager 3445, and performance manager 3450, which may share common resources/services 3435, such as a database server, which uses an appropriate database interface, and a topology manager 3440. The database server or servers may store information for the managers 3425, 3430, 3445 and 3450. The interface 3420 may provide a rich set of client interfaces that include RMI, EJB and CORBA, which allow the carrier to integrate the NMS with their systems to perform end-to-end provisioning and unify event information. Third-party services and business layer applications can also be easily integrated into the NMS via this interface. The interface 3420 may be compatible with industry standards where possible. The GUI 3405 is an integrated set of user interfaces that may be built using
Java (or other similar object-oriented) technology to provide an easy-to-use customer interface, as well as portability. The customer can select a manager from a menu of available GUI views, or drill down to a new level by obtaining a more detailed set of views. The customer services may include, e.g., protection and restoration, prioritized light paths, and other services that are typically sold to customers of the network by the network operator.
The "other NMSs" 3415 refer to NMSs that are subservient to a root NMS in a hierarchical optical network structure or an NMS hierarchy. The OSSs are switching systems other than the OTS system described herein.
The configuration manager 3425 provides a switch level view of the NMS, and may provide functions including provisioning of the Node Managers and LCMs, status and control, and installation and upgrade support. The configuration manager 3425 may also enable the user, e.g., via the GUI 3405, to graphically identify the state of the system, boards, and lower level devices, and to provide a point-and-click configuration to quickly configure ports and place them in service. The configuration manager may collect switch information such as IP address and switch type, as well as card-specific information such as serial number and firmware/software revision.
The connection manager 3430 provides a way to view existing light path connections between OTSs, including connections within the OTS itself, and to create such connections. The connection manager 3430 supports simple cross connects as well as end-to-end connections traversing the entire network. The user is able to dictate the exact path of a light path by manually specifying the ports and cross connects to use at an OTS. Or, the user may only specify the endpoints and let the connection manager set up the connection automatically. Generally, the endpoints of a connection are OA ports, and the intermediate ports are TP ports. The user may also select a wavelength for the connection. The types of connections supported include Permanent Optical Circuit (POC), Switched Optical Circuit (SOC), as well as Smart Permanent Optical Circuit (SPOC). SOC and SPOC connections are routed by the network element routing and signaling planes. SOC connections are available for viewing only.
The topology manager 3440 provides an NMS topological view of the network, which allows the user to quickly determine, e.g., via the GUI 3405, all resources in the network, including links and OTSs in the network, and how they are currently physically connected. The user can use this map to obtain more detailed views of specific portions of the network, or of an individual OTS, and even access a view of an OTS's front panel. For instance, the user can use the topological view to assist in making end-to-end connections, where each OTS or subnet in the path of a connection can be specified. Moreover, while the topology manager 3440 provides the initial view, the connection manager 3430 is called upon to set up the actual connection.
The fault manager 3445 collects faults/alarms from the OTSs as well as other SNMP-compliant devices, and may include functions such as alarm surveillance, fault localization, correction, and trouble administration. Furthermore, the fault manager 3445 can be implemented such that the faults are presented to the user in an easy-to-understand way, e.g., via the GUI 3405, and the user is able to sort the faults by various methods such as device origination, time, severity, etc. Moreover, the faults can be aggregated by applying rules that are predefined by the network administrator, or customer-defined.
The performance manager 3450 performs processing related to the performance of the elements/OTSs, as well as the network as a whole. Specific functionalities may include performance quality assurance, performance monitoring, performance management control, and performance analysis. An emphasis may be on optical connections, including the QoS and reliability of the connection. The performance manager 3450 allows the user to monitor the performance of a selected port or channel on an OTS. In particular, the performance manager may display data in real time, or from archived data.
These managers 3425, 3430, 3445 and 3450 may provide specific functionality and share information, e.g., via Jini, using an associated Jini server. Moreover, the managers may store associated data in one or more database servers, which can be configured in a redundant mode for high availability.
Furthermore, a common network management interface 3455 at the Element Management Layer provides an interface between: (a) the configuration manager 3425, connection manager 3430, fault manager 3445 and performance manager 3450, and (b) an agent adapter function 3460 and an "other adapter" function 3465. The agent adapter 3460 may communicate with the OTSs in the optical network 3462 using SNMP and IP, in which case corresponding SNMP agents and IP agents are provided at the OTSs. The SNMP agent at the OTSs may also interface with other NMS applications. SNMP is an industry standard interface that allows integration with other NMS tools. The interface from the NMS to the OTS in the optical network 3462 may also use a proprietary interface, which allows greater flexibility and efficiency than SNMP alone. The "other adapter" function 3465 refers to types of optical switches other than the OTSs described herein that the NMS may manage. In summary, the NMS provides a comprehensive capability to manage an
OTS or a network of OTSs. A user-friendly interface allows intuitive control of the element/OTS or network. Finally, a rich set of northbound interfaces allows interoperability and integration with OSS systems.
Moreover, the NMS may be an open architecture system that is based on standardized Management Information Bases (MIBs). At this time, ODSI has defined a comprehensive MIB for the UNI. However, additional MIBs are required, e.g., for NNI signaling and optical network enhancements to OSPF routing. The NMS of the present invention can support the standard MIBs as they become available, while using proprietary MIBs in areas where the standards are not available. The NMS may be implemented in Java (or similar object-oriented) technology, which allows the management applications to easily communicate and share data, and tends to enable faster software development, a friendlier (i.e., easier to use) user interface, robustness, self-healing, and portability. In particular, Java tools such as Jini, Jiro, Enterprise Java Beans (EJB), and Remote Method Invocation (RMI) may be used. RMI, introduced in JDK 1.1, is a Java technology that allows the programmer to develop distributed Java objects similar to using local Java objects. It does this by keeping the definition of behavior separate from the implementation of the behavior. In other words, the definition is coded using a Java interface while the implementation on the remote server is coded in a class. This provides a network infrastructure for accessing and developing remote objects.
The EJB specification defines an architecture for a transactional, distributed object system based on components. It defines an API that ensures portability across vendors. This allows an organization to build its own components or purchase components. These server-side components are enterprise beans, and are distributed objects that are hosted in EJB containers and provide remote services for clients distributed throughout the network.
Jini, which uses RMI technology, is an infrastructure for providing services in a network, as well as creating spontaneous interactions between programs that use these services. Services can be added to or removed from the network in a robust way. Clients are able to rely upon the availability of these services. The client program downloads a Java object from the server and uses this object to talk to the server. This allows the client to talk to the server even though it does not know the details of the server. Jini allows the building of flexible, dynamic and robust systems, while allowing the components to be built independently. A key to Jini is the Lookup Service, which allows a client to locate the service it needs.
Jiro is a Java implementation of the Federated Management Architecture. A federation, for example, could be a group of services at one location, i.e., a management domain. It provides technologies useful in building an interoperable and automated distributed management solution. It is built using Jini technology with enhancements added for a distributed management solution, thereby complementing Jini. Some examples of the benefits of using Jiro over Jini include security services and direct support for SNMP. FIG. 35 illustrates an NMS hierarchy in accordance with the present invention. Advantageously, scalability may be achieved via the NMS hierarchical architecture, thus allowing networks ranging from a few OTSs to hundreds of OTSs to remain manageable while using only the processing power of the necessary number of managing NMSs. In such an architecture, each NMS instance in an NMS hierarchy (which we may also refer to as a "manager") manages a subset of OTSs (with the "root" NMS managing, at least indirectly through its child NMSs, all the OTSs managed by the hierarchy). For example, NMS 1 (3510) manages NMS 1.1 (3520) and NMS 1.2 (3525). NMS 1.1 (3520) manages NMS 1.1.1 (3530), which in turn manages a first network 3540, and NMS 1.1.2 (3532), which in turn manages a second network 3542. NMS 1.2 (3525) manages NMS 1.2.1 (3534), which in turn manages a third network 3544, and NMS 1.2.2 (3536), which in turn manages a fourth network 3546. Each instance of the NMS in the hierarchy may be implemented as shown in FIG. 34, including having one or more database servers for use by the managers of the different functional areas. The number of OTSs that an NMS instance can manage depends on factors such as the performance and memory of the instance's underlying processor, and the stability of the network configuration. The hierarchy of NMS instances can be determined using various techniques. In the event of failure of a manager, another manager can quickly recover the NMS functionality. The user can see an aggregated view of the entire network or some part of the network without regard to the number of managers being deployed.
One feature of multiple NMSs controlling multiple networks is the robustness and scalability provided by the hierarchical structure of the managing NMSs. The NMSs form a hierarchy dynamically, through an election process, such that a management structure can be quickly reconstituted in case of failure of some of the NMSs. Furthermore, the NMSs provide the capability to configure each OTS and dynamically modify the connectivity of OTSs in the network. The NMS also enables the network operators to generate on-the-fly statistical metrics for evaluating network performance.

23. Node Manager Software
The control software at the OTS includes the Node Manager software and the Line Card Manager software. As shown in FIG. 36, the Node Manager software 3600 includes Applications layer software 3610 and Core Embedded System Services layer 3630 software running on top of an operating system such as VxWorks (Wind River Systems, Inc., Alameda, Calif.). The LCM software has Core Embedded System Services device drivers for the target peripheral hardware such as the GbE and OC-n SONET interfaces.
The Applications layer 3610 enables various functions, such as signaling and routing functions, as well as node-to-node communications. For example, assume it is desired to restore service within 50 msec for a customer using a SONET service. The routing and signaling functions are used to quickly communicate from one node to another when an alarm has been reported, such as "the link between Chicago and New York is down." So, the Applications software 3610 enables the nodes to communicate with each other for selecting a new route that does not use the faulty link. Generally, to minimize the amount of processing by the Applications software 3610, information that is used there is abstracted as much as possible by the Core Embedded Software 3641 and the System Services 3630.
In particular, the Applications layer 3610 may include applications such as a Protection/Fault Manager 3612, UNI Signaling 3614, NNI Signaling 3615, Command Line Interface (CLI) 3616, NMS Database Client 3617, Routing 3618, and NMS agent 3620, each of which is described in further detail below.
The System Services layer software 3630 may include services such as Resource Manager 3631, Event Manager 3632, Software Version Manager 3633, Configuration Manager 3634, Logger 3635, Watchdog 3636, Flash Memory Interface 3637, and Application "S" Message Manager 3638, each of which is described in further detail below.
The Node Manager's Core Embedded Control Software 3641 is provided below an "S" interface and the System Services software 3630.

23.1 Node Manager Core Embedded Software
The Node Manager Core Embedded software 3641 is provided between the "S" interface 3640 and the "D" interface 3690. The "D" (drivers) message interface 3690 is for messages exchanged between the LCMs and the Node Manager via the OTS's internal LAN, while the "S" (services) message interface 3640 is for messages exchanged between the application software and the Core Embedded software on the Node Manager.
Generally, these managers ensure that inter-process communication can take place. In particular, the Node Manager "D" message manager 3646 receives "D" messages such as raw Ethernet packets from the LCM and forwards them to the appropriate process. The Node Manager "S" Message Manager 3642 serves a similar general function: providing inter-process communication between the System Services layer 3630 and the Node Manager Core Embedded software. The inter-process communication provided by the "S" Interface is typically implemented quite differently from the "D" Interface since it is not over a LAN but within a single processor. These interfaces, which may use, e.g., header files or tables, are described further in the section entitled "Node Manager Message Interfaces."
Below the "S" interface 3640, the Node Manager's Core Embedded software further includes a Node Configuration Manager 3644, which is a master task for spawning other tasks, shown collectively at 3660, at the Node Manager, and may therefore have a large, complex body of code. This manager is responsible for managing the other Node Manager processes, and knows how to configure the system, such as configuring around an anomaly such as a line card removal or insertion. Moreover, this manager 3644 determines how many of the tasks 3662, 3664, 3666, 3668, 3670, 3672, 3674, 3676 and 3678 need to be started to achieve a particular configuration. The tasks at the Node Manager Core Embedded software are line card tasks/processes for handling the different line card types. These include a TP_IN task 3662, an OA_IN task 3664, an OPM task 3666, a clock task 3668, a TP_EG task 3670, an OA_EG task 3672, an OSF task 3674, an ALI task 3676 and an OSM task 3678. The "-1" notation denotes one of multiple tasks that are running for corresponding multiple line cards of that type when present at the OTS. For example, TP_IN-1 represents a task running for a first TP_IN card. Additional tasks for other TP_IN cards are not shown specifically, but could be denoted as TP_IN-2, TP_IN-3, and so forth.
Managers, shown collectively at 3650, manage resources and system services for the line card tasks. These managers include a Database Manager 3652, an Alarms Manager 3654, and an Optical Cross Connect (OXC) Manager 3656.
In particular, the Database Manager 3652 may manage a database of nonvolatile information at the Node Manager, such as data for provisioning the LCMs. This data may include, e.g., alarm/fault thresholds that are to be used by the LCMs in determining whether to declare a fault if one of the monitored parameters of the line cards crosses the threshold. Generally, the Database Manager 3652 manages a collection of information that needs to be saved if the OTS fails/goes down - similar to a hard disk. As an example of the use of the Database Manager 3652, when the OTS is powered up, or when a line card is inserted into a slot in the OTS bay, the associated LCM generates a discovery packet for the Node Manager to inform it that the line card is up and exists. This enables the line cards to be hot swappable, that is, they can be pulled from and reinserted into the slots at any time. After receiving the discovery packet, the Node Manager uses the Database Manager 3652 to contact the database to extract non-volatile data that is needed to provision that line card, and communicates the data to the LCM via the OTS's LAN. The Node Manager's database may be provided using the non-volatile memory resources discussed in connection with FIG. 5.
The Alarms Manager 3654 receives alarm/fault reports from the LCMs (e.g., via any of the tasks 3660) when the LCMs determine that a fault condition exists on the associated line card. For example, the LCM may report a fault to the Alarms Manager 3654 if it determines that a monitored parameter such as laser current consumption has crossed a minimum or maximum threshold level. In turn, the Alarms Manager 3654 may set an alarm if the fault or other anomaly persists for a given amount of time (see the sketch below) or based on some other criteria, such as whether some other fault or alarm condition is present, or the status of one or more other monitored parameters. Furthermore, the presence of multiple alarms may be analyzed to determine if they have a common root cause. Generally, the Alarms Manager 3654 abstracts the fault and/or alarm information to try to extract a story line as to what caused the alarm, and passes this story up to the higher-level Event Manager 3632 via the "S" interface 3640.
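By way of illustration only, the persistence criterion mentioned above might reduce to a check of the following kind; the hold time and structure are hypothetical, and the Alarms Manager may of course apply richer criteria as described.

    /* Raise an alarm only if the reported fault persists beyond a hold
     * time. Times are in milliseconds; names are illustrative. */
    #include <stdint.h>

    struct fault_state {
        int      active;          /* fault currently reported by the LCM */
        uint32_t first_seen_ms;   /* when the fault was first reported */
    };

    int should_raise_alarm(const struct fault_state *f,
                           uint32_t now_ms, uint32_t hold_ms)
    {
        return f->active && (now_ms - f->first_seen_ms) >= hold_ms;
    }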
Using the push model, the Event Manager 3632 distributes the alarm event to any ofthe software components that have registered to receive such an event. A corrective action can then be implemented locally at the OTS, or at the network-level.
The OXC Manager 3656 makes sense of how to use the different line cards to make one seamless connection for the customer. For example, using a GUI at the NMS, the customer may request a light path connection from Los Angeles to San Francisco. The NMS decides which OTSs to route the light path through, and informs each OTS via the OSC of the next-hop OTS in the light path. The OTS then establishes a light path, e.g., by using the OXC Manager 3656 to configure an ALI line card, TP_IN line card, OA_EG line card, a wavelength, and several other parameters that have to be configured for one cross connect. For example, the OXC Manager 3656 may configure the OTS such that port 1 on a TP_IN card is connected to port 2 on a TP_EG card. The OXC Manager 3656 disassembles the elements of a cross connection and disseminates the relevant information at a low level to the involved line cards via their LCMs.

23.2 System Services
23.2.1 Resource Manager

The Resource Manager 3631 performs functions such as maintaining information on resources such as wavelengths and the state of the cross-connects of the OTS, and providing cross-connect setup and teardown capability. In particular, the Resource Manager performs the interaction with the switch hardware during path creation, modification, and termination. The context diagram of the Resource Manager is shown in FIG. 43. The legend 4330 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing. The Resource Manager is responsible for setting up network devices upon receiving requests from the NMS Agent (in case of provisioning) or the Signaling component (for a signaled setup). The Resource Manager provides an API that enables other components 4320 to obtain current connection data. Also, the Resource Manager obtains configuration data via an API provided by the Configuration Manager.
For the provisioned requests, which may be persistent, the associated parameters are stored in flash memory 4310, e.g., via the Flash interface 3637, which may be DOS file based. Upon reset, the Resource Manager retrieves the parameters from flash memory via the Flash Interface and restores them automatically.
For signaled requests, which may be non-persistent, the associated parameters may be stored in RAM at the Node Manager. Upon reset, these lightpaths must be re-established based on user requests, or other switches could re-establish them. The Resource Manager component also logs all relevant events via the
Logger, updates its MIB, and provides its status to the Watchdog component.
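By way of illustration only, the split between persistent (provisioned) and non-persistent (signaled) storage might look as follows, reusing the flash driver of section 17; the parameter structure and partition identifier are hypothetical.

    /* Provisioned lightpath parameters persist in flash and are restored
     * on reset; signaled ones live only in RAM. Names are illustrative. */
    #include <stdint.h>

    struct lightpath_params {
        int in_port, out_port;
        int wavelength;
        int provisioned;           /* 1 = NMS-provisioned, 0 = signaled */
    };

    #define PART_APP_DATA 5        /* hypothetical flash partition ID */
    #define MAX_SIGNALED  256

    static struct lightpath_params signaled_table[MAX_SIGNALED]; /* RAM only */
    static int num_signaled;

    int flash_write(int partition_id, const void *data, uint32_t count);

    int store_lightpath(const struct lightpath_params *p)
    {
        if (p->provisioned)        /* survives reset; restored on boot */
            return flash_write(PART_APP_DATA, p, sizeof *p);
        signaled_table[num_signaled++] = *p;  /* lost on reset by design */
        return 0;
    }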
23.2.2 Event Manager
The Event Manager 3632 receives events from the Core Embedded system software 3641 and distributes those events to high level components (e.g., other software components/functions at the System Services 3630 and Applications 3610). It is also used for communication between high level components in cases where the communication is one-way (as opposed to request/response). FIG. 44 depicts its context diagram.
The Event Manager sends events to components based on their registrations/subscriptions to the events. That is, in an important aspect of the push model of the present invention, components can subscribe/unsubscribe to certain events of interest to them. Any application that wants to accept events registers with the Event Manager 3632 as an event listener. Moreover, there is anonymous delivery of events so that specific destinations for the events do not have to be named. For example, when something fails in the hardware, an alarm is sent to whichever application has registered for that type of alarm. Advantageously, the sender of the alarm does not have to know who is interested in particular events, and the receivers of the events only receive the types of events in which they are interested. The OTS software architecture thus uses a push model since information is pushed from a lower layer to a higher layer in near real-time.
The Event Manager may be used as a middleman between two components for message transfer. For example, a component A, which wants to send a message X to another component B, sends it to the Event Manager. Component B must subscribe to the message X in order to receive it from the Event Manager. In particular, the event library software (EventLib) may include the following routines:
EventRegister( ) - register for an event, to get an event message when the event occurs;
EventUnRegister( ) - un-register for an event; and
EventPost( ) - post an event.
These routines return ERROR when they detect an error. In addition, they set an error status that elaborates the nature of the error.
Normally, high-level applications, e.g., signaling, routing, protection, and NMS agent components, register for events that are posted by Core Embedded components, such as device drivers. High-level components register/un-register for events by calling EventRegister()/EventUnRegister(). Core Embedded components use EventPost() to post events.
The Event Dispatcher may be implemented via POSIX message queues for handling event registration, un-registration, and delivery. It creates a message queue, ed_dispQ, when it starts. Two priority levels, high and low, are supported by ed_dispQ. When a component registers for an event by calling EventRegister(), a registration event is sent to ed_dispQ as a high-priority event. The Event Dispatcher registers the component for that event when it receives the registration event. If the registration is successful, an acknowledgment event is sent back to the registering component. A component should consider the registration failed if it does not receive an acknowledgment within a short period of time; it is up to the component to re-register for the event. A component may register for an event multiple times with the same or different message queues. If the message queue is the same, a later registration will overwrite an earlier registration. If the message queues are different, multiple registrations for the same event will co-exist, and events will be delivered to all message queues when they are posted.
Furthermore, event registration may be permanent or temporary. Permanent registrations are in effect until cancelled by EventUnRegister(). EventUnRegister() sends an un-register event (a high-priority event) to ed_dispQ for the Event Dispatcher to un-register the component for that event. Temporary registrations are cancelled when the lease time expires. A component may prematurely cancel a temporary registration by calling EventUnRegister(). If the un-registration is successful, an acknowledgment event is delivered to the message queue of the component. When a component uses EventPost() to post an event, the posted event is placed in ed_dispQ, too. An event is either a high-priority or a low-priority event. To prevent low-priority events from filling up ed_dispQ, a low-priority event is not queued when posted if ed_dispQ is more than half full. This way, at least half of ed_dispQ is reserved for high-priority events. The Event Dispatcher delivers an event by moving the event from ed_dispQ to the message queues of the registered components. Thus, a component must create a POSIX message queue before registering for an event and send the message queue name to the Event Dispatcher when it registers for that event. Moreover, a component may create a blocking or non-blocking message queue. If the message queue is non-blocking, the component may set up a signal handler to get notification when an event is placed in its message queue.
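Since the interface above is specified in terms of POSIX message queues and the ed_dispQ registration protocol, the registration side can be sketched as follows. The message layout, priority constant, and queue naming are assumptions; only the EventLib routine name and the high-priority registration behavior come from the description above.

```c
#include <mqueue.h>
#include <fcntl.h>
#include <string.h>

#define EV_PRIO_HIGH 1  /* registration events are sent at high priority */

/* Hypothetical registration message sent to ed_dispQ. */
typedef struct {
    int  msg_type;   /* e.g., REGISTER                          */
    int  event_id;   /* event the component wants               */
    char reply_q[64];/* the component's own message queue name  */
} EvRegMsg;

int EventRegister(int event_id, const char *my_queue_name)
{
    mqd_t dispq = mq_open("/ed_dispQ", O_WRONLY);
    if (dispq == (mqd_t)-1)
        return -1;  /* ERROR, per the EventLib convention */

    EvRegMsg m = { .msg_type = 1 /* REGISTER */, .event_id = event_id };
    strncpy(m.reply_q, my_queue_name, sizeof m.reply_q - 1);

    /* The registration event goes to ed_dispQ as a high-priority event. */
    if (mq_send(dispq, (const char *)&m, sizeof m, EV_PRIO_HIGH) == -1) {
        mq_close(dispq);
        return -1;
    }
    mq_close(dispq);

    /* The caller must have created its own (blocking or non-blocking)
     * queue *before* registering, and should then wait briefly for the
     * acknowledgment event; if none arrives, it treats the registration
     * as failed and re-registers. */
    return 0;
}
```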
If the message queue of a component is full when the Event Dispatcher tries to deliver an event, the event is silently dropped. Therefore, each component should ensure there is space in its message queue to prevent events from being dropped.

23.2.3 Software Version Manager
The Software Version Manager (SVM) 3633 is responsible for installing, reverting, backing up, and executing software in the Node Manager and LCMs. Its context diagram is depicted in FIG. 45. The SVM maintains and updates software on both the Node Manager and the LCMs by keeping track of the versions of software that are used, and whether a newer version is available. Generally, different versions of Node Manager software and LCM software can be downloaded remotely from the NMS to the Node Manager from time to time as new software features are developed, software bugs are fixed, and so forth. The Node Manager distributes the LCM software to the LCMs. The SVM keeps a record of which version of software is currently being used by the Node Manager and LCMs.
In particular, the SVM installs new software by loading the software onto flash memory, e.g., at the Node Manager. The SVM performs backing up by copying the current software and saving it in another space in the flash memory. The SVM performs the reverting operation by copying the backup software over the current software. Finally, the SVM performs the execution operation by rebooting the Node Manager or the LCMs. In particular, for installation, the SVM receives an install command from the NMS agent that contains the address, path, and filename of the code to be installed. The SVM may perform a File Transfer Protocol (FTP) operation to store the code into its memory. Then, it uses the DOS Flash interface services 3637 to store the code into the flash memory. In performing the backup operation for the Node Manager software, the SVM receives the backup command from the NMS agent. The SVM uses the DOS Flash interface to copy the current version of the code to a backup version. In the revert operation for the Node Manager software, the SVM receives the revert command from the NMS agent and uses the DOS Flash interface to copy the backup version of the software to the current version.
The Node Manager software is executed by rebooting the Node Manager card.
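Since backup and revert amount to file copies on the DOS-formatted flash, followed by a reboot for execution, they reduce to a few file operations. A minimal sketch; all paths and function names are hypothetical, and the vendor-specific flash buffering is omitted:

```c
#include <stdio.h>

/* Copy one flash file to another via the DOS Flash interface. */
static int flash_copy(const char *src, const char *dst)
{
    char buf[4096];
    size_t n;
    FILE *in = fopen(src, "rb"), *out = fopen(dst, "wb");
    if (in == NULL || out == NULL) {
        if (in)  fclose(in);
        if (out) fclose(out);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return 0;
}

/* Backup: save the current image before installing a new one. */
int svm_backup(void) { return flash_copy("/flash/nm.cur", "/flash/nm.bak"); }

/* Revert: copy the backup image over the current image. */
int svm_revert(void) { return flash_copy("/flash/nm.bak", "/flash/nm.cur"); }

/* Execute: the new (or reverted) image takes effect on reboot. */
void svm_execute(void) { /* reboot the Node Manager card here */ }
```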
The installation, reverting, backing up, and executing operations can also be performed on the software residing on the line cards. In particular, for installation, the software/firmware is first "FTPed" down to the Node Manager's flash memory. Then, the new firmware is downloaded to the line card. This new code is stored in the line card's flash memory. The new code is executed by rebooting the line card.

23.2.4 Configuration Manager

The Configuration Manager 3634 maintains the status of all OTS hardware and software components. Its context diagram is shown in FIG. 46. The legend 4610 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing. During the first OTS system boot-up, the Configuration Manager obtains the desired configuration parameters from the database/server (or possibly a configuration file) at the NMS. The LCMs are responsible for monitoring the status of the line cards. When a line card becomes active, it immediately generates a Discovery message that the LCM for each optical card forwards to the Event Manager 3632 running on the Node Manager. The Configuration Manager receives these messages by subscribing to them at the Event Manager. It then compares the stored configuration against the reported configuration. If there is a difference, the Configuration Manager sets the configuration according to the stored data by sending a message to the LCM via the Event Manager and S-Interface. It also reports an error and stores the desired configuration in the Node Manager's flash memory. When the system is subsequently re-booted, the operation is identical, except that the desired configuration is already stored in, and obtained from, flash memory.
The LCMs are configured to periodically report the status of their optical line cards. Also, when a device fails or exhibits other anomalous behavior, an event message such as a fault or alarm is generated. The Configuration Manager receives these messages via the Event Manager, and issues an event message to other components. Moreover, while not necessary, the Configuration Manager may poll the LCMs to determine the line card status if it is desired to determine the status immediately.
If the configuration table in the Node Manager's flash memory is corrupted, the Configuration Manager may request that the database/server client obtain the information (configuration parameters) via the database/server, which resides in the NMS host system. After configuring the devices, the Configuration Manager posts an event to the Event Manager so that other components (e.g., the NMS Agent and the Resource Manager) can get the desired status of the devices.
The desired configuration can be changed via a CLI or NMS command. After the Configuration Manager receives a request from the NMS or CLI to change a device configuration, the Configuration Manager sends an "S" message down to the LCMs to satisfy the request. Upon receiving the acknowledgment message that the request was carried out successfully, the Configuration Manager sends an acknowledgment message to the requester, stores the new configuration into the database service, logs a message to the Logger, and posts an event via the Event Manager.
Moreover, the NMS/CLI can send queries to the Configuration Manager regarding the network devices' configurations. The Configuration Manager retrieves the information from the database and forwards it to the NMS/CLI. The NMS/CLI can also send a message to the Configuration Manager to change the reporting frequency or schedule of the device/line card.
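The discovery-time reconciliation described above reduces to a compare-and-correct loop. A sketch follows; the types and helper functions are assumptions introduced for illustration:

```c
#include <string.h>

/* Hypothetical per-card configuration block. */
typedef struct {
    int slot;
    unsigned char params[32];  /* opaque device settings */
} CardConfig;

/* Assumed helpers: look up the stored configuration for a slot, send a
 * configuration message down to the card's LCM, and report an error. */
extern const CardConfig *cfg_stored_lookup(int slot);
extern void cfg_send_to_lcm(const CardConfig *desired);
extern void log_error(const char *msg);

/* Called when a Discovery message reporting a card's current
 * configuration arrives via the Event Manager. */
void cm_on_discovery(const CardConfig *reported)
{
    const CardConfig *stored = cfg_stored_lookup(reported->slot);
    if (stored == NULL)
        return;  /* unknown card: handled elsewhere */

    /* If the reported configuration differs from the stored one, push
     * the stored (desired) configuration back down to the LCM and
     * report the mismatch. */
    if (memcmp(stored->params, reported->params,
               sizeof stored->params) != 0) {
        cfg_send_to_lcm(stored);
        log_error("config mismatch corrected from stored data");
    }
}
```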
23.2.5 Logger
The Logger 3635 sends log messages to listening components such as debugging tasks, displays, printers, and files. These devices may be directly connected to the Node Manager or connected via a socket interface. The Logger's context diagram is shown in FIG. 47. The Logger is controlled via the CLI, which may be implemented as either a local service or a remote service via Telnet.
The control may specify the device(s) to receive the logging messages (e.g., displays, files, printers - local or remote), and the level of logging detail to be captured (e.g., event, error event, parameter set).
23.2.6 Watchdog
The Watchdog component 3638 monitors the state ("health") of other (software) components in the Node Manager by verifying that the components are working.

23.2.7 Flash Memory Interface
A Disk Operating System (DOS) file interface may be used to provide an interface 3637 to the flash memory on the Node Manager for all persistent configuration and connection data. Its context diagram is depicted in FIG. 48. The legend 4820 indicates that the components communicate using an API and TCP. The Resource Manager 3631 and Configuration Manager 3634 access the Flash Memory 4810 as if it were a DOS file system. Details of buffering and actual writing to flash are vendor-specific.
23.2.8 Application "S" Message Manager

The Application "S" Message Manager receives messages from the Node Manager's Core Embedded software, also referred to as control plane software.

23.3 Applications Layer

23.3.1 Protection/Fault Manager
The primary function of the Protection/Fault Manager component is to respond to alarms by isolating fault conditions and initiating service restoration. The Protection/Fault Manager isolates failures and restores service, e.g., by providing alternate link or path routing to maintain a connection in the event of node or link failures. As depicted in FIG. 37, the Protection/Fault Manager interfaces with the Logger 3635, WatchDog 3636, Resource Manager 3631, Configuration Manager 3634, Event Manager 3632, NMS Agent 3620, NNI Signaling 3615, and Other Switches/OTSs 3710. The legend 3720 indicates the nature of the communications between the components. The Protection/Fault Manager subscribes to the Event Manager to receive events related to the failure of links or network devices. When the Protection/Fault Manager receives a failure event and isolates the cause of the alarm, it determines the restorative action and interacts with the appropriate application software to implement it. If there is a problem isolating the fault or restoring service, the problem is handed over to the NMS for resolution. Some service providers may elect to perform their own protection by requesting two disjoint paths. With this capability, the service provider may implement 1+1 or 1:1 protection as desired. When a failure occurs, the service provider can perform the switchover without any assistance from the optical network. However, the optical network is responsible for isolating and repairing the failure.
Using the Event Manager, the Protection/Fault component also logs major events via the Logger component, updates its MIB, and provides its status to the Watchdog component. It also updates the Protection parameters in the shared memory.

23.3.2 UNI Signaling
The Signaling component includes the User-Network Interface (UNI) signaling and the internal Network-Network Interface (NNI) signaling. The primary purpose of signaling is to establish a lightpath between two endpoints. In addition to path setup, it also performs endpoint Registration and provides a Directory service such that users can determine the available endpoints.
The UNI signaling context diagram is depicted in FIG. 38. The UNI uses both message passing and APIs provided by other components to communicate with other components. The legend 3830 indicates whether the communications between the components use an API and TCP, or message passing.
The UNI component provides a TCP/IP interface with User devices 3810, e.g., devices that access the optical network via an OTS. If the User Device does not support signaling, an NMS proxy signaling agent 3820 resident on an external platform performs this signaling. When a valid "create lightpath" request is received, the UNI invokes the NNI to establish the path. In addition to creating a lightpath, users may query, modify, or delete a lightpath.
The UNI Signaling component 3614 obtains current configuration and connection data from the Configuration and Resource Managers, respectively. It logs major events via the Logger component, updates its MIB used by the SNMP Agent, and provides a hook to the WatchDog component to enable the WatchDog to keep track of its status.
23.3.3 NNI Signaling
The NNI signaling component 3615, depicted in FIG. 39, performs the internal signaling between switches in the optical network, e.g., using MPLS signaling. The legend 3910 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing.
As discussed, requests for service to establish a lightpath between two endpoints may be received over the UNI from an external device or a proxy signaling agent. Upon receipt of the request, UNI signaling validates the request and forwards it, with source and destination endpoints, to the NNI signaling function for setup. Source-based routing may be used, in which case the NNI must first request a route from the Routing component 3618. Several options are available, e.g., the user may request a path disjoint from an existing path. The Routing component 3618 returns the selected wavelength and the set of switches/OTSs that define the route. Then, the NNI signaling component requests the Resource Manager 3631 to allocate the local hardware components implementing the path, and forwards a create message to the next switch in the path using TCP/IP over the OSC.
Each OTS has its local Resource Manager allocate hardware resources to the light path. When the path is completed, each OTS returns an acknowledgment message along the reverse path, confirming the successful setup and indicating that the local hardware will be configured. If the attempt failed due to unavailability of resources, the resources that had been allocated along the path are de-allocated. In order for components other than the UNI (e.g., Routing) to learn whether the path setup attempt was successful, the NNI distributes (posts) a result event using the Event Manager 3632.
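Forwarding the create message to the next switch uses ordinary TCP/IP over the OSC, as noted above. The following sketch shows the hop-by-hop send; the message layout and port number are assumptions, not the actual wire format:

```c
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NNI_PORT 7000  /* hypothetical NNI signaling port */

/* Hypothetical wire format for a lightpath create request. */
typedef struct {
    unsigned int src_endpoint;
    unsigned int dst_endpoint;
    unsigned int wavelength;  /* selected by the Routing component */
} NniCreateMsg;

/* Send a create message to the next-hop OTS over the OSC. */
int nni_forward_create(const char *next_hop_ip, const NniCreateMsg *msg)
{
    struct sockaddr_in addr;
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return -1;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(NNI_PORT);
    inet_pton(AF_INET, next_hop_ip, &addr.sin_addr);

    if (connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        send(sock, msg, sizeof *msg, 0) != (ssize_t)sizeof *msg) {
        close(sock);
        return -1;  /* caller de-allocates local resources */
    }
    close(sock);
    return 0;       /* await the ack along the reverse path */
}
```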
Moreover, the NNI Signaling component 3615 obtains current configuration data from the Configuration Manager 3634, and connection data from the Resource Manager 3631. It also logs major events via the Logger component 3635, updates its MIB used by the SNMP Agent, and provides a hook to the WatchDog component 3636 to enable the WatchDog to keep track of its status.

23.3.4 Command Line Interface
The CLI task 3616, an interface that is separate from the GUI interface, provides a command-line interface for an operator via a keyboard/display to control or monitor OTSs. The functions of the CLI 3616 include setting parameters at boot-up, entering a set/get for any parameter in the Applications and System Services software, and configuring the Logger. The TL-1 craft interface definition describes the command and control capabilities that are available at the "S" interface. Table 5 lists example command types that may be supported.
Table 5: TL-1 Craft Command List

| Craft Command | Parameters | Description |
| --- | --- | --- |
| Rtrv-alm | Type, slot, severity | Retrieves alarm messages |
| Rtrv-crs | Type, port | Retrieves cross-connect information |
| Rtrv-eqpt | Address-id | Retrieves the equipage (configuration) of the OTS node |
| Rtrv-hist | Start, end | Retrieves the event history |
| Rtrv-ali | Port, wavelength, mode | Retrieves the ALI port parameters |
| Rtrv-node | N/A | Retrieves OTS node parameters |
| Rtrv-pmm | Slot, port, wavelength | Retrieves the performance monitor measurements |
| Rtrv-port | Port | Retrieves per-port performance measurements |
| Rtrv-prot-sws | Port | Retrieves path protection connections |
| Set-ali | Out-port, in-port, mode | Sets ALI port parameters |
| Set-node | Id, date, time, alm-delay | Sets OTS node parameters |
| Set-port | Port, wavelength, thresh | Sets port and wavelength thresholds |
23.3.5 NMS Database Client
Optionally, an NMS database client 3617 may reside at the Node Manager to provide an interface to one or more database servers at the NMS. One possibility is to use LDAP servers. Its context diagram is depicted in FIG. 40. As shown, the database/server client 3617 interacts with the NMS's database server, and with the Configuration Manager 3634. Upon request from the Configuration Manager, the database client contacts the server for configuration data. Upon receiving a response from the server, the client forwards the data to the Configuration Manager. The legend 4020 indicates whether the communications between the components use the Event Manager, or an API and TCP.
Since the Configuration Data is stored in the Node Manager's flash memory, the database client may be used relatively infrequently. For example, it may be used to resolve problems when the stored configuration is not consistent with that obtained via the LCM's discovery process.
Moreover, there may be primary and backup database servers, in which case the client keeps the addresses of both servers. If the primary server does not function, after waiting for a pre-determined period, the client forwards the request to the backup server.
Moreover, when the Configuration Manager makes changes to its configuration table, the Configuration Manager posts an event to the Event Manager. The Event Manager forwards the event to the NMS Agent, which in turn forwards the event to the NMS application. The NMS application recognizes the event and contacts the server to update its table.
23.3.6 Routing
The Routing Component 3618 computes end-to-end paths in response to a request from the NNI component. The context diagram, FIG. 41, depicts its interfaces with the other components. The legend 4110 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing.
The Routing Component, which may implement the OSPF routing algorithm with optical network extensions, is invoked by the NNI Signaling component at the path source during setup. Routing parameters are input via the SNMP Agent.
Routing is closely related to the Protection/Fault Manager. As part of the protection features, the Routing component may select paths that are disjoint (either link disjoint, or node and link disjoint, as specified by signaling) from an existing path.
Moreover, as part of its operation, the Routing component exchanges Link State Advertisement messages with other switches. With the information received in these messages, the Routing component in each switch maintains a complete view of the network such that it can compute a path.
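Link-disjointness of a candidate protection path can be checked directly against the working path's link set. A minimal sketch, with a hypothetical path representation:

```c
#include <stdbool.h>

/* A path as an ordered list of link identifiers (layout assumed). */
typedef struct {
    int links[16];
    int count;
} Path;

/* Returns true if the candidate path shares no link with the working
 * path (link disjoint). Node-and-link disjointness would compare node
 * identifiers the same way. */
bool path_is_link_disjoint(const Path *working, const Path *candidate)
{
    for (int i = 0; i < working->count; i++)
        for (int j = 0; j < candidate->count; j++)
            if (working->links[i] == candidate->links[j])
                return false;
    return true;
}
```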
23.3.7 NMS Agent
The embedded NMS Agent 3620 provides the interface between NMS applications 4210 (e.g., configuration, connection, topology, fault/alarm, and performance) and the Applications resident on the Node Manager. The NMS agent may use SNMP and a proprietary method. FIG. 42 shows the context diagram of the NMS Agent. The legend 4220 indicates whether the communications between the components use the Event Manager, an API and TCP, or message passing. The NMS Agent operates using a "pull model" - all of the SNMP data is stored locally with the relevant component (e.g., UNI, NNI, Routing, Protection). When the NMS Agent must respond to a Get request, it pulls the information from its source.
The NMS Agent receives requests from an NMS application and validates the request against its MIB tables. If the request is not validated, it sends an error message back to the NMS. Otherwise, it sends the request using a message passing service to the appropriate component, such as the Signaling, Configuration Manager, or Resource Manager components.
For non-Request/Response communications, the NMS agent may subscribe to events from the Event Manager. The events of interest include the "change" events posted by the Resource Manager, Configuration Manager, and the UNI and NNI components, as well as messages from the LCMs. Upon receiving events from the Event Manager or unsolicited messages from other components (e.g., Signaling), the NMS Agent updates its MIB and, when necessary, sends the messages to the NMS application using a trap.
24. Line Card Manager Software
FIG. 49 illustrates a Line Card Manager software architecture in accordance with the present invention.
In the OTS control hierarchy, the LCM software 4900 is provided below the "D" interface 3690, and generally includes a Core Embedded control layer to provide the data telemetry and I/O capability on each of the physical interfaces, and an associated operating system that provides the protocols (e.g., TCP/IP) and timer features necessary to support real-time communications. The LCM software 4900, which may run on top of an operating system such as VxWorks, includes an LCM "D" Message Manager 4970 for sending messages to, and receiving messages from, the Node Manager "D" Message Manager 3646 via the "D" interface 3690. This manager 4970 is an inter-process communication module with a queue for queuing messages to the Node Manager. An LCM Configuration Manager 4972 is a master process for spawning and initializing all other LCM tasks, and performs functions such as waking up the LCM board, configuring the LCM when the system/line card comes up, and receiving voltages and power.
The LCM line card tasks 4973 include tasks for handling a number of line cards, including a TP_IN handler or task 4976, an OA_IN handler 4978, an OPM handler 4980, a clock (CLK) handler 4982, a TP_EG handler 4984, an OA_EG handler 4986, an OSF handler 4988, an ALI handler 4990, and an OSM handler 4992. Here, the line card handlers can be thought of as being XORed, such that when the identity of the pack (line card) is discovered, only the corresponding pack handler is used. Advantageously, the LCM software 4900 is generic in that it can handle any type of line card, so there is no need to provide a separate software load for each LCM according to line card type. This simplifies the implementation and maintenance of the OTS. Alternatively, it is possible to provide each LCM with only the software for a specific type of line card.
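The mutually exclusive ("XORed") handler selection can be sketched as a function-pointer table keyed by the discovered pack type. All names and enumeration values below are hypothetical:

```c
/* Pack types reported by the line card at discovery (values assumed). */
typedef enum {
    PACK_TP_IN, PACK_OA_IN, PACK_OPM, PACK_CLK, PACK_TP_EG,
    PACK_OA_EG, PACK_OSF, PACK_ALI, PACK_OSM, PACK_TYPE_COUNT
} PackType;

typedef void (*PackHandler)(void);

static void tp_in_handler(void) { /* TP_IN-specific telemetry/control */ }
static void oa_in_handler(void) { /* OA_IN-specific telemetry/control */ }
/* ... one handler per pack type ... */

static PackHandler handlers[PACK_TYPE_COUNT] = {
    [PACK_TP_IN] = tp_in_handler,
    [PACK_OA_IN] = oa_in_handler,
    /* remaining entries filled in the same way */
};

/* On discovery, only the handler matching the discovered pack type is
 * started; the others stay dormant, so one generic software load
 * serves every line card type. */
void lcm_start_handler(PackType discovered)
{
    if (discovered < PACK_TYPE_COUNT && handlers[discovered] != 0)
        handlers[discovered]();
}
```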
Each of the active line card handlers can declare faults based on monitored parameters that they receive from the respective line card. Such faults may occur, e.g., when a monitored parameter is outside a pre-set normal range. The line card handlers may signal to the customer, via the Node Manager and NMS, that fault conditions are present and should be examined in further detail.
Moreover, the line card handlers use push technology in that they push event information up to the next layer, e.g., the Node Manager, as appropriate. This may occur, for example, when a fault requires attention by the Node Manager or the NMS. For example, a fault may be pushed up to the Alarms Manager 3654 at the Node Manager Core Embedded Software, where an alarm is set and pushed up to the Event Manager 3632 for distribution to the software components that have registered to receive that type of alarm. Thus, a lower layer initiates the communication to the higher layer.
The clock handler 4982 handles a synchronizing clock signal that is propagated via the electrical backplane (LAN) from the Node Manager to each LCM. This is necessary, for example, for the line cards that handle SONET signals and thereby need a very accurate clock for multiplexing and demultiplexing. Generally speaking, the LCM performs telemetry by constantly collecting data from the associated line card and storing it in non-volatile memory, e.g., using tables. However, only specific information is sent to the Node Manager, such as information related to a threshold crossing by a monitored parameter of the line card, or a request, e.g., by the NMS through the Node Manager, to read something from the line card. A transparent control architecture is provided, since the Node Manager can obtain fresh readings from the LCM memory at any time.
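The collect-everything, push-only-exceptions behavior described above can be sketched as a polling loop. The helper functions, parameter count, and threshold arrays are assumptions introduced for illustration:

```c
/* Assumed helpers: read one monitored parameter from the line card,
 * store it in the LCM's local table, and push an "Attention" event up
 * to the Node Manager. */
extern int  lc_read_param(int param_id);
extern void lcm_store_reading(int param_id, int value);
extern void lcm_push_attention(int param_id, int value);

#define PARAM_COUNT 16

static int lo_thresh[PARAM_COUNT];
static int hi_thresh[PARAM_COUNT];

/* One pass of the telemetry loop: every reading is stored locally
 * (available for on-demand reads by the Node Manager), but only
 * threshold crossings are pushed up. */
void lcm_telemetry_poll(void)
{
    for (int p = 0; p < PARAM_COUNT; p++) {
        int v = lc_read_param(p);
        lcm_store_reading(p, v);
        if (v < lo_thresh[p] || v > hi_thresh[p])
            lcm_push_attention(p, v);  /* push model: LCM initiates */
    }
}
```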
The Node Manager may keep a history log of the data it receives from the LCM.
25. Node Manager Message Interfaces

As mentioned, the Node Manager supports two message interfaces, namely the "D" Message Interface, which is for messages exchanged between the LCMs and the Node Manager, and the "S" Message Interface, which is for messages exchanged between the application software and the Core Embedded system services software on the Node Manager.

25.1 "D" Message Interface Operation
The "D" message interface allows the Node Manager to provision and control the line cards, retrieve status on demand and receive alarms as the conditions occur. Moreover, advantageously, upgraded LCMs can be connected in the future to the line cards using the same interface. This provides great flexibility in allowing baseline LCMs to be fielded while enhanced LCMs are developed. Moreover, the interface allows the LCMs and Node Manager to use different operating systems.
The Core Embedded Node Manager software builds an in-memory image of all provisioned data and all current transmission-specific monitored parameters. The Node Manager periodically polls each line card for its monitored data and copies this data to the in-memory image in SDRAM. The in-memory image is modified for each alarm indication and clearing of an alarm, and is periodically saved to flash memory to allow rapid restoration of the OTS in the event of a system reboot, selected line card reboot, or selected line card swap. The in-memory image is organized by type of line card, instance of line card, and instances of interfaces or ports on the type of line card. Each LCM has a local in-memory image of provisioning information and monitored parameters specific to that board type and instance.
The "D" message interface uses a data link layer protocol (Layer 2) that is carried by the OTS's internal LAN. The line cards and Node Manager may connect to this LAN to communicate "D" message using RJ-45 connectors, which are standard serial data interfaces. A "D" Message interface dispatcher may run as a VxWorks task on the LCM. The LCM is able to support this dispatcher as an independent process since the LCM processor is powerful enough to run a multi-tasking operating system. The data link layer protocol, which may use raw Ethernet frames (including a destination field, source field, type field and check bits), avoids the overhead of higher-level protocol processing that is not warranted inside the OTS. All messages are acknowledged, and message originators are responsible for re-transmitting a message if an acknowledgement is not received in a specified time. A sniffer connected to the OTS system's internal LAN captures and display all messages on the LAN. A sniffer is a program and/or device that monitors data traveling over a network. The messages should be very easy to comprehend.
Preferably, all messages are contained in one standard Ethernet frame payload to avoid message fragmenting on transmission, and reassembly upon receipt. Moreover, this protocol is easy to debug, and aids in system debugging. Moreover, this scheme avoids the problem of assigning a network address to each line card. Instead, each line card is addressed using its built-in Ethernet address. Moreover, the Node Manager discovers all line cards as they boot, and adds each line card's address to an address table. This use of discovery messages combined with periodic audit messages obviates the need for equipage leads (i.e., electrical leads/contacts that allow monitoring of circuits or other equipment) in the electrical backplane, and the need for monitoring of such leads by the Node Manager. In particular, when it reboots, an LCM informs the Node Manager of its presence by sending it a Discovery message. Audit messages are initiated by the Node Manager to determine what line cards are present at the OTS.
25.1.1 "D" Interface Message Types
The following message types are defined for the "D" interface.
• READ Message Pair - Used by the Node Manager to retrieve monitored parameters from the LCMs. The Node Manager sends Read Request messages to the LCMs, and they respond via Read Acknowledge messages.
• WRITE Message Pair - Used by the Node Manager to write provisioning data to the LCMs. The Node Manager sends Write Request messages to the LCMs, and they respond via Write Acknowledge messages.
• ALARM Message Pair - Used by the LCM to inform the Node Manager of alarm conditions. An LCM sends an Alarm message to the Node Manager indicating the nature of the alarm, and the Node Manager responds with an Alarm Acknowledge message.
• DISCOVERY message (autonomous) - Used by the LCM to inform the Node Manager of its presence in the OTS when the line card reboots. The Node Manager responds with a Discovery Acknowledge message.
• AUDIT message - Used by the Node Manager to determine what line cards are present in the OTS. The LCM responds with a Discovery Acknowledge message.
25.1.2 "D" Interface Message Definitions Tables 6-11 define example "D" message interface packets. Note that some ofthe messages, such as the "discovery" and "attention" messages, are examples of anonymous push technology since they are communications that are initiated by a lower layer in the control hierarchy to a higher layer. Table 6: Instruction Codes in LCM to Node Manager Packets
Code (hex) Name Description
60 Discovery first packet sent after power-up
61 attention sending alarm and data
11 data sending data requested
31 ack acknowledge data write packet
36 nack error - packet not accepted
Table 7: LCM-originated "Discovery" Packet

| Function | Size (16-bit words) | Description |
| --- | --- | --- |
| Dest. address (Node Manager) | 3 | hex FF:FF:FF:FF:FF:FF |
| Source address (LCM) | 3 | hex <OTS LCM MAC PREFIX> : pack pos. ID |
| Protocol key | 1 | hex BEEF |
| SW process tag | 2 | initially 0, after time-out 1 |
| Instruction | 1 | hex 0060 |
| Pack type | 1 | pack type, version, serial number |
| Data size | 1 | hex 0000 |
Table 8: LCM-originated "Attention" Packet

| Function | Size (16-bit words) | Description |
| --- | --- | --- |
| Dest. address (Node Manager) | 3 | Node Manager MAC address from received packet |
| Source address (LCM) | 3 | hex <OTS LCM MAC PREFIX> : pack position ID |
| Protocol key | 1 | hex BEEF |
| SW process tag | 2 | initially 0, after time-out 1 |
| Instruction | 1 | hex 0061 |
| Pack type | 1 | pack type, version, serial number |
| Data size | 1 | number of 16-bit data words to follow |
| ADC measures | | last measured values of analog inputs |
| Limit select | | 16 limit select bits in use |
| Alarm mask | | 16 alarm mask bits in use |
| Status reg. | 4 | 64 pack status bits |
| Status reg. level select | 4 | 64 status alarm level select bits |
| Status reg. mask | 4 | 64 status alarm mask bits in use |
| ADC attn bits | 2 | 16 analog limit exception bits |
| Status attn bits | 4 | 64 status exception bits |
| Device results | 32 | control and results registers |
Table 9: LCM "Response" Packet
Size
Function (16-bit words) Description
Dest. address 3 Node Manager address from received
(Node Manager) packet
Source address 3 hex <OTS LCM MAC PREFIX > : pack
(LCM) position ID
Protocol key 1 hex BEEF
Sw process tag 2 copied from request packet
Instruction 1 see Table 8
Address 1 copied from request packet
Data size 1 number of 16-bit data words to follow
Data n payload Table 10: Instruction Codes in Node Manager to LCM Packets
| Code (hex) | Name | Description |
| --- | --- | --- |
| 50 | First ack | acknowledging the "Discovery" packet |
| 51 | Alarm ack | acknowledging the "Attention" packet |
| 01 | Read | read data from address indicated |
| 02 | Write | write data to address indicated |
| 03 | Wsw | write switch |
| 15 | Bitwrite | bit position to change: where mask word bit=1; data to write: data word |
| 41 | Reload | causes re-loading the MPC8255 microcontroller from EPROMs |
| 42 | Soft reset | causes "soft reset" of the pack |
| 43 | Hard reset | causes "hard reset" of the pack |
Table 11: Node Manager-originated Packets

| Function | Size (16-bit words) | Description |
| --- | --- | --- |
| Dest. address (LCM) | 3 | hex <OTS LCM MAC PREFIX> : pack position ID |
| Source address (Node Manager) | 3 | MAC address of OTS Node Manager |
| Protocol key | 1 | hex BEEF |
| SW process tag | 2 | sequence number |
| Instruction | 1 | see Table 10 |
| Address | 1 | LCM register, or other valid on-pack location |
| Data size | 1 | number of 16-bit data words to follow |
| Data | n | payload |
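The common framing shared by the packet formats in Tables 7-11 (MAC addresses, the hex BEEF protocol key, a process tag, an instruction code, and a counted payload) maps naturally onto a packed structure. The sketch below follows the Response and Node Manager-originated layouts (Tables 9 and 11, which carry an Address word); the Discovery and Attention packets replace that word with the pack type field. The struct itself is an illustration of the tables, not the actual implementation:

```c
#include <stdint.h>

#define D_PROTOCOL_KEY 0xBEEF  /* protocol key common to all packets */

/* "D" message layout carried in one raw Ethernet frame;
 * field sizes are in 16-bit words, per Tables 9 and 11. */
#pragma pack(push, 1)
typedef struct {
    uint16_t dest_addr[3];      /* destination MAC (3 words)            */
    uint16_t src_addr[3];       /* source MAC (3 words)                 */
    uint16_t protocol_key;      /* always 0xBEEF                        */
    uint16_t sw_process_tag[2]; /* sequence / retransmission tag        */
    uint16_t instruction;       /* e.g., 0x01 read, 0x0061 attention    */
    uint16_t address;           /* LCM register or on-pack location     */
    uint16_t data_size;         /* number of 16-bit payload words       */
    uint16_t data[];            /* payload (n words)                    */
} DMessage;
#pragma pack(pop)
```

Keeping the whole message inside one standard Ethernet frame payload, as stated above, means data_size is bounded so that no fragmentation or reassembly is ever needed.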
25.2 "S" Message Interface The "S" message interface ofthe Node Manager provides the application layer software with access to the information collected and aggregated at the "D" message interface. Information is available on the Core Embedded software side (control plane) ofthe "S" message interface by line card type and instance for both read and write access. An example of read access is "Get all monitored parameters for a particular line card instance." An example of write access is "Set all control parameters for a specific line card instance." Performance can be increased by not supporting Gets and Sets on individual parameters.
For example, these messages may register/deregister an application task for one or more alarms from all instances of a line card type, provide alarm notification, get all monitored parameters for a specific line card, or set all control parameters for a specific line card.
The "S" message interface is an abstraction layer: it abstracts away, from the application software's perspective, the details by which the lower-level Node Manager software collected and aggregated information. While providing an abstract interface, the "S" Message Interface still provides the application layer software with access to the aggregated information and control obtained from the hardware via the "D" Message Interface, and from the Node Manager state machines. Moreover, the "S" interface defines how the TL-1 craft interface is encoded/decoded by the Node Manager. The TL-1 craft interface definition describes the command and control capabilities that are available at the "S" interface. See section 23.3.4, entitled "Command Line Interface." The application software using the "S" Message interface may run as, e.g., one or more VxWorks tasks. The Core Embedded software may run as a separate VxWorks task also. To preserve the security afforded by the RTOS to independent tasks, the "S" Message Interface may be implemented using message queues, which insulates both sides ofthe interface from a hung or rebooting task on the opposite side ofthe interface. As for the LCM, this division ofthe Node Manager software into independent tasks is possible because the Node Manager is powerful enough to run a multi-tasking operating system. Therefore, the present inventive control architecture utilizes the presence of a multi-tasking operating system at all three of its levels: LCM, Node
Manager and NMS. This multi-tasking ability has been exploited at all levels of control to produce a system that is more modularized, and therefore more reliable, than prior approaches to optical network control.
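The queue-based insulation described above can be sketched as a request that is posted to the Core Embedded task's queue, with the caller then blocking (with a bound) on its own reply queue; the timeout is what protects the caller from a hung or rebooting task on the far side. All names, layouts, and the timeout value are assumptions:

```c
#include <mqueue.h>
#include <time.h>

/* Hypothetical "S" interface request: whole-card gets/sets only,
 * since per-parameter Gets and Sets are not supported. */
typedef struct {
    int  op;            /* e.g., GET_ALL_MONITORED, SET_ALL_CONTROL */
    int  card_type;
    int  card_instance;
    char reply_q[32];   /* caller's reply queue name */
} SRequest;

/* Send a request and wait (bounded) for the reply. */
int s_request(mqd_t core_q, mqd_t my_q, const SRequest *req,
              char *reply, size_t reply_len)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 2;  /* assumed 2-second bound */

    if (mq_send(core_q, (const char *)req, sizeof *req, 0) == -1)
        return -1;
    return (int)mq_timedreceive(my_q, reply, reply_len, NULL, &deadline);
}
```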
26. Example OTS Embodiment

Summary information for an example embodiment of the OTS is as follows:

Optical Specs:
Wavelength capacity: 64 wavelength channels
Fiber wavelength density: 8 wavelengths
Data rate: Totally transparent
Physical topology: Point-to-Point
Lightpath topology: Point-to-Point
Wavelength spacing: 200 GHz (ITU grid)
Optical bandwidth (channels): C and L bands
Wavelength protection: Selectable on a per-lightpath basis

Optical Modules:
(i) Optical transport modules
(ii) Optical switching module
(iii) Optical add/drop module
(iv) Optical performance monitoring module

Access Line Interface Modules:
Optical line interface cards: GbE, OC-n/STM-n
(i) 16-port (8 input & 8 output) OC-12 line card
(ii) 16-port (8 input & 8 output) OC-48 line card
(iii) 16-port (8 input & 8 output) Gigabit Ethernet line card
(iv) 4-port (2 input & 2 output) OC-192 line card

Optical Signaling Module:
4 ports using Ethernet signaling
Supports IP, Ethernet packets

Node Manager:
Processors: MPC8260, MPC755
SDRAM: 256 MB, upgradable to 512 MB
Flash Memory: 64 Mbytes
Ethernet Port: 100BaseT with auto-sensing
Ethernet Hubs: OEM assembly, 10 ports, 1 per shelf
Serial Port: 1 EIA-232-D console port
Software Upgrades: Via remote download

Line Card Manager:
Processor: MPC8260
SDRAM: 64 MB, upgradable to 128 MB
Flash Memory: 16 Mbytes
Ethernet Port: 100BaseT with auto-sensing
Serial Port: 1 EIA-232-D console port
Software Upgrades: Via local download

Backplanes:
Optical backplane
Electrical backplane
Ethernet LAN interconnecting the Node Manager and LCMs

Chassis:
The OTS system's chassis is designed in a modular fashion for a high-density circuit pack. Two stacks of sub-rack systems may be used.
27. Detailed Description of Protection and Restoration Features
This section is organized as follows:
27.1 Overview of the basic types of protection and restoration mechanisms provided by the novel all-optical transport switching system (OTS).
27.2 Network managed path protection using line card bridging.
27.3 Network managed path protection using switch fabric bridging.
27.1 Overview Of Protection And Restoration Features

Protection and restoration features (as discussed herein) are categorized in terms of network managed: (i) path protection; (ii) line protection; and (iii) path restoration. Path protection is the provisioning of pre-determined redundant end-to-end paths that can be switched into operation whenever a problem is detected with a working end-to-end path. Line protection is the provisioning of pre-determined links that can be utilized to dynamically re-route traffic around a failed link. Path restoration is the re-signaling or re-institution of a connection which ceases to function. These concepts are elaborated on below.

27.1.1 Network Managed Protection
In network managed protection and restoration, the network itself takes care of provisioning and switching to an alternate optical path. The way this is accomplished can be better understood with reference to FIGs. 1-4, as described above, as well as FIG. 50, which shows in abstract terms the preferred control hierarchy of an OTS network. The hierarchy comprises three tiers or levels: a network management system (NMS) 280, Node Managers (NM) 250, and line card managers (LCM) 410. The LCMs 410 control and monitor local resources, such as lasers and optical light paths, on line cards and the switch fabric. Generally speaking, one LCM 410 controls one line card (including switch fabric). There are typically multiple LCMs per OTS, and more than one card of each type may be provided, in which case one LCM is provided to manage each such card. Each LCM performs basic line-card monitoring functions and communicates the results of the monitoring to its respective NM 250. The LCMs 410 also receive instructions from the NMs 250 to control line card resources such as input or output signal multiplexers. Each NM 250 interfaces with all the LCMs 410 within a given OTS and is responsible for node-level functions such as signaling and routing. For example, whenever a light path, trail, or circuit is created between OTSs, the NM 250 of each OTS performs the necessary signaling, routing, and switch configuration to set up a link involving that OTS along the trail. As such, the NM 250 may send configuration instructions to a particular optical access ingress card, switch fabric, and a particular optical access egress card in order to establish the required optical cross-connection. The NM 250 also receives fault messages from the LCMs 410 under its supervision so that alarm conditions can be detected, isolated, and reported to the NMS 280.

27.1.1.1 Path Protection
There are at least two types of path protection mechanisms: 1+1 protection and 1:1 (or 1:N) protection. The "+" refers to a redundant backup or alternate path, which carries a duplicate of the protected traffic. The ":" refers to a reserved path which may carry no traffic, or low-priority traffic that may be pre-empted by the protected traffic. These concepts are elaborated on below.
With 1+1 path protection, an ingress OTS splits the bearer channel or user data signal into two optical signals which are transmitted over two paths, referred to as the working path and protection path, to the destination endpoint. At the egress OTS, the working path signal is initially forwarded to the user or external network access device. However, if the working path signal degrades below a specified threshold, the protection path signal is forwarded to the user or external network access device instead. This approach relies on the signaling and routing capabilities of the OTS to establish disjoint paths, and on the LCMs to monitor the received signal and effect switchover, as described in greater detail below. After switchover has occurred, the NM and NMS can isolate the failure and initiate repair actions. Since the LCM on the egress switch controls the switchover, recovery can be performed well within 50 ms, which is considerably faster than standard SONET performance requirements.
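Because the 1+1 switchover is a purely local decision by the egress LCM, with no inter-switch signaling, it can complete within milliseconds. A sketch of that local decision; the helper functions and threshold are assumptions:

```c
/* Assumed helpers: read the working-path signal quality from the
 * line card, program its input selector, and report a fault upward. */
extern int  framer_read_signal_quality(void);
extern void framer_select_input(int use_protection);
extern void nm_report_fault(const char *msg);

#define QUALITY_THRESHOLD 100  /* hypothetical units */

/* Periodic check run by the egress LCM: switch to the protection
 * path locally, then notify the Node Manager/NMS to trigger repair. */
void lcm_1plus1_check(void)
{
    if (framer_read_signal_quality() < QUALITY_THRESHOLD) {
        framer_select_input(1);  /* local switchover: no signaling */
        nm_report_fault("working path degraded; switched to protection");
    }
}
```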
1:1 path protection is supported using the same hardware features but different control software. With 1:1 protection, two lightpaths are set up, with one path supporting high-priority data traffic and the other supporting lower-priority, pre-emptable traffic. When a failure occurs on the lightpath supporting the high-priority traffic, the low-priority traffic is pre-empted and the high-priority traffic is re-routed over the low-priority lightpath. While 1:1 protection provides the capability to use the protection path for low-priority traffic, it takes longer to switch over, since both the ingress and egress OTSs are involved and the NMS must co-ordinate the switchover. However, it is expected that service can be restored within one second.
In one implementation, network managed path protection is carried out through bridging between line cards. This implementation is shown for line cards supporting OC-n devices. In the alternative, network managed path protection may be implemented using the bridging capability of the switch fabric. This implementation is shown for line cards supporting Gigabit Ethernet (GbE).
27.1.1.2 Line Protection
Line protection generally reduces bandwidth requirements, since protection paths may be shared among links. Thus, for example, links A and B may share part of the same protection path, since it is very unlikely both links will fail at the same time. However, this comes at the expense of less rapid recovery than 1+1 protection.
With this feature, protection spans (i.e., sets of links, not end-to-end paths) are preferably stored for recovery of all failed links and switches. Since the backup paths are pre-stored, the recovery time is expected to be less than one second. However, the recovery time is likely to exceed the recovery time provided by 1+1 path protection, because signaling between switches is required to perform the switchover.
Note that when line protection is used, all λ channels carried by a failed link are restored. This may be a disadvantage for some service providers who wish to provide different levels of service at different prices. Therefore, such service providers may prefer to use the path protection feature rather than the line protection feature. Alternatively, the service provider could implement a network with a mix of protected and unprotected links. Lightpath requests requiring protection could be routed over the protected links while lightpaths not needing protection would be routed over the unprotected links.
In addition to the foregoing, the OTS may also include line backup capability. For example, a service provider may use a diversely routed four-fiber interface between switches. It is unlikely that both fiber pairs will fail at the same time when diversely routed. When there is a fiber cut in the working fiber pair, the OTS will automatically switch to the protection pair of fibers.
27.1.1.3 Path Restoration
When longer recovery times are acceptable to the user, e.g., up to several minutes, then path-based restoration schemes can be used for re-instituting signaled (or switched) lightpaths. With this feature, the OTS switch, possibly with the help of the NMS, will re-route switched lightpaths when failures occur. Depending upon the number of paths affected by a failure, recovery could take several minutes. Provisioned (or permanent) lightpaths can be restored administratively under operator control or using the services of an NMS, which stores provisioned light paths.
27.1.1.4 Summary
Table 12 summarizes the key features of the foregoing protection mechanisms.
Table 12
[Table 12 is presented as an image in the original publication; its contents are not reproduced in the text.]
27.2 Network Managed Path Protection

This section addresses the path protection mechanism provided by the optical network for OC-n connections. It addresses both the 1+1 and 1:1 path protection features.
27.2.1 Concept
27.2.1.1 1+1 Path Protection

With 1+1 path protection, the user or bearer channel traffic is split or duplicated and transmitted over two disjoint paths, referred to as the working and protection paths. At the destination, when the working path degrades below a specified threshold, the protection path signal is delivered to the user. FIG. 51 depicts this concept, with the traffic between SONET systems (network access devices) 5140A and 5140B being transmitted over a working path C-D-E (solid line 5110) and over a protection path C-B-E (dashed line 5120). The SONET systems 5140A and 5140B are examples of network access devices, since they allow an access network, such as one using SONET, to access the optical network via an OTS.
As shown in FIG. 52, this feature employs a single bi-directional OC-n interface 5220 between the OTS 5225 and the user or external network access equipment 5140, using two ALI cards 5210 and 5212 respectively referred to as the primary ("P") and secondary ("S") cards. Note that FIG. 52 shows the configuration when the OTS functions as an ingress node (for traffic flowing from right to left in the drawing) as well as an egress node (for traffic flowing from left to right in the drawing). As such, both OTS switches C and E (FIG. 51) are set up in this way.
The primary and secondary ALI cards 5210 and 5212 are interconnected via an inter-card electrical channel 5214, which allows the working and protection communication paths to bridge the two ALI cards 5210 and 5212, as described in greater detail below. In 1+1 path protection, the Node Manager of the OTS responsible for establishing the protected path, e.g., Switch C, instructs the line card manager of the primary ALI card 5210 to duplicate particular optical signals to the secondary ALI card 5212, and instructs the secondary ALI card 5212 to transmit these duplicate optical signals over the network. The ALI cards 5210 and 5212 are connected to separate OA cards (not shown in FIG. 52) and should use different λ's so that separate 8x8 MEMs in the switch fabric are used. Also, when the working and protection paths are established, the routing function of the OTS provides disjoint paths between endpoints 5140A and 5140B for the working and protection traffic flows.
At the path tail or egress OTS, e.g., Switch E (FIG. 51), the primary ALI card receives duplicate optical signals transmitted over the working path (which includes the primary ALI card 5210 on the ingress OTS) and over the protection path (which includes the secondary ALI card 5212 on the ingress OTS, the secondary ALI card on the egress OTS, and the inter-card channel between the secondary and primary ALI cards on the egress OTS). The line card manager associated with the primary ALI card on the egress OTS switches to the protection path when the optical signal associated with the working path has degraded below a threshold, as discussed in greater detail below. Although two ALI cards 5210 and 5212 are required, the network access equipment 5140A or 5140B (up to 4 OC-12 devices) is connected only to the primary ALI card 5210 on both the ingress and egress OTSs. However, since each ALI supports two OC-48 lightpaths, a pair of ALI cards can provide protection switching to two sets of OC-12 user equipment.
27.2.1.2 1:1 Path Protection
With 1:1 protection, as shown in FIG. 53, the OTS supports both a high-priority user traffic flow 5310 (shown by a solid line) and a low-priority user traffic flow 5312 (shown by a dashed line). During normal operation, both the high- and low-priority flows 5310 and 5312 are supported. However, when a failure occurs affecting the high-priority flow 5310, the high-priority flow is re-routed using an ALI card associated with the low-priority flow, thereby pre-empting the low-priority flow. As shown in FIG. 53, the high- and low-priority flows have the same endpoints 5140A and 5140B. This configuration may be relaxed in alternative embodiments, at the expense of switch software complexity.
As shown in FIG. 54, this feature employs two bi-directional OC-n interfaces 5220 between the OTS 5225 and the network access equipment/devices 5140-1 and 5140-2, using two ALI cards 5210 and 5212 referred to as the primary and secondary, respectively. The network access devices 5140-1 with high-priority traffic are connected to the primary ALI 5210, while the network access devices 5140-2 with low-priority traffic are connected to the secondary ALI 5212. The primary and secondary ALI cards 5210 and 5212 are interconnected via the inter-card electrical channel 5214, which allows traffic flows that ingress at the primary ALI card 5210 to be bridged onto the secondary ALI card 5212. In 1:1 path protection, the Node Manager within the ingress OTS (when traffic flows from right to left in FIG. 54) instructs the primary ALI card 5210 to duplicate select traffic flows for transmission through the inter-card electrical channel 5214. However, the Node Manager does not instruct the secondary ALI card 5212 to transmit these duplicate flows over the low-priority path until such time as a failure is detected in the high-priority path. Similarly, on the egress OTS (when traffic flows from left to right in FIG. 54), the Node Manager thereof instructs the primary ALI card to select the duplicate traffic flows (arriving from the secondary ALI card) for delivery to the user equipment when the failure is detected in the high-priority path.

27.2.2 Failure Scenario
27.2.2.1 1+1 Protection
The primary and secondary ALI line card managers continuously monitor the status of their ALI cards. When a loss of signal (LOS) or other indication of a poor-quality signal has been detected on the OC-48 optical access egress port 5240 on the primary ALI 5210 (FIG. 52), the line card manager thereof automatically switches the output of the secondary ALI card 5212 to the user devices. A fault or alarm is then sent to the Node Manager and NMS in order to trigger an alarm and initiate repair. Since the protection switching is initiated only by the egress line card manager, the switchover is expected to occur in less than 5 ms.
As the primary and secondary paths are routed over a disjoint set of nodes and links, the propagation delay will be different for each path. Thus, when the switchover occurs, there may be a loss of data, or redundant data passed to the user device. In order to minimize this disruption, the primary and secondary paths are preferably set up using constraint-based routing in order to set a limit on the permitted propagation delay.
27.2.2.2 1:1 Protection
Failure detection for 1:1 path protection is similar to that of 1+1 protection, but involves the services of the Node Managers on the ingress and egress OTS switches, as well as the NMS. When a loss of signal or other indication of a poor-quality signal is detected on the OC-48 egress port 5240 of the primary ALI card 5210 (FIG. 54), the line card manager thereof sends a fault message to the local Node Manager. The local Node Manager signals the line card manager of the secondary ALI card 5212 to select, and transmit over the network, the traffic flows received from the inter-card channel 5214 rather than from the user equipment 5140-2 (see FIG. 54). The same actions are expected to occur on the egress switch. In addition, the Node Managers associated with the ingress and egress switches will also send an alarm to the NMS, which will signal the alarm to the Node Managers of the ingress and egress OTSs.
Because of the signaling required for endpoint co-ordination, there will likely be a longer lag for 1:1 protection between the receipt of the alarm by the Node Manager and the restoration of service to the high-priority flow.
27.2.3 ALI Card Architecture
FIG. 55 shows the architecture of an OC-48 ALI card 5510 in greater detail. The OC-48 ALI card multiplexes/de-multiplexes eight OC-12 signals 5512 into/from two OC-48 signals 5514. Note that other versions of the optical line card are also possible, such as the OC-192 ALI card described above, and have similar, although not identical, architectures.
The line card manager 5516 is a daughterboard of the ALI card which provides a processor for executing the line card manager software, as described in greater detail above. The OC-48 ALI card 5510 converts the physical characteristics of optical signals used by external network access equipment into optical signals supported by the OTS network. This necessitates conversion of optical signals into the electrical domain and vice versa. Transceivers (TRx) 5515 and 5517 are used for this purpose. Moreover, the OC-n ALI cards may provide the SONET framers 5520 using a pair of AMCC SONET Missouri™ microchips 5520 (part no. S4802). The OC-48 SerDes provides serializing and deserializing. With serializing, one packet is fully transmitted prior to the next being sent, such that cell interleaving is precluded.
The datapath for 1+1 or 1:1 protection is illustrated in FIG. 56, which shows the datapath on ingress, and in FIG. 57, which shows the datapath on egress. (The external OC-12 fiber is not shown.) In 1+1 path protection (FIG. 56), the OC-12 data 5512 is connected to the Rx ports 5610P of a SONET framing chip 5520P on the primary ("P") ALI card 5510P. The OC-12 input ports of the secondary ("S") ALI card 5510S are not connected to the network access equipment, and hence the Rx ports 5610S on the SONET framing chip 5520S receive or carry no data. In 1:1 path protection, the OC-12 data 5512 associated with the high-priority path 5310 (FIG. 53) is connected to the Rx ports 5610P of the primary SONET framing chip 5520P. The OC-12 data associated with the low-priority path 5312 is connected to the Rx ports 5610S of the secondary SONET framing chip 5520S.
For transmission over the optical network, data comes in on the four OC-12 Rx ports 5610P of the primary ALI card 5510P (FIG. 56). The primary SONET framing chip 5520P interleaves the data into an OC-48 signal 5514P and sends it out an OC-48 Tx interface 5620P. Also, the primary SONET framing chip 5520P forwards the OC-12 data out of a protection port 5630 to the secondary SONET framing chip 5520S. The data links are shown by horizontal lines connecting the two framing chips 5520P and 5520S. Each bundle of four horizontal lines corresponds to 16 data, 2 control, and 1 differential clock signals routed over the inter-card electrical channel. The inter-card electrical channel is preferably carried over the OTS electrical backplane (schematically represented by dashed line 5640). In particular, the OTS electrical backplane is preferably constructed to include a parallel bus between selected pairs of adjacent slots in the OTS chassis or bay that are intended to house ALI cards, thereby facilitating the inter-card electrical channel used for protection switching applications.
Each SONET framing chip 5520 has a data duplication capability and a data input selector 5650 which is utilized to select the appropriate OC-12 input 5512. For 1+1 protection, the primary SONET framing chip 5520P selects the left inputs to the selectors 5650P whereas the secondary SONET framing chip 5520S selects the right inputs to the selectors 5650S. This way the data transmitted on the primary and secondary OC-48 signals 5514P and 5514S is identical. For 1:1 protection the primary SONET framing chip 5520P selects the left inputs to the selectors 5650P and the secondary SONET framing chip 5520S also selects the left inputs to the selectors 5650S, thereby enabling both high priority and low priority traffic to be transmitted over the optical network. However, when the 1:1 protection switching is actuated the secondary SONET framing chip 5520S is instructed to select the right inputs to the selectors 5650S, thereby activating the switchover.

In the egress or demultiplex direction as shown in FIG. 57, the primary and secondary OC-48 signals 5514P and 5514S are received and demultiplexed into four OC-12 signals 5512 by the SONET framing chips. In 1+1 protection, when the line card manager 5516 detects that the quality of the primary OC-48 signal 5514P falls below a certain threshold, the right inputs of the selectors 5750P are chosen. The behavior of the selectors 5650P, 5650S, 5750P and 5750S is programmed via a CPU interface 5530 (FIG. 55) available on the SONET framing chips 5520; the line card manager 5516 can thus control the framing chips. The quality of the OC-48 signals 5514 can be read via the CPU interface 5530 as well. The software interface between the line card manager and the rest of the ALI card is implemented via the SPI serial bus 5532 connecting the LCM 5516 and a control FPGA 5534 (see more particularly Section 4 and FIG. 6). The control FPGA 5534 actually interfaces to all the ICs on the card.
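By way of illustration only, the selector decision table just described can be sketched in software as follows (Java is used here and in the sketches below because the preferred NMS embodiment described later is written in Java). All names are hypothetical; writeMuxSelect() merely stands in for a write to the selector control registers reached through the CPU interface 5530.

// Illustrative sketch only - not the actual LCM software.
// Input.LEFT selects the card's own OC-12 inputs; Input.RIGHT selects
// the data arriving over the inter-card protection port.
public class FramerSelectorControl {
    public enum Input { LEFT, RIGHT }
    public enum Mode { ONE_PLUS_ONE, ONE_FOR_ONE }

    /** Hypothetical abstraction over the selector registers behind the CPU interface. */
    public interface SonetFramerRegisters { void writeMuxSelect(Input in); }

    private final SonetFramerRegisters regs;

    public FramerSelectorControl(SonetFramerRegisters regs) { this.regs = regs; }

    /** Program the selectors for steady-state operation. */
    public void configure(Mode mode, boolean isPrimary) {
        if (mode == Mode.ONE_PLUS_ONE) {
            // Primary transmits its own inputs; the secondary mirrors the
            // primary's data received over the protection port.
            regs.writeMuxSelect(isPrimary ? Input.LEFT : Input.RIGHT);
        } else {
            // 1:1 - both cards initially carry their own traffic flows.
            regs.writeMuxSelect(Input.LEFT);
        }
    }

    /** Actuate a 1:1 switchover on the secondary card. */
    public void actuateOneForOneSwitchover() {
        regs.writeMuxSelect(Input.RIGHT); // pre-empts the low priority traffic
    }
}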
All of the registers of the SONET framing chips are readable and/or writeable via the CPU interface 5530. To monitor the state of the OC-12 and OC-48 inputs 5512 and 5514, the SONET framing chip provides LOC (loss of clock), LOS (loss of signal), OOF (out of frame), and LOF (loss of frame) registers. Whenever appropriate the SONET framing chip maintains counters or state change indication registers for the above conditions. The chip can also be programmed to generate an alarm output based on a predefined error condition. The input signal structure of the SONET framing chip 5520 is defined via a CONFIG register. The selector/cross-connect functionality of the chip is controlled via a MUXSEL register. The chip will send an alarm signal if an LAIS_GEN register is set. To select an 8 kHz reference clock derived from the Rx interfaces of the chip, REF_CLK_SEL and REF_CLK_FREQ registers are provided. The above registers provide the information about the quality of the input data and control the data flow needed to implement APS (automatic protection switching).
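To illustrate how such registers support APS, the following minimal sketch shows a line card manager checking the fault-indication registers and actuating a switchover. The interfaces, the polling approach, and the single degraded/non-degraded distinction are assumptions made only for the sketch; the actual chip exposes its own register map and can raise a hardware alarm output instead of being polled.

// Illustrative APS monitor sketch; register semantics are assumed.
public class ApsMonitor implements Runnable {
    /** Hypothetical view of the framer's fault-indication registers. */
    public interface FramerStatus {
        boolean lossOfSignal();  // LOS
        boolean lossOfFrame();   // LOF
        boolean outOfFrame();    // OOF
        boolean lossOfClock();   // LOC
    }
    public interface SwitchoverAction { void actuate(); }

    private final FramerStatus status;
    private final SwitchoverAction switchover;
    private boolean switched = false;

    public ApsMonitor(FramerStatus status, SwitchoverAction switchover) {
        this.status = status;
        this.switchover = switchover;
    }

    /** Intended to be run periodically by the LCM. */
    @Override public void run() {
        boolean degraded = status.lossOfSignal() || status.lossOfFrame()
                || status.outOfFrame() || status.lossOfClock();
        if (degraded && !switched) {
            switchover.actuate(); // e.g., reprogram the selectors as above
            switched = true;
        }
    }
}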
27.3 Path Protection Using Switching Fabric Bridging
27.3.1 1+1 Path Protection

In addition to protection switching made available through the bridging features of the ALI line cards, the OTS can also provide protection switching through bridging in the switch fabric. FIG. 58 depicts the concept for a uni-directional flow from A to B. At ingress switch 5810, the ingress switch fabric 5815A has the capability to bridge an OA_In channel 5834A onto two TP_eg cards 5832A and 5832B, thus creating a working flow and a protection flow 5816 within the optical network to the destination. These flows take disjoint paths through the optical network such that a single failure does not affect both of them. At the destination, the Node Manager on the egress switch 5820 monitors the received signal on each path. When a loss of signal is detected on the working path 5814, the Node Manager has the protection TP_in card 5832D cross-connected to the OA_Eg card 5834B (the cross-connect is denoted by dashed line 5838).
Since both flows use the same ALI and OA cards, this feature is not as robust as the 1+1 protection feature described in Section 27.2. However, re-using the same ALI and OA cards reduces the equipment requirements.
27.3.2 1:1 Path Protection

1:1 path protection uses the same concept shown in FIG. 58. However, the switch fabric 5815A does not split the OA_In channel 5834A until after a fault is detected on the working path 5814. At that point the switch fabric 5815A bridges the OA_In channel 5834A onto the TP_eg card 5832B, thereby pre-empting any low priority traffic carried over the protection path 5816. The protection TP_in card 5832D is also cross-connected to the OA_Eg card 5834B.
28. Self-Healing Hierarchical NMS
FIG. 59A shows the logical software architecture of a reference hierarchical network management system (NMS) 5910 which comprises multiple NMS managers (generically denoted by reference no. 5912, with specific instances at a given level being given an alphabetic suffix from "A" to "C"). Each NMS manager 5912 is responsible for administering or supervising various portions or aggregations of a communications network 5914. The NMS managers and nodes in network 5914 communicate with one another through a traffic management messaging network, not shown, which may be in-band or out-of-band relative to the bearer traffic.
The NMS managers 5912 are logically arranged in a tree structure, thus forming a hierarchy comprising a plurality of levels. At each level other than the bottom or leaf level an NMS manager 5912 administers or supervises one or more dependent or child NMS managers. Similarly, at each level other than the top or root level each NMS manager has a parent or supervising NMS manager. There may be none, one or more intermediate levels in the hierarchy (only one intermediate level is shown). At the bottom-most or leaf level, the NMS managers 5912C are responsible for supervising distinct groups of network nodes which are divided into logical sub-networks such as sub-networks 1-4 shown in FIG. 59A. Note that the root NMS manager 5912A has "n" children, denoted M1.1 to M1.n, which are situated at the illustrated intermediate level of the hierarchy. Likewise, each intermediate-level NMS manager 5912B has "n" children, such as M1.1.1 to M1.1.n for M1.1. Each of the "n" values shown may, in fact, represent a different numeric value.
At the root level the NMS manager 5912A supervises an aggregation of all nodes in network 5914. The main advantage of this structure is that it provides a distributed and scalable approach to network management. In particular, because each NMS manager communicates with its local family group, the communications complexity will be less than the case where each NMS manager communicates with every other manager. In the illustrated embodiment each NMS manager performs similar functions such as configuration management, connection management, topology management, fault management, and performance management. However the data objects or events which each NMS manager processes or reacts to will differ depending on its position or level in the hierarchy, which denotes the functional role the manager is expected to carry out. This is because NMS managers summarize or aggregate state information up the hierarchy in order to reduce the processing load on the NMS managers in the upper echelons of the hierarchy. For instance, NMS manager M1.1.1 may receive multiple "cross-connect up" event messages from multiple nodes or exchanges within sub-network 1. Assuming the cross-connects define a path spanning sub-network 1, M1.1.1 aggregates such connection state information and transmits a "sub-network connection" event up to its parent manager M1.1. FIG. 59A should therefore be understood to represent a role/responsibility hierarchy.
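A minimal sketch of this upward aggregation, assuming hypothetical names and a path whose constituent hops are known in advance, is as follows:

import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: a leaf-level manager counts "cross-connect up"
// events and reports one summarized event to its parent once the
// cross-connects span the sub-network.
public class LeafAggregator {
    /** Hypothetical link to the parent NMS manager. */
    public interface ParentLink { void send(String summarizedEvent); }

    private final Set<String> upCrossConnects = new HashSet<>();
    private final Set<String> expectedHops; // hops composing the path
    private final ParentLink parent;

    public LeafAggregator(Set<String> expectedHops, ParentLink parent) {
        this.expectedHops = expectedHops;
        this.parent = parent;
    }

    /** Called for each "cross-connect up" event from a node in the sub-network. */
    public void onCrossConnectUp(String hopId) {
        upCrossConnects.add(hopId);
        if (upCrossConnects.containsAll(expectedHops)) {
            parent.send("sub-network connection up"); // one event, not one per hop
        }
    }
}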
The NMS managers 5912 can be implemented in a variety of ways. Since the NMS managers at different levels of the hierarchy carry out different operating tasks, the program or software code for managers at different levels need not be identical. However, managers situated on the same level of the hierarchy provide the same functionality and so are preferably identical to one another. The term "Segmented NMS" is used herein to refer to an NMS manager implemented in the foregoing manner. However, it is preferable to implement every NMS manager, irrespective of its level in the hierarchy, using one software program or code which provides the functionality required to operate at every position and level in the responsibility hierarchy. This eliminates the need to deal with, update and manage multiple bodies of code. The term "Holistic NMS" is employed to refer to an NMS manager implemented in this manner. In such an implementation, each instance of the Holistic NMS has to "know" how to function, and this is preferably carried out by associating each Holistic NMS instance with a role indicator which specifies the role/responsibility it is expected to provide in terms of its logical position and level within the hierarchy. Further details concerning how the role indicator may be initiated are discussed below. Note also that FIG. 59A depicts a software architecture, irrespective of the underlying hardware platforms. If desired, each NMS manager (whether implemented as a Holistic NMS or Segmented NMS) can execute on a physically distinct hardware platform. This provides the greatest fault-tolerance capability but is also the most expensive solution. Alternatively, one or more NMS manager instances (i.e., software processes or execution threads) can execute on a common hardware platform. For example, FIG. 59B shows NMS managers M1.1.1, M1.1, and M1 executing on hardware platform 5918A, NMS manager M1.1.2 executing on hardware platform 5918B, NMS managers M1.2.1 and M1.2 executing on hardware platform 5918C, and NMS manager M1.2.2 executing on hardware platform 5918D. It should also be appreciated that a single instance of an NMS manager can potentially assume multiple roles or positions within the hierarchy. An example of this is shown in FIG. 59C where a Holistic NMS 5916A, which provides multi-level functionality, assumes the dual roles of M1.1.1 and M1.1. (In the degenerate case, one instance of a Holistic NMS can theoretically assume the role of all NMS managers within the hierarchy, but as will be seen this would defeat the purpose of the invention and so is not recommended.)
However implemented, the role an NMS manager is expected to fulfill can be established or initiated using a variety of schemes, including configuration and self-discovery. In the configuration scheme, such information can be hard-coded, or the operator can be prompted for it through a human interface as known in the art. In this case the root NMS manager can, for example, message all the other managers with their role indication.
In a self-discovery scheme, each NMS manager can be associated with an IP network address that implies the manager's role in the hierarchy. For example, network address x.y.z1 implies that the manager is in the third level of the hierarchy. In order to determine its relative position, the manager sends out "hello" messages to all other NMS elements, which return their network addresses. Based on the responses, the just-activated manager could determine, for example, that an NMS manager associated with address x.y.z2 is a child of the same parent, i.e., a sibling.
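One possible address-based role inference is sketched below, assuming a simple dotted role notation such as "1.1.2" standing in for the "x.y.z" addresses of the text. The class is illustrative only; a production NMS would combine real IP addressing with the "hello" exchange described above.

// Illustrative only: role inference from a dotted address.
public final class RoleAddress {
    private final String address; // e.g. "1.1.2"

    public RoleAddress(String address) { this.address = address; }

    /** The hierarchy level is implied by the number of address components. */
    public int level() { return address.split("\\.").length; }

    /** Two managers are siblings if they share every component but the last. */
    public boolean isSiblingOf(RoleAddress other) {
        return !address.equals(other.address)
                && parentOf(address).equals(parentOf(other.address));
    }

    private static String parentOf(String a) {
        int i = a.lastIndexOf('.');
        return (i < 0) ? "" : a.substring(0, i);
    }
}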
The NMS managers which are typically first activated are the leaf-level NMS managers. After the initial discovery process is completed the NMS managers will be able to determine who their siblings are. For example, in FIG. 59B, NMS manager M1.1.1 can determine that it is a sibling of M1.1.2, and M1.2.1 can determine that it is a sibling of M1.2.2. The leaf-level NMS managers can then spawn or launch the code of parent NMS managers (as shown in FIG. 59B) or assume their roles (as shown in FIG. 59C), as needed, in order to complete the hierarchy. (The former process is applicable for Segmented NMSs while both processes are applicable for Holistic NMSs.)
For example, in FIG. 59B M1.1.1 and M1.1.2 can exchange a set of messages to elect which one of them should spawn the parent M1.1. Different election schemes are presented below. In FIG. 59B, M1.1.1 is elected and spawns M1.1. Similarly, M1.2.1 spawns M1.2. The discovery and election process is recursively carried out until the root NMS manager M1 is initiated.
Once each NMS manager has been initiated and/or its role has been determined, NMS managers which are siblings communicate state information with one another, as shown in FIG. 59A, but do not directly communicate with NMS managers belonging to other sibling groups. However, as between siblings within the same group, only one of them has the responsibility for aggregating state information and passing it up to the parent NMS manager. This is possible because each NMS manager within a sibling group maintains state information for all the elements supervised by all its siblings. This can be accomplished in a variety of ways, including:
• archiving - each NMS manager periodically stores or archives state information in an external database accessible by its siblings;

• flooding - NMS managers communicate state information to their siblings directly through pre-defined messages; and

• event subscription - each NMS manager incorporates an event service to which its siblings can subscribe in order to receive notice of various events.

The OTS optical network described in greater detail above and below employs the event subscription technique as the primary state synchronization method, with archiving as a backup mechanism.
The alternative of every NMS manager communicating with its parent is also possible, but the former is preferred because it offers the potential to reduce network management traffic. For instance, if the hardware/software architecture of FIG. 59B is followed, communication between NMS managers and their parents is limited to local communication within the same hardware platform.
In the downward direction every NMS manager is able to communicate with its children, if any, or the network nodes. It should be appreciated that each NMS manager shown in the reference hierarchy of FIG. 59A is active in that it communicates pre-aggregated state information to its children. For example, consider a severely malfunctioning node, A, in sub-network 1. As the line cards of the node begin to fail, it will transmit many alarm messages about failed components to NMS manager M1.1.1. M1.1.1 correlates these alarms until it determines that node A is non-operational. M1.1.1 then generates a summarized alarm which indicates that "node A is non-operational". The summarized alarm is transmitted up the NMS hierarchy to M1, which in turn communicates the summarized alarm to its children, such as M1.n. In turn, M1.n communicates the alarm to all its children, M1.n.1 ... M1.n.n. In this manner, all NMS managers become aware of the problem in sub-network 1.
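A sketch of this correlation and two-way propagation, following the node A example, is given below. The failure-count threshold and the propagate callback are assumptions made only for the illustration; a real fault manager would apply richer correlation rules.

import java.util.HashMap;
import java.util.Map;

// Illustrative correlation of component alarms into one summarized alarm.
public class AlarmCorrelator {
    /** Hypothetical hook that carries an alarm up to the root and back down. */
    public interface Hierarchy { void propagate(String summarizedAlarm); }

    private static final int NODE_DOWN_THRESHOLD = 5; // assumed value
    private final Map<String, Integer> failuresPerNode = new HashMap<>();
    private final Hierarchy hierarchy;

    public AlarmCorrelator(Hierarchy hierarchy) { this.hierarchy = hierarchy; }

    /** Called once per failed-component alarm received from a node. */
    public void onComponentAlarm(String nodeId) {
        int n = failuresPerNode.merge(nodeId, 1, Integer::sum);
        if (n == NODE_DOWN_THRESHOLD) {
            // One summarized alarm replaces the many component alarms.
            hierarchy.propagate("node " + nodeId + " is non-operational");
        }
    }
}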
In order to determine if an NMS manager ceases to operate, a heartbeat process is preferably employed within each sibling group as the discovery mechanism. In this process, each NMS manager periodically transmits "hello" messages over the traffic management network to all of its siblings, and expects to receive a hello message from each sibling within a specified time period. This provides a k:k-1 discovery mechanism (k being the number of elements in a sibling group), meaning that every manager in a sibling group communicates its status to every other manager in the sibling group. The non-reception of a hello message when such a message is expected signifies that the NMS manager at the other end of the link has ceased to operate. In this event, the NMS manager that first discovers a non-operating manager alerts all of its siblings. In other words, the discovery of a non-responding NMS is flooded amongst the sibling group. Note that the discovery mechanism can alternatively be implemented through the use of sequenced "keep alive" messages, or through the use of explicit acknowledgements. In such cases the non-reception of a keep-alive message when such a message is expected, or the non-communication of an acknowledgement message, would signify that the NMS manager at the other end of the link has ceased to operate.
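The heartbeat mechanism may be sketched as follows; the period and timeout values and the message transport are assumptions made for the sketch only.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative k:k-1 heartbeat monitor for one member of a sibling group.
public class HeartbeatMonitor {
    /** Hypothetical transport to the rest of the sibling group. */
    public interface SiblingGroup {
        void sendHelloToAll();
        void floodDeathNotice(String deadSibling);
    }

    private static final long PERIOD_MS = 1_000, TIMEOUT_MS = 3_000; // assumed
    private final Map<String, Long> lastHello = new ConcurrentHashMap<>();
    private final SiblingGroup group;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    public HeartbeatMonitor(SiblingGroup group, Iterable<String> siblings) {
        this.group = group;
        long now = System.currentTimeMillis();
        siblings.forEach(s -> lastHello.put(s, now));
        timer.scheduleAtFixedRate(this::tick, PERIOD_MS, PERIOD_MS,
                TimeUnit.MILLISECONDS);
    }

    /** Invoked whenever a "hello" arrives from a sibling. */
    public void onHello(String sibling) {
        lastHello.put(sibling, System.currentTimeMillis());
    }

    private void tick() {
        group.sendHelloToAll();
        long now = System.currentTimeMillis();
        lastHello.forEach((sibling, seen) -> {
            if (now - seen > TIMEOUT_MS) {
                lastHello.remove(sibling);       // report each death once
                group.floodDeathNotice(sibling); // alert all siblings
            }
        });
    }
}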
When an NMS manager is deemed to be non-operational its siblings then undertake an election in order to determine which one of them should assume the responsibilities of the dead manager. Note also that if the dead NMS manager was the one that communicated with the parent NMS manager, then the newly elected NMS manager bears that responsibility as well. FIG. 59D shows an example where manager M1.1.1 dies. In this case, manager M1.1.2 assumes the responsibility for sub-network 1 previously managed by M1.1.1. M1.1.2 also assumes the responsibility for aggregating information to the parent NMS manager M1.1 since M1.1.1 previously had that responsibility. The NMS manager assuming responsibility for a non-operational sibling can do so using a "split" model or an "aggregated" model. For example, in the split model, M1.1.2 clones itself and spawns a new instantiation (i.e., new execution thread) of its software code on the same hardware platform. In the aggregated model, M1.1.2 itself assumes the role/responsibility of M1.1.1, thus modifying its role indicator. Both techniques are applicable whether M1.1.2 is implemented as a Holistic NMS or a Segmented NMS.
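The election itself, whose available ranking schemes are detailed in the next paragraph, can be sketched as follows for the administrative-weight scheme with an address-based tie-break; all names are illustrative.

import java.util.Comparator;
import java.util.List;

// Illustrative election: highest administrative weight wins, with the
// numerically lowest address breaking ties.
public class Election {
    public record Candidate(String address, int adminWeight) { }

    public static Candidate elect(List<Candidate> rankings) {
        return rankings.stream()
                .max(Comparator.comparingInt(Candidate::adminWeight)
                        .thenComparing(Candidate::address,
                                Comparator.reverseOrder()))
                .orElseThrow();
    }
}

In keeping with the flooding described below, each manager would evaluate elect() over its own ranking together with the rankings received from its siblings, and would assume the dead manager's responsibilities only if it is itself the winner.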
The election process is preferably carried out by having each NMS manager compute a ranking according to a predefined election scheme and flooding its siblings with such data. Each NMS manager will thus also receive ranking data from its siblings. Each NMS manager within a sibling group assumes that it is the winner unless it receives notice that one of its siblings has a higher rank. In the unlikely event of a tie, a predefined tie-breaking mechanism can be employed, such as determining the winner based on an IP address associated with each NMS manager. A variety of election schemes may be used for selecting a replacement manager or for self-discovery purposes as described above. Such schemes include, but are not limited to: (a) pre-configuration; (b) administrative weight; (c) load bearing capability; and (d) network size. The pre-configuration scheme basically sets out ahead of time which NMS manager will take over for a non-functioning manager. This could be implemented in the form of a pre-configured table. The administrative weight scheme assigns each manager an administrative weight based on the power or speed of its underlying hardware platform. The NMS manager having or associated with the highest (or lowest) weight wins. In the load bearing scheme each NMS manager assesses its own busyness, e.g., based on current or historical processor utilization, speed of execution capability and other such parameters, the particulars of which may vary widely from embodiment to embodiment. The NMS manager associated with the highest capability wins. Finally, the network size scheme simply declares the winner to be the NMS manager that supervises the "smallest" network, e.g., by the number of network elements under administration. A combination of these techniques can also be implemented.

29. Self-Healing Hierarchical NMS on the OTS Platform

An implementation of the generic self-healing NMS described in Section 28 is now presented for the OTS platform presented in Sections 1-26 above. As shown in FIG. 59E, an OTS network has a control hierarchy which comprises three tiers or levels: a Network Management System (NMS) 280, Node Managers (NM) 250, and Line Card Managers (LCM) 410. As shown in this drawing, each entity is a separate software process executing over a distinct hardware platform. The LCMs 410 control and monitor local resources, such as lasers and optical light paths, on line cards and the optical switch fabric. Generally speaking, there is one LCM 410 for each line card or optical switch fabric module. There are typically multiple line cards per OTS, and more than one card of each type may be provided. Each LCM communicates the results of its line card monitoring to its respective NM 250. The LCMs 410 also receive instructions from the NMs 250 to control local resources such as input or output signal multiplexers.
Each NM 250 interfaces with all the LCMs 410 within a given OTS and is responsible for switch level functions such as signaling, routing, and fault protection. For example, whenever a light path is created between OTSs, the NM 250 of each OTS performs the necessary signaling, routing and switch configuration to set up a cross-connect involving each OTS along the path. As such, the NM 250 may send configuration instructions, for example, to a particular optical access ingress card, optical switch fabric, and a particular transport egress card in order to establish a required optical cross-connection. The NM 250 also receives fault messages from the LCMs 410 under its supervision so that alarm conditions can be detected, isolated, and reported to the NMS 280. FIGs. 30, 31, 34 and the accompanying text in Sections 18, 19, 20 and 22 are focused on describing NMS functionality in the OTS network. In implementing the self-healing hierarchical NMS described generically above, the OTS system preferably implements:
• the hardware/software architecture shown in FIG. 59B;

• each NMS manager as a Holistic NMS;

• the self-discovery process described above, which works from the leaf-level NMS managers and proceeds upwards, for managerial role identification;

• the split (as opposed to aggregated) model described above for instances when one NMS manager has to replace a non-functioning manager; and

• an administrative weight election scheme with an address-based tie-breaking mechanism.
State information synchronization amongst NMS manager siblings is based on the principle of flooding using an event service. The general model of an event service is shown in FIG. 59F. In this model a software component 5920 (process or module) "publishes" an event to an Event Manager 5922. Software components 5924 "subscribe" to events and receive notice thereof. In particular, the Event Manager of the Node Manager is described in Section 23.2.2 and its FIG. 44. Events are organized by topics, and each topic can itself be comprised of a hierarchy of sub-topics, as shown for instance in FIG. 59G. For instance, the following topics may be defined as shown in Table A:
Table A

Topic                      Meaning                                           Interface

NM.connection.x-connect    any cross-connect event at the OTS, such as       between node elements and the
                           "cross-connect up" and "cross-connect down"       leaf-level NMS manager

NM.connection              any connection event at the OTS, such as
                           cross-connect events and protection
                           switching events

NMS.connection.link        any sub-network link event, such as "link up"     between a leaf-level NMS
                           and "link down"                                   manager and its parent

NMS.connection             any sub-network connection event
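A minimal publish/subscribe sketch consistent with this model follows; it is illustrative only, as the actual Event Manager and Event Service handle richer event objects and subscription management. Hierarchical topics are matched by prefix, so a subscriber to "NM.connection" also receives "NM.connection.x-connect" events.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Illustrative topic-based event service (cf. FIG. 59F and Table A).
public class EventManager {
    private final Map<String, List<Consumer<String>>> subscribers =
            new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic,
                t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void publish(String topic, String event) {
        subscribers.forEach((subscribedTopic, handlers) -> {
            // Deliver sub-topic events to parent-topic subscribers too.
            if (topic.equals(subscribedTopic)
                    || topic.startsWith(subscribedTopic + ".")) {
                handlers.forEach(h -> h.accept(event));
            }
        });
    }
}

For example, a subscription to "NM.connection" would receive an event published under "NM.connection.x-connect", consistent with the topic hierarchy of Table A.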
FIG. 59H shows the software architecture of each OTS switch (which comprises LCM software 4900 and NM software 3600) from the perspective of an event manager 3632 present within the NM. The low level software 3641 of the NM, which is situated between the "D" and "S" interfaces (see more particularly FIG. 36 and Section 23.1), passes events to the NM event manager 3632, which distributes events to other NM components 3612, 3614, 3615, 3618, 3631, 3633, 3634, and 3666 according to subscription. For example, suppose a new cross-connect is configured for a signaled light path. The NM receives a path "set up" message via the inter-node signaling network (described more particularly by FIG. 9 and Section 7). The message is processed by NNI signaling 3615, which requests the resource manager 3631 to allocate ports and possibly wavelengths on ingress and egress line cards. The resource manager 3631 then employs the "S" interface to instruct the low level drivers (e.g., OXC manager 3656 in FIG. 36) to interface with the line cards and switch fabric through the "D" interface to create the cross-connect. The low-level software 3641, utilizing the "S" interface, sends a "cross-connect up" event to the event manager 3632, which publishes the event to the relevant subscribers. These include NNI signaling 3615, which originated the request, and the NMS agent 3620.
The NMS agent 3620 on the NM analyzes events and forwards messages relating to configuration, connection, fault and performance to the corresponding managers associated with an NMS Instance (see FIG. 34). The NMS agent 3620 thus forms a part of the element management layer (3404) in the TMN model. The preferred software architecture of an NMS manager 5912C for OTS networks is shown in greater detail in FIG. 59I. A proxy agent 5960 is instantiated for each OTS/NM supervised by the NMS manager. The proxy agent 5960 is present because in the preferred embodiment the NMS is written in Java and the NM is written in another language, and so the proxy agent provides an interface with each OTS/NM 250. The proxy agent 5960 also collects and translates messages such as traps and alarms received from the corresponding NMS Agent 3620, converts them to events, and publishes them through an NMS Event Service 5965.
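The proxy agent's translation role can be sketched as follows, reusing the EventManager sketch above; the trap format and the topic mapping are assumptions made only for the illustration.

// Illustrative proxy agent: converts raw NM traps into published events.
public class ProxyAgent {
    private final EventManager eventService; // see the sketch above
    private final String otsId;

    public ProxyAgent(String otsId, EventManager eventService) {
        this.otsId = otsId;
        this.eventService = eventService;
    }

    /** Called for each raw trap/alarm message received from the NM. */
    public void onTrap(String rawTrap) {
        // The mapping is schematic; real traps carry structured fields.
        String topic = rawTrap.contains("x-connect")
                ? "NM.connection.x-connect" : "NM.connection";
        eventService.publish(topic, otsId + ": " + rawTrap);
    }
}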
The NMS Event Service 5965 distributes events to the relevant components within the NMS manager. In addition, the relevant components in sibling NMS managers also subscribe to the Event Service 5965. For example, with reference to the responsibility hierarchy of FIG. 59A, a fault manager 3445 within M1.1.n subscribes to fault events published by the Event Service of M1.1.1, and vice versa. An NMS manager is capable of properly registering with its sibling's Event Service once the self-discovery process has terminated and role indication is confirmed. In this way NMS managers that are siblings of one another can synchronize state information pertaining to the network elements collectively supervised by a sibling group. The Event Service 5965 is also preferably used as the mechanism for one NMS manager to alert its siblings when it has detected a non-operational sibling. The event service model is recursively followed up the hierarchy, albeit at higher layers the proxy agent 5960 is not employed. So, for example, a connection manager in M1.n of FIG. 59A subscribes to connection events published by the Event Service of M1.1, and vice versa.
As a backup mechanism, each NMS Manager also includes a database service 5966 as shown in FIG. 59I. The database service 5966 employs a database interface service 5968 to store information in a remote database 5969. The database service 5966 stores state information from the various management components of the NMS Manager in the remote database 5969. In the event of any state synchronization problems between sibling NMS managers, the elected NMS manager can retrieve saved state information associated with a non-functioning NMS manager from the remote database.
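The archiving backup path can be sketched as follows; the RemoteDatabase interface and the manager-id key scheme are assumptions standing in for the database interface service 5968 and the remote database 5969.

// Illustrative database-backed state archive and recovery.
public class DatabaseService {
    /** Hypothetical stand-in for the remote database behind the interface service. */
    public interface RemoteDatabase {
        void put(String managerId, String stateSnapshot);
        String get(String managerId);
    }

    private final RemoteDatabase db;

    public DatabaseService(RemoteDatabase db) { this.db = db; }

    /** Periodic archive of this manager's aggregated state. */
    public void archive(String managerId, String stateSnapshot) {
        db.put(managerId, stateSnapshot);
    }

    /** Recovery path used by the manager elected to replace a dead sibling. */
    public String recoverStateOf(String deadManagerId) {
        return db.get(deadManagerId);
    }
}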
30. Glossary
A/D Analog-to-Digital
ABR Available Bit Rate
ADM Add-Drop Multiplexer
ALI Access Line Interface
API Application Programming Interface
ATM Asynchronous Transfer Mode
CBR Constant Bit Rate
CIT Craft Interface Terminal
CORBA Common Object Request Broker Architecture
DAC Digital-to-Analog Converter
DMA Direct Memory Access
DWDM Dense Wavelength Division Multiplexing
EDFA Erbium Doped Fiber Amplifier
EJB Enterprise Java Beans
EEPROM Electrically Erasable PROM
EPROM Erasable Programmable Read-Only Memory
FCC Fast Communication Channel
Gbps Giga bits per second
GbE Gigabit Ethernet
GPIO General Purpose Input-Output (interface)
GUI Graphical User Interface
HDLC High-Level Data Link Control
IETF Internet Engineering Task Force
I2C Inter Integrated Circuit (bus)
IP Internet Protocol
ITU International Telecommunications Union
JDK Java Development Kit (Sun Microsystems, Inc.)
L2 Level 2 (cache) or Layer 2 (of OSI model)
LCM Line Card Manager
LDAP Lightweight Directory Access Protocol (IETF RFC 1777)
LSR Label Switch Router
MAC Medium Access Control (layer)
MB Megabyte
MEMS Micro-Electro-Mechanical System
MIB Management Information Base
MPC Motorola® PowerPC (microprocessor)
MPLS Multi Protocol Label Switching
NEBS Network Equipment Building Standards
NMS Network Management System
nm Nanometers
OA Optical Access Or Optical Amplifier
OA_Eg Optical Access Egress
OA_In Optical Access Ingress
OADM Optical Add Drop Multiplexer
OC-n Optical Carrier - specifies the speed (data rate) of a fiber optic network that conforms to the SONET standard; "n" denotes the speed as a multiple of 51.84 Mbps, such that OC-12=622.08 Mbps, OC-48=2.488 Gbps, etc.
ODSI Optical Domain Service/System Interconnect
OEO Optical To Electrical To Optical (conversion)
OEM Original Equipment Manufacturer
OPM Optical Performance Monitoring Module
OSC Optical Signaling Channel
OSF Optical Switch Fabric
OSI Open Standards Interconnection
OSM Optical Signaling Module
OSNR Optical Signal To Noise Ratio
OSPF Open Shortest Path First
OSS Operational Support Systems
OTS All-Optical Transport Switching System
OXC Optical Cross Connect
PCI Peripheral Component Interconnect
PCMCIA Personal Computer Memory Card International Association
PHY Physical (layer)
PIN Photo Intrinsic
POP Point Of Presence
PVC Permanent Virtual Circuit
QoS Quality of Service
RISC Reduced Instruction Set Computer
RMI Remote Method Invocation
RWA Routing and Wavelength Assignment
RTOS Real-Time Operating System
Rx Receiver
SDH Synchronous Digital Hierarchy (Networks)
SDRAM Synchronous Dynamic Random Access Memory
SerDes Serializer/Deserializer
SMC Shared Memory Cluster
SNMP Simple Network Management Protocol
SONET Synchronous Optical Network
SPI Serial Peripheral Interface
STM Synchronous Transport Mode
SW Software or Switch
TCP Transmission Control Protocol
TDM Time Division Multiplexing
TMN Telecommunication Management Network (an ITU-T standard)
TP Trunk Port / Transport
TP_Eg Transport Egress
TP_In Transport Ingress
Tx Transmitter
UBR Unspecified Bit Rate
VBR Variable Bit Rate
VME VersaModule Eurocard (bus)
WAN Wide Area Network
WDD Wavelength Division Demultiplexer
WDM Wavelength Division Multiplexer
WXC Wavelength Cross Connect

Accordingly, it can be seen that the present invention provides an all-optical switch that can be selectively configured to provide cross-connection and add/drop multiplexing functions in an optical network. Optical circuit cards are provided in a chassis, and connected by an optical backplane. A local area network enables control and monitoring of the cards by a local node manager. The all-optical switch can be used in an optical network that allows external access networks to access the optical network, including access networks that use Gigabit Ethernet and SONET OC-n optical signaling. Wavelength conversion is provided for wavelengths that are not compliant with the all-optical network. The invention also provides a Node Manager control mechanism that is local to each switch for configuring and monitoring the switch, as well as distributed Line Card Manager control mechanisms for monitoring and controlling each circuit card. A common, centralized Network Management System control mechanism is also provided for configuring and monitoring the switches in the network.
In a further aspect, it can be seen that the present invention provides a hierarchical and distributed control architecture for managing an optical communications network. The architecture includes a line card manager level for managing individual line cards at an optical switch, a node manager level for managing multiple line cards at the optical switch, and a network management system level for managing multiple node managers in a network. Control architecture functionalities include signaling, routing, protection switching and network management. Furthermore, the network management functionality includes a topology manager, a performance manager, a connection manager, a fault detection manager, and a configuration manager. The line cards include various optical circuit cards at the switches, such as access line interface cards, transport ingress cards, transport egress cards, optical access ingress cards, optical access egress cards, switching fabric cards, optical signaling cards, and optical performance monitoring cards.
Moreover, the control hierarchy uses a push model to enable important information to be communicated to higher layers in the control hierarchy. The Node Manager includes an Event Manager for enabling other software components to post, register for, and receive, events. These events may be set based on activities at the Node Manager or the line cards, and may denote, e.g., a change in the configuration of the optical switch, or an alarm condition at a line card.
In a further aspect, it can be seen that the present invention provides a line card manager architecture for use at an optical switch in an optical communications network. A line card manager with a dedicated processor is provided for each line card. The line cards include access line interface cards, transport ingress cards, transport egress cards, optical access ingress cards, optical access egress cards, switching fabric cards, optical performance measurement cards, and optical signaling cards. The line card manager includes an interface for receiving monitored parameters from a line card, a memory for storing the values, and a controller for processing the values to determine if an event such as a fault should be set, e.g., when a monitored parameter is out of range, as indicated by a threshold crossing. The event is reported to a node manager, which manages the different line card managers. A local area network may be used at the optical switch to allow the LCMs and Node Manager to exchange messages. The line card manager may apply control signals to the line card on its own or at the direction of the node manager. The line card manager is preferably a removable plug-in module or daughter board of the line card to allow easy upgrades and maintenance.
In a further aspect, it can be seen that the present invention provides a Line Card Manager (LCM) and Node Manager architecture for use at an optical switch in an optical communications network. LCMs are provided for managing different line cards at the switch, while a Node Manager at the switch manages the LCMs. The Node Manager and LCMs exchange messages using a message-passing interface. The messages may include, e.g., read messages that enable the node manager to retrieve monitored parameter values that the line card manager receives from its line card, write messages that enable the node manager to write provisioning data to the line card manager, alarm messages that allow the line card managers to report alarm/fault conditions to the node manager, an audit message that enables the node manager to verify a presence of the line card at the optical switch, and discovery messages that allow a line card manager to announce its presence to the node manager, e.g., after rebooting.
In a further aspect, it can be seen that the present invention provides a node manager architecture for use at an optical switch in an optical communications network. The node manager manages line cards at the optical switch, such as access line interface cards, transport ingress cards, transport egress cards, optical access ingress cards, optical access egress cards, optical signaling cards, optical performance monitoring cards and switching fabric cards. The node manager includes an interface, such as a local area network interface, for communicating with line card managers that manage the line cards. The node manager provides the line card managers with software, and may request information such as monitored parameters of the line cards. The node manager may also receive fault and other information that are reported by the line card managers using push technology. The node manager may have an additional interface, which uses optical or electrical signaling, to communicate with a network management system that manages a number of node managers in the network. The node manager has processing resources that enable applications and system services.
In a further aspect, the hierarchical structure of the NMS has been shown as a balanced tree. However, the tree can be unbalanced in alternative embodiments.
While the invention has been described and illustrated in connection with preferred embodiments, many variations and modifications, as will be evident to those skilled in this art, may be made without departing from the spirit and scope of the invention, and the invention is thus not to be limited to the precise details of methodology or construction set forth above, as such variations and modifications are intended to be included within the scope of the invention.

Claims

What is claimed is:
1. An optical transport switching system for use in an optical network, comprising: an optical access ingress subsystem; an optical access egress subsystem; a transport ingress subsystem; a transport egress subsystem; and an optical switch subsystem for selectively providing optical coupling between: (a) the transport egress subsystem and at least one of (a)(1) the optical access ingress subsystem and (a)(2) the transport ingress subsystem, and between: (b) the transport ingress subsystem and at least one of (b)(1) the optical access egress subsystem, and (b)(2) the transport egress subsystem.
2. The system of claim 1, wherein: the optical switch subsystem comprises an all-optical switch.
3. The system of claim 1, wherein: the optical switch subsystem comprises a micro-electro-mechanical switch.
4. The system of claim 1, wherein: the optical switch subsystem is configurable to provide add multiplexing by optically coupling a specified optical output of the optical access ingress subsystem with a specified optical input of the transport egress subsystem.
5. The system of claim 1, wherein: the optical switch subsystem is configurable to provide drop multiplexing by optically coupling a specified optical output of the transport ingress subsystem with a specified optical input of the optical access egress subsystem.
6. The system of claim 1, wherein: the transport ingress and transport egress subsystems each communicate via respective optical links with at least one respective remote node in the optical network.
7. The system of claim 1, wherein: the optical switch subsystem is adapted to optically couple a specified optical output of the transport ingress subsystem with at least one of: (a) a specified optical input of the optical access egress subsystem, and (b) a specified optical input of the transport egress subsystem.
8. The system of claim 1, wherein: the optical switch subsystem is configurable to provide a wavelength cross-connect by optically coupling a specified optical output of the transport ingress subsystem with a specified optical input of the transport egress subsystem.
9. The system of claim 8, wherein: the optical switch subsystem, when configured to provide the wavelength cross-connect, is adapted to route optical signals in the optical network according to a desired routing protocol.
10. The system of claim 1, wherein: the optical switch subsystem is adapted to optically couple a specified optical output of the optical access ingress subsystem with at least one of: (a) a specified optical input of the optical access egress subsystem, and (b) a specified optical input of the transport egress subsystem.
11. The system of claim 1, wherein: the transport ingress subsystem receives an optical signal multiplex via at least one optical link of the optical network, and comprises a demultiplexer for demultiplexing the optical signal multiplex to provide a plurality of individual optical signals to be optically coupled by the optical switch subsystem.
12. The system of claim 1, wherein: a plurality of individual optical signals are optically coupled to the transport egress subsystem by the optical switch subsystem, and the transport egress subsystem comprises a multiplexer for multiplexing the individual optical signals to provide an optical signal multiplex for transport via at least one optical link of the optical network.
13. The system of claim 1, wherein: the optical access ingress subsystem is adapted to receive an optical signal associated with an access network; and the optical switch subsystem is adapted to ingress the optical signal into the optical network by optically coupling the optical access ingress subsystem to the transport egress subsystem.
14. The system of claim 1, wherein: the optical switch subsystem is adapted to egress an optical signal from the optical network by optically coupling the optical signal from the transport ingress subsystem to the optical access egress subsystem; and the optical access egress subsystem is adapted to direct the optical signal toward an access network.
15. The system of claim 1, further comprising: an access line interface subsystem for converting an optical signal of an access network that has a wavelength that is non-compliant with the optical network to an optical signal having a wavelength that is compliant with the optical network.
16. The system of claim 15, wherein: the access line interface subsystem provides the optical signal having the compliant wavelength to the optical access ingress subsystem.
17. The system of claim 1, further comprising: an access line interface subsystem for converting an optical signal of the optical network that has a wavelength that is compliant with the optical network to an optical signal having a wavelength that is non-compliant with the optical network, but compliant with an access network associated with the optical network.
18. The system of claim 17, wherein: the access line interface subsystem receives the optical signal having the compliant wavelength from the optical access egress subsystem.
19. The system of claim 1, further comprising: a node manager for selectively configuring the optical switch subsystem to provide at least one of an add/drop multiplexer and a wavelength cross-connect by controlling the optical coupling provided by the optical switch subsystem.
20. The system of claim 1, wherein: the transport ingress subsystem and transport egress subsystem provide optical inputs and outputs, respectively, of a node in the optical network.
21. The system of claim 1, wherein: the optical access ingress subsystem, optical access egress subsystem, transport ingress subsystem, and transport egress subsystem each comprise at least one respective optical circuit card.
22. The system of claim 21, wherein: the cards are receivable in a common bay.
23. The system of claim 1, wherein: the subsystems each comprise respective electrical connections for communicating with a common node manager via a local area network.
24. The system of claim 1, wherein: the subsystems each comprise respective optical connections for communicating with an optical backplane.
25. The system of claim 1, further comprising: an optical performance monitoring subsystem for optically tapping into at least one of the optical access ingress subsystem, the optical access egress subsystem, the transport ingress subsystem, and the transport egress subsystem to obtain performance data therefrom.
26. The system of claim 1, further comprising: an optical signaling subsystem for at least one of: (a) receiving signaling data via the transport ingress subsystem, and (b) transmitting signaling data via the transport egress subsystem.
27. An optical transport switching system for use in an optical network, comprising: means for switching optical signals; means for providing optical access ingress to the optical network; means for providing optical access egress from the optical network; means for providing transport ingress to the optical switching means; and means for providing transport egress from the optical switching means; wherein: the switching means selectively provide optical coupling between: (a) the means for providing transport egress and one of (a)(1) the means for providing optical access ingress and (a)(2) the means for providing transport ingress, and between: (b) the means for providing transport ingress and one of (b)(1) the means for providing optical access egress, and (b)(2) the means for providing transport egress.
28. An optical transport switching system for use in an optical network, comprising: a receiving apparatus having a plurality of respective receiving locations adapted to receive respective optical circuit cards with respective controllers, and a receiving location for receiving a central controller; an optical backplane associated with the receiving apparatus for optically coupling to the optical circuit cards; and a local area network for electrically coupling the central controller to the circuit card controllers to enable the central controller to control and monitor the optical circuit cards.
29. The system of claim 28, wherein: the optical circuit cards include at least a transport ingress card, a transport egress card, and a switching fabric card for optically coupling the transport ingress card and the transport egress card.
30. The system of claim 28, wherein: the optical backplane comprises optical fibers.
31. The system of claim 28, wherein: the optical circuit cards are of a plurality of card types; and at least some of the receiving locations are designated for receiving particular ones of the card types.
32. An optical transport switching method for use in an optical network, comprising: receiving an optical signal at an optical access ingress subsystem; and optically coupling the optical signal, via an optical switch subsystem, from the optical access ingress subsystem to at least one of: (a) an optical access egress subsystem, and (b) a transport egress subsystem.
33. An optical transport switching method for use in an optical network, comprising: receiving an optical signal at a transport ingress subsystem; and optically coupling the optical signal, via an optical switch subsystem, from the transport ingress subsystem to at least one of: (a) an optical access egress subsystem, and (b) a transport egress subsystem.
34. A multi-tiered control architecture for an optical network having a plurality of optical switches, comprising: for each optical switch, respective line card managers for managing respective line cards associated therewith, and a node manager for managing the line card managers; and a centralized network management system for managing the node managers of the optical switches; wherein, at each optical switch, the node manager includes an event manager for enabling software components running at the node manager to at least one of: (a) register for, and receive, events, and (b) post events.
35. The control architecture of claim 34, wherein: the software components comprise a protection/fault manager component that is adapted to register and receive events from the associated event manager related to alarms at the associated optical switch.
36. The control architecture of claim 34, wherein: the software components comprise a protection/fault manager component that is adapted to post events with the associated event manager related to at least one of: (a) alarms at the associated optical switch, (b) changes in configuration information at the associated optical switch, and (c) changes in connection information at the associated optical switch.
37. The control architecture of claim 34, wherein: the software components comprise a signaling component that is adapted to post an event with the associated event manager regarding whether a light path setup attempt was successful.
38. The control architecture of claim 34, wherein: the software components comprise a database access protocol client that is adapted to interact with a database at the centralized network management system; and the database access protocol client is adapted to at least one of: (a) request configuration data regarding the associated optical switch from other ones of the software components via the associated event manager, and (b) provide configuration data regarding the associated optical switch to other ones of the software components via the associated event manager.
39. The control architecture of claim 34, wherein: the software components comprise a routing component that is adapted to register for and receive change events from the associated event manager regarding a change in at least one of connection and configuration information of the associated optical switch.
40. The control architecture of claim 34, wherein: the software components comprise an agent component for applications running at the centralized network management system; and the agent component is adapted to register for and receive change events from the associated event manager regarding at least one of a resource component, a configuration component and a signaling component.
41. The control architecture of claim 34, wherein: the software components comprise a resource manager component that is adapted to post change events with the associated event manager regarding a change in resource information at the associated optical switch.
42. The control architecture of claim 41, wherein: the resource information associated with the change event includes a change in at least one of: (a) wavelengths used at the associated optical switch, and (b) a state of cross-connects at the associated optical switch.
43. The control architecture of claim 34, wherein: the software components comprise a configuration manager component that is adapted to register for and receive events from the associated event manager designating that a line card has become active.
44. The control architecture of claim 34, wherein: the software components comprise a configuration manager component that is adapted to post an event at the associated event manager after configuring at least one of the line cards.
45. The control architecture of claim 34, wherein: the software components comprise a configuration manager component that is adapted to post an event at the associated event manager after sending a request to change a configuration of at least one of the line cards, and receiving an acknowledgement that the request was carried out.
46. The control architecture of claim 34, wherein: the software components comprise a software version manager component for configuring the associated node manager and line card managers with software.
47. The control architecture of claim 34, wherein: the line cards associated with each optical switch include at least one of a transport ingress line card, transport egress line card, optical access ingress line card, optical access egress line card, access line interface card, switching fabric line card, optical signaling card, and optical performance monitoring card.
48. The control architecture of claim 34, wherein: the line card managers push data up to the associated node managers when the data requires attention by the associated node managers.
49. The control architecture of claim 34, wherein: the node managers push data up to the network management system when the data requires attention by the network management system.
50. The control architecture of claim 34, wherein: the node managers collect trend data of the line cards from the line card managers; and the network management system collects trend data of the node managers.
51. The control architecture of claim 34, wherein: within at least one of the optical switches, the associated node manager and line card managers communicate with one another via a local area network (LAN).
52. The control architecture of claim 34, further comprising: an interface between the network management system and a client system that is external to the optical network for enabling the client system to request a light path in the optical network.
53. The control architecture of claim 34, further comprising: an interface for enabling an exchange of management information between the network management system and a client management system that is external to the optical network.
54. The control architecture of claim 34, further comprising: an interface to an on-screen graphical user interface for providing status data regarding the optical network.
55. The control architecture of claim 54, wherein: the status data provides information regarding at least one of topology of the optical switches in the optical network, performance of the optical switches, light path connections in the optical network, faults in the optical network, and configuration of the optical switches.
56. The control architecture of claim 34, wherein: the network management system and node managers support signaling in the optical network.
57. The control architecture of claim 56, wherein: the signaling is provided for configuring at least one of the optical switches as at least one of: (a) an add multiplexer, and (b) a drop multiplexer.
58. The control architecture of claim 56, wherein: the signaling is provided for configuring at least one of the optical switches as a wavelength cross-connect.
59. The control architecture of claim 34, wherein: the network management system and node managers support protection switching for responding to detected light path interruptions in the optical network.
60. The control architecture of claim 59, wherein: the protection switching responds to detected light path interruptions at at least one of the optical switches.
61. The control architecture of claim 59, wherein: the protection switching provides at least one of line protection and path protection.
62. The control architecture of claim 34, wherein: the network management system and node managers support network management in the optical network.
63. The control architecture of claim 62, wherein: the network management is implemented using a software component that is executed at the network management system, and agent software components that are executed at the node managers.
64. The control architecture of claim 62, wherein: the network management includes a topology manager for managing information regarding physical connectivity of switches in the network.
65. The control architecture of claim 64, wherein: the topology manager is implemented using a software component that is executed at the network management system.
66. The control architecture of claim 62, wherein: the network management includes a configuration manager for configuring the optical switches.
67. The control architecture of claim 66, wherein: the optical switches are configurable by the configuration manager as one of: (a) a wavelength cross-connect and (b) an optical add and/or drop multiplexer.
68. The control architecture of claim 66, wherein: the configuration manager is implemented using a software component that is executed at the network management system, and software components that are executed at the node managers.
69. The control architecture of claim 62, wherein: the network management includes a connection manager for managing information regarding light path connectivity within the optical network.
70. The control architecture of claim 69, wherein: the connection manager is implemented using a software component that is executed at the network management system, and software components that are executed at the node managers.
71. The control architecture of claim 69, wherein: the connection manager supports creation and removal of end-to-end light path connections in the optical network.
72. The control architecture of claim 62, wherein: the network management includes a performance manager for providing information regarding performance of the optical switches and/or the optical network.
73. The control architecture of claim 72, wherein: the performance manager is implemented using a software component that is executed at the network management system, and software components that are executed at the node managers.
74. The control architecture of claim 34, wherein: the node managers provide real-time software configuring of the associated line card managers.
75. The control architecture of claim 34, wherein: the node managers receive at least one of monitored parameters and faults from the associated line card managers.
76. The control architecture of claim 75, wherein: the node managers aggregate the received monitored parameters and/or alarms into a switch-wide view.
77. The control architecture of claim 34, wherein: the node managers are adapted to remotely download software from the network management system.
78. The control architecture of claim 34, wherein: the node managers are adapted to distribute software to the associated line card managers.
79. The control architecture of claim 34, wherein: the line card managers are adapted to execute commands received from the associated node manager.
80. The control architecture of claim 34, wherein: the line card managers provide digital and/or analog monitoring of the associated line cards.
81. The control architecture of claim 34, wherein: the line card managers send at least one of monitored parameters and faults to the associated node manager.
82. A multi-tiered control architecture for an optical network having a plurality of optical switches, comprising: for each optical switch, respective line card manager means for managing respective line cards associated therewith, and a node manager means for managing the line card manager means; and a centralized network management means for managing the node manager means of the optical switches; wherein, at each optical switch, the node manager means includes an event manager means for enabling software components running at the node manager means to at least one of: (a) register for, and receive, events, and (b) post events.
83. A method for providing a multi-tiered control architecture for an optical network having a plurality of optical switches, comprising: for each optical switch, managing respective line cards using respective line card managers, and managing the line card managers using a node manager; managing the node managers of the optical switches using a centralized network management system; and at each optical switch, enabling software components running at the node manager to at least one of: (a) register for, and receive, events, and (b) post events.
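For illustration only, a minimal Python sketch of the event manager recited in claims 82-83, in which software components at a node manager register for, receive, and post events, might look as follows; the EventManager class, topic strings, and callback shape are assumptions of this sketch:

    from collections import defaultdict
    from typing import Any, Callable

    class EventManager:
        """Hypothetical node-manager event service (claims 82-83)."""
        def __init__(self) -> None:
            self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

        def register(self, event_type: str, callback: Callable[[Any], None]) -> None:
            # A software component registers interest in an event type.
            self._subscribers[event_type].append(callback)

        def post(self, event_type: str, payload: Any) -> None:
            # A component posts an event; all registered components receive it.
            for callback in self._subscribers[event_type]:
                callback(payload)

    # Example: a protection component registers for loss-of-light events
    # that a line-card-facing component later posts.
    events = EventManager()
    events.register("loss_of_light", lambda e: print("protection switch on", e))
    events.post("loss_of_light", {"port": 3})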
84. A management architecture for use at an optical switch in an optical network, comprising: a line card manager at the optical switch for managing an associated line card at the optical switch; said line card manager comprising:
(a) a first interface for receiving monitored parameter values from the line card;
(b) processing resources for setting an event regarding the line card when criteria for setting the event is met by the monitored parameter values; and
(c) a second interface for communicating the event to a node manager at the optical switch that manages the line card manager.
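As a rough sketch of the line card manager behavior recited in claim 84, together with the threshold criteria of claims 101 and 112, the following hypothetical Python fragment evaluates monitored parameter values against a node-manager-supplied threshold and raises an event over the second interface; the class, parameter names, and dBm figures are illustrative assumptions:

    class LineCardManager:
        def __init__(self, send_to_node_manager, threshold: float) -> None:
            # The threshold is provisioned by the node manager (claim 101).
            self.send_to_node_manager = send_to_node_manager
            self.threshold = threshold

        def on_monitored_value(self, parameter: str, value: float) -> None:
            # First interface: a monitored parameter value arrives from the
            # line card (e.g. received optical power in dBm).
            if value < self.threshold:  # criteria for setting the event
                # Second interface: communicate the event to the node manager.
                self.send_to_node_manager(
                    {"event": "fault", "parameter": parameter, "value": value})

    lcm = LineCardManager(send_to_node_manager=print, threshold=-28.0)
    lcm.on_monitored_value("rx_power_dbm", -31.5)  # crosses threshold -> event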
85. The architecture of claim 84, wherein: the event comprises a fault regarding the line card.
86. The architecture of claim 84, wherein: the monitored parameter values comprise at least one of digital and analog parameter values.
87. The architecture of claim 84, wherein: the line card manager is adapted to communicate the monitored parameter values to the node manager via the second interface.
88. The architecture of claim 84, wherein: the processing resources are adapted to execute commands received from the node manager via the second interface.
89. The architecture of claim 84, wherein: the second interface comprises an interface to a local area network at the optical switch that allows the line card manager to communicate with the node manager.
90. The architecture of claim 89, wherein: the local area network uses a shared medium.
91. The architecture of claim 84, further comprising: non-volatile memory resources at the line card manager for storing software that is executable by the processing resources for use in managing the line card.
92. The architecture of claim 91, wherein: the line card is managed by the stored software at the line card manager without requiring software to be downloaded from the node manager.
93. The architecture of claim 91, wherein: the software is loaded from the non-volatile memory resources to the processing resources upon initialization of the line card manager.
94. The architecture of claim 84, further comprising: non-volatile memory resources at the line card manager for storing a current version and a backup version of software that is executable by the processing resources for use in managing the line card.
95. The architecture of claim 84, further comprising: a status register at the line card manager for receiving the monitored parameter values from the line card via the first interface.
96. The architecture of claim 84, wherein: the line card comprises one of a transport ingress line card, transport egress line card, optical access ingress line card, optical access egress line card, access line interface card, switching fabric line card, optical performance monitoring line card, and optical signaling line card.
97. The architecture of claim 84, wherein: the processing resources comprise a microprocessor for setting the event.
98. The architecture of claim 84, wherein: the line card manager is provided as a removable plug-in module of the line card.
99. The architecture of claim 84, wherein: the processing resources are provided with software by the node manager via the second interface; and the software is executable by the processing resources for use in managing the line card.
100. The architecture of claim 84, further comprising: memory resources at the line card manager for storing the monitored parameter values.
101. The architecture of claim 84, wherein: the line card manager is provided with a threshold level by the node manager via the second interface for setting the event when the monitored parameter values cross the threshold level.
102. The architecture of claim 84, wherein: the processing resources are adapted to apply control signals to the line card via the first interface.
103. The architecture of claim 102, wherein: the control signals comprise at least one of digital and analog control signals.
104. The architecture of claim 102, further comprising: a control register at the line card manager for applying the control signals to the line card.
105. The architecture of claim 84, further comprising: at least one additional line card manager for managing a respective line card at the optical switch; said additional line card manager comprising:
(a) a first interface for receiving respective monitored parameter values from the respective line card;
(b) processing resources for setting a respective event regarding the respective line card when criteria for setting the respective event is met by the respective monitored parameter values; and
(c) a second interface for communicating the respective event to the node manager at the optical switch; wherein the node manager manages each of the line card managers.
106. The architecture of claim 105, wherein: each line card manager provides local control of its respective line card.
107. The architecture of claim 105, wherein: the respective line card managers communicate via their respective second interfaces with the node manager via a common local area network.
108. A management architecture for use at an optical switch in an optical network, comprising: line card manager means at the optical switch for managing an associated line card at the optical switch; said line card manager means comprising:
(a) means for receiving monitored parameter values from the line card;
(b) means for setting an event regarding the line card when criteria for setting the event is met by the monitored parameter values; and
(c) means for communicating the event to a node manager at the optical switch that manages the line card manager means.
109. The architecture of claim 108, further comprising: at least one additional line card manager means for managing an associated line card at the optical switch; said additional line card manager means comprising:
(a) means for receiving respective monitored parameter values from the respective line card;
(b) means for setting a respective event regarding the respective line card when criteria for setting the respective event is met by the respective monitored parameter values; and
(c) means for communicating the respective event to the node manager at the optical switch; wherein the node manager manages each of the respective line card manager means.
110. A method for managing an optical switch in an optical network, comprising: receiving, at a line card manager at the optical switch, monitored parameter values from an associated line card at the optical switch; setting an event regarding the line card when criteria for setting the event is met by the monitored parameter values; and communicating the event to a node manager at the optical switch that manages the line card manager.
111. The method of claim 110, further comprising: pushing data regarding the line card from the line card manager up to the node manager when the data requires attention by the node manager.
112. The method of claim 110, wherein: the event is set when the monitored parameter values cross a threshold level.
113. The method of claim 110, further comprising: receiving, at the line card manager, a line card identifier from the line card for use in identifying a type of the line card.
114. The method of claim 113, further comprising: providing the line card manager with software for managing a plurality of different types of line cards; and using a portion of the software for controlling the associated line card according to the received line card identifier.
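One plausible embodiment of claims 113-114, sketched in Python, dispatches to the portion of software matching the line card identifier the card reports; the identifier values and handler names are assumptions of this sketch:

    def control_transport_ingress(card):  # illustrative handler
        print("controlling transport ingress card", card)

    def control_switch_fabric(card):  # illustrative handler
        print("controlling switching fabric card", card)

    # Maps reported line card identifiers to per-type control software.
    CARD_TYPE_HANDLERS = {
        0x01: control_transport_ingress,  # identifier values are assumed
        0x05: control_switch_fabric,
    }

    def manage_line_card(card_identifier: int, card) -> None:
        handler = CARD_TYPE_HANDLERS.get(card_identifier)
        if handler is None:
            raise ValueError(f"unknown line card identifier {card_identifier:#x}")
        handler(card)

    manage_line_card(0x01, card="slot-3")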
115. The method of claim 113, wherein: the type comprises one of a transport ingress line card, transport egress line card, optical access ingress line card, optical access egress line card, access line interface card, switching fabric line card, optical performance monitoring line card, and optical signaling line card.
116. The method of claim 110, further comprising: multi-tasking, at the line card manager, multiple sense-and-control processes for the line card.
117. The method of claim 116, wherein: each of the processes involves at least one of monitoring and controlling a corresponding function of the line card.
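The multi-tasking of claims 116-117 could, for example, be realized as concurrent sense-and-control loops, one per monitored or controlled line card function; the following toy asyncio sketch assumes the function names and polling periods, and bounds each loop so the example terminates:

    import asyncio

    async def sense_and_control(function_name: str, period_s: float) -> None:
        # Each task monitors and/or controls one function of the line card
        # (claim 117), e.g. laser bias, temperature, or receive power.
        for _ in range(3):  # bounded here so the example terminates
            print(f"polling {function_name}")
            await asyncio.sleep(period_s)

    async def main() -> None:
        # The line card manager multi-tasks several such processes (claim 116).
        await asyncio.gather(
            sense_and_control("laser_bias", 0.1),
            sense_and_control("temperature", 0.2),
            sense_and_control("rx_power", 0.05),
        )

    asyncio.run(main())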
118. The method of claim 110, further comprising: receiving, at at least one additional line card manager, respective monitored parameter values from respective associated line cards; setting an event regarding the respective line card when criteria for setting the event is met by the respective monitored parameter values; and communicating the event to the node manager; wherein the node manager manages each ofthe respective line card managers.
119. A management architecture for use at an optical switch in an optical network, comprising: a line card manager at the optical switch for managing an associated line card at the optical switch; wherein said line card manager comprises a message-passing interface for communicating with a node manager at the optical switch that manages the line card manager.
120. The architecture of claim 119, wherein: the line card manager receives a read message from the node manager via the message-passing interface; and the read message allows the node manager to retrieve monitored parameter values that the line card manager receives from the line card.
121. The architecture of claim 119, wherein: the line card manager receives a write message from the node manager via the message-passing interface; and the write message allows the node manager to write provisioning data to the line card manager.
122. The architecture of claim 119, wherein: the message-passing interface uses an alarm message to allow the line card manager to report an alarm condition to the node manager.
123. The architecture of claim 119, wherein: the line card manager receives an audit message from the node manager via the message-passing interface; and the audit message allows the node manager to verify a presence of the line card at the optical switch.
124. The architecture of claim 119, wherein: the message-passing interface uses a discovery message to allow the line card manager to announce its presence to the node manager.
125. The architecture of claim 124, wherein: the discovery message designates a type of the line card.
126. The architecture of claim 125, wherein: the type comprises one of a transport ingress line card, transport egress line card, optical access ingress line card, optical access egress line card, access line interface card, switching fabric line card, optical performance monitoring line card, and optical signaling line card.
127. The architecture of claim 124, wherein: the presence of the line card manager is announced to the node manager using the discovery message following a rebooting of the line card manager.
128. The architecture of claim 124, wherein: the discovery message allows the line card manager to announce its presence to the node manager when the line card manager and the line card are installed at the optical switch.
129. The architecture of claim 119, wherein: the message-passing interface uses a predetermined set of messages.
130. The architecture of claim 119, wherein: the message-passing interface allows the line card manager to report an event regarding the line card to the node manager.
131. The architecture of claim 119, further comprising: at least one additional line card manager at the optical switch for managing an associated respective line card at the optical switch; wherein: said additional line card manager comprises a respective message-passing interface for communicating with the node manager; and the respective line card managers use a common set of messages for communicating with the node manager via their message-passing interfaces.
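Claims 120-124 and 129 suggest a small, predetermined message set (read, write, alarm, audit, discovery) exchanged between line card managers and the node manager. A hypothetical Python encoding of that set might look as follows; the JSON wire form, field names, and slot value are assumptions of this sketch, not recitations of the claims:

    import enum
    import json

    class MsgType(enum.Enum):
        # The predetermined message set suggested by claims 120-124.
        READ = "read"            # NM retrieves monitored parameter values
        WRITE = "write"          # NM writes provisioning data to the LCM
        ALARM = "alarm"          # LCM reports an alarm condition to the NM
        AUDIT = "audit"          # NM verifies the line card is present
        DISCOVERY = "discovery"  # LCM announces itself and its card type

    def encode(msg_type: MsgType, body: dict) -> bytes:
        # One simple wire form: JSON over the switch-local LAN.
        return json.dumps({"type": msg_type.value, **body}).encode()

    # A line card manager announcing itself after a reboot (claim 127),
    # designating its line card type (claim 125):
    frame = encode(MsgType.DISCOVERY, {"slot": 4, "card_type": "transport ingress"})
    print(frame)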
132. A management architecture for use at an optical switch in an optical communication network, comprising: a node manager at the optical switch for managing a plurality of respective line card managers at the optical switch, wherein each respective line card manager manages an associated respective line card at the optical switch; wherein said node manager comprises a message-passing interface for communicating with the line card managers.
133. The architecture of claim 132, wherein: the message-passing interface uses a read message to allow the node manager to retrieve monitored parameter values from at least one of the line card managers; wherein the at least one line card manager receives the monitored parameter values from the respective line card.
134. The architecture of claim 132, wherein: the message-passing interface uses a write message to allow the node manager to write provisioning data to at least one of the line card managers.
135. The architecture of claim 132, wherein: the node manager receives an alarm message from at least one of the line card managers via the message-passing interface; and the alarm message allows the at least one line card manager to report an alarm condition to the node manager.
136. The architecture of claim 132, wherein: the message-passing interface uses an audit message to allow the node manager to verify a presence of at least one of the line cards at the optical switch.
137. The architecture of claim 132, wherein: the node manager receives a discovery message from at least one of the line card managers via the message-passing interface; and the discovery message allows the at least one line card manager to announce its presence to the node manager.
138. The architecture of claim 137, wherein: the discovery message designates a type of the associated line card of the at least one line card manager.
139. The architecture of claim 138, wherein: the type comprises one of a transport ingress line card, transport egress line card, optical access ingress line card, optical access egress line card, access line interface card, switching fabric line card, optical performance monitoring line card, and optical signaling line card.
140. The architecture of claim 137, wherein: the presence of the line card manager is announced to the node manager following a rebooting of the line card manager.
141. The architecture of claim 132, wherein: the message-passing interface uses a predetermined set of messages.
142. The architecture of claim 132, wherein: the node manager is informed of events via the message-passing interface that are set by at least one of the line card managers regarding the associated line card.
143. An interface for use at an optical switch in an optical network, wherein the optical switch comprises a node manager for managing a plurality of respective line card managers at the optical switch, and each respective line card manager manages an associated respective line card at the optical switch, said interface comprising: a message-passing interface for enabling the node manager to exchange messages with each of the line card managers; wherein: the messages include: (a) line card manager-to-node manager messages for enabling the line card managers to report events regarding the respective line cards to the node manager, and (b) node manager-to-line card manager messages for enabling the node manager to provide commands to the line card managers.
144. The interface of claim 143, wherein: the node manager-to-line card manager messages include a read message that allows the node manager to retrieve monitored parameter values that at least one of the line card managers receives from the respective line card.
145. The interface of claim 143, wherein: the node manager-to-line card manager messages include a write message that allows the node manager to write provisioning data to at least one of the line card managers.
146. The interface of claim 143, wherein: the line card manager-to-node manager messages include an alarm message that allows at least one of the line card managers to report an alarm condition to the node manager.
147. The interface of claim 143, wherein: the node manager-to-line card manager messages include an audit message that allows the node manager to verify a presence of at least one of the line cards at the optical switch.
148. The interface of claim 143, wherein: the line card manager-to-node manager messages include a discovery message that allows at least one of the line card managers to announce its presence to the node manager.
149. The interface of claim 148, wherein: the discovery message designates a type of the respective line card of the at least one line card manager.
150. The interface of claim 149, wherein: the type comprises one of a transport ingress line card, transport egress line card, optical access ingress line card, optical access egress line card, access line interface card, switching fabric line card, optical performance monitoring line card, and optical signaling line card.
151. The interface of claim 148, wherein: the presence of the at least one line card manager is announced to the node manager using the discovery message following a rebooting of the at least one line card manager.
152. The interface of claim 143, wherein: the message-passing interface uses a predetermined set of messages.
153. The interface of claim 143, wherein: the line card manager-to-node manager messages include a message that allows at least one of the line card managers to report an event regarding the respective line card to the node manager.
154. A method for managing an optical switch in an optical network, comprising: managing, at a line card manager at the optical switch, an associated line card at the optical switch; and using a message-passing interface to enable the line card manager to communicate with a node manager at the optical switch that manages the line card manager.
155. A method for managing an optical switch in an optical communication network, comprising: managing, at a node manager at the optical switch, a plurality of line card managers at the optical switch, wherein each line card manager manages an associated respective line card at the optical switch; and using a message-passing interface to enable the node manager to communicate with the line card managers.
156. A method for interfacing a node manager at an optical switch in an optical network with a plurality of respective line card managers at the optical switch, wherein the node manager manages the line card managers, and each respective line card manager manages an associated respective line card at the optical switch, comprising: enabling the node manager to exchange messages with each of the line card managers using a message-passing interface; wherein: the messages include: (a) line card manager-to-node manager messages for enabling the line card managers to report events regarding the respective line cards to the node manager, and (b) node manager-to-line card manager messages for enabling the node manager to provide commands to the line card managers.
157. A management architecture for use at an optical switch in an optical communication network, comprising: a node manager at the optical switch for managing a plurality of respective line card managers at the optical switch; wherein each respective line card manager manages an associated respective line card at the optical switch; said node manager comprising:
(a) an interface for communicating with the line card managers; and
(b) processing resources associated with the interface for enabling at least one of applications and system services.
158. The architecture of claim 157, wherein: the processing resources provide the line card managers with software via the interface.
159. The architecture of claim 157, wherein: the processing resources provide the line card managers with threshold limits for use thereat in setting faults for monitored parameters of the line cards.
160. The architecture of claim 157, wherein: the processing resources receive an identifier from at least one of the line card managers via the interface that identifies a type of the associated line card.
161. The architecture of claim 160, wherein: the type comprises one of a transport ingress line card, transport egress line card, optical access ingress line card, optical access egress line card, access line interface card, switching fabric line card, optical performance monitoring line card, and optical signaling line card.
162. The architecture of claim 157, wherein: the processing resources receive monitored parameters of the associated line cards from the line card managers via the interface.
163. The architecture of claim 162, wherein: the processing resources process the received monitored parameters to determine trend data thereof.
164. The architecture of claim 162, further comprising: non-volatile memory resources at the node manager for storing the received monitored parameters.
165. The architecture of claim 157, wherein: the processing resources comprise a microprocessor.
166. The architecture of claim 157, wherein: the processing resources comprise a main processor on a circuit card for running the node manager at a baseline level, and a connector on the circuit card that is adapted to receive a plug-in processor for running the node manager at an enhanced level.
167. The architecture of claim 157, further comprising: non-volatile memory resources at the node manager for storing software used by the processing resources to manage the line card managers; wherein the software is loaded from the non-volatile memory resources to the processing resources upon initialization of the node manager.
168. The architecture of claim 167, wherein: the non-volatile memory resources are non-removable from the node manager while the node manager is installed in the optical switch.
169. The architecture of claim 157, further comprising: non-volatile memory resources at the node manager for storing software used by at least one of the line card managers to manage the associated line card; wherein the software is communicated from the non-volatile memory resources to the at least one line card manager via the interface upon initialization of the at least one line card manager.
170. The architecture of claim 169, wherein: the communication of the software from the node manager to the at least one line card manager is responsive to a message received at the node manager from the at least one line card manager that announces a presence of the at least one line card manager.
171. The architecture of claim 157, further comprising: non-volatile memory resources at the node manager for storing a current version and a backup version of software that is executable by the processing resources to manage the line card managers.
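Claims 94 and 167-171 recite non-volatile storage of a current and a backup software version that is loaded at initialization. One common way to exploit such dual images, assumed here rather than recited in the claims, is to verify the current image and fall back to the backup on a mismatch; a Python sketch, with throwaway files standing in for flash banks:

    import hashlib
    import tempfile
    from pathlib import Path

    def select_boot_image(current: Path, backup: Path, expected_sha256: str) -> Path:
        # Verify the current image; fall back to the backup if it is corrupt.
        digest = hashlib.sha256(current.read_bytes()).hexdigest()
        return current if digest == expected_sha256 else backup

    with tempfile.TemporaryDirectory() as d:
        cur, bak = Path(d, "lcm_v2.bin"), Path(d, "lcm_v1.bin")
        cur.write_bytes(b"corrupted image")
        bak.write_bytes(b"known-good image")
        good = hashlib.sha256(b"pristine image").hexdigest()
        print(select_boot_image(cur, bak, good))  # falls back to lcm_v1.bin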
172. The architecture of claim 157, wherein: the enabled applications include a protection/fault manager capability.
173. The architecture of claim 157, wherein: the enabled applications include an optical switch-to-optical switch signaling capability.
174. The architecture of claim 157, wherein: the enabled applications include a routing capability for setting up light paths in the optical network.
175. The architecture of claim 157, wherein: the enabled applications include an interface to a database associated with a network management system that manages the node manager.
176. The architecture of claim 157, wherein: the enabled applications include a command line interface capability.
177. The architecture of claim 157, wherein: the enabled applications include an agent capability for applications associated with a network management system that manages the node manager.
178. The architecture of claim 157, wherein: the enabled system services include a resource management capability.
179. The architecture of claim 157, wherein: the enabled system services include a connection management capability.
180. The architecture of claim 157, wherein: the enabled system services include a configuration management capability.
181. The architecture of claim 157, wherein: the enabled system services include a database management capability for storing data for configuration of the optical network.
182. The architecture of claim 157, wherein: the enabled system services include a software version control capability for providing new software versions to the line card managers.
183. The architecture of claim 157, wherein: the enabled system services include an interface to non-volatile memory resources at the node manager.
184. The architecture of claim 157, wherein: the enabled system services include an event management capability.
185. The architecture of claim 184, wherein: the event management capability involves enabling the applications and/or system services to at least one of: (a) register for and receive events, and (b) post events.
186. The architecture of claim 157, wherein: the processing resources aggregate data received from the line card managers into an optical switch-wide view.
187. The architecture of claim 157, wherein: the processing resources push data regarding the line cards up to a network management system that manages the node manager when the processing resources determine that the data requires attention by the network management system.
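A minimal Python sketch of the aggregation and push-up behavior of claims 186-187 (and, at the line-card tier, claims 75-76 and 111) follows; the report fields and the attention criterion are assumptions of this sketch:

    def aggregate_switch_view(per_card_reports: dict[str, dict]) -> dict:
        # Claim 186: fold per-line-card reports into one switch-wide view.
        return {
            "cards": per_card_reports,
            "worst_rx_power_dbm": min(r["rx_power_dbm"] for r in per_card_reports.values()),
            "active_faults": [c for c, r in per_card_reports.items() if r["fault"]],
        }

    def maybe_push_to_nms(view: dict, push) -> None:
        # Claim 187: push data up to the NMS only when it needs attention.
        if view["active_faults"]:
            push(view)

    view = aggregate_switch_view({
        "slot1": {"rx_power_dbm": -12.0, "fault": False},
        "slot2": {"rx_power_dbm": -29.4, "fault": True},
    })
    maybe_push_to_nms(view, push=print)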
188. The architecture of claim 157, wherein: the interface comprises a network interface to a local area network for communicating with the line card managers.
189. The architecture of claim 157, further comprising: non-volatile memory resources at the node manager for storing software for managing a plurality of types of line cards.
190. The architecture of claim 189, wherein: the types include a transport ingress line card, transport egress line card, optical access ingress line card, optical access egress line card, access line interface card, switching fabric line card, optical performance monitoring line card, and optical signaling line card.
191. The architecture of claim 157, further comprising: an additional interface at the node manager for communicating with a network management system that manages the node manager.
192. The architecture of claim 191, wherein: the node manager is adapted to send messages to the network management system via the additional interface that are responsive to received messages from the line card managers.
193. The architecture of claim 191, wherein: the node manager is adapted to send messages to the network management system via the additional interface that comprise monitored parameters of the line cards that were obtained from messages received from the line card managers.
194. The architecture of claim 191, wherein: the node manager is adapted to send messages to the network management system via the additional interface that comprise events that are posted by at least one of the applications and the system services.
195. The architecture of claim 191, wherein: the additional interface to the network management system comprises a network interface.
196. The architecture of claim 191, wherein: the node manager communicates with the network management system via the additional interface and an optical signaling interface.
197. The architecture of claim 191, wherein: the node manager receives messages from the network management system via the additional interface that comprise software for provisioning the line card managers.
198. The architecture of claim 191, wherein: the node manager receives messages from the network management system via the additional interface that comprise software for provisioning the node manager.
199. A management architecture for use at an optical switch in an optical communication network, comprising: a node manager means for managing a plurality of respective line card manager means at the optical switch; wherein each respective line card manager means manages an associated respective line card at the optical switch; said node manager means comprising:
(a) means for communicating with the line card manager means; and
(b) means associated with the communicating means for enabling at least one of applications and system services.
200. A method for managing an optical switch in an optical communication network, comprising: at a node manager at the optical switch, (a) managing a plurality of respective line card managers at the optical switch, each of which manages an associated respective line card at the optical switch, (b) communicating with the line card managers via an interface, and (c) enabling at least one of applications and system services.
201. A method for managing a network, comprising: arranging a plurality of network management system (NMS) managers in a hierarchy, said hierarchy having at least a root level and a leaf level, wherein each non-leaf level NMS manager supervises at least one child NMS manager and each leaf-level NMS manager supervises one or more network nodes; determining when a given NMS manager ceases to operate; and electing another NMS manager within said hierarchy to assume the responsibility of the non-operating NMS manager.
202. The method according to claim 201, wherein, in the event a given NMS manager ceases to operate, the elected NMS manager is selected from a predetermined group of NMS managers within the hierarchy.
203. The method according to claim 202, wherein the elected NMS manager is a sibling ofthe non-operating NMS manager.
204. The method according to claim 203, wherein: each leaf-level NMS manager receives state information pertaining to network elements under its supervision; and each non-leaf level NMS manager receives aggregated state information pertaining to the network elements which are supervised by NMS managers that are descendent from the non-leaf level NMS manager.
205. The method according to claim 204, wherein each NMS manager is implemented as a Holistic NMS and wherein the role of each such NMS manager is dynamically configurable.
206. The method according to claim 205, wherein the role of the NMS manager is based on a network address.
207. The method according to claim 204, wherein each NMS manager is implemented as a Segregated NMS.
208. The method according to claim 204, wherein each NMS manager receives and stores state information pertaining to the network elements supervised by sibling NMS managers.
209. The method according to claim 208, wherein each NMS manager includes an event service in order to publish to the siblings thereof events pertaining to network changes of state.
210. The method according to claim 209, wherein the events include at least one of performance, connection, fault and configuration events.
211. The method according to claim 208, wherein, for each group of sibling NMS managers, only one NMS manager within the group aggregates state information pertaining to all network elements supervised by the group and reports it to the common parent NMS manager.
212. The method according to claim 203, wherein the determination of the non-operating NMS manager includes establishing a heartbeat process between at least two NMS manager siblings.
213. The method according to claim 201, wherein the election is based on pre-configuration.
214. The method according to claim 201, wherein the election is based on an administrative weight assigned to each NMS manager.
215. The method according to claim 201, wherein the election is based on the load-bearing capability of each NMS manager.
216. The method according to claim 201, wherein the election is based on network size.
217. The method according to claim 203, wherein, in the event of an election, each NMS manager assumes it is the winner unless it receives notice otherwise from one of its siblings.
218. The method according to claim 204, wherein each NMS manager within said hierarchy stores state information pertaining to the network elements under its sphere of responsibility in an external database such that the elected NMS manager can retrieve the state information associated with the non-operating NMS manager.
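Claims 212-217 describe heartbeat-based failure detection between NMS siblings followed by election of a successor. A toy Python sketch, assuming election by administrative weight (claim 214) with the manager name as a deterministic tie-breaker, follows; the timeout, weights, and names are assumptions of this sketch:

    import time

    def heartbeat_expired(last_heartbeat: float, timeout_s: float = 5.0) -> bool:
        # Claim 212: failure of a sibling is inferred from a silent heartbeat.
        return (time.monotonic() - last_heartbeat) > timeout_s

    def elect_successor(sibling_weights: dict[str, float]) -> str:
        # Claims 214-216 permit election by administrative weight, load, or
        # network size; here administrative weight decides. Per claim 217,
        # each sibling can run this rule locally and assume it has won
        # unless a better-ranked sibling says otherwise.
        return max(sibling_weights, key=lambda name: (sibling_weights[name], name))

    print(heartbeat_expired(last_heartbeat=time.monotonic() - 10.0))  # True
    print(elect_successor({"nms-east": 10.0, "nms-west": 30.0}))      # nms-west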
219. An optical communications network having at least one optical switch connected to a network access device, wherein said switch comprises: a first line card disposed along a first communications path over which a first optical signal is transmitted, said first line card being connected to said network access device; a second line card disposed along a second communications path over which a second optical signal is transmitted; and an inter-card communication channel for bridging said second communications path to said first line card.
220. The network according to claim 219, wherein: a plurality of optical signals are transmitted over each of said first and second communications paths.
221. The network according to claim 219, wherein said first and second optical signals carry the same communications traffic, further including: a first line card manager associated with said first line card for monitoring the quality of said first optical signal and switching to said second communications path so as to deliver said second optical signal to said network access device when the quality of said first optical signal degrades below a specified threshold.
222. The network according to claim 221, wherein: said first line card performs o/e/o conversion on said first optical signal; said second line card performs o/e/o conversion on said second optical signal; and said inter-card communications channel is an electrical channel.
223. The network according to claim 221, further including: a node manager controlling said first and second line card managers; and a network management system (NMS), said node manager communicating a fault condition associated with any of said optical signals to said NMS.
224. The network according to claim 219, further including: first and second line card managers for respective control of said first and second line cards; and a first node manager controlling said first and second line card managers; wherein said first line card transmits communications traffic associated with said first optical signal to said second line card via said inter-card communications channel, said first line card manager monitoring the quality of said first optical signal and, in the event of poor quality, alerting the node manager which consequently instructs said second line card manager to transmit said communications traffic on said second optical signal.
225. The network according to claim 224, further including: a network management system (NMS), said node manager communicating a fault condition associated with any of said optical signals to said NMS.
226. The network according to claim 225, further including a second optical switch connected to a second network access device, said second optical switch comprising: a second node manager; a third line card disposed along said first communications path; a fourth line card disposed along said second communications path; and an inter-card communications channel for transmitting communications traffic associated with said first optical signal to said fourth line card and over said second communications path.
227. The network according to claim 226, including third and fourth line card managers respectively associated with said third and fourth line cards; wherein said third line card manager monitors said first optical signal and signals said second node manager in the event said first optical signal is of poor quality, and in response thereto said second node manager instructs said fourth line card manager to transmit said communications traffic using said second optical signal.
228. An optical communications network, comprising: an ingress optical switch having a first ingress line card and first and second egress line cards connected to a first switch fabric, wherein said first switch fabric bridges an ingress optical signal onto said first and second egress line cards thereby providing first and second copies of said optical signal; transit optical switches for transiting said first and second copies of said optical signal across said network across first and second optical paths; and an egress optical switch having second and third ingress line cards and a third egress line card connected to a second switch fabric, wherein said second and third ingress line cards respectively receive said first and second optical signal copies and said second switch fabric cross-connects only one of said optical signal copies to said third egress line card.
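Claim 228 describes bridging an ingress signal onto two paths and cross-connecting only one copy to the egress line card at the egress switch. A Python sketch of the egress-side selection follows; the claims do not specify the quality metric, so the OSNR-based comparison, threshold, and path names are assumptions of this sketch:

    def select_received_copy(copies: dict[str, float], min_osnr_db: float = 20.0) -> str:
        # The ingress switch fabric bridged the signal onto two paths; the
        # egress fabric cross-connects only one copy (claim 228). Keep the
        # copies that meet the quality floor and take the best of them.
        usable = {path: osnr for path, osnr in copies.items() if osnr >= min_osnr_db}
        if not usable:
            raise RuntimeError("both copies degraded; raise alarm to node manager")
        return max(usable, key=usable.get)

    print(select_received_copy({"path-A": 14.2, "path-B": 27.9}))  # -> path-B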
PCT/US2002/005826 2001-02-28 2002-02-26 Node architecture and management system for optical networks WO2002069104A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002254037A AU2002254037A1 (en) 2001-02-28 2002-02-26 Node architecture and management system for optical networks

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
US79538501A 2001-02-28 2001-02-28
US09/795,370 2001-02-28
US09/795,950 US6973229B1 (en) 2001-02-28 2001-02-28 Node architecture for modularized and reconfigurable optical networks, and methods and apparatus therefor
US09/795,951 2001-02-28
US09/795,252 US7013084B2 (en) 2001-02-28 2001-02-28 Multi-tiered control architecture for adaptive optical networks, and methods and apparatus therefor
US09/795,951 US20020176131A1 (en) 2001-02-28 2001-02-28 Protection switching for an optical network, and methods and apparatus therefor
US09/795,534 US20020165962A1 (en) 2001-02-28 2001-02-28 Embedded controller architecture for a modular optical network, and methods and apparatus therefor
US09/795,255 US20030023709A1 (en) 2001-02-28 2001-02-28 Embedded controller and node management architecture for a modular optical network, and methods and apparatus therefor
US09/795,534 2001-02-28
US09/795,252 2001-02-28
US09/795,255 2001-02-28
US09/795,950 2001-02-28
US09/795,370 US20020174207A1 (en) 2001-02-28 2001-02-28 Self-healing hierarchical network management system, and methods and apparatus therefor
US09/795,385 2001-02-28

Publications (2)

Publication Number Publication Date
WO2002069104A2 true WO2002069104A2 (en) 2002-09-06
WO2002069104A3 WO2002069104A3 (en) 2009-06-11

Family

ID=27569919

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/005826 WO2002069104A2 (en) 2001-02-28 2002-02-26 Node architecture and management system for optical networks

Country Status (3)

Country Link
US (7) US6973229B1 (en)
AU (1) AU2002254037A1 (en)
WO (1) WO2002069104A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2329281A1 (en) * 2008-09-11 2011-06-08 Nortel Networks Limited Protection for provider backbone bridge traffic engineering
US20110164493A1 (en) * 2008-09-11 2011-07-07 Nigel Lawrence Bragg Protection for provider backbone bridge traffic engineering
EP3192198A4 (en) * 2014-09-11 2018-04-25 The Arizona Board of Regents on behalf of The University of Arizona Resilient optical networking
WO2021249639A1 (en) * 2020-06-10 2021-12-16 Telefonaktiebolaget Lm Ericsson (Publ) Fault location in an optical ring network
US11303533B2 (en) 2019-07-09 2022-04-12 Cisco Technology, Inc. Self-healing fabrics
US11316712B2 (en) 2017-06-21 2022-04-26 Byd Company Limited Canopen-based data transmission gateway changeover method, system and apparatus thereof

Families Citing this family (561)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8805195B2 (en) * 2007-07-16 2014-08-12 Ciena Corporation High-speed optical transceiver for InfiniBand and Ethernet
US6721508B1 (en) * 1998-12-14 2004-04-13 Tellabs Operations Inc. Optical line terminal arrangement, apparatus and methods
US7225243B1 (en) * 2000-03-14 2007-05-29 Adaptec, Inc. Device discovery methods and systems implementing the same
US20020048066A1 (en) * 2000-05-15 2002-04-25 Antoniades Neophytos A. Optical networking devices and methods for optical networks with increased transparency
US6799319B2 (en) * 2000-07-17 2004-09-28 Sun Microsystems, Inc. Method and apparatus for application packages and delegate packages to adopt and export standard execution state machine interfaces
US7720959B2 (en) 2000-10-17 2010-05-18 Avaya Inc. Method and apparatus for characterizing the quality of a network path
US7349994B2 (en) 2000-10-17 2008-03-25 Avaya Technology Corp. Method and apparatus for coordinating routing parameters via a back-channel communication medium
US8023421B2 (en) 2002-07-25 2011-09-20 Avaya Inc. Method and apparatus for the assessment and optimization of network traffic
DE60141417D1 (en) * 2000-10-17 2010-04-08 Avaya Technology Corp METHOD AND DEVICE FOR OPTIMIZING PERFORMANCE AND COST IN AN INTERNET PLANT
US6973229B1 (en) * 2001-02-28 2005-12-06 Lambda Opticalsystems Corporation Node architecture for modularized and reconfigurable optical networks, and methods and apparatus therefor
JP2002271354A (en) * 2001-03-06 2002-09-20 Fujitsu Ltd Light path switching apparatus and light wavelength multiplex diversity communication system
GB2373131A (en) * 2001-03-09 2002-09-11 Marconi Comm Ltd Telecommunications networks
WO2002075998A1 (en) * 2001-03-16 2002-09-26 Photuris, Inc. Method and apparatus for transferring wdm signals between different wdm communications systems in optically transparent manner
US8199649B2 (en) * 2001-03-28 2012-06-12 Alcatel Lucent Method and apparatus for rerouting a connection in a data communication network based on a user connection monitoring function
US6983414B1 (en) 2001-03-30 2006-01-03 Cisco Technology, Inc. Error insertion circuit for SONET forward error correction
US7003715B1 (en) 2001-03-30 2006-02-21 Cisco Technology, Inc. Galois field multiply accumulator
US7124064B1 (en) 2001-03-30 2006-10-17 Cisco Technology, Inc. Automatic generation of hardware description language code for complex polynomial functions
US7447982B1 (en) * 2001-03-30 2008-11-04 Cisco Technology, Inc. BCH forward error correction decoder
US7426210B1 (en) * 2001-04-03 2008-09-16 Yt Networks Capital, Llc Port-to-port, non-blocking, scalable optical router architecture and method for routing optical traffic
US20020191244A1 (en) * 2001-04-06 2002-12-19 Roman Antosik Disjoint shared protection
JP4167073B2 (en) * 2001-04-11 2008-10-15 トランスモード ホールディング エービー Optical add / drop node and optical WDM network
US8234338B1 (en) * 2001-04-20 2012-07-31 Microsoft Corporation System and method for reliable message delivery
JP3587250B2 (en) * 2001-04-27 2004-11-10 日本電気株式会社 Communication apparatus for performing automatic failure recovery and automatic failure recovery method
US20020167899A1 (en) * 2001-05-11 2002-11-14 Thompson Richard A. System and method for the configuration, repair and protection of virtual ring networks
US20020167896A1 (en) * 2001-05-11 2002-11-14 Arvind Puntambekar Methods and apparatus for configuration information recovery
GB0111869D0 (en) * 2001-05-15 2001-07-04 Marconi Comm Ltd Restoration protection in communication networks
US6567413B1 (en) * 2001-05-18 2003-05-20 Network Elements, Inc. Optical networking module including protocol processing and unified software control
US6580731B1 (en) 2001-05-18 2003-06-17 Network Elements, Inc. Multi-stage SONET overhead processing
JP4398113B2 (en) * 2001-05-23 2010-01-13 富士通株式会社 Layered network management system
US20040100684A1 (en) * 2001-06-07 2004-05-27 Jones Kevan Peter Line amplification system for wavelength switched optical networks
JP2002366426A (en) * 2001-06-11 2002-12-20 Mitsumi Electric Co Ltd Program executing device and program executing method
US7747165B2 (en) * 2001-06-13 2010-06-29 Alcatel-Lucent Usa Inc. Network operating system with topology autodiscovery
US7359377B1 (en) * 2001-06-19 2008-04-15 Juniper Networks, Inc. Graceful restart for use in nodes employing label switched path signaling protocols
WO2003001707A1 (en) * 2001-06-25 2003-01-03 Corvis Corporation Optical transmission systems, devices, and methods
US7203742B1 (en) 2001-07-11 2007-04-10 Redback Networks Inc. Method and apparatus for providing scalability and fault tolerance in a distributed network
US7315903B1 (en) * 2001-07-20 2008-01-01 Palladia Systems, Inc. Self-configuring server and server network
US7075953B2 (en) * 2001-07-30 2006-07-11 Network-Elements, Inc. Programmable SONET framing
US7251248B2 (en) * 2001-07-31 2007-07-31 Bridgeworks Ltd. Connection device
US7945650B1 (en) * 2001-08-01 2011-05-17 Cisco Technology, Inc. Identifying modular chassis composition by using network physical topology information
US8332502B1 (en) 2001-08-15 2012-12-11 Metavante Corporation Business to business network management event detection and response system and method
US7113699B1 (en) * 2001-08-15 2006-09-26 Ciena Corporation Fault forwarding in an optical network
EP1286498A1 (en) * 2001-08-21 2003-02-26 Alcatel Method, service - agent and network management system for operating a telecommunications network
US9048965B2 (en) * 2001-08-24 2015-06-02 Mark Henrik Sandstrom Input-controllable dynamic cross-connect
US8676956B1 (en) * 2001-09-05 2014-03-18 Alcatel Lucent Method and system for monitoring network resources utilization
US7010780B2 (en) * 2001-09-17 2006-03-07 Intel Corporation Method and system for software modularization and automatic code generation for embedded systems
US7619886B2 (en) * 2001-09-27 2009-11-17 Alcatel-Lucent Canada Inc. Method and apparatus for providing a common support services infrastructure for a network element
US7710866B2 (en) * 2001-09-27 2010-05-04 Alcatel-Lucent Canada Inc. Method and apparatus for optimization of redundant link usage in a multi-shelf network element
US6952529B1 (en) * 2001-09-28 2005-10-04 Ciena Corporation System and method for monitoring OSNR in an optical network
EP1298843A3 (en) * 2001-09-28 2004-04-07 Tyco Telecommunications (US) Inc. Replicated naming service to support a telecommunications network
WO2003034657A2 (en) * 2001-10-12 2003-04-24 Koninklijke Philips Electronics N.V. Scheme for dynamic process network reconfiguration
US8868715B2 (en) * 2001-10-15 2014-10-21 Volli Polymer Gmbh Llc Report generation and visualization systems and methods and their use in testing frameworks for determining suitability of a network for target applications
US8543681B2 (en) * 2001-10-15 2013-09-24 Volli Polymer Gmbh Llc Network topology discovery systems and methods
GB2381153B (en) * 2001-10-15 2004-10-20 Jacobs Rimell Ltd Policy server
US7035930B2 (en) * 2001-10-26 2006-04-25 Hewlett-Packard Development Company, L.P. Method and framework for generating an optimized deployment of software applications in a distributed computing environment using layered model descriptions of services and servers
US7054934B2 (en) * 2001-10-26 2006-05-30 Hewlett-Packard Development Company, L.P. Tailorable optimization using model descriptions of services and servers in a computing environment
US7039705B2 (en) * 2001-10-26 2006-05-02 Hewlett-Packard Development Company, L.P. Representing capacities and demands in a layered computing environment using normalized values
US20030084219A1 (en) * 2001-10-26 2003-05-01 Maxxan Systems, Inc. System, apparatus and method for address forwarding for a computer network
JP2003134055A (en) * 2001-10-29 2003-05-09 Yokogawa Electric Corp Measuring instrument for optical digital communication
US8244837B2 (en) * 2001-11-05 2012-08-14 Accenture Global Services Limited Central administration of one or more resources
US7443789B2 (en) * 2001-11-21 2008-10-28 Adc Dsl Systems, Inc. Protection switching mechanism
US20030120915A1 (en) * 2001-11-30 2003-06-26 Brocade Communications Systems, Inc. Node and port authentication in a fibre channel network
US20030161355A1 (en) * 2001-12-21 2003-08-28 Rocco Falcomato Multi-mode framer and pointer processor for optically transmitted data
US7519055B1 (en) * 2001-12-21 2009-04-14 Alcatel Lucent Optical edge router
JP2003198485A (en) * 2001-12-28 2003-07-11 Nec Corp Cross connect device and optical communication system
US7145914B2 (en) 2001-12-31 2006-12-05 Maxxan Systems, Incorporated System and method for controlling data paths of a network processor subsystem
US7085846B2 (en) * 2001-12-31 2006-08-01 Maxxan Systems, Incorporated Buffer to buffer credit flow control for computer network
CA2415598A1 (en) * 2002-01-11 2003-07-11 Nec Corporation Multiplex communication system and method
US20030163692A1 (en) * 2002-01-31 2003-08-28 Brocade Communications Systems, Inc. Network security and applications to the fabric
US7873984B2 (en) * 2002-01-31 2011-01-18 Brocade Communications Systems, Inc. Network security through configuration servers in the fabric environment
US6917759B2 (en) * 2002-01-31 2005-07-12 Nortel Networks Limited Shared mesh signaling algorithm and apparatus
US7243367B2 (en) 2002-01-31 2007-07-10 Brocade Communications Systems, Inc. Method and apparatus for starting up a network or fabric
US7962588B1 (en) * 2002-02-01 2011-06-14 Ciena Corporation Method and system for managing optical network elements
US8687959B2 (en) * 2002-02-06 2014-04-01 Ciena Corporation System and method for configuration discovery in an optical network
US7668080B2 (en) * 2002-02-25 2010-02-23 Pluris, Inc. Method and apparatus for implementing automatic protection switching functionality in a distributed processor data router
US7107353B1 (en) * 2002-03-07 2006-09-12 Bellsouth Intellectual Property Corporation Systems and methods for determining a fundamental route between central offices in a telecommunications network
US6856942B2 (en) * 2002-03-09 2005-02-15 Katrina Garnett System, method and model for autonomic management of enterprise applications
US20040120713A1 (en) * 2002-03-27 2004-06-24 Robert Ward Method and apparatus for providing sparing capacity for optical switches
US7295561B1 (en) 2002-04-05 2007-11-13 Ciphermax, Inc. Fibre channel implementation using network processors
FR2838217B1 (en) * 2002-04-05 2004-06-25 De Chelle Yvonne Auberlet METHOD AND DEVICE FOR GENERATING CUSTOMIZABLE AND SCALABLE EXECUTABLE SOFTWARE WITHOUT COMPUTER PROGRAMMING
US7379970B1 (en) 2002-04-05 2008-05-27 Ciphermax, Inc. Method and system for reduced distributed event handling in a network environment
US7406038B1 (en) * 2002-04-05 2008-07-29 Ciphermax, Incorporated System and method for expansion of computer network switching system without disruption thereof
US7307995B1 (en) 2002-04-05 2007-12-11 Ciphermax, Inc. System and method for linking a plurality of network switches
US7209656B2 (en) * 2002-04-12 2007-04-24 Fujitsu Limited Management of optical links using power level information
US7212742B2 (en) * 2002-04-12 2007-05-01 Fujitsu Limited Power level management in optical networks
US7209655B2 (en) * 2002-04-12 2007-04-24 Fujitsu Limited Sharing of power level information to support optical communications
US20030195956A1 (en) * 2002-04-15 2003-10-16 Maxxan Systems, Inc. System and method for allocating unique zone membership
WO2003090035A2 (en) * 2002-04-22 2003-10-30 Celion Networks, Inc. Automated optical transport system
US20030200330A1 (en) * 2002-04-22 2003-10-23 Maxxan Systems, Inc. System and method for load-sharing computer network switch
US7342883B2 (en) * 2002-04-25 2008-03-11 Intel Corporation Method and apparatus for managing network traffic
US20030202510A1 (en) * 2002-04-26 2003-10-30 Maxxan Systems, Inc. System and method for scalable switch fabric for computer network
ATE305678T1 (en) * 2002-05-06 2005-10-15 Cit Alcatel METHOD FOR PROTECTED TRANSMISSION IN A WDM NETWORK
US7489867B1 (en) * 2002-05-06 2009-02-10 Cisco Technology, Inc. VoIP service over an ethernet network carried by a DWDM optical supervisory channel
US8611363B2 (en) * 2002-05-06 2013-12-17 Adtran, Inc. Logical port system and method
US7299264B2 (en) * 2002-05-07 2007-11-20 Hewlett-Packard Development Company, L.P. System and method for monitoring a connection between a server and a passive client device
US7817540B1 (en) * 2002-05-08 2010-10-19 Cisco Technology, Inc. Method and apparatus for N+1 RF switch with passive working path and active protection path
US7398321B2 (en) * 2002-05-14 2008-07-08 The Research Foundation Of Suny Segment protection scheme for a network
US7218814B2 (en) * 2002-05-28 2007-05-15 Optun (Bvi) Ltd. Method and apparatus for optical mode conversion
US7609918B2 (en) 2002-05-28 2009-10-27 Optun (Bvi) Ltd. Method and apparatus for optical mode division multiplexing and demultiplexing
US7321705B2 (en) 2002-05-28 2008-01-22 Optun (Bvi) Ltd. Method and device for optical switching and variable optical attenuation
US7155124B2 (en) * 2002-05-31 2006-12-26 Fujitsu Limited Loss-less architecture and method for wavelength division multiplexing (WDM) optical networks
US7072960B2 (en) * 2002-06-10 2006-07-04 Hewlett-Packard Development Company, L.P. Generating automated mappings of service demands to server capacities in a distributed computer system
US7310314B1 (en) * 2002-06-10 2007-12-18 Juniper Networks, Inc. Managing periodic communications
US20080120399A1 (en) * 2006-11-16 2008-05-22 Mark Henrik Sandstrom Direct Binary File Transfer Based Network Management System Free of Messaging, Commands and Data Format Conversions
US9917883B2 (en) 2002-06-13 2018-03-13 Throughputer, Inc. Direct binary file transfer based network management system free of messaging, commands and data format conversions
US20080117068A1 (en) * 2006-11-16 2008-05-22 Mark Henrik Sandstrom Intelligent Network Alarm Status Monitoring
US20050089027A1 (en) * 2002-06-18 2005-04-28 Colton John R. Intelligent optical data switching system
US7539183B2 (en) * 2002-06-24 2009-05-26 Emerson Network Power - Embedded Computing, Inc. Multi-service platform system and method
US20040008622A1 (en) * 2002-06-26 2004-01-15 Jean Dolbec Lightpath visualization method for mesh WDM networks
US20040001711A1 (en) * 2002-06-26 2004-01-01 Kirby Koster Lightpath segment protection for WDM networks
US20040008985A1 (en) * 2002-06-26 2004-01-15 Jean Dolbec Approach for operator directed routing in conjunction with automatic path completion
US8667105B1 (en) * 2002-06-26 2014-03-04 Apple Inc. Systems and methods facilitating relocatability of devices between networks
US7869424B2 (en) * 2002-07-01 2011-01-11 Converged Data Solutions Inc. Systems and methods for voice and data communications including a scalable TDM switch/multiplexer
US20040010716A1 (en) * 2002-07-11 2004-01-15 International Business Machines Corporation Apparatus and method for monitoring the health of systems management software components in an enterprise
US7209963B2 (en) * 2002-07-11 2007-04-24 International Business Machines Corporation Apparatus and method for distributed monitoring of endpoints in a management region
US6773251B2 (en) * 2002-07-11 2004-08-10 Pechiney Emballage Flexible Europe Segmented wheel disk for extrusion blowmolding apparatus
US6711324B1 (en) * 2002-07-11 2004-03-23 Sprint Communications Company, L.P. Software model for optical communication networks
KR20020067028A (en) * 2002-07-25 2002-08-21 두산티엠에스주식회사 External Standalone Database Management Device
GB0217355D0 (en) * 2002-07-26 2002-09-04 Marconi Comm Ltd Communications system
US7907607B2 (en) * 2002-08-02 2011-03-15 Null Networks Llc Software methods of an optical networking apparatus with integrated modules having multi-protocol processors and physical layer components
US20040030766A1 (en) * 2002-08-12 2004-02-12 Michael Witkowski Method and apparatus for switch fabric configuration
US20040109683A1 (en) * 2002-08-21 2004-06-10 Meriton Networks Inc. Non-disruptive lightpath routing changes in WDM networks
JP3957065B2 (en) * 2002-08-28 2007-08-08 Fujitsu Limited Network computer system and management device
EP1315358A1 (en) * 2002-09-12 2003-05-28 Agilent Technologies Inc. a Delaware Corporation Data-transparent management system for controlling measurement instruments
US7370092B2 (en) * 2002-09-12 2008-05-06 Computer Sciences Corporation System and method for enhanced software updating and revision
US8660427B2 (en) 2002-09-13 2014-02-25 Intel Corporation Method and apparatus of the architecture and operation of control processing unit in wavelength-division-multiplexed photonic burst-switched networks
US7457277B1 (en) * 2002-09-20 2008-11-25 Mahi Networks, Inc. System and method for network layer protocol routing in a peer model integrated optical network
US7272778B2 (en) * 2002-09-27 2007-09-18 International Business Machines Corporation Method and systems for improving test of data transmission in multi-channel systems
US7593319B1 (en) 2002-10-15 2009-09-22 Garrettcom, Inc. LAN switch with rapid fault recovery
US8045539B2 (en) * 2002-10-25 2011-10-25 Alcatel Lucent Virtual group connection scheme for ATM architecture in an access node
US20040205240A1 (en) * 2002-11-21 2004-10-14 International Business Machines Corporation Method, system, and computer program product for providing a four-tier corba architecture
US7152958B2 (en) * 2002-11-23 2006-12-26 Silverbrook Research Pty Ltd Thermal ink jet with chemical vapor deposited nozzle plate
JP2004208034A (en) * 2002-12-25 2004-07-22 Nec Corp Communication system and transport system
US20040126107A1 (en) * 2002-12-31 2004-07-01 Intelligent Photonics Control Corporation Optical control system
AT501256A2 (en) * 2003-02-06 2006-07-15 Mobilkom Austria Ag & Co Kg System for the management of products and product parts or associated serial numbers, and data processing system
US20040168089A1 (en) * 2003-02-19 2004-08-26 Hyun-Sook Lee Security method for operator access control of network management system
US7343425B1 (en) 2003-02-21 2008-03-11 Marvell International Ltd. Multi-speed serial interface for media access control and physical layer devices
US7848649B2 (en) 2003-02-28 2010-12-07 Intel Corporation Method and system to frame and format optical control and data bursts in WDM-based photonic burst switched networks
JP2004280527A (en) * 2003-03-17 2004-10-07 Ge Medical Systems Global Technology Co Llc Multi-tier application architecture
US7287180B1 (en) * 2003-03-20 2007-10-23 Info Value Computing, Inc. Hardware independent hierarchical cluster of heterogeneous media servers using a hierarchical command beat protocol to synchronize distributed parallel computing systems and employing a virtual dynamic network topology for distributed parallel computing system
JP2004289674A (en) * 2003-03-24 2004-10-14 Ntt Docomo Inc Service quality control unit in IP network and method therefor, router and service quality control system
KR100971320B1 (en) * 2003-03-25 2010-07-20 Transpacific Sonic, LLC Method for storing and running an application program in flash ROM
US7451340B2 (en) * 2003-03-31 2008-11-11 Lucent Technologies Inc. Connection set-up extension for restoration path establishment in mesh networks
US8867333B2 (en) 2003-03-31 2014-10-21 Alcatel Lucent Restoration path calculation considering shared-risk link groups in mesh networks
US7545736B2 (en) * 2003-03-31 2009-06-09 Alcatel-Lucent Usa Inc. Restoration path calculation in mesh networks
US7646706B2 (en) 2003-03-31 2010-01-12 Alcatel-Lucent Usa Inc. Restoration time in mesh networks
US7643408B2 (en) 2003-03-31 2010-01-05 Alcatel-Lucent Usa Inc. Restoration time in networks
US7689693B2 (en) 2003-03-31 2010-03-30 Alcatel-Lucent Usa Inc. Primary/restoration path calculation in mesh networks based on multiple-cost criteria
US7606237B2 (en) * 2003-03-31 2009-10-20 Alcatel-Lucent Usa Inc. Sharing restoration path bandwidth in mesh networks
US8296407B2 (en) 2003-03-31 2012-10-23 Alcatel Lucent Calculation, representation, and maintenance of sharing information in mesh networks
US7298973B2 (en) * 2003-04-16 2007-11-20 Intel Corporation Architecture, method and system of multiple high-speed servers to network in WDM based photonic burst-switched networks
US20070053374A1 (en) * 2003-04-16 2007-03-08 David Levi Multi-service communication system
US7266295B2 (en) 2003-04-17 2007-09-04 Intel Corporation Modular reconfigurable multi-server system and method for high-speed networking within photonic burst-switched network
EP1471673B1 (en) * 2003-04-22 2006-08-02 Alcatel Method for using the complete resource capacity of a synchronous digital hierarchy network, subject to a protection mechanism, in the presence of a data (packet) network, and related apparatus for the implementation of the method
US7254815B2 (en) * 2003-04-24 2007-08-07 International Business Machines Corporation Method and apparatus for implementing distributed event management in an embedded support processor computer system
US7436840B1 (en) 2003-05-16 2008-10-14 Sprint Communications Company L.P. Network system manager for telecommunication carrier virtual networks
US7539135B1 (en) 2003-05-16 2009-05-26 Sprint Communications Company L.P. System and method for establishing telecommunication carrier virtual networks
US7526202B2 (en) 2003-05-19 2009-04-28 Intel Corporation Architecture and method for framing optical control and data bursts within optical transport unit structures in photonic burst-switched networks
US7379481B2 (en) 2003-05-30 2008-05-27 Hubbell Incorporated Apparatus and method for automatic provisioning of SONET multiplexer
US6958908B2 (en) * 2003-05-30 2005-10-25 Hubbell Incorporated Compact enclosure for interchangeable SONET multiplexer cards and methods for using same
US8543672B2 (en) * 2003-06-02 2013-09-24 Alcatel Lucent Method for reconfiguring a ring network, a network node, and a computer program product
US7860392B2 (en) * 2003-06-06 2010-12-28 Dynamic Method Enterprises Limited Optical network topology databases based on a set of connectivity constraints
US20040247317A1 (en) * 2003-06-06 2004-12-09 Sadananda Santosh Kumar Method and apparatus for a network database in an optical network
US7283741B2 (en) 2003-06-06 2007-10-16 Intellambda Systems, Inc. Optical reroutable redundancy scheme
ITBO20030368A1 (en) * 2003-06-17 2004-12-18 Qubica S P A System for the management of at least one event in a bowling facility.
US7310480B2 (en) * 2003-06-18 2007-12-18 Intel Corporation Adaptive framework for closed-loop protocols over photonic burst switched networks
KR100968030B1 (en) 2003-06-23 2010-07-07 KT Corporation O-UNI connection reconfiguration method and system for optical transport network with minimal transition time
US7272310B2 (en) * 2003-06-24 2007-09-18 Intel Corporation Generic multi-protocol label switching (GMPLS)-based label space architecture for optical switched networks
US7974284B2 (en) * 2003-06-27 2011-07-05 Broadcom Corporation Single and double tagging schemes for packet processing in a network device
GB0315366D0 (en) * 2003-07-01 2003-08-06 Marconi Comm Ltd Improvements in or relating to communication systems
US7023793B2 (en) * 2003-08-01 2006-04-04 Ciena Corporation Resiliency of control channels in a communications network
WO2005015944A1 (en) * 2003-08-07 2005-02-17 Telecom Italia S.P.A. Packet and optical routing equipment and method
US20070258715A1 (en) * 2003-08-07 2007-11-08 Daniele Androni Modular, Easily Configurable and Expandible Node Structure for an Optical Communications Network
EP1661025A4 (en) 2003-08-11 2010-05-26 Chorus Systems Inc Systems and methods for creation and use of an adaptive reference model
JP4914212B2 (en) * 2003-08-15 2012-04-11 GVBB Holdings S.a.r.l. Broadcast router with changeable functionality
US7549149B2 (en) * 2003-08-21 2009-06-16 International Business Machines Corporation Automatic software distribution and installation in a multi-tiered computer network
US8554947B1 (en) * 2003-09-15 2013-10-08 Verizon Laboratories Inc. Network data transmission systems and methods
US7451201B2 (en) * 2003-09-30 2008-11-11 International Business Machines Corporation Policy driven autonomic computing-specifying relationships
CN100337412C (en) * 2003-09-30 2007-09-12 Huawei Technologies Co., Ltd. Method for protecting subnetwork expansion in optical network
US7533173B2 (en) * 2003-09-30 2009-05-12 International Business Machines Corporation Policy driven automation - specifying equivalent resources
US8892702B2 (en) 2003-09-30 2014-11-18 International Business Machines Corporation Policy driven autonomic computing-programmatic policy definitions
US7518982B1 (en) * 2003-10-03 2009-04-14 Nortel Networks Limited System and method of communicating status and protection information between cards in a communications system
JP2007507990A (en) * 2003-10-14 2007-03-29 ラプター・ネツトワークス・テクノロジー・インコーポレイテツド Switching system with distributed switching structure
US7653730B1 (en) 2003-10-30 2010-01-26 Sprint Communications Company L.P. System and method for latency assurance and dynamic re-provisioning of telecommunication connections in a carrier virtual network
US7460526B1 (en) 2003-10-30 2008-12-02 Sprint Communications Company L.P. System and method for establishing a carrier virtual network inverse multiplexed telecommunication connection
US7450592B2 (en) 2003-11-12 2008-11-11 At&T Intellectual Property I, L.P. Layer 2/layer 3 interworking via internal virtual UNI
US7340169B2 (en) * 2003-11-13 2008-03-04 Intel Corporation Dynamic route discovery for optical switched networks using peer routing
US7596612B1 (en) * 2003-11-25 2009-09-29 Sprint Communications Company L.P. Interface system for carrier virtual network system
US6926199B2 (en) * 2003-11-25 2005-08-09 Segwave, Inc. Method and apparatus for storing personalized computing device setting information and user session information to enable a user to transport such settings between computing devices
US7602701B1 (en) * 2003-12-22 2009-10-13 Alcatel Lucent WideBand cross-connect system and protection method utilizing SONET add/drop multiplexers
US7734176B2 (en) 2003-12-22 2010-06-08 Intel Corporation Hybrid optical burst switching with fixed time slot architecture
US7774474B2 (en) * 2003-12-23 2010-08-10 Nortel Networks Limited Communication of control and data path state for networks
US7573898B2 (en) * 2003-12-29 2009-08-11 Fujitsu Limited Method and apparatus to double LAN service unit bandwidth
US8155515B2 (en) * 2003-12-29 2012-04-10 Verizon Business Global Llc Method and apparatus for sharing common capacity and using different schemes for restoring telecommunications networks
US7646752B1 (en) * 2003-12-31 2010-01-12 Nortel Networks Limited Multi-hop wireless backhaul network and method
US6923224B1 (en) * 2004-01-15 2005-08-02 Stant Manufacturing Inc. Closure and vent system for capless filler neck
US6920586B1 (en) * 2004-01-23 2005-07-19 Freescale Semiconductor, Inc. Real-time debug support for a DMA device and method thereof
US7570672B2 (en) * 2004-02-02 2009-08-04 Simplexgrinnell Lp Fiber optic multiplex modem
GB0402572D0 (en) * 2004-02-05 2004-03-10 Nokia Corp A method of organising servers
US7383461B2 (en) * 2004-02-12 2008-06-03 International Business Machines Corporation Method and system to recover a failed flash of a blade service processor in a server chassis
US7697455B2 (en) * 2004-02-17 2010-04-13 Dynamic Method Enterprises Limited Multiple redundancy schemes in an optical network
US7940648B1 (en) * 2004-03-02 2011-05-10 Cisco Technology, Inc. Hierarchical protection switching framework
US20050196168A1 (en) * 2004-03-03 2005-09-08 Fujitsu Limited Optical connection switching apparatus and management control unit thereof
EP1575213A1 (en) * 2004-03-08 2005-09-14 Siemens Aktiengesellschaft Method and apparatus for operating at least two rack devices
GB2412823B (en) * 2004-03-31 2006-03-15 Siemens Ag A method of optimising connection set-up times between nodes in a centrally controlled network
US20050226212A1 (en) * 2004-04-02 2005-10-13 Dziong Zbigniew M Loop avoidance for recovery paths in mesh networks
US7500013B2 (en) * 2004-04-02 2009-03-03 Alcatel-Lucent Usa Inc. Calculation of link-detour paths in mesh networks
US8111612B2 (en) 2004-04-02 2012-02-07 Alcatel Lucent Link-based recovery with demand granularity in mesh networks
US7257288B1 (en) * 2004-04-23 2007-08-14 Nistica, Inc. Tunable optical routing systems
US7623784B1 (en) * 2004-05-04 2009-11-24 Sprint Communications Company L.P. Network connection verification in optical communication networks
US7949734B1 (en) * 2004-05-06 2011-05-24 Cisco Technology, Inc. Method and apparatus for re-generating configuration commands of a network device using an object-based approach
EP1598979A1 (en) * 2004-05-18 2005-11-23 Siemens Aktiengesellschaft Method and apparatus for operating a management network in the event of failure of a manager
US7437469B2 (en) * 2004-05-26 2008-10-14 Ciena Corporation Virtual network element framework and operating system for managing multi-service network equipment
US20050265719A1 (en) * 2004-05-27 2005-12-01 Bernard Marc R Optical line termination, optical access network, and method and apparatus for determining network termination type
US20050267961A1 (en) * 2004-05-27 2005-12-01 Pirbhai Rahim S Communication network management methods and systems
US7640317B2 (en) * 2004-06-10 2009-12-29 Cisco Technology, Inc. Configuration commit database approach and session locking approach in a two-stage network device configuration process
US7660882B2 (en) * 2004-06-10 2010-02-09 Cisco Technology, Inc. Deploying network element management system provisioning services
US7853676B1 (en) 2004-06-10 2010-12-14 Cisco Technology, Inc. Protocol for efficient exchange of XML documents with a network device
US7779404B2 (en) * 2004-06-10 2010-08-17 Cisco Technology, Inc. Managing network device configuration using versioning and partitioning
US7409315B2 (en) * 2004-06-28 2008-08-05 Broadcom Corporation On-board performance monitor and power control system
US7668941B1 (en) 2004-06-29 2010-02-23 American Megatrends, Inc. Systems and methods for implementing a TCP/IP stack and web interface within a management module
US7707282B1 (en) * 2004-06-29 2010-04-27 American Megatrends, Inc. Integrated network and management controller
US20060002705A1 (en) * 2004-06-30 2006-01-05 Linda Cline Decentralizing network management system tasks
FR2872655A1 (en) * 2004-07-01 2006-01-06 France Telecom Multiservice private network and interface modules for carrying data in different formats over such a network
US20060015584A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Autonomous service appliance
US7363366B2 (en) 2004-07-13 2008-04-22 Teneros Inc. Network traffic routing
US20060015764A1 (en) * 2004-07-13 2006-01-19 Teneros, Inc. Transparent service provider
US20060018260A1 (en) * 2004-07-21 2006-01-26 Tellabs Operations, Inc., A Delaware Corporation High density port complex protocol
US20060026278A1 (en) * 2004-07-27 2006-02-02 Jian Yu Administration system for network management systems
US8316438B1 (en) 2004-08-10 2012-11-20 Pure Networks Llc Network management providing network health information and lockdown security
US7925729B2 (en) 2004-12-07 2011-04-12 Cisco Technology, Inc. Network management
US7904712B2 (en) * 2004-08-10 2011-03-08 Cisco Technology, Inc. Service licensing and maintenance for networks
US7656901B2 (en) * 2004-08-10 2010-02-02 Meshnetworks, Inc. Software architecture and hardware abstraction layer for multi-radio routing and method for providing the same
US20060047801A1 (en) * 2004-08-26 2006-03-02 Anthony Haag SNMP wireless proxy
JP2006065660A (en) * 2004-08-27 2006-03-09 Sony Corp Terminal equipment, information delivery server, and information delivery method
US7801449B2 (en) * 2004-09-07 2010-09-21 Finisar Corporation Off-module optical transceiver firmware paging
US8229301B2 (en) * 2004-09-07 2012-07-24 Finisar Corporation Configuration of optical transceivers to perform custom features
US8364829B2 (en) * 2004-09-24 2013-01-29 Hewlett-Packard Development Company, L.P. System and method for ascribing resource consumption to activity in a causal path of a node of a distributed computing system
US7904546B1 (en) 2004-09-27 2011-03-08 Alcatel-Lucent Usa Inc. Managing processes on a network device
US8990365B1 (en) * 2004-09-27 2015-03-24 Alcatel Lucent Processing management packets
US20060067314A1 (en) * 2004-09-29 2006-03-30 Michael Ho Overhead processing and generation techniques
US7536290B1 (en) * 2004-09-30 2009-05-19 Silicon Valley Bank Model-based management of an existing information processing system
US7716386B1 (en) * 2004-10-15 2010-05-11 Broadcom Corporation Component identification and transmission system
GB2419484A (en) * 2004-10-22 2006-04-26 Zhou Rong Optical N x M switch
US7957651B2 (en) * 2004-10-29 2011-06-07 Finisar Corporation Configurable optical transceiver feature specific cost transaction
US7802124B2 (en) 2004-10-29 2010-09-21 Finisar Corporation Microcode configurable frequency clock
US7974538B2 (en) 2004-10-29 2011-07-05 Finisar Corporation Transaction for transceiver firmware download
US7613107B2 (en) * 2004-11-02 2009-11-03 Alcatel Lucent Protection switch logging methods and systems
US7707266B2 (en) * 2004-11-23 2010-04-27 Intel Corporation Scalable, high-performance, global interconnect scheme for multi-threaded, multiprocessing system-on-a-chip network processor unit
US8478849B2 (en) * 2004-12-07 2013-07-02 Pure Networks LLC Network administration tool
US7827252B2 (en) * 2004-12-07 2010-11-02 Cisco Technology, Inc. Network device management
JP2006166037A (en) * 2004-12-08 2006-06-22 Fujitsu Ltd Optical transmission device and its system
US7499452B2 (en) * 2004-12-28 2009-03-03 International Business Machines Corporation Self-healing link sequence counts within a circular buffer
ATE471052T1 (en) 2005-01-13 2010-06-15 Ericsson Telefon Ab L M Load-based access control in multiple access systems
JP4576249B2 (en) * 2005-01-27 2010-11-04 Cloud Scope Technologies, Inc. Network management apparatus and method
US7908605B1 (en) * 2005-01-28 2011-03-15 Hewlett-Packard Development Company, L.P. Hierarchal control system for controlling the allocation of computer resources
US7644161B1 (en) * 2005-01-28 2010-01-05 Hewlett-Packard Development Company, L.P. Topology for a hierarchy of control plug-ins used in a control system
US8108510B2 (en) * 2005-01-28 2012-01-31 Jds Uniphase Corporation Method for implementing TopN measurements in operations support systems
US7171070B1 (en) * 2005-02-04 2007-01-30 At&T Corp. Arrangement for low cost path protection for optical communications networks
US7389018B1 (en) 2005-02-04 2008-06-17 At&T Corp. Arrangement for low cost path protection for optical communications networks
US7441061B2 (en) * 2005-02-25 2008-10-21 Dynamic Method Enterprises Limited Method and apparatus for inter-module communications of an optical network element
JP4791061B2 (en) * 2005-03-18 2011-10-12 Fujitsu Limited Firmware version management method and information processing apparatus for computer system
US7490349B2 (en) * 2005-04-01 2009-02-10 International Business Machines Corporation System and method of enforcing hierarchical management policy
ES2433661T3 (en) * 2005-04-19 2013-12-12 Takeda Gmbh Roflumilast for the treatment of pulmonary hypertension
US7549077B2 (en) * 2005-04-22 2009-06-16 The United States Of America As Represented By The Secretary Of The Army Automated self-forming, self-healing configuration permitting substitution of software agents to effect a live repair of a system implemented on hardware processors
EP1878210B1 (en) * 2005-04-29 2016-01-13 Ciena Luxembourg S.a.r.l. Method and apparatus for non-disruptive call modification
CN101171794A (en) * 2005-05-04 2008-04-30 Operax AB A method, system and bandwidth manager for preventing overbooking of resources in a data network
EP1900120A2 (en) 2005-06-06 2008-03-19 Intellambda Systems, Inc Quality of service in an optical network
EP1734689A1 (en) * 2005-06-16 2006-12-20 Siemens Aktiengesellschaft Veto operation for a management system comprising a multi manager configuration
US7472189B2 (en) * 2005-07-21 2008-12-30 Sbc Knowledge Ventures, L.P. Method of collecting data from network elements
US20070027974A1 (en) * 2005-08-01 2007-02-01 Microsoft Corporation Online service monitoring
JP2007067991A (en) * 2005-09-01 2007-03-15 Fujitsu Ltd Network management system
US7706685B2 (en) * 2005-09-20 2010-04-27 Lockheed Martin Corporation Data communication network using optical power averaged multiplexing
JP4673712B2 (en) * 2005-09-28 2011-04-20 Fujitsu Limited Network configuration apparatus and network configuration method
US7983560B2 (en) * 2005-10-11 2011-07-19 Dynamic Method Enterprises Limited Modular WSS-based communications system with colorless add/drop interfaces
US7512677B2 (en) * 2005-10-20 2009-03-31 Uplogix, Inc. Non-centralized network device management using console communications system and method
US20070112952A1 (en) * 2005-11-14 2007-05-17 Kabushiki Kaisha Toshiba And Toshiba Tec Kabushiki Kaisha System and method for synchronized startup of document processing services
US20070116008A1 (en) * 2005-11-22 2007-05-24 Sbc Knowledge Ventures, L.P. Second-order hubbing-and-grooming constrained by demand categories
EP1793553A1 (en) * 2005-12-02 2007-06-06 Alcatel Lucent A transmission control protocol (TCP) host with TCP convergence module
IL172856A (en) * 2005-12-27 2010-11-30 Eci Telecom Ltd Optical communications network and method of operating same
JP4372104B2 (en) * 2006-01-11 2009-11-25 Toshiba Corporation Reserve transmission path reservation method
US8477596B2 (en) * 2006-01-30 2013-07-02 Infinera Corporation Application of hardware-based mailboxes in network transceivers and distributed approach for predictable software-based protection switching
US20070226536A1 (en) * 2006-02-06 2007-09-27 Crawford Timothy J Apparatus, system, and method for information validation in a hierarchical structure
CN101379671B (en) * 2006-02-06 2012-04-25 S&C Electric Company Coordinated fault protection system
JP2007226398A (en) * 2006-02-22 2007-09-06 Hitachi Ltd Database connection management method and computer system
US7877505B1 (en) * 2006-04-21 2011-01-25 Cisco Technology, Inc. Configurable resolution policy for data switch feature failures
ATE521171T1 (en) 2006-04-25 2011-09-15 Interdigital Tech Corp High-throughput channel operation in a wireless local mesh network
WO2007124581A1 (en) * 2006-04-27 2007-11-08 Nortel Networks Limited Method and system for controlling optical networks
US8576855B2 (en) * 2006-05-17 2013-11-05 Alcatel Lucent System and method of interface association for interface operational status event monitoring
US20100238813A1 (en) * 2006-06-29 2010-09-23 Nortel Networks Limited Q-in-Q Ethernet rings
US7765294B2 (en) 2006-06-30 2010-07-27 Embarq Holdings Company, Llc System and method for managing subscriber usage of a communications network
US8000318B2 (en) 2006-06-30 2011-08-16 Embarq Holdings Company, Llc System and method for call routing based on transmission performance of a packet network
US8717911B2 (en) 2006-06-30 2014-05-06 Centurylink Intellectual Property Llc System and method for collecting network performance information
US7948909B2 (en) 2006-06-30 2011-05-24 Embarq Holdings Company, Llc System and method for resetting counters counting network performance information at network communications devices on a packet network
US9094257B2 (en) 2006-06-30 2015-07-28 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US8289965B2 (en) 2006-10-19 2012-10-16 Embarq Holdings Company, Llc System and method for establishing a communications session with an end-user based on the state of a network connection
US8194643B2 (en) 2006-10-19 2012-06-05 Embarq Holdings Company, Llc System and method for monitoring the connection of an end-user to a remote network
US8488447B2 (en) 2006-06-30 2013-07-16 Centurylink Intellectual Property Llc System and method for adjusting code speed in a transmission path during call set-up due to reduced transmission performance
US20080019286A1 (en) * 2006-07-21 2008-01-24 Wurst Michael J Method and apparatus for optical network alarm/event management
EP2047379B1 (en) * 2006-07-27 2012-02-08 ConteXtream Ltd. Distributed edge network
US8031617B2 (en) * 2006-07-28 2011-10-04 Hewlett-Packard Development Company, L.P. Fast detection of path failure for TCP
US7720061B1 (en) 2006-08-18 2010-05-18 Juniper Networks, Inc. Distributed solution for managing periodic communications in a multi-chassis routing system
US8238253B2 (en) 2006-08-22 2012-08-07 Embarq Holdings Company, Llc System and method for monitoring interlayer devices and optimizing network performance
US8549405B2 (en) 2006-08-22 2013-10-01 Centurylink Intellectual Property Llc System and method for displaying a graphical representation of a network to identify nodes and node segments on the network that are not operating normally
US8223655B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for provisioning resources of a packet network based on collected network performance information
US8743703B2 (en) 2006-08-22 2014-06-03 Centurylink Intellectual Property Llc System and method for tracking application resource usage
US8407765B2 (en) 2006-08-22 2013-03-26 Centurylink Intellectual Property Llc System and method for restricting access to network performance information tables
US8199653B2 (en) 2006-08-22 2012-06-12 Embarq Holdings Company, Llc System and method for communicating network performance information over a packet network
US7940735B2 (en) 2006-08-22 2011-05-10 Embarq Holdings Company, Llc System and method for selecting an access point
US7684332B2 (en) 2006-08-22 2010-03-23 Embarq Holdings Company, Llc System and method for adjusting the window size of a TCP packet through network elements
US8224255B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for managing radio frequency windows
US8194555B2 (en) 2006-08-22 2012-06-05 Embarq Holdings Company, Llc System and method for using distributed network performance information tables to manage network communications
US8107366B2 (en) 2006-08-22 2012-01-31 Embarq Holdings Company, LP System and method for using centralized network performance tables to manage network communications
US8189468B2 (en) 2006-10-25 2012-05-29 Embarq Holdings, Company, LLC System and method for regulating messages between networks
US8307065B2 (en) 2006-08-22 2012-11-06 Centurylink Intellectual Property Llc System and method for remotely controlling network operators
US8619600B2 (en) 2006-08-22 2013-12-31 Centurylink Intellectual Property Llc System and method for establishing calls over a call path having best path metrics
US8537695B2 (en) 2006-08-22 2013-09-17 Centurylink Intellectual Property Llc System and method for establishing a call being received by a trunk on a packet network
US8274905B2 (en) 2006-08-22 2012-09-25 Embarq Holdings Company, Llc System and method for displaying a graph representative of network performance over a time period
US9479341B2 (en) 2006-08-22 2016-10-25 Centurylink Intellectual Property Llc System and method for initiating diagnostics on a packet network node
US8098579B2 (en) 2006-08-22 2012-01-17 Embarq Holdings Company, LP System and method for adjusting the window size of a TCP packet through remote network elements
US8130793B2 (en) 2006-08-22 2012-03-06 Embarq Holdings Company, Llc System and method for enabling reciprocal billing for different types of communications over a packet network
US8144586B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for controlling network bandwidth with a connection admission control engine
US8064391B2 (en) 2006-08-22 2011-11-22 Embarq Holdings Company, Llc System and method for monitoring and optimizing network performance to a wireless device
US8228791B2 (en) 2006-08-22 2012-07-24 Embarq Holdings Company, Llc System and method for routing communications between packet networks based on intercarrier agreements
US7843831B2 (en) 2006-08-22 2010-11-30 Embarq Holdings Company Llc System and method for routing data on a packet network
US8576722B2 (en) 2006-08-22 2013-11-05 Centurylink Intellectual Property Llc System and method for modifying connectivity fault management packets
US8125897B2 (en) 2006-08-22 2012-02-28 Embarq Holdings Company Lp System and method for monitoring and optimizing network performance with user datagram protocol network performance information packets
US8750158B2 (en) 2006-08-22 2014-06-10 Centurylink Intellectual Property Llc System and method for differentiated billing
US8531954B2 (en) 2006-08-22 2013-09-10 Centurylink Intellectual Property Llc System and method for handling reservation requests with a connection admission control engine
US8040811B2 (en) 2006-08-22 2011-10-18 Embarq Holdings Company, Llc System and method for collecting and managing network performance information
US8015294B2 (en) 2006-08-22 2011-09-06 Embarq Holdings Company, LP Pin-hole firewall for communicating data packets on a packet network
US8144587B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for load balancing network resources using a connection admission control engine
US8102770B2 (en) 2006-08-22 2012-01-24 Embarq Holdings Company, LP System and method for monitoring and optimizing network performance with vector performance tables and engines
JP2008060971A (en) * 2006-08-31 2008-03-13 Fujitsu Ltd Information processing system, information processor, information processing method and program
US7742434B2 (en) * 2006-09-05 2010-06-22 General Electric Company Ethernet chaining method
JP2008076427A (en) * 2006-09-19 2008-04-03 Tomoegawa Paper Co Ltd Optical fiber assembly
US20080092146A1 (en) * 2006-10-10 2008-04-17 Paul Chow Computing machine
US20080117808A1 (en) * 2006-11-16 2008-05-22 Mark Henrik Sandstrom Automatic configuration of network elements based on service contract definitions
US20080130509A1 (en) * 2006-11-30 2008-06-05 Network Equipment Technologies, Inc. Leased Line Emulation for PSTN Alarms Over IP
US7546302B1 (en) * 2006-11-30 2009-06-09 Netapp, Inc. Method and system for improved resource giveback
US7711683B1 (en) * 2006-11-30 2010-05-04 Netapp, Inc. Method and system for maintaining disk location via homeness
US7613947B1 (en) 2006-11-30 2009-11-03 Netapp, Inc. System and method for storage takeover
TWI324456B (en) * 2006-12-01 2010-05-01 Cameo Communications Inc An intelligent automatic setting restoration method and device
US7986713B2 (en) * 2006-12-09 2011-07-26 Mark Henrik Sandstrom Data byte load based network byte-timeslot allocation
CN101202674B (en) * 2006-12-15 2010-05-19 Hongfujin Precision Industry (Shenzhen) Co., Ltd. Parameter adjusting device and method thereof
US8561166B2 (en) * 2007-01-07 2013-10-15 Alcatel Lucent Efficient implementation of security applications in a networked environment
US9442962B1 (en) * 2007-01-23 2016-09-13 Centrify Corporation Method and apparatus for creating RFC-2307-compliant zone records in an LDAP directory without schema extensions
ES2545776T3 (en) * 2007-02-15 2015-09-15 Tyco Electronics Subsea Communications Llc Distributed network management system and method
US20080225723A1 (en) * 2007-03-16 2008-09-18 Futurewei Technologies, Inc. Optical Impairment Aware Path Computation Architecture in PCE Based Network
TW200840248A (en) * 2007-03-30 2008-10-01 Delta Electronics Inc Optical communication module and control method thereof
US7983558B1 (en) * 2007-04-02 2011-07-19 Cisco Technology, Inc. Optical control plane determination of lightpaths in a DWDM network
US7801156B2 (en) * 2007-04-13 2010-09-21 Alcatel-Lucent Usa Inc. Undirected cross connects based on wavelength-selective switches
US7796537B2 (en) * 2007-04-17 2010-09-14 Cisco Technology, Inc. Creating non-transit nodes in a link network
US8984108B2 (en) * 2007-05-03 2015-03-17 Telefonaktiebolaget L M Ericsson (Publ) Dynamic CLI mapping for clustered software entities
US8111692B2 (en) 2007-05-31 2012-02-07 Embarq Holdings Company Llc System and method for modifying network traffic
US20080320110A1 (en) * 2007-06-25 2008-12-25 Sharp Laboratories Of America, Inc. Firmware rollback and configuration restoration for electronic devices
US8913481B2 (en) * 2007-06-30 2014-12-16 Alcatel Lucent Method and system for efficient provisioning of multiple services for multiple failure restoration in multi-layer mesh networks
JP4519159B2 (en) * 2007-07-12 2010-08-04 Hitachi, Ltd. Packet transfer apparatus and packet transfer method
US9026639B2 (en) 2007-07-13 2015-05-05 Pure Networks Llc Home network optimizing system
US9491077B2 (en) 2007-07-13 2016-11-08 Cisco Technology, Inc. Network metric reporting system
US8700743B2 (en) 2007-07-13 2014-04-15 Pure Networks Llc Network configuration device
US7853829B2 (en) 2007-07-13 2010-12-14 Cisco Technology, Inc. Network advisor
US8014356B2 (en) 2007-07-13 2011-09-06 Cisco Technology, Inc. Optimal-channel selection in a wireless network
TWI361592B (en) * 2007-10-29 2012-04-01 Realtek Semiconductor Corp Network apparatus with shared coefficient update processor and method thereof
US8929372B2 (en) * 2007-10-30 2015-01-06 Contextream Ltd. Grid router
US10089361B2 (en) * 2007-10-31 2018-10-02 Oracle International Corporation Efficient mechanism for managing hierarchical relationships in a relational database system
US7599314B2 (en) 2007-12-14 2009-10-06 Raptor Networks Technology, Inc. Surface-space managed network fabric
US8862706B2 (en) 2007-12-14 2014-10-14 Nant Holdings Ip, Llc Hybrid transport—application network fabric apparatus
US7774487B2 (en) * 2007-12-18 2010-08-10 The Directv Group, Inc. Method and apparatus for checking the health of a connection between a supplemental service provider and a user device of a primary service provider
JP5067153B2 (en) * 2007-12-26 2012-11-07 Fujitsu Limited Optical transmission device and optical transmission device management method
US8104087B2 (en) * 2008-01-08 2012-01-24 Triumfant, Inc. Systems and methods for automated data anomaly correction in a computer network
US7870251B2 (en) * 2008-01-10 2011-01-11 At&T Intellectual Property I, L.P. Devices, methods, and computer program products for real-time resource capacity management
US8289879B2 (en) * 2008-02-07 2012-10-16 Ciena Corporation Methods and systems for preventing the misconfiguration of optical networks using a network management system
WO2009100961A1 (en) * 2008-02-13 2009-08-20 Telefonaktiebolaget Lm Ericsson (Publ) Overlay network node and overlay networks
US8040791B2 (en) * 2008-02-13 2011-10-18 Cisco Technology, Inc. Coordinated channel change in mesh networks
US8332385B2 (en) * 2008-03-11 2012-12-11 Semmle Limited Approximating query results by relations over types for error detection and optimization
JP5072673B2 (en) * 2008-03-18 2012-11-14 Canon Inc. Management device, communication path control method, communication path control system, and program
DE102008017819B3 (en) * 2008-04-08 2009-12-03 Siemens Aktiengesellschaft Magnetic resonance system and method for operating a magnetic resonance system
US8068425B2 (en) 2008-04-09 2011-11-29 Embarq Holdings Company, Llc System and method for using network performance information to determine improved measures of path states
US8064200B1 (en) 2008-04-16 2011-11-22 Cyan Optics, Inc. Cooling a chassis by moving air through a midplane between two sets of channels oriented laterally relative to one another
US8390993B1 (en) 2008-04-16 2013-03-05 Cyan, Inc. Light source in chassis to provide frontal illumination of a faceplate on the chassis
US8155520B1 (en) * 2008-04-16 2012-04-10 Cyan, Inc. Multi-fabric shelf for a transport network
US8493974B1 (en) * 2008-05-28 2013-07-23 L-3 Communications Protocol-independent switch system and method
US20090310544A1 (en) * 2008-06-12 2009-12-17 Praval Jain Method and system for increasing throughput in a hierarchical wireless network
US8804760B2 (en) * 2008-06-12 2014-08-12 Mark Henrik Sandstrom Network data transport multiplexer bus with global and local optimization of capacity allocation
US8498207B2 (en) * 2008-06-26 2013-07-30 Reverb Networks Dynamic load balancing
US20100011140A1 (en) * 2008-07-08 2010-01-14 Micrel, Inc. Ethernet Controller Using Same Host Bus Timing for All Data Object Access
US8335154B2 (en) * 2008-07-30 2012-12-18 Verizon Patent And Licensing Inc. Method and system for providing fault detection and notification for composite transport groups
JP5051056B2 (en) * 2008-08-13 2012-10-17 Fujitsu Limited Communications system
US8467295B2 (en) * 2008-08-21 2013-06-18 Contextream Ltd. System and methods for distributed quality of service enforcement
WO2010036268A1 (en) * 2008-09-26 2010-04-01 Dynamic Method Enterprises Limited Modular wss-based communications system with colorless add/drop interfaces
US8532487B2 (en) * 2008-10-21 2013-09-10 Broadcom Corporation Managed PON repeater and cross connect
US9049146B2 (en) * 2008-10-22 2015-06-02 Accenture Global Services Limited Automatically connecting remote network equipment through a graphical user interface
US8176200B2 (en) * 2008-10-24 2012-05-08 Microsoft Corporation Distributed aggregation on an overlay network
CN101741660A (en) * 2008-11-20 2010-06-16 Shenzhen Futaihong Precision Industry Co., Ltd. Electrical appliance connecting device
US8225005B2 (en) * 2008-12-09 2012-07-17 International Business Machines Corporation Use of peripheral component interconnect input/output virtualization devices to create high-speed, low-latency interconnect
US8346997B2 (en) * 2008-12-11 2013-01-01 International Business Machines Corporation Use of peripheral component interconnect input/output virtualization devices to create redundant configurations
KR101026637B1 (en) * 2008-12-12 2011-04-04 Sungkyunkwan University Industry-Academic Cooperation Foundation Method for healing faults in a sensor network and the sensor network for implementing the method
WO2010073674A1 (en) * 2008-12-26 2010-07-01 NEC Corporation Path control apparatus, path control method, path control program, and network system
US8385739B2 (en) * 2009-02-27 2013-02-26 Futurewei Technologies, Inc. Encoding of wavelength converter systems
JP5123876B2 (en) * 2009-03-03 2013-01-23 Fujitsu Telecom Networks Limited WDM transmission system, method for calculating optical signal-to-noise ratio of WDM transmission system, and WDM transmission apparatus
US7957400B2 (en) * 2009-03-26 2011-06-07 Terascale Supercomputing Inc. Hierarchical network topology
US20100250784A1 (en) * 2009-03-26 2010-09-30 Terascale Supercomputing Inc. Addressing Scheme and Message Routing for a Networked Device
US7957385B2 (en) * 2009-03-26 2011-06-07 Terascale Supercomputing Inc. Method and apparatus for packet routing
US8301412B2 (en) * 2009-03-26 2012-10-30 Globalfoundries Inc. Method and apparatus for measuring performance of hierarchical test equipment
US9075617B2 (en) * 2009-06-19 2015-07-07 Lindsay Ian Smith External agent interface
US8295701B2 (en) * 2009-07-17 2012-10-23 Cisco Technology, Inc. Adaptive hybrid optical control plane determination of lightpaths in a DWDM network
US20110055367A1 (en) * 2009-08-28 2011-03-03 Dollar James E Serial port forwarding over secure shell for secure remote management of networked devices
CN101645750B (en) * 2009-09-02 2013-09-11 ZTE Corporation Distributed electrical cross-connect device and system, and method thereof for realizing SNC cascade protection
CN102014530A (en) * 2009-09-04 2011-04-13 ZTE Corporation Processing method for configuration update failure, and network element equipment
US8305877B2 (en) * 2009-09-10 2012-11-06 Tyco Electronics Subsea Communications Llc System and method for distributed fault sensing and recovery
US20110069608A1 (en) * 2009-09-22 2011-03-24 Miller Gary M System for providing backup programming at radio or television transmitter
US9826416B2 (en) * 2009-10-16 2017-11-21 Viavi Solutions, Inc. Self-optimizing wireless network
US20110090820A1 (en) 2009-10-16 2011-04-21 Osama Hussein Self-optimizing wireless network
US9485050B2 (en) * 2009-12-08 2016-11-01 Treq Labs, Inc. Subchannel photonic routing, switching and protection with simplified upgrades of WDM optical networks
US8385900B2 (en) * 2009-12-09 2013-02-26 Reverb Networks Self-optimizing networks for fixed wireless access
US8838745B2 (en) * 2009-12-14 2014-09-16 At&T Intellectual Property I, L.P. Systems, methods and machine-readable mediums for integrated quality assurance brokering services
US8379516B2 (en) * 2009-12-24 2013-02-19 Contextream Ltd. Grid routing apparatus and method
US8767584B2 (en) * 2010-01-29 2014-07-01 Alcatel Lucent Method and apparatus for analyzing mobile services delivery
US8542576B2 (en) * 2010-01-29 2013-09-24 Alcatel Lucent Method and apparatus for auditing 4G mobility networks
US8559336B2 (en) 2010-01-29 2013-10-15 Alcatel Lucent Method and apparatus for hint-based discovery of path supporting infrastructure
US8868029B2 (en) * 2010-01-29 2014-10-21 Alcatel Lucent Method and apparatus for managing mobile resource usage
US8493870B2 (en) * 2010-01-29 2013-07-23 Alcatel Lucent Method and apparatus for tracing mobile sessions
US20110191626A1 (en) * 2010-02-01 2011-08-04 Sqalli Mohammed H Fault-tolerant network management system
GB2477921A (en) * 2010-02-17 2011-08-24 Sidonis Ltd Analysing a network using a network model with simulated changes
US9413649B2 (en) * 2010-03-12 2016-08-09 Force10 Networks, Inc. Virtual network device architecture
US8499060B2 (en) * 2010-03-19 2013-07-30 Juniper Networks, Inc. Upgrading system software in a chassis without traffic loss
US9020344B2 (en) * 2010-03-24 2015-04-28 Zephyr Photonics Unified switching fabric architecture
US8649297B2 (en) 2010-03-26 2014-02-11 Cisco Technology, Inc. System and method for simplifying secure network setup
US8724515B2 (en) 2010-03-26 2014-05-13 Cisco Technology, Inc. Configuring a secure network
JP5494110B2 (en) * 2010-03-29 2014-05-14 Fujitsu Limited Network communication path estimation method, communication path estimation program, and monitoring apparatus
US8630186B2 (en) * 2010-05-17 2014-01-14 Fujitsu Limited Systems and methods for transmission of trigger-based alarm indication suppression messages
EP2572475B1 (en) * 2010-05-20 2019-05-15 Hewlett-Packard Enterprise Development LP Switching in a network device
JP2012008871A (en) * 2010-06-25 2012-01-12 Ricoh Co Ltd Equipment management apparatus, equipment management method, and equipment management program
EP2589183B1 (en) * 2010-06-29 2014-03-26 Telefonaktiebolaget L M Ericsson (publ) A method and an apparatus for evaluating network performance
JP2012019336A (en) * 2010-07-07 2012-01-26 Fujitsu Ltd Network management system, management device, managed device, and network management method
JP5907067B2 (en) * 2010-08-11 2016-04-20 NEC Corporation Network information processing system, network information processing apparatus, and information processing method
ES2385011B1 (en) * 2010-10-25 2013-05-20 Telefónica, S.A. Method for efficiently establishing routes in the transmission network.
US8468385B1 (en) * 2010-10-27 2013-06-18 Netapp, Inc. Method and system for handling error events
US9178804B2 (en) * 2010-11-12 2015-11-03 Tellabs Operations, Inc. Methods and apparatuses for path selection in a packet network
KR101749301B1 (en) * 2010-12-22 2017-06-21 Samsung Electronics Co., Ltd. Management system for a test device and management method thereof
US20120179797A1 (en) * 2011-01-11 2012-07-12 Alcatel-Lucent Usa Inc. Method and apparatus providing hierarchical multi-path fault-tolerant propagative provisioning
WO2013106023A1 (en) * 2011-04-05 2013-07-18 Spidercloud Wireless, Inc. Configuration space feedback and optimization in a self-configuring communication system
US8768167B2 (en) * 2011-04-29 2014-07-01 Telcordia Technologies, Inc. System and method for automated provisioning of services using single step routing and wavelength assignment algorithm in DWDM networks
US8509762B2 (en) 2011-05-20 2013-08-13 ReVerb Networks, Inc. Methods and apparatus for underperforming cell detection and recovery in a wireless network
US8942562B2 (en) * 2011-05-31 2015-01-27 AOptix Technologies, Inc. Integrated commercial communications network using radio frequency and free space optical data communication
US8902758B2 (en) * 2011-06-01 2014-12-02 Cisco Technology, Inc. Light path priority in dense wavelength division multiplexed networks
US8806003B2 (en) 2011-06-14 2014-08-12 International Business Machines Corporation Forecasting capacity available for processing workloads in a networked computing environment
US8934352B2 (en) 2011-08-30 2015-01-13 At&T Intellectual Property I, L.P. Hierarchical anomaly localization and prioritization
EP2754271B1 (en) 2011-09-09 2019-11-13 Reverb Networks Inc. Methods and apparatus for implementing a self optimizing-organizing network manager
US8849776B2 (en) * 2011-10-17 2014-09-30 Yahoo! Inc. Method and system for resolving data inconsistency
US9258719B2 (en) 2011-11-08 2016-02-09 Viavi Solutions Inc. Methods and apparatus for partitioning wireless network cells into time-based clusters
GB2498336A (en) * 2012-01-04 2013-07-17 Oclaro Technology Plc Monitoring multiple optical channels
US8787365B2 (en) 2012-01-21 2014-07-22 Huawei Technologies Co., Ltd. Method for managing a switch chip port, main control board, switch board, and system
EP2815541B1 (en) 2012-02-17 2018-06-27 Osama Tarraf Methods and apparatus for coordination in multi-mode networks
JP5811005B2 (en) * 2012-03-29 2015-11-11 Fujitsu Limited Network controller
US9002194B2 (en) 2012-04-09 2015-04-07 Telefonaktiebolaget L M Ericsson (Publ) Optical-layer multipath protection for optical network
US9252912B2 (en) 2012-04-09 2016-02-02 Telefonaktiebolaget L M Ericsson (Publ) Method for routing and spectrum assignment
WO2013162517A1 (en) * 2012-04-24 2013-10-31 Hewlett-Packard Development Company, L.P. Optical data interface with electrical forwarded clock
US9882643B2 (en) 2012-05-04 2018-01-30 Deutsche Telekom Ag Method and device for setting up and operating a modular, highly scalable, very simple, cost-efficient and enduring transparent optically routed network for network capacities of greater than 1 Petabit/s
US9800423B1 (en) * 2012-05-14 2017-10-24 Crimson Corporation Determining the status of a node based on a distributed system
US8929738B2 (en) 2012-05-30 2015-01-06 Telefonaktiebolaget L M Ericsson (Publ) Resilience in an access subnetwork ring
US8966148B2 (en) * 2012-06-01 2015-02-24 International Business Machines Corporation Providing real-time interrupts over Ethernet
US8984201B2 (en) 2012-06-01 2015-03-17 International Business Machines Corporation Providing I2C bus over Ethernet
US9112635B2 (en) * 2012-06-13 2015-08-18 Telefonaktiebolaget L M Ericsson (Publ) Methods and apparatus for a passive access subnetwork
US9148305B1 (en) 2012-06-18 2015-09-29 Google Inc. Configurable 10/40 gigabit ethernet switch for operating in one or more network operating modes
US10031782B2 (en) 2012-06-26 2018-07-24 Juniper Networks, Inc. Distributed processing of network device tasks
JP5949385B2 (en) * 2012-09-24 2016-07-06 Fujitsu Limited Management program, management method, management apparatus, and information processing system
US8917997B2 (en) * 2012-10-05 2014-12-23 Applied Micro Circuits Corporation Collimated beam channel with four lens optical surfaces
US9692832B2 (en) * 2012-10-24 2017-06-27 Blackberry Limited System and method for controlling connection timeout in a communication network
WO2014083561A1 (en) * 2012-11-27 2014-06-05 Oliver Solutions Ltd. Telecommunication network node supporting hybrid management
US9258234B1 (en) 2012-12-28 2016-02-09 Juniper Networks, Inc. Dynamically adjusting liveliness detection intervals for periodic network communications
US9190809B2 (en) 2012-12-29 2015-11-17 Zephyr Photonics Inc. Method and apparatus for active voltage regulation in optical modules
US9728936B2 (en) 2012-12-29 2017-08-08 Zephyr Photonics Inc. Method, system and apparatus for hybrid optical and electrical pumping of semiconductor lasers and LEDs for improved reliability at high temperatures
US9160452B2 (en) 2012-12-29 2015-10-13 Zephyr Photonics Inc. Apparatus for modular implementation of multi-function active optical cables
US10958348B2 (en) 2012-12-29 2021-03-23 Zephyr Photonics Inc. Method for manufacturing modular multi-function active optical cables
US9468085B2 (en) 2012-12-29 2016-10-11 Zephyr Photonics Inc. Method and apparatus for implementing optical modules in high temperatures
US9172462B2 (en) * 2012-12-31 2015-10-27 Zephyr Photonics Inc. Optical bench apparatus having integrated monitor photodetectors and method for monitoring optical power using same
US9185170B1 (en) 2012-12-31 2015-11-10 Juniper Networks, Inc. Connectivity protocol delegation
US9998202B2 (en) 2013-03-15 2018-06-12 Smartsky Networks LLC Position information assisted beamforming
GB201305818D0 (en) * 2013-03-28 2013-05-15 British Telecomm Signal routing
JP2014199974A (en) * 2013-03-29 2014-10-23 Fujitsu Limited Network element monitoring system and server
GB201305985D0 (en) 2013-04-03 2013-05-15 British Telecomm Optical switch
JP6160211B2 (en) * 2013-04-30 2017-07-12 Fujitsu Limited Transmission equipment
WO2014203789A1 (en) * 2013-06-20 2014-12-24 National University Corporation Nagoya University Optical cross-connect
US9535794B2 (en) * 2013-07-26 2017-01-03 Globalfoundries Inc. Monitoring hierarchical container-based software systems
KR101456997B1 (en) * 2013-07-30 2014-11-04 Inha University Industry-Academic Cooperation Foundation Zigbee physical layer apparatus and self-healing method thereof
US9800521B2 (en) * 2013-08-26 2017-10-24 Ciena Corporation Network switching systems and methods
US9684685B2 (en) * 2013-10-24 2017-06-20 Sap Se Using message-passing with procedural code in a database kernel
US9600551B2 (en) 2013-10-24 2017-03-21 Sap Se Coexistence of message-passing-like algorithms and procedural coding
US9419861B1 (en) * 2013-10-25 2016-08-16 Ca, Inc. Management information base table creation and use to map unique device interface identities to common identities
US9066161B2 (en) * 2013-10-30 2015-06-23 Airstone Labs, Inc. Systems and methods for physical link routing
WO2015067827A1 (en) * 2013-11-06 2015-05-14 Telefonica, S.A. Method and apparatus for the configuration of the control plane for network elements in a telecommunications network and computer program product
US10146607B2 (en) * 2013-11-26 2018-12-04 Anunta Technology Management Services Ltd. Troubleshooting of cloud-based application delivery
US10091204B1 (en) 2013-12-31 2018-10-02 EMC IP Holding Company LLC Controlling user access to protected resource based on outcome of one-time passcode authentication token and predefined access policy
US9524156B2 (en) 2014-01-09 2016-12-20 Ford Global Technologies, Llc Flexible feature deployment strategy
US9766874B2 (en) * 2014-01-09 2017-09-19 Ford Global Technologies, Llc Autonomous global software update
US11611473B2 (en) * 2014-01-14 2023-03-21 Zixcorp Systems, Inc. Provisioning a service using file distribution technology
CN110086540B (en) * 2014-03-20 2022-03-04 Nippon Telegraph And Telephone Corporation Transmission device and transmission method
US10277496B2 (en) * 2014-03-28 2019-04-30 Fiber Mountain, Inc. Built in alternate links within a switch
US9323546B2 (en) 2014-03-31 2016-04-26 Ford Global Technologies, Llc Targeted vehicle remote feature updates
US9716762B2 (en) 2014-03-31 2017-07-25 Ford Global Technologies Llc Remote vehicle connection status
US10140110B2 (en) 2014-04-02 2018-11-27 Ford Global Technologies, Llc Multiple chunk software updates
US9325650B2 (en) 2014-04-02 2016-04-26 Ford Global Technologies, Llc Vehicle telematics data exchange
US9226217B2 (en) * 2014-04-17 2015-12-29 Twilio, Inc. System and method for enabling multi-modal communication
WO2015163865A1 (en) 2014-04-23 2015-10-29 Hewlett-Packard Development Company, L.P. Backplane interface sets
US9887874B2 (en) * 2014-05-13 2018-02-06 Cisco Technology, Inc. Soft rerouting in a network using predictive reliability metrics
EP3164962B1 (en) * 2014-07-03 2020-08-26 Fiber Mountain, Inc. Data center path switch with improved path interconnection architecture
US9485550B2 (en) * 2014-07-30 2016-11-01 Ciena Corporation Systems and methods for selection of optimal routing parameters for DWDM network services in a control plane network
US9769017B1 (en) 2014-09-26 2017-09-19 Juniper Networks, Inc. Impending control plane disruption indication using forwarding plane liveliness detection protocols
US9705630B2 (en) * 2014-09-29 2017-07-11 The Royal Institution For The Advancement Of Learning/Mcgill University Optical interconnection methods and systems exploiting mode multiplexing
GB2531546B (en) 2014-10-21 2016-10-12 Ibm Collaborative maintenance of software programs
CN104363125A (en) * 2014-11-27 2015-02-18 Shanghai Feixun Data Communication Technology Co., Ltd. Method and device for configuring optical network system and optical network system
US10834150B1 (en) 2014-12-26 2020-11-10 Ivanti, Inc. System and methods for self-organizing multicast
US9113353B1 (en) 2015-02-27 2015-08-18 ReVerb Networks, Inc. Methods and apparatus for improving coverage and capacity in a wireless network
US9525483B2 (en) * 2015-03-17 2016-12-20 Verizon Patent And Licensing Inc. Actively monitored optical fiber panel
US10102286B2 (en) * 2015-05-27 2018-10-16 Level 3 Communications, Llc Local object instance discovery for metric collection on network elements
US10042722B1 (en) * 2015-06-23 2018-08-07 Juniper Networks, Inc. Service-chain fault tolerance in service virtualized environments
US10353851B2 (en) * 2015-08-12 2019-07-16 xCelor LLC Systems and methods for implementing a physical-layer cross connect
US10623341B2 (en) * 2015-09-30 2020-04-14 International Business Machines Corporation Configuration of a set of queues for multi-protocol operations in a target driver
US10114665B2 (en) * 2015-11-18 2018-10-30 Level 3 Communications, Llc Communication node upgrade system and method for a communication network
CN106789753B (en) 2015-11-24 2020-06-26 New H3C Technologies Co., Ltd. Line card frame, multi-frame cluster router and message processing method
CN106789679B (en) * 2015-11-24 2020-02-21 New H3C Technologies Co., Ltd. Line card frame, multi-frame cluster router, route selection method and message processing method
CN105634960B (en) * 2015-12-24 2017-04-05 Institute of Computing Technology, Chinese Academy of Sciences Data publication device, method, control device and intelligent chip based on a fractal tree structure
US10374936B2 (en) 2015-12-30 2019-08-06 Juniper Networks, Inc. Reducing false alarms when using network keep-alive messages
KR102396635B1 (en) * 2016-02-15 2022-05-12 HFR, Inc. System for compensating for the latency caused by switchover from the fronthaul ring topology
US9960954B2 (en) * 2016-03-29 2018-05-01 Wipro Limited Methods and systems to improve correlation between overlay and underlay networks in data centers
US9935852B2 (en) * 2016-06-06 2018-04-03 General Electric Company Methods and systems for network monitoring
US10397085B1 (en) 2016-06-30 2019-08-27 Juniper Networks, Inc. Offloading heartbeat responses message processing to a kernel of a network device
JP2018011147A (en) * 2016-07-12 2018-01-18 Omron Corporation Information processing device, information processing program and information processing method
US10334334B2 (en) * 2016-07-22 2019-06-25 Intel Corporation Storage sled and techniques for a data center
CN109952744B (en) 2016-09-26 2021-12-14 Nant Holdings IP LLC Method and equipment for providing virtual circuit in cloud network
CN107888322B (en) * 2016-09-30 2020-09-11 ALi Corporation Decoding method of Ethernet physical layer and Ethernet physical layer circuit
US10241477B2 (en) * 2016-11-02 2019-03-26 Edison Labs, Inc. Adaptive control methods for buildings with redundant circuitry
US10785278B2 (en) * 2016-11-04 2020-09-22 Google Llc Network management interface
US10097262B2 (en) * 2016-11-30 2018-10-09 Ciena Corporation Client protection switch in optical pluggable transceivers activated through fast electrical data squelch
US10313259B2 (en) * 2016-12-09 2019-06-04 Vmware, Inc. Suppressing broadcasts in cloud environments
US10404580B2 (en) * 2017-01-20 2019-09-03 Ciena Corporation Network level protection route evaluation systems and methods
US10540190B2 (en) * 2017-03-21 2020-01-21 International Business Machines Corporation Generic connector module capable of integrating multiple applications into an integration platform
US10936598B2 (en) 2017-11-21 2021-03-02 Gto Llc Systems and methods for targeted exchange emulation
EP3738228A1 (en) 2018-01-09 2020-11-18 British Telecommunications public limited company Optical data transmission system
US10817274B2 (en) * 2018-01-31 2020-10-27 Salesforce.Com, Inc. Techniques for distributing software packages
US10966005B2 (en) * 2018-03-09 2021-03-30 Infinera Corporation Streaming telemetry for optical network devices
GB2573554B (en) 2018-05-10 2020-12-16 Samsung Electronics Co Ltd Improvements in and relating to failure modes in multi-hop networks
US11750441B1 (en) 2018-09-07 2023-09-05 Juniper Networks, Inc. Propagating node failure errors to TCP sockets
US10516482B1 (en) * 2019-02-08 2019-12-24 Google Llc Physical layer routing and monitoring
US11042416B2 (en) 2019-03-06 2021-06-22 Google Llc Reconfigurable computing pods using optical networks
US10785127B1 (en) * 2019-04-05 2020-09-22 Nokia Solutions And Networks Oy Supporting services in distributed networks
US10615867B1 (en) * 2019-04-18 2020-04-07 Ciena Corporation Optical amplifier signaling systems and methods for shutoff coordination and topology discovery
US11671195B2 (en) * 2019-05-14 2023-06-06 Infinera Corporation Proactive optical spectrum defragmentation scheme
US11218220B2 (en) * 2019-05-14 2022-01-04 Infinera Corporation Out-of-band communication channel for subcarrier-based optical communication systems
US11489613B2 (en) * 2019-05-14 2022-11-01 Infinera Corporation Out-of-band communication channel for subcarrier-based optical communication systems
US11122347B2 (en) 2019-07-01 2021-09-14 Google Llc Reconfigurable computing pods using optical networks with one-to-many optical switches
US11256709B2 (en) 2019-08-15 2022-02-22 Clinicomp International, Inc. Method and system for adapting programs for interoperability and adapters therefor
US11418395B2 (en) * 2020-01-08 2022-08-16 Servicenow, Inc. Systems and methods for an enhanced framework for a distributed computing system
US11394622B1 (en) 2020-03-19 2022-07-19 Juniper Networks, Inc. Systems and methods for efficient presentation of device-level information via scalable interactive device-visualization interfaces
US11831739B2 (en) * 2020-06-22 2023-11-28 Sony Semiconductor Solutions Corporation Communication apparatus and communication system
CN112528901A (en) * 2020-12-17 2021-03-19 Qingdao Yisa Data Technology Co., Ltd. Vehicle aggregation alarm method and system based on big data
US11770193B2 (en) * 2021-07-28 2023-09-26 Ciena Corporation Mitigating instability in cascaded optical power controllers
CN114020071B (en) * 2021-11-10 2022-10-18 Jingwei Hirain (Tianjin) Research and Development Co., Ltd. Constant temperature system and constant temperature control method
US11722435B2 (en) * 2021-11-18 2023-08-08 United States Of America As Represented By The Secretary Of The Navy System with layer-one switch for flexible communication interconnections
US20230319442A1 (en) * 2022-03-31 2023-10-05 WaveXchange Corporation System and method for managing a physical layer of an optical network and exchange therefor

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4979118A (en) * 1989-03-10 1990-12-18 Gte Laboratories Incorporated Predictive access-control and routing system for integrated services telecommunication networks
JP2566081B2 (en) * 1990-12-19 1996-12-25 AT&T Corp. Optical packet encoding method and switching node
US5169332A (en) 1991-09-19 1992-12-08 International Business Machines Corp. Means for locking cables and connector ports
US5480319A (en) * 1993-12-30 1996-01-02 Vlakancic; Constant G. Electrical connector latching apparatus
JP3521955B2 (en) * 1994-06-14 2004-04-26 Hitachi, Ltd. Hierarchical network management system
KR970705313A (en) 1994-08-12 1997-09-06 Evershed Michael Configuration method for data management system
US5911018A (en) 1994-09-09 1999-06-08 Gemfire Corporation Low loss optical switch with inducible refractive index boundary and spaced output target
JP3432958B2 (en) * 1995-07-21 2003-08-04 Fujitsu Limited Optical transmission system and transmission line switching control method
KR0150367B1 (en) * 1995-12-19 1998-11-02 Yang Seung-taik Full combining type ATM switching apparatus
US6046742A (en) 1997-05-13 2000-04-04 Micron Electronics, Inc. Display of system information
US6292905B1 (en) 1997-05-13 2001-09-18 Micron Technology, Inc. Method for providing a fault tolerant network using distributed server processes to remap clustered network resources to other servers during server failure
US6078596A (en) 1997-06-26 2000-06-20 Mci Communications Corporation Method and system of SONET line trace
US5970072A (en) * 1997-10-02 1999-10-19 Alcatel Usa Sourcing, L.P. System and apparatus for telecommunications bus control
US6157657A (en) * 1997-10-02 2000-12-05 Alcatel Usa Sourcing, L.P. System and method for data bus interface
US6067288A (en) 1997-12-31 2000-05-23 Alcatel Usa Sourcing, L.P. Performance monitoring delay modules
US6167041A (en) * 1998-03-17 2000-12-26 Afanador; J. Abraham Switch with flexible link list manager for handling ATM and STM traffic
US6275499B1 (en) 1998-03-31 2001-08-14 Alcatel Usa Sourcing, L.P. OC3 delivery unit; unit controller
US6285673B1 (en) 1998-03-31 2001-09-04 Alcatel Usa Sourcing, L.P. OC3 delivery unit; bus control module
US6240087B1 (en) 1998-03-31 2001-05-29 Alcatel Usa Sourcing, L.P. OC3 delivery unit; common controller for application modules
US6567429B1 (en) 1998-06-02 2003-05-20 Dynamics Research Corporation Wide area multi-service broadband network
CA2242191A1 (en) * 1998-06-30 1999-12-30 Northern Telecom Limited A large scale communications network having a fully meshed optical core transport network
US6389015B1 (en) * 1998-08-10 2002-05-14 Mci Communications Corporation Method of and system for managing a SONET ring
US6356282B2 (en) 1998-12-04 2002-03-12 Sun Microsystems, Inc. Alarm manager system for distributed network management system
US6411623B1 (en) 1998-12-29 2002-06-25 International Business Machines Corp. System and method of automated testing of a compressed digital broadcast video network
US6260062B1 (en) * 1999-02-23 2001-07-10 Pathnet, Inc. Element management system for heterogeneous telecommunications network
US6647208B1 (en) * 1999-03-18 2003-11-11 Massachusetts Institute Of Technology Hybrid electronic/optical switch system
CA2285128C (en) 1999-10-06 2008-02-26 Nortel Networks Corporation Switch for optical signals
US6507421B1 (en) 1999-10-08 2003-01-14 Lucent Technologies Inc. Optical monitoring for OXC fabric
US6650803B1 (en) * 1999-11-02 2003-11-18 Xros, Inc. Method and apparatus for optical to electrical to optical conversion in an optical cross-connect switch
US6792174B1 (en) 1999-11-02 2004-09-14 Nortel Networks Limited Method and apparatus for signaling between an optical cross-connect switch and attached network equipment
US7085237B1 (en) * 2000-03-31 2006-08-01 Alcatel Method and apparatus for routing alarms in a signaling server
US6366716B1 (en) * 2000-06-15 2002-04-02 Nortel Networks Limited Optical switching device
US6738825B1 (en) 2000-07-26 2004-05-18 Cisco Technology, Inc. Method and apparatus for automatically provisioning data circuits
US20020103921A1 (en) 2001-01-31 2002-08-01 Shekar Nair Method and system for routing broadband internet traffic
US20020109877A1 (en) 2001-02-12 2002-08-15 David Funk Network management architecture
US6973229B1 (en) * 2001-02-28 2005-12-06 Lambda Opticalsystems Corporation Node architecture for modularized and reconfigurable optical networks, and methods and apparatus therefor
US6731832B2 (en) 2001-02-28 2004-05-04 Lambda Opticalsystems Corporation Detection of module insertion/removal in a modular optical network, and methods and apparatus therefor
US20030091267A1 (en) * 2001-02-28 2003-05-15 Alvarez Mario F. Node management architecture with customized line card handlers for a modular optical network, and methods and apparatus therefor
US20050089027A1 (en) 2002-06-18 2005-04-28 Colton John R. Intelligent optical data switching system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5450225A (en) * 1992-06-15 1995-09-12 CSELT-Centro Studi e Laboratori Telecomunicazioni S.p.A. Optical switch for fast cell-switching network
US5488500A (en) * 1994-08-31 1996-01-30 AT&T Corp. Tunable add drop optical filtering method and apparatus
US6061482A (en) * 1997-12-10 2000-05-09 Mci Communications Corporation Channel layered optical cross-connect restoration system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2329281A1 (en) * 2008-09-11 2011-06-08 Nortel Networks Limited Protection for provider backbone bridge traffic engineering
US20110164493A1 (en) * 2008-09-11 2011-07-07 Nigel Lawrence Bragg Protection for provider backbone bridge traffic engineering
CN102150053A (en) * 2008-09-11 2011-08-10 北方电讯网络有限公司 Protection for provider backbone bridge traffic engineering
EP2329281A4 (en) * 2008-09-11 2011-12-07 Nortel Networks Ltd Protection for provider backbone bridge traffic engineering
US9106524B2 (en) 2008-09-11 2015-08-11 Rpx Clearinghouse Llc Protection for provider backbone bridge traffic engineering
EP3192198A4 (en) * 2014-09-11 2018-04-25 The Arizona Board of Regents on behalf of The University of Arizona Resilient optical networking
US10158447B2 (en) 2014-09-11 2018-12-18 The Arizona Board Of Regents On Behalf Of The University Of Arizona Resilient optical networking
US11316712B2 (en) 2017-06-21 2022-04-26 Byd Company Limited Canopen-based data transmission gateway changeover method, system and apparatus thereof
US11303533B2 (en) 2019-07-09 2022-04-12 Cisco Technology, Inc. Self-healing fabrics
WO2021249639A1 (en) * 2020-06-10 2021-12-16 Telefonaktiebolaget Lm Ericsson (Publ) Fault location in an optical ring network

Also Published As

Publication number Publication date
US20030023709A1 (en) 2003-01-30
US20020165962A1 (en) 2002-11-07
US20030163555A1 (en) 2003-08-28
WO2002069104A3 (en) 2009-06-11
US20020174207A1 (en) 2002-11-21
US6973229B1 (en) 2005-12-06
US20050259571A1 (en) 2005-11-24
AU2002254037A8 (en) 2009-07-30
US7013084B2 (en) 2006-03-14
US20020176131A1 (en) 2002-11-28
AU2002254037A1 (en) 2002-09-12

Similar Documents

Publication Publication Date Title
US7013084B2 (en) Multi-tiered control architecture for adaptive optical networks, and methods and apparatus therefor
US6731832B2 (en) Detection of module insertion/removal in a modular optical network, and methods and apparatus therefor
US20030091267A1 (en) Node management architecture with customized line card handlers for a modular optical network, and methods and apparatus therefor
US7933266B2 (en) Configurable network router
US6856600B1 (en) Method and apparatus for isolating faults in a switching matrix
US6272154B1 (en) Reconfigurable multiwavelength network elements
US6950391B1 (en) Configurable network router
Maeda Management and control of transparent optical networks
EP2039036B1 (en) Method for separation of ip+optical management domains
US7747165B2 (en) Network operating system with topology autodiscovery
US7123806B2 (en) Intelligent optical network element
US7266297B2 (en) Optical cross-connect
US7660238B2 (en) Mesh with protection channel access (MPCA)
US7474850B2 (en) Reroutable protection schemes of an optical network
JPH11331224A (en) Wavelength division multiplex optical communication network provided with passive path through in each node
Muchanga et al. Inter-layer communication for improving restoration time in optical networks
CA2390586A1 (en) Network operating system with topology autodiscovery

Legal Events

Code Title Description
NENP Non-entry into the national phase (ref country code: DE)
121 EP: the EPO has been informed by WIPO that EP was designated in this application
REG Reference to national code (ref country code: DE; ref legal event code: 8642)
122 EP: PCT application non-entry in European phase
NENP Non-entry into the national phase (ref country code: JP)
WWW WIPO information: withdrawn in national office (country of ref document: JP)