WO2002025869A2 - Broadband system with intelligent network devices - Google Patents
- Publication number
- WO2002025869A2 (PCT/US2001/029739)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network
- packet
- network element
- end user
- message
- Prior art date
Classifications
- H04L12/2801—Broadband local area networks
- H04L12/2859—Point-to-point connection between the data network and the subscribers
- H04L12/2872—Termination of subscriber connections
- H04L12/2898—Subscriber equipments
- H04L41/0806—Configuration setting for initial configuration or provisioning, e.g. plug-and-play
- H04L41/12—Discovery or management of network topologies
- H04L41/5022—Ensuring fulfilment of SLA by giving priorities, e.g. assigning classes of service
- H04L47/10—Flow control; Congestion control
- H04L47/20—Traffic policing
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2416—Real-time traffic
- H04L47/2441—Traffic relying on flow classification, e.g. using integrated services [IntServ]
- H04L47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
- H04L47/70—Admission control; Resource allocation
- H04L47/724—Admission control; Resource allocation using reservation actions during connection setup at intermediate nodes, e.g. resource reservation protocol [RSVP]
- H04L47/788—Autonomous allocation of resources
- H04L47/801—Real time traffic
- H04L47/805—QOS or priority aware
- H04L47/822—Collecting or measuring resource availability data
- H04L47/829—Topology based
- H04L61/5007—Internet protocol [IP] addresses
- H04L61/5014—Internet protocol [IP] addresses using dynamic host configuration protocol [DHCP] or bootstrap protocol [BOOTP]
- H04N21/6118—Network physical structure; Signal processing specially adapted to the downstream path involving cable transmission, e.g. using a cable modem
- H04N21/6168—Network physical structure; Signal processing specially adapted to the upstream path involving cable transmission, e.g. using a cable modem
- H04N7/10—Adaptations for transmission by electrical cable
- H04N7/17309—Transmission or handling of upstream communications
- H04Q3/0045—Provisions for intelligent networking involving hybrid, i.e. a mixture of public and private, or multi-vendor systems
Definitions
- the typical communication network deployed today by cable television service providers uses hybrid fiber/coax (HFC) technology.
- An example of such a network is shown in FIG. 1.
- the network includes a headend 10 connected to an optical network unit (ONU) 12 over optical fiber cable 11 using analog transmission, trunk amplifiers 14, taps 16, line extenders 18 and coax cable 20 (feeder 22, distribution 24 and drop 26 connected to homes 28).
- the network is considered hybrid because the connection between the ONU and the headend uses optical fiber cable in a physical star or point-to-point configuration while the connections between the ONU and the homes use coax cable in a tree and branch topology.
- An HFC network with a single ONU typically serves anywhere from 500 to 2000 homes.
- the feeder portion 22 includes trunk amplifiers 14 that are spaced every 2000 to 3000 feet. In the distribution portion 24, taps 16 are added as needed to serve homes passed by the distribution coax cable 20.
- a tap typically serves between 2 and 8 homes to connect to individual homes over drops 26 that are up to 400 feet in length.
- Line extenders 18 are added in the distribution to boost the signals as needed.
- FIG. 2 shows the typical frequency spectrum for upstream and downstream transmission in the network.
- Downstream transmission of analog signals from the ONU 12 to the homes 28 generally occupies a bandwidth range that starts at 55 MHz and ends at 550, 750 or 860 MHz, depending on the type of network equipment used.
- the downstream analog bandwidth is divided into 6 MHz channels (8 MHz in Europe).
- the upstream transmission from the homes 28 to the ONU 12 is usually specified to occupy the bandwidth range between 5 and 45 MHz.
- the DOCSIS protocol has been developed for handling bi-directional signal transmission. Newer systems also use a band of frequencies located above the analog downstream band to provide downstream digital services. These digital services are delivered in 6 MHz channels at a typical data rate of 25 Mbps.
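The downstream channelization above lends itself to quick arithmetic; the following sketch (illustrative numbers only, not part of the patent) shows how channel counts and aggregate digital capacity follow from the 6 MHz channel width and ~25 Mbps per-channel rate:

```python
def channel_count(band_start_mhz: float, band_end_mhz: float,
                  channel_mhz: float = 6.0) -> int:
    """Number of whole channels that fit in a downstream band."""
    return int((band_end_mhz - band_start_mhz) // channel_mhz)

def digital_tier_capacity_mbps(channels: int,
                               mbps_per_channel: float = 25.0) -> float:
    """Aggregate capacity of a digital tier of 6 MHz channels."""
    return channels * mbps_per_channel

# e.g. a hypothetical 550-750 MHz digital tier in 6 MHz channels
tier_channels = channel_count(550, 750)
```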
- ingress noise in the upstream direction is a problem that is due primarily to poor and irregular grounding of the drop coax cable terminated at the home. Because of the tree and branch topology, homes at the far end of the network experience much greater loss than do the homes that are near to the headend/ONU. In addition, the impulse response can be very different from home to home due to reflections.
- variable loss and variable impulse response require the use of complex signal equalization at the receiver located at the headend/ONU. This equalization can require on the order of milliseconds to converge and can only correct for flat loss in the cable plant.
- the DOCSIS protocol may divide the upstream signal into subchannels, such as 10 or 20 subchannels of 1 MHz bandwidth each and uses Quadrature Phase-Shift Keying (QPSK) or 16 QAM (Quadrature Amplitude Modulation) signal modulation. Each such subchannel operates at about 1.5 Mbps for a total upstream bandwidth on the order of 10 to 20 Mbps.
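The upstream capacity figure above follows directly from the subchannel arithmetic; a minimal sketch (illustrative values from the text, ignoring protocol overhead):

```python
# DOCSIS-era upstream capacity arithmetic from the passage: the return
# band is split into 1 MHz subchannels (QPSK or 16 QAM), each carrying
# roughly 1.5 Mbps.

def upstream_capacity_mbps(num_subchannels: int,
                           rate_per_subchannel_mbps: float = 1.5) -> float:
    """Aggregate upstream capacity for a given number of 1 MHz subchannels."""
    return num_subchannels * rate_per_subchannel_mbps

# 10 subchannels give ~15 Mbps and 20 give ~30 Mbps before overhead,
# consistent with the passage's "on the order of 10 to 20 Mbps" total.
```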
- QPSK Quadrature Phase-Shift Keying
- 16 QAM Quadrature Amplitude Modulation
- CMTS Cable Modem Termination System
- trunk amplifiers and line extenders can drift which then requires manual measurements and realignments in the field.
- Component failure in the feeding branches of the existing system can render an entire neighborhood out of service.
- service provisioning requires manual labor in the field.
- Current network repair is primarily done in a reactive mode, prompted by phone calls from affected customers.
- the present system uses point-to-point data links between intelligent network elements located in the feeder/distribution network to provide reliable, secure, symmetric, bi-directional broadband access.
- Digital signals are terminated at the intelligent network elements, switched and regenerated for transmission across additional upstream or downstream data links as needed to connect a home to a headend or router.
- the present system provides an overlay onto the existing cable television network such that the point-to-point data links are carried on the cable plant using bandwidth that resides above the standard upstream/downstream spectrum.
- the intelligent network elements can be co-located with or replace the standard network elements (i.e., trunk amplifiers, taps, line extenders, network interface at the home) to take advantage of existing network configurations.
- the standard network elements can be selectively replaced by the intelligent network elements in an incremental approach.
- the data links are made over relatively short runs of coax cable, which can provide greater bandwidth than the typical end-to-end feeder/distribution connection between a home and the headend or optical network unit.
- the bandwidth on a distribution portion in an embodiment of the invention is about 100 Mbps and is shared by only about 50 to 60 homes.
- the point-to-point nature of the present system allows a user to operate with statistically multiplexed rates on the order of an average of 2 Mbps while peaking up to 100 Mbps occasionally.
- By increasing the transmission rates in the feeder and distribution portions, still higher user rates are possible. For example, increasing the feeder bandwidth to 10 Gbps using a separate fiber feeder and increasing the distribution bandwidth to 1 Gbps using 1 Gbps Ethernet or other RF technologies allows user rates at the home on the order of 100 Mbps.
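The statistical-multiplexing figures above reduce to simple division; a sketch (numbers from the text, a toy model rather than the actual admission logic):

```python
# A shared 100 Mbps distribution segment serving ~50-60 homes supports
# an average rate of ~2 Mbps per home, while any single home may burst
# up to the full segment rate when the segment is otherwise idle.

def average_rate_mbps(segment_mbps: float, homes: int) -> float:
    """Long-run average rate per home on a shared segment."""
    return segment_mbps / homes

def peak_rate_mbps(segment_mbps: float) -> float:
    """A lone active home can momentarily use the whole segment."""
    return segment_mbps
```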
- although the embodiments described herein employ a hybrid fiber/coax cable plant, the principles of the present invention are applicable to alternate embodiments which use all-fiber cable in the feeder/distribution in tree and branch topologies.
- an intelligent network element includes a data switch, at least two transceivers and a processor.
- the data switch is a multiport Layer 2 data switch controlled by the processor and the transceivers comprise 100BaseT or 1 Gbps Ethernet or other data transmission technologies.
- the transmitter input and the receiver output are connected to respective output and input ports of the Layer 2 data switch.
- the transmitter output and the receiver input are coupled to the coax cable plant through respective up and down converters.
- the transmitters and receivers terminate data links, which connect with other upstream or downstream intelligent network elements.
- the Layer 2 data switch multiplexes messages received on the data links across the multiple ports.
- the present invention provides a truly broadband access medium.
- Components are capable of being inserted into an existing HFC cable infrastructure to convert that infrastructure into a truly broadband, Quality of Service (QoS) enabled access medium.
- QoS Quality of Service
- the present invention offers cable service providers a low cost solution that enables them to provide to the home and small business users a high quality, high bandwidth access medium capable of delivering next generation services.
- the present approach also provides immediate data security starting at the customer drop. Since all of the network elements are addressable, a proactive rather than reactive maintenance approach is made possible. In addition, a switch-bypass capability incorporated in the intelligent network elements can be automatically invoked in the event of component malfunction thereby drastically reducing system unavailability to customers in a given neighborhood.
- FIG. 1 illustrates a conventional hybrid fiber/coax cable television feeder/distribution network.
- FIG. 2 shows the typical frequency spectrum for upstream and downstream communications over the network of FIG. 1.
- FIG. 3 illustrates an embodiment of a network configuration of intelligent network elements in accordance with the present invention for providing point-to- point data links between intelligent network elements in a broadband, bidirectional access system.
- FIG. 4 shows additional frequency spectrum included for upstream and downstream communications over a first embodiment of the network of FIG. 3.
- FIG. 5 shows additional frequency spectrum included for upstream and downstream communications over a second embodiment of the network of FIG. 3.
- FIG. 6 is a block diagram of interfaces to an optical distribution switch.
- FIG. 7 is a block diagram of interfaces to a network distribution switch.
- FIG. 8 is a block diagram of interfaces to a subscriber access switch.
- FIG. 9 is a block diagram of interfaces to a network interface unit.
- FIG. 10 is a block diagram of an embodiment of a network element for the network of FIG. 3.
- FIG. 11 is a diagram of a frame structure for use in the network of FIG. 3.
- FIG. 12A illustrates a data phase portion of the frame structure of FIG. 11.
- FIG. 12B illustrates an out-of-band (OOB) data/signaling section of the data phase portion of FIG. 12A.
- FIG. 13 is a block diagram of a transmitter of the network element of FIG. 10.
- FIG. 14 is a block diagram of a receiver of the network element of FIG. 10.
- FIG. 15 is a timing diagram showing the states in the receiver of FIG. 14.
- FIG. 16 is a block diagram of a PHY device of the network element of FIG. 10.
- FIG. 17 is a block diagram of a first embodiment of an RF complex.
- FIG. 18 is a block diagram of a second embodiment of an RF complex.
- FIG. 19 illustrates a packet structure
- FIG. 20 illustrates a header structure for the packet of FIG. 19.
- FIG. 21 illustrates a DHCP message structure.
- FIG. 22 illustrates message flow for tag assignment.
- FIG. 23 illustrates DHCP options fields.
- FIG. 24 illustrates DHCP options fields at an originating network element.
- FIG. 25 illustrates DHCP options fields at an intermediate network element.
- FIG. 26 illustrates downstream message processing at an NTU.
- FIG. 27 shows a message structure for control messages.
- FIG. 28 illustrates upstream packet flow through a network interface unit.
- FIG. 29 illustrates a scheduler task in an embodiment.
- FIG. 30 illustrates traffic shaping/policing and transmission scheduling at a network interface unit in another embodiment.
- FIG. 31 is a flow diagram of packet mapping logic at a network interface unit.
- FIG. 32 is a flow diagram of scheduler logic at a network interface unit.
- FIG. 33 is a flow diagram of transmitter logic at a network interface unit.
- FIG. 34 illustrates traffic queuing at an intermediate network element.
- FIGs. 35A, 35B illustrate scheduling logic at an intermediate network element.
- FIG. 36 illustrates flow control thresholding at an intermediate network element.
- FIGs. 37A-37G show message formats for resource request, request grant, request denial, resource commit, commit confirm, release confirm and resource release messages, respectively.
- FIG. 38 illustrates a state diagram of CAC server logic for keeping track of changes in the state of a connection and corresponding CAC server actions.
- FIG. 39 illustrates a setup message format.
- FIGs. 40A-40C illustrate subfields for the setup message of FIG. 39.
- FIG. 41 illustrates a message format for request, teardown and get parameters.
- FIG. 42 illustrates a message format for a modify parameters message.
- FIG. 43 illustrates a connection parameters message format.
- FIG. 44 illustrates an exemplary topology.
- FIG. 45 illustrates a second embodiment of a network configuration of intelligent network elements in accordance with the present invention.
- FIG. 46 is a block diagram of a mini fiber node of the network of FIG. 45.
- FIG. 47 illustrates a state diagram for a legacy bootstrap.
- FIGs. 48A-48C illustrate gain-redistribution in a segment of the present system.
- FIG. 49 illustrates modem bootstrap communication along a segment of the present system.
- FIG. 50 illustrates a modem upstream bootstrap state machine.
- FIG. 51 illustrates a modem downstream bootstrap state machine.
- FIG. 52 illustrates modem BIST, bypass and fault recovery.
- INTELLIGENT NETWORK ELEMENT: A device for receiving and transmitting signals over a transmission medium such as a coaxial cable, including a data switch, two or more transceivers each connected to the medium, and a processor, operable to pass through a portion of the signals on the coaxial cable at a lower bandwidth and to process and switch a portion of the signals on the medium at a higher bandwidth; the intelligent network elements include a distribution switch, subscriber access switch, intelligent line extender, and network interface unit.
- CONNECTION (also INTERCONNECTION):
- LINK (or DATA LINK):
- ADMISSIBLE REGION: Data indicative of the current rate of message transmission, or bandwidth, currently being carried relative to the maximum message throughput which may be accommodated, or carried, over a connection or portion of a connection.
- CRITICAL SEGMENT: A link which brings upstream traffic to an element at a speed lower than the speed at which the traffic will be carried beyond that element, i.e., a bottleneck which defines the lowest throughput of the path back to the headend from an end user.
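The critical-segment definition above amounts to taking the minimum link speed along the upstream path; a minimal sketch (link names and speeds are hypothetical examples, not from the patent):

```python
def critical_segment(path_link_speeds_mbps: dict) -> tuple:
    """Return (link_name, speed_mbps) of the bottleneck: the slowest
    link on the path from an end user back to the headend."""
    return min(path_link_speeds_mbps.items(), key=lambda kv: kv[1])

# Hypothetical upstream path: drop and distribution at 100 Mbps,
# feeder at 1 Gbps -- the 100 Mbps segment bounds end-to-end throughput.
path = {"drop": 100, "distribution": 100, "feeder": 1000}
```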
- FIG. 3 illustrates an embodiment of a network configuration in accordance with the present invention.
- This network configuration is described in U.S. Provisional Application No. 60/234,682, filed September 22, 2000, which is incorporated herein in its entirety.
- the network configuration, also referred to herein as an Access Network, includes intelligent network elements each of which uses a physical layer technology that allows data connections to be carried over coax cable distribution facilities from every subscriber.
- point-to-point data links are established between the intelligent network elements over the coax cable plant. Signals are terminated at the intelligent network elements, switched and regenerated for transmission across upstream or downstream data links as needed to connect a home to the headend.
- the intelligent network elements are interconnected using the existing cable television network such that the point-to-point data links are carried on the cable plant using bandwidth that resides above the standard upstream/downstream spectrum.
- FIG. 4 shows the additional upstream and downstream bandwidth, nominally illustrated as residing at 1025 to 1125 MHz (upstream) and 1300 to 1400 MHz (downstream), though other bandwidths and frequencies can be used.
- the 100 Mbps upstream and downstream bandwidths are provided in the spectrum 750 to 860 MHz.
- intelligent network elements can co-exist with today's standard elements which allow signals up to about 1 GHz to be passed through.
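Because the overlay data links sit above the legacy passband, a quick consistency check of the nominal frequency plan is possible (band edges taken from the text; the overlap test itself is an illustrative sketch):

```python
# Nominal frequency plan from the text, in MHz. The overlay upstream and
# downstream bands must not overlap the legacy spectrum or each other.
LEGACY = (5, 860)          # analog video + DOCSIS up/downstream
OVERLAY_UP = (1025, 1125)  # overlay upstream band
OVERLAY_DOWN = (1300, 1400)  # overlay downstream band

def overlaps(a: tuple, b: tuple) -> bool:
    """True if half-open bands (start, end) share any spectrum."""
    return a[0] < b[1] and b[0] < a[1]
```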
- FIG. 5 shows a frequency spectrum allocation above the DOCSIS spectrum as defined in the following table for a second embodiment, with duplexing channel spectrums allocated in the 777.5 MHz to 922.5 MHz regime for 100 Mb/s operation and in the 1 GHz to 2 GHz regime for 1 Gb/s operation. These are example frequencies and can vary depending on implementation.
- the intelligent network elements include an intelligent optical network unit or node 112, intelligent trunk amplifier 114, subscriber access switch (SAS) 116, intelligent line extender 118 and network interface unit (NIU) 119.
- a standard residential gateway or local area network 30 connected to the NIU 119 at the home is also shown.
- the trunk amplifier 114 is also referred to herein as a distribution switch (DS).
- the intelligent network elements can be combined with or replace the respective standard network elements (FIG. 1) so as to take advantage of existing network configurations.
- the configuration shown includes ONU assembly 312 comprising standard ONU 12 and intelligent ONU 112 also referred to herein as an optical distribution switch (ODS).
- trunk amplifier or DA assembly 314 includes conventional trunk amp 14 and intelligent trunk amp 114; cable tap assembly 316 includes standard tap 16 and subscriber access switch 116; and line extender assembly 318 includes standard line extender 18 and intelligent line extender 118.
- the intelligent ONU or ODS is connected over line 15 to a router 110, which has connections to a server farm 130, a video server 138, a call agent 140 and IP network 142.
- the server farm 130 includes a Tag/Topology server 132, a network management system (NMS) server 134, a provisioning server 135 and a connection admission control (CAC) server 136, all coupled to an Ethernet bus which are described further herein.
- a headend 10 is shown having connections to a satellite dish 144 and a CMTS.
- the headend 10 delivers a conventional amplitude modulated optical signal to the ONU 12.
- This signal includes the analog video and DOCSIS channels.
- the ONU performs an optical to electrical (O/E) conversion and sends radio frequency (RF) signals over feeder coax cables 20 to the trunk amplifiers or DAs 14.
- RF radio frequency
- the present system includes intelligent network elements that can provide high bandwidth capacity to each home. In the Access Network of the present invention, each intelligent network element provides switching of data packets for data flow downstream and statistical multiplexing and priority queuing for data flow upstream.
- the legacy video and DOCSIS data signals are able to flow through transparently because the intelligent network elements use a part of the frequency spectrum of the coax cable that does not overlap with the spectrum being used for legacy services.
- the network elements of the Access Network combine the legacy functions of distribution amplifiers and taps into intelligent devices that also provide switching of digital traffic in the downstream direction and queuing, prioritization and multiplexing in the upstream direction.
- the intelligent ONU or ODS 112 receives a high-speed data signal (e.g., Gigabit Ethernet) from router 110 on line 15.
- a high-speed data signal e.g., Gigabit Ethernet
- the Gigabit Ethernet packetized data is switched depending on its destination to the appropriate port 20A, 20B, 20C or 20D.
- the data is modulated into RF bandwidth signals and combined with the legacy RF signals received from the ONU 12 on line 12A for transmission over the feeder coax cables 20.
- Switching of the data is also performed at each DS 114 and SAS 116 until the data reaches the destination NIU 119, at which point the data is transmitted on the Home LAN, or Ethernet 30. Filtering and switching at each intelligent network element provides guaranteed privacy of user data downstream.
- the ODS 112 collects data from the ports 20A, 20B, 20C, 20D and separates the legacy data and video from the Gigabit Ethernet data.
- the legacy data and video signals are passed to the ONU on line 12A and the Gigabit Ethernet data is multiplexed, converted to optical signals and forwarded to the router on line 15.
- the ODS performs several functions that allow the Access Network to inter- work with any standard router and at the same time switch data efficiently through the Access Network.
- a standard Ethernet packet includes layer 2 and layer 3 address information and a data payload.
- the layer 2 information includes a destination MAC address that is 48 bits in length.
- the present approach provides for more efficient switching in the Access Network by associating a routing identification or Routing ID (RID) with each network element (e.g., NIUs 119) in the Access Network.
- the RID is 12 bits in length and is included in an Access Network Header.
- the Tag/Topology server 132 (FIG. 3) assigns the RIDs.
- the ODS 112 acts as a learning bridge to learn and maintain the MAC address-to-RID mapping and inserts the Access Network Header containing the RID of the destination element (e.g., NIU) for all packets going downstream into the Access Network.
- for broadcast packets, the ODS inserts a broadcast RID.
- the Gigabit Ethernet data is terminated, processed and switched onto the appropriate port(s) based on the entry for the corresponding RID in a routing table kept at the ODS.
- the routing table simply maps the RIDs to the egress ports of the network element. For upstream packets received from the Access Network, the ODS strips off the Access Network Header and forwards a standard Ethernet packet to the router.
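The RID-to-egress-port lookup described above can be sketched as follows; the table layout, function names and the all-ones broadcast convention are illustrative assumptions, not details taken from this document.

```python
# Hypothetical sketch of RID-based downstream switching at an ODS/DS/SAS.
BROADCAST_RID = 0xFFF  # assumed all-ones broadcast RID in the 12-bit space

def switch_downstream(rid: int, routing_table: dict[int, list[int]],
                      all_ports: list[int]) -> list[int]:
    """Return the egress port(s) for a packet carrying the given 12-bit RID."""
    if rid == BROADCAST_RID:
        return all_ports               # broadcast: flood every egress port
    return routing_table.get(rid, [])  # unknown RID: no egress (drop)

# Example: RID 0x123 reaches the NIU behind port 2; 0x456 is a multicast RID.
table = {0x123: [2], 0x456: [1, 3]}
assert switch_downstream(0x123, table, [1, 2, 3, 4]) == [2]
assert switch_downstream(BROADCAST_RID, table, [1, 2, 3, 4]) == [1, 2, 3, 4]
```

Because every switching decision is a single 12-bit lookup, the table holds at most 4096 entries regardless of how many 48-bit MAC addresses sit behind each NIU.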
- the ODS communicates with the NMS 134 (FIG. 3) to provision the upstream and downstream traffic shaping criteria.
- the ODS uses these criteria to regulate the upstream and downstream traffic.
- the DS 314 has a coax 20 port coupled to an upstream ODS, SAS, or DS and at least four coax ports 22A, 22B, 22C and 22D coupled to downstream DSs or SASs.
- the DS receives legacy video/data and Gigabit Ethernet data from either the ODS or an upstream DS or SAS on the coax 20.
- the legacy video and data is amplified and propagated on all of the ports 22A, 22B, 22C and 22D.
- the Gigabit Ethernet data is processed and switched onto the appropriate port(s) based on the entry for the corresponding Routing ID in a routing table kept at the DS.
- the DS 314 receives Gigabit Ethernet data and legacy data signals from all four ports 22A, 22B, 22C and 22D and queues the Gigabit Ethernet data based on assigned priorities as described further herein.
- the DS also performs flow control to prevent its buffers from overflowing.
- the received upstream Gigabit Ethernet data from ports 22A, 22B, 22C and 22D is queued, prioritized and forwarded upstream.
- the legacy data is coupled directly into the upstream port. Referring now to FIG. 8, the interfaces to the SAS 316 are shown.
- the SAS 316 has a coax port 24A coupled to an upstream DS or SAS, a coax port 24B for coupling to a downstream SAS (or, possibly, a DS) and four coax drop ports 26A, 26B, 26C, 26D each for coupling to an NIU 119.
- coax port 24A receives legacy video/data and Gigabit Ethernet data from an upstream DS or SAS.
- Legacy video/data is propagated on the ports 24B and 26A, 26B, 26C, 26D.
- the Gigabit Ethernet data is processed and switched onto the appropriate drop port(s) 26A, 26B, 26C, 26D and/or forwarded to the downstream SAS (or, possibly, DS) on port 24B based on the entry for the corresponding Routing ID in a routing table kept at the SAS.
- the SAS 316 receives Gigabit Ethernet data and legacy data signals from all five ports 24B, 26A, 26B, 26C and 26D and queues the Gigabit Ethernet data based on assigned priorities as described further herein.
- the SAS also performs flow control to prevent its buffers from overflowing.
- the received Gigabit Ethernet upstream data is queued, prioritized and forwarded further upstream.
- the legacy data is coupled directly into the upstream port. Referring now to FIG. 9, the interfaces to the NIU 119 are shown.
- the NIU receives legacy video/data and 100 Mbps Ethernet data from the SAS 316 on drop 26.
- the legacy video/data and the 100 Mbps data signals are split by the NIU.
- the legacy video and data is transmitted over coax 33 and the Ethernet data stream on line 31 is processed and user data is transmitted to the Home LAN 30 (FIG. 3) via the 100BaseT Ethernet interface 31.
- Data processing includes checking the Routing ID to ensure privacy of user traffic and stripping the Access Network Header to form standard Ethernet packets for transmission on the Home LAN.
- the NIU performs a bridging function to prevent local user traffic from entering the Access Network.
- the NIU also provides a per service policing function which enables the service provider to enforce service level agreements and protect network resources.
- the NIU also inserts the Access Network Header. This data stream is combined with the legacy upstream traffic and forwarded to the SAS.
- the intelligent tap 116 (FIG. 3) and the intelligent network interface device 119 (FIG. 3) are modified to include one or more radio frequency (RF) transceivers which operate at an appropriate RF frequency, e.g., using Multichannel Multipoint Distribution Service (“MMDS”), Local Multipoint Distribution Systems (“LMDS”) or other frequencies.
- MMDS operates in the 2.1-2.7 GHz microwave band and LMDS operates at approximately 28 GHz.
- the network element includes an RF complex 602, RF transmitter/receiver pairs or modems 604a-604n, a PHY (physical layer) device 606, a switch 608, microprocessor 610, memory 612, flash memory 617 and a local oscillator/phase locked loop (LO/PLL) 614. All of the components are common to embodiments of the ODS, DS, SAS and NIU.
- the ODS further includes an optical electrical interface.
- the NIU further includes a 100BaseT physical interface for connecting to the Home LAN 30 (FIG. 3).
- the RF complex is shown as having a bypass path 618A and a built in self test path 618B controlled by switches 618C, 618D which are described further herein.
- the number of modems, 604n generally, depends on the number of links that connect to the network element.
- DS 314 has five ports (FIG. 7) and thus has five modems 604.
- a SAS 316 has six ports (FIG. 8) and thus has six modems 604.
- the network element in FIG. 10 is shown having six ports indicated as ports 603, 605, 607, 609, 611 and 613.
- the ports 603, 605 correspond to upstream and downstream ports 24A, 24B respectively and ports 607, 609, 611, 613 correspond to drop ports 26A, 26B, 26C, 26D respectively of the SAS shown in FIG. 8.
- the PHY device 606 provides physical layer functions between each of the modems 604 and the switch 608.
- the switch 608, controlled by the microprocessor 610, provides layer 2 switching functions and is referred to herein as the MAC device or simply MAC.
- the LO/PLL 614 provides master clock signals to the modems 604 at the channel frequencies indicated above in Table 1 and described further herein.
- a frame structure 620 used in the system is shown in FIG. 11.
- a frame 620 includes frame synchronization, symbol synchronization and a data phase.
- the frame synchronization (FS) occupies a period of 1 µs and the symbol synchronization (SS) uses a 400 ns period.
- Carrier and framing synchronization is performed every 10 µs, followed by 1280 bytes of Data Phase 621. It should be understood that other frame structures are possible and the frame structure described is only an example.
- the Data Phase is shown in FIG. 12A and includes 5 blocks 621 A of 256 bytes each.
- Each 256 byte block 621 A consists of 4 bytes for out-of-band (OOB) data/signaling 623 and 252 bytes of in-band data 624.
- FIG. 12B shows the fields for the OOB data/signaling 623.
- the fields include a start-of-packet pointer (8 bits) 625A, flow control (4 bits) 625B, out-of-band data (8 bits) 625C and CRC (8 bits) 625D. In addition, 4 bits are reserved.
- the start-of-packet pointer 625A indicates the start of a new MAC frame in the following 252 byte block 624 (FIG. 12A). A value greater than or equal to '252' indicates no new packet boundary in this block.
- the flow control bits 625B are used to carry flow control information from parent to child, i.e., from a device such as a DS, SAS, or ODS to another device that is directly connected to it on the downstream side.
- the out-of-band data bits 625C are used to carry out-of-band data from parent to child.
- the CRC 625D is used for CRC of the OOB data/signaling 623.
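A minimal sketch of unpacking the 4-byte OOB data/signaling field just described (8-bit start-of-packet pointer, 4-bit flow control, 8-bit out-of-band data, 8-bit CRC, 4 reserved bits); the big-endian bit ordering is an assumption, not a detail taken from FIG. 12B.

```python
def parse_oob(oob: bytes) -> dict:
    """Unpack the 4-byte OOB data/signaling field into its sub-fields."""
    assert len(oob) == 4
    word = int.from_bytes(oob, "big")
    return {
        "sop_pointer": (word >> 24) & 0xFF,  # offset of new MAC frame in the next 252-byte block
        "flow_ctrl":   (word >> 20) & 0xF,   # parent-to-child flow control
        "oob_data":    (word >> 12) & 0xFF,  # parent-to-child out-of-band data
        "crc":         (word >> 4) & 0xFF,   # CRC over the OOB field
    }                                        # low 4 bits are reserved

fields = parse_oob(bytes([0x10, 0xAB, 0xCD, 0x00]))
assert fields["sop_pointer"] == 0x10
assert fields["sop_pointer"] < 252           # a value >= 252 means no new packet in this block
```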
- the RF modem 604n (FIG. 10) is now described.
- a modulation system with spectral efficiency of 4 bits/s/Hz is used to provide high data rates within the allocated bandwidth.
- 16-state Quadrature Amplitude Modulation (16-QAM) is preferably used, which involves the quadrature multiplexing of two 4-level symbol channels.
- Embodiments of the network elements of the present system described herein support 100 Mb/s and 1 Gb/s Ethernet transfer rates, using the 16-QAM modulation at symbol rates of 31 or 311 MHz.
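A quick arithmetic check of the rates quoted above: 16-QAM carries log2(16) = 4 bits per symbol, so symbol rates of 31 and 311 MHz yield raw line rates that cover the 100 Mb/s and 1 Gb/s payload rates plus framing and synchronization overhead.

```python
from math import log2

bits_per_symbol = int(log2(16))      # 16-QAM: 4 bits per symbol (4 bits/s/Hz)
for symbol_rate_mhz, payload_mbps in [(31, 100), (311, 1000)]:
    raw_mbps = symbol_rate_mhz * bits_per_symbol
    assert raw_mbps > payload_mbps   # raw line rate exceeds the Ethernet payload rate
    print(f"{symbol_rate_mhz} Msym/s -> {raw_mbps} Mb/s raw ({payload_mbps} Mb/s payload)")
```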
- a block diagram of one of the transmitter sections 604A of the modem is shown in FIG. 13.
- the transmitter section includes at least two digital-to-analog converters (DACs) 630, low pass filters 632 and in-phase and quadrature multiplier stages 634, 636 respectively.
- a crystal oscillator 644 serves as the system clock reference, and is used by clock generator 646 and by carrier generation phase locked loop circuit (PLL) 642.
- Byte data is first mapped into parallel multi-bit streams by the byte-to-QAM mapper 628 in the PHY device 606 described in detail in connection with Fig. 16 for driving each of the DACs 630.
- the DAC outputs are low-pass filtered, and passed to the multiplier stages for modulation with in-phase (I) and quadrature (Q) carriers provided by the carrier generation PLL circuit 642.
- the up-converted, quadrature-multiplexed signal is mixed in mixer 638 and passed to an output power amplifier 640 for transmission to other intelligent network devices.
- A block diagram of the receiver section 604B of the modem is shown in FIG. 14.
- the receiver section 604B includes low-noise amplifier (LNA) 650, equalizer 652 and automatic gain control (AGC) 654.
- the received signal is boosted in the LNA 650 and corrected for frequency-dependent line loss in the equalizer 652.
- the equalized signal is passed through the AGC stage 654 to I and Q multiplier stages 656, 658, low pass filters 660 and analog-to-digital converters (ADC) 662.
- the I and Q channels are digitized and passed on to the QAM-to-byte mapper 629 for conversion to a byte-wide data stream in the PHY device 606 (FIG. 10).
- Carrier and clock recovery for use in synchronization at symbol and frame levels, are performed during periodic training periods described below.
- a carrier recovery PLL circuit 668 provides the I and Q carriers to the multipliers 656, 658.
- a clock recovery delay locked loop (DLL) circuit 676 provides clock to the QAM-to-byte mapper 629.
- PLL and DLL paths that include F(s) block 674 and voltage-controlled crystal oscillator (VCXO) 670 are switched in using normally open switch 673 under control of SYNC timing circuit 672 in order to provide updated samples of phase/delay error correction information.
- FIG. 15 shows the training periods and data as parts of the frame structure. The frame structure is now described with reference to both FIGs. 14 and 15.
- the RF local oscillator may drift.
- the receiver updates carrier and timing.
- the receiver section 604B is in a training mode in which it receives a carrier recovery signal 675 followed by a symbol timing recovery signal 677.
- the VCXO 670 tunes to the RF frequency/phase reference provided by F(s) block 674 (FIG. 14).
- the local oscillator in the carrier recovery PLL circuit 668 uses the VCXO as a reference and follows it (FIG. 15) to tune in.
- At the falling edge of the carrier recovery period 675, the receiver 604B counts a programmable delay and then enables the clock-recovery DLL circuit 676. This timing recovery occurs in relation to the symbol timing recovery signal 677.
- the SYNC timing circuit closes switch 673 to connect the carrier recovery PLL circuit 668 and clock recovery DLL circuit 676. Following these short update periods, the receiver is in a normal operational mode in which it receives data frames 620.
- A block diagram of the PHY device 606 (FIG. 10) is shown in FIG. 16.
- the PHY includes a transmit section 606A and a receive section 606B. It should be understood that the PHY device 606 includes a pair of transmit and receive sections 606A, 606B for each corresponding RF modem 604 (FIG. 10).
- the PHY device in the network element in FIG. 10 includes six PHY transmit/receive section pairs 606A, 606B to connect to the corresponding six RF modems 604.
- the transmit section 606A includes transmit media independent interface (MII) 680, byte and symbol sign scrambler word generator 682, byte scrambler 684, Gray encoder and symbol sign scrambler (mapper) 686 and PHY framer 688.
- the mapper 686 corresponds to the byte-to-QAM mapper 628 (FIG. 13), described further below. Scrambling is used to balance the distribution of symbols and flows (polarity).
- the receive section 606B includes receive MII 690, byte and symbol sign descrambler word generator 692, byte descrambler 694, Gray decoder and symbol sign descrambler (demapper) 696 and PHY deframer 698.
- the demapper 696 corresponds to the QAM-to-byte mapper 629 (FIG. 14).
- the PHY device provides interfaces to the MAC layer device 608 and the modems 604 (FIG. 10) in the network element.
- the PHY provides full-duplex conversion of byte data into 16-QAM wire symbols, and vice-versa, at a rate of 100 Mb/s or 1 Gb/s.
- the MAC device 608 (FIG. 10) runs all of its ports from one set of clocks; therefore, the PHY/MAC interface contains shallow byte-wide FIFOs to buffer data due to differences between the MAC clock and received clock rates.
- the PHY scrambles the byte data, breaks the bytes into 16-QAM symbols, and scrambles the signs of the symbols before passing the symbols on to the analog portion of the modem 604.
- the PHY collects 16-QAM symbols, descrambles the signs, packs the symbols into bytes, and descrambles the bytes before passing them on to the MAC device 608.
- a PHY is considered either a master or a slave, depending upon how it receives its clocks.
- a master PHY uses a transmit clock derived from the local reference crystal 644 (FIG. 13).
- a slave PHY uses a transmit clock derived from its partner receiver.
- the PHY that looks upstream is a slave PHY; the downstream and drop PHYs are all masters, using the local reference crystal.
- the PHY in an NIU 119 (FIG. 3) looks upstream and is therefore a slave PHY.
- the PHY 606 supports a MAC interface, referred to herein as the media-independent interface (MII), for 1 Gb/s and 100 Mb/s transport.
- the Tx MII 680 and Rx MII 690 provide an interface indicated at 681, 683, 691 and defined in Table 2.
- the MII includes transmit and receive FIFOs (not shown) which buffer byte data between the MAC and PHY devices.
- the transmit interface 606 A is now described in connection with Fig. 16.
- the MAC 608 (FIG. 10) asserts TX_EN when it is ready to begin transmitting data to the transmit section 606A of PHY device 606. While TX_EN is deasserted, the PHY sends frames with normal preambles but with random data. In this mode, the LFSR is not reset at the start of every frame.
- When the MAC asserts TX_EN, the PHY completes the frame it is currently sending, sends normal frame resynchronization segments, sends a start-of-frame delimiter (SFD) segment, and begins transmitting data.
- the PHY deasserts TX_RDY while TX_EN remains asserted. TX_RDY will assert for the first time shortly before the PHY sends the first SFD segment.
- the MAC may load data into the Tx FIFO by asserting the TX_DV signal for one cycle of TX_CLK.
- When the Tx FIFO is close to full, the PHY will deassert the TX_RDY signal, and it will accept the byte of data currently on TXD. TX_RDY will assert during the periodic frame synchronization periods.
- the PHY generates its 311 MHz symbol clock from a 155 MHz local reference oscillator (if a master PHY) or from the demodulator (if a slave PHY).
- a master PHY also generates the 155 MHz MAC_CLK.
- the MAC side of the PHY Tx FIFO uses the 155 MHz MAC_CLK.
- When valid data is in the PHY Rx FIFO, the PHY asserts RX_DV. The PHY assumes that the MAC consumes valid data immediately, so the PHY advances the read pointer to the next valid entry. When the Rx FIFO is empty, the PHY deasserts RX_DV. If the PHY has properly received 2 of the previous 3 frames, the PHY asserts FS.
- the PHY does not have a 311 MHz symbol clock for the receiver; instead, it uses both edges of the 155 MHz clock supplied to it by the demodulator.
- the MAC side of the PHY Rx FIFO uses the 155 MHz MAC_CLK.
- the PHY and MAC use FS to support framing control.
- the PHY receiver 606B will deassert FS when it believes it has lost track of the framing sequence. If the PHY has not received an SFD segment in a span of 2 frame periods, the PHY will deassert FS. FS powers up deasserted.
- the digital PHY connects to the transmit modulator via 10 digital pins: two differential pairs for in-phase signal (I), two differential pairs for quadrature signal (Q), and an additional 2 pins to indicate when one or both of the in-phase signal (I) and/or quadrature signal (Q) should be tristated.
- the digital outputs connect to D-to-A converters in the Tx modulator section.
- the Rx demodulator slices the incoming symbols into 4 sets of 2-bit coded signals. There is one set of signals for each of I1, I2, Q1 and Q2.
- the demodulator supplies a 155 MHz clock to the PHY, which it uses for synchronously loading the received symbols.
- the transmit section 606A of the PHY accepts one byte per clock (155 MHz) when framing conditions permit.
- the PHY asserts TX_FULL to indicate that the MAC should stop sending new data on TXD<7:0>.
- the clocks at the transmit and receive sections 606A, 606B of the PHY can have some discrepancy.
- the PHY framer 688 of the transmitter periodically sends certain special, non-data patterns that the receiver uses to re-acquire lock.
- the receiver uses the first segment of the frame to acquire carrier synchronization. After locking to the incoming carrier, the receiver uses the second segment of the frame to find the optimal point within each symbol time (maximum eye opening) at which to sample (slice) the symbol. After sufficient time for the receiver to locate the eye opening, a short, unique pattern - Start-of-Frame Delimiter (SFD) - is used to mark the start of the data payload.
- the transmit PHY 606A controls the framing, and tells the MAC when it is sending data, and when the MAC should pause its data transfer to the PHY. While the PHY is sending anything but the data payload, the PHY will assert TX_FULL. The MAC does not send new data to the PHY while TX_FULL is asserted.
- Two kinds of scrambling are performed in the transmitter. Bit scrambling tries to ensure a balanced distribution of symbols. This scrambling can minimize the likelihood of transmitting a sequence of low-amplitude signals. A good amplitude distribution may improve the performance of the receiver's AGC circuitry, and may be necessary for determining the threshold levels at the receiver's slicer. Sign scrambling tries to eliminate any DC component in the output signal.
- the bit and sign scrambler word generator 682 generates 8-bit words for bit-scrambling and 4-bit words for sign-scrambling. Bit scrambling occurs one byte at a time, in the bit scrambler 684, before the data has been split into 16-QAM symbols. Sign scrambling occurs after the symbols have been mapped, just before driving the off-chip D-to-A converters.
- the Gray Encoder (mapper) 686 also provides the sign scrambling function.
- a 33-bit LFSR generates the pseudo-random number sequence used for both types of scrambling.
- the LFSR polynomial is .
- the bit scrambling equations are listed in Tables 3 and 4.
- the symbols are sign-scrambled and converted to the virtual Gray code for output to the modulator as shown in Table 6.
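The scrambler/descrambler pairing described above can be sketched as a 33-bit Fibonacci LFSR; since the polynomial is not reproduced in this excerpt, the taps below (x^33 + x^20 + 1, a known primitive trinomial) and the seed are placeholders, not the patent's actual values.

```python
MASK = (1 << 33) - 1  # 33-bit state

class LFSR33:
    """33-bit Fibonacci LFSR; taps model x^33 + x^20 + 1 (placeholder polynomial)."""
    def __init__(self, seed: int):
        self.state = seed & MASK
        assert self.state != 0           # an all-zero state would lock up the LFSR

    def next_bits(self, n: int) -> int:
        """Shift n times and return the n output bits as an integer."""
        out = 0
        for _ in range(n):
            fb = ((self.state >> 32) ^ (self.state >> 19)) & 1  # taps 33 and 20
            out = (out << 1) | ((self.state >> 32) & 1)
            self.state = ((self.state << 1) | fb) & MASK
        return out

# Byte (bit) scrambling uses 8-bit words; sign scrambling uses 4-bit words.
tx = LFSR33(seed=0x1ACFFC1D)             # arbitrary placeholder seed
scrambled = 0x5A ^ tx.next_bits(8)
sign_word = tx.next_bits(4)
# The descrambler runs an identical LFSR from the same seed, as stated above:
rx = LFSR33(seed=0x1ACFFC1D)
assert scrambled ^ rx.next_bits(8) == 0x5A
```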
- the deframer, descrambler and demapper elements of the receive section 606B (FIG. 16) are now described.
- the frame structure (FIG. 15) consists of several different segments, each with a particular purpose.
- the roughly 1 µs carrier synchronization burst 675 is bracketed by brief periods where there is no signal transmission at all.
- the "front porch" 675A and “middle porch” 675B help the analog demodulator determine the start and end of the carrier synchronization burst.
- the analog demodulator must use a carrier envelope detector to identify the carrier synchronization burst 675.
- After the carrier envelope detector signal falls (for the "middle porch"), the digital PHY 606B enables (closes) the symbol synchronization-tracking loop 676 after some delay.
- the digital PHY opens the symbol-tracking loop 676 after the symbol-tracking segment 677 ends (during the "back porch" 677A).
- the PHY begins searching for the SFD pattern after opening the symbol-tracking loop.
- the delay from carrier envelope signal deassertion until closing the symbol-tracking loop and the length of the symbol-tracking period are both programmable.
- the SFD search must check for four possibilities. Assume the SFD pattern consists of the 2 hex-digit code 0x01. Because of indeterminacy in the arrival time or latency of each link, the SFD byte may be received with the '0' on the I1/Q1 lines and the '1' on the I2/Q2 lines, or vice versa. In addition, the demodulating mixer may or may not invert the phase of its outputs, potentially converting the '0' to 'F' and the '1' to 'E'. Fortunately, both I and Q will have the same inversion state. Taking all this into account, the SFD search must consider matching any of 4 patterns: 0x01, 0x10, 0xFE, and 0xEF. When the SFD pattern is matched, the topology of the match is stored and used to properly de-map each symbol and form byte-aligned words.
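The four-way match just described can be sketched as a lookup; the lane-order labels are illustrative names, not terms from the patent.

```python
SFD_PATTERNS = {
    0x01: ("in-order", False),   # nibbles on the expected lanes, no inversion
    0x10: ("swapped",  False),   # I1/Q1 and I2/Q2 lanes swapped
    0xFE: ("in-order", True),    # mixer phase inversion: 0x0 <-> 0xF, 0x1 <-> 0xE
    0xEF: ("swapped",  True),    # swapped and inverted
}

def match_sfd(byte: int):
    """Return (lane_order, inverted) for a matching SFD byte, else None."""
    return SFD_PATTERNS.get(byte)

assert match_sfd(0xFE) == ("in-order", True)   # inversion of 0x01
assert match_sfd(0x42) is None                 # not an SFD variant
```

The stored match result then fixes the de-mapping (lane order and inversion) for every subsequent symbol in the frame.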
- the slicer-encoded signals are converted to digital signals as described in Table 7.
- the Descrambler 694 uses the same LFSR polynomial and seed as the Scrambler.
- the LFSR is initialized to the seed, and n is initialized to 0, upon detection of the SFD pattern.
- When RX_DV is asserted, the receiver sends one byte per clock (155 MHz) to the MAC on RXD<7:0>.
- the PHY receiver derives the 155 MHz RX_CLK from the 155 MHz demodulator clocks, but the MAC side of the Rx FIFO is clocked with the 155 MHz MAC_CLK.
- An embodiment of the RF complex 602 provides passive coupling and splitting of digital signals provided by the intelligent network elements and the legacy signals.
- the RF complex 602A shown in FIG. 17 includes diplexers 702, couplers 704, 706, 708, 710 and low pass filters 712, 714.
- the legacy signals transmitted to and received from lines 603, 605 are coupled and split through couplers 704, 706, 708, 710.
- the low pass filters 712, 714 block the digital signals provided by the intelligent network elements and pass the legacy signals above, e.g., 900 MHz to and from the ports 603, 605, 607, 609, 611, 613. Similar arrangements are made for connecting other standard network elements with the corresponding intelligent network devices.
- a second embodiment of the RF complex provides active functions, including equalization and amplification.
- the RF complex 602B shown in FIG. 18 includes diplexers 702, triplexers 705, coupler 707, low pass filters 709, bypass path 711, equalizers 724, amplifiers 726, power dividers 728 and power combiners 730.
- the amplifiers 726 provide the line-extender function of legacy HFC systems.
- the amplifiers 726 and equalizer 724 provide addressable attenuation and equalization capabilities for use in downstream Line Build Out (LBO) and coaxial skin-effect correction respectively. Further, addressable attenuation is also provided in the return-path for equalization of funneled noise. Return paths can be selectively disconnected in areas not requiring upstream services.
- the RF complex 602B also includes an automatic bypass path 711 that is switched in upon component failure.
- Switching Within the Access Network: In a communications environment, there can typically be many user devices per household. Switching data traffic on the basis of the MAC addresses of the devices leads to very large 48-bit wide switching table entries.
- the Access Network of the present system assigns a unique 12-bit Routing ID (RID) to each network element (e.g., DS, SAS and NIU). In the case of the NIU, this NIU-ID identifies the NIU itself and a subscriber premises connected thereto, and for switching within the access network all Internet appliances within the home are associated with it. Switching within the network takes place using the RID, thus reducing the size of the switching table tremendously.
- the encapsulated packet 800 includes length indicator (LI) 801 and Ethernet packet allocations.
- the LI comprises 2 bytes (11 bits plus 5 bits for CRC).
- the Ethernet packet length can vary from 68 to 1536 bytes.
- the Ethernet packet is chopped up and transported in one or more in-band data segments 624 (FIG. 12A).
- the Ethernet packet allocations include destination MAC address 802, source MAC address 804, Access Network Header 806, type/length 808, layer 3 header 810, layer 3 payload 812 and frame check sequence (FCS) 813.
- An exemplary format for the Access Network Header 806 is shown in FIG. 20.
- the format includes the following sub-fields: Reserved (13 bits) 814, Control (3 bits) 816, Quality of Service (QoS) (3 bits) 818, Unused (1 bit) 820 and Routing ID (RID) (12 bits) 822.
- the Control bits are used to indicate control information and are used in messaging and for triggering different actions at the intelligent network elements described further herein.
- the QoS bits are used to prioritize traffic.
- the Control bits and QoS bits are described further herein.
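Packing and unpacking the 32-bit header laid out above (13 reserved + 3 control + 3 QoS + 1 unused + 12 RID bits) can be sketched as follows; placing the Reserved field at the most-significant end follows the listed order but is an assumption about FIG. 20.

```python
def pack_header(control: int, qos: int, rid: int) -> int:
    """Pack Control, QoS and RID into a 32-bit header word; Reserved/Unused stay 0."""
    assert control < 8 and qos < 8 and rid < 4096
    return (control << 16) | (qos << 13) | rid     # bits 18-16, 15-13, 11-0

def unpack_header(word: int) -> tuple[int, int, int]:
    """Return (control, qos, rid) from a 32-bit header word."""
    return (word >> 16) & 0x7, (word >> 13) & 0x7, word & 0xFFF

hdr = pack_header(control=0b010, qos=0b101, rid=0x2AB)
assert unpack_header(hdr) == (0b010, 0b101, 0x2AB)
```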
- packets can be routed to the appropriate DS, SAS or NIU. All user data is transmitted by the NIU onto the Home LAN using standard Ethernet frames.
- the 12-bit RID allows the system to address 4096 entities which can be used to indicate an entity (Unicast), a group of entities (for Multicast) or all entities (for Broadcast).
- the different RIDs are specified as follows in Table 8.
- the RID is assigned to all network elements at boot time.
- the Tag/Topology server 132 (FIG. 3) is responsible for assigning the RIDs, which are also referred to herein interchangeably as Tags.
- the Tag/Topology Server acts as a Dynamic Host Configuration Protocol (DHCP) server for assigning the RIDs and IP Addresses to the network elements of the Access Network.
- DHCP is a network protocol that enables a DHCP server to automatically assign an IP address to an individual computer or network device. DHCP assigns a number dynamically from a defined range of numbers configured for a given network.
- DHCP assigns an IP address when a system is started.
- the assignment process using the DHCP server works as follows.
- a user turns on a computer with a DHCP client.
- the client computer sends a broadcast request (called a DISCOVER), looking for a DHCP server to answer.
- a router directs the request to one or more DHCP servers.
- the server(s) send(s) a DHCP OFFER packet.
- the client sends a DHCP REQUEST packet to the desired server.
- the desired server sends a DHCP ACK packet.
- the format of a standard DHCP message 824 is shown in FIG. 21.
- the standard DHCP message includes standard fields denoted 825 and vendor specific options field 826.
- the standard fields 825 are used to carry IP address information and the vendor specific options field 826 is used to carry information regarding RID assignment and topology.
- Special control bits described further herein identify DHCP messages going upstream.
- a sequence of events leading to RID and IP Address assignment in the present system is described as follows and shown in FIG. 22.
- a newly installed or initialized network element (e.g., DS 114, SAS 116 or NIU 119; FIG. 3) broadcasts a DHCPDISCOVER message looking for the Tag/Topology server 132.
- the options field 826 (FIG. 21) in the DHCPDISCOVER is populated to differentiate between a network element and other user devices.
- All "registered" devices in the upstream path between the initialized network element and the ODS 112 (FIG. 3) append their MAC Address and Physical Port numbers to the DHCPDISCOVER message in options field 826. This is done in order to construct a topology of the Access Network and is described further herein.
- a relay agent of the router 110 (FIG. 3) relays this message to all known DHCP servers.
- the Tag/Topology server 132 also receives this message and identifies that it comes from a valid network element.
- the Tag/Topology server sends back a DHCPOFFER that contains the IP Address and RID for the new network element.
- the Tag/Topology server can assign the RID based on topology if the need so arises. It also sets an option in the options field 826 to identify itself to the network element as the Tag/Topology server.
- Other DHCP servers may also send DHCPOFFER but they will not typically set the options field 826.
- the network element recognizes the DHCPOFFER from the Tag/Topology server and sends back a DHCPREQUEST. This message identifies the Tag/Topology server whose offer is accepted. It is also relayed to all other known DHCP servers to inform them that their offer was rejected.
- the manner in which the present system uses the DHCP options field 826 (FIG. 21) is now described.
- the options field 826 provides for up to 256 options. Of these, option codes 0-127 are standard options and option codes 128-254 are vendor specific. Each option has the following three sub-fields: Type/Option Code (1 byte), Length (1 byte) and Value (variable length).
- A typical options field as used in the present system is shown in FIG. 23.
- the start of the options field is identified by a start of options sequence 830. This is followed by the DHCP Message Type Option 832 that indicates the type of DHCP Message.
- the end of the options field is indicated by the END option 842.
- vendor specific option codes can be used for tag assignment and topology discovery purposes as shown in Table 9.
- the option 834 includes type 171 (AB) and identifies the type of network element that is being initialized, in this case, a DS.
- Option 836 includes type 174 (AE) and indicates the number of elements that have attached their MAC/port information for purposes of topology discovery. In this case, option 836 indicates that the DHCP message includes information from two network elements.
- Options 838 and 840 include type 175 and indicate the actual MAC/port information for the new and intermediate elements, respectively.
- IP Addresses and Tags can be assigned indefinitely or for a fixed duration (finite lease length). In the latter case, the IP Addresses and Tags will expire if they are not renewed.
- To renew a lease, the network element sends a DHCPREQUEST message. If the Tag/Topology server does not receive a renew message and the Tag expires, it is free to assign the Tag again to another network element. This can be done within the ambit of well-defined DHCP Messages.
- Knowledge of the logical location of all network elements assists in performing troubleshooting, flow control, systematic assignment of RIDs to facilitate Wavelength Add Drop Multiplexing using Access Network Headers, network management and connection admission control.
- the Tag/Topology server 132 (FIG. 3) assigns Tags and IP Addresses and maintains an up-to-date topology of the network. As different network elements boot up and ask for Addresses and RIDs using the DHCP messages, the Tag/Topology server tracks and constructs a network topology.
- the Tag/Topology server can also request the Network Management Systems (NMS) 134 (FIG. 3) to prompt individual network elements to re-send their topology information at any time.
- Initial topology discovery takes place using standard DHCPDISCOVER messages. As a network element boots up, it broadcasts a DHCPDISCOVER request as described above. The control bits are set as described further herein.
- DHCP option fields 834, 836, 838 pertaining to topology discovery noted above in FIG. 23 are shown in more detail in FIG. 24.
- the topology information is constructed in DHCP Option 175.
- DHCP Option 174 contains the number of upstream elements that have already appended their information. Each subsequent network element adds its MAC address and the physical ingress port number on which it received this packet and increments the value of Option 174 by one.
- the DHCP Options fields are as indicated in FIG. 25.
- the Tag/Topology server can derive the logical location of the new network element from the information in the options field of a completed packet and assign RIDs and IP Addresses accordingly.
- the Tag/Topology server may lose track of the topology momentarily. In such a situation, it may ask the NMS to prompt the target element(s) to resend their topology using DHCPINFORM messages. In this case, the message structure remains the same as the DHCPDISCOVER and the topology can be reconstructed. DHCPINFORM messages can also be sent periodically by the network element to ensure that topology information stays current.
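The per-hop appending of topology information into the DHCP options described above can be sketched as follows. This is an illustrative model only: the dict-based representation of the options and the tuple packing of MAC/port pairs are assumptions, not the patented wire format; only the option numbers (174 and 175) come from the text.

```python
# Sketch: each network element appends its MAC address and the physical
# ingress port on which it received the DHCPDISCOVER to Option 175, and
# increments the upstream-element count in Option 174 by one.

def append_topology_info(options, mac, ingress_port):
    """Append (mac, ingress_port) to Option 175; bump the count in 174."""
    options.setdefault(175, []).append((mac, ingress_port))
    options[174] = options.get(174, 0) + 1
    return options

# A DHCPDISCOVER from a new NIU passes through one intermediate element
# on its way to the Tag/Topology server:
opts = {}
append_topology_info(opts, "00:11:22:33:44:55", 3)  # new element
append_topology_info(opts, "66:77:88:99:aa:bb", 1)  # intermediate element
assert opts[174] == 2
assert opts[175] == [("00:11:22:33:44:55", 3), ("66:77:88:99:aa:bb", 1)]
```

The Tag/Topology server can then walk Option 175 of a completed packet to reconstruct the path back to the new element.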
- For upstream traffic, the NIU performs a bridging function at the ingress to the Access Network and prevents local user traffic from entering the Access Network. For legitimate user traffic, the NIU inserts the Access Network Header into all upstream packets.
- the Access Network Header includes the unique RID assigned to the NIU by the Tag/Topology server.
- QoS bits described further herein are added as provisioned on a per flow basis. All network elements in the upstream path perform prioritization and queuing based on these QoS bits.
- the Access Network Header is discarded, the original Ethernet packet is reconstructed and handed to router 110 (FIG. 3).
- the ODS 112 inserts an Access Network Header into each downstream packet based on the layer-2 destination address and recomputes the CRC.
- This header contains the 12-bit Routing ID of the NIU that serves the user device. All network elements forward the packet to various egress ports based on the entry in their routing table for this Routing ID.
- the Access Network Header is stripped off and the original (layer-2) packet is reconstructed and transmitted on the Home LAN 30 (FIG. 3).
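The header insertion and stripping just described might be sketched as follows. The 2-byte layout (four QoS/control bits plus a 12-bit RID) and the use of CRC-32 as a stand-in for the recomputed Ethernet check-sum are simplifying assumptions for illustration.

```python
import zlib

def insert_access_header(eth_frame: bytes, rid: int, qos: int) -> bytes:
    """Prepend a 2-byte header (4 QoS bits + 12-bit RID) and append a
    freshly computed CRC-32 standing in for the Ethernet FCS."""
    assert 0 <= rid < 4096 and 0 <= qos < 16
    header = ((qos << 12) | rid).to_bytes(2, "big")
    body = header + eth_frame
    return body + zlib.crc32(body).to_bytes(4, "big")

def strip_access_header(frame: bytes) -> bytes:
    """Verify the CRC, then drop header and CRC to recover the original
    layer-2 packet."""
    body, crc = frame[:-4], frame[-4:]
    assert zlib.crc32(body).to_bytes(4, "big") == crc, "CRC mismatch"
    return body[2:]

original = bytes(14) + b"payload"
wrapped = insert_access_header(original, rid=0x2AB, qos=1)
assert strip_access_header(wrapped) == original
```

Intermediate elements would forward on the 12-bit RID alone, without parsing the encapsulated Ethernet frame.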
- Control bits 816 are part of the Access Network Header 806. Messaging packets with different control bit settings are processed differently by the different network elements.
- no network element within the Access Network initiates messages for devices downstream from it.
- Network elements can initiate messages for other devices upstream. These messages can include information such as heartbeats and routing table updates. It should be understood also that the principles of the present system can be applied to a network in which messaging between all network elements takes place. Significance of Control Bits for Downstream Packet Flow
- In the downstream direction, control bits are used to mark messages from Access Network servers. These are special control messages that are meant for a network element and, for security reasons, should not reach the end user.
- the control bits are also useful for dynamically determining the NIU that serves an end device as described further herein.
- the Routing ID (RID) of a DS or SAS uniquely identifies the device. Hence, any frame that has the unique RID of the DS or SAS is forwarded to the processor 610 (FIG. 10) associated with that DS or SAS. All frames with a Broadcast RID are forwarded to the processor and to all downstream egress ports. All downstream messages to the DS/SAS are processed by the TCP/IP stack.
- the control bits have no significance at the DS/SAS as indicated in Table 10.
- the processing of downstream messages at the NIU is determined by the following factors.
- the RID of the NIU identifies the NIU and all user devices on its Home LAN. In other words, an end user device is not uniquely identified by an RID.
- the NIU is the last point of the Access Network before the data enters the subscriber premises. Hence, the NIU needs to filter out all control plane data. Control messages are identified using control bits and Destination MAC Addresses.
- the NIU needs to identify and respond to ARP-like messages from the CAC server for constructing the NIU-User device mapping.
- DHCP messages flow all the way to the Tag/Topology Server and all network elements along the way append some information and pass it on as described herein above. All upstream messaging between network elements and other servers takes place at Layer-3 using their respective IP Addresses and does not require special control bits.
- The following messages are specified in Table 12:
- control bits are similar to those of packets initiated by the NIU as shown in Table 15.
- a network server farm 130 includes various servers 132, 134, 136 (FIG. 3) that need to communicate with the DSs, SASs and NIUs within the Access Network. All messaging between these servers and access network devices (e.g., SASs, DSs) takes place over UDP/IP.
- the upstream messages contain the RID of the source and appropriate Control Bits described above.
- the Layer-2 Destination Address is Broadcast;
- the Layer-2 Source Address is the MAC Address of the Source;
- the Layer-2 Protocol Indicator indicates Message Type.
- the "non-standard" messaging uses the approach described below and shown in FIG. 26.
- the exemplary message format 850 includes a message type (2 bytes) field 852, a message length (2 bytes) field 854 and an information field (1 to 1000 bytes) 856. It should be noted that all messaging between the network servers and a DS/SAS/NIU takes place over Layer-4 while all messaging within the Access Network takes place over Layer-2. However, the message structure shown in FIG. 26 remains the same. Based on the Message Type, the information is cast into the appropriate structure and processed.
- the message types include heartbeat or keep alive, routing table updates and NIU discovery messages.
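The type/length/information format of FIG. 26 lends itself to a simple serializer. A minimal sketch follows; the numeric type codes are hypothetical, since the text names the message families (heartbeat, routing table update, NIU discovery) but not their values.

```python
import struct

# Hypothetical type codes for the message families named in the text.
HEARTBEAT, ROUTE_UPDATE, NIU_DISCOVERY = 1, 2, 3

def pack_message(msg_type: int, info: bytes) -> bytes:
    """Serialize per FIG. 26: 2-byte type, 2-byte length, then a
    1- to 1000-byte information field, in network byte order."""
    assert 1 <= len(info) <= 1000
    return struct.pack("!HH", msg_type, len(info)) + info

def unpack_message(data: bytes):
    """Recover (type, info); the type tells the receiver which
    structure to cast the information field into."""
    msg_type, length = struct.unpack("!HH", data[:4])
    return msg_type, data[4:4 + length]

wire = pack_message(HEARTBEAT, b"alive")
assert unpack_message(wire) == (HEARTBEAT, b"alive")
```

Because the structure is the same whether carried over Layer-4 (server to element) or Layer-2 (within the Access Network), one codec serves both paths.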
- the Access Network includes a network management system (NMS) server 134 that is responsible for monitoring and supervision of the network elements and for provisioning services.
- the NMS server communicates with the network elements using standard SNMP commands.
- Each network element, including ODSs, DSs, SASs and NIUs, includes a processor that is given an IP address and implements the SNMP protocol stack.
- the NMS server communicates with these processors to provision services, set control parameters, and retrieve performance and billing data collected by these processors.
- the network elements periodically transmit "stay alive" signals to their upstream peers; the status information based on the received stay alive signals can be communicated to the NMS server for use in fault diagnosis.
- the Access Network of the present invention provides a Quality of Service (QoS) aware, high bandwidth access medium over cable to homes and small business customers.
- the Access Network is capable of supporting a variety of constant and variable bit rate bearer services with bandwidth requirements ranging from a few kilobits per second to several megabits per second, for example, and with Constant Bit Rate Real-Time (CBR-RT), Variable Bit Rate Real-Time (VBR-RT) and Non-Real-Time delivery. End-users are able to use these services to support applications such as voice telephony, video telephony, multi-media conferencing, voice and video streaming and other emerging services.
- the HFC plant already offers cable television and, in some cases, broadband data services. The Access Network can be implemented on the HFC plant without disrupting legacy systems available on this plant.
- the Access Network of the present system provides QoS classes to support the various bearer services required by different end-user applications.
- the QoS classes are described as follows, though other additional services can be envisioned for support using the principles of the present system.
- QoS Class 1 is associated with Constant Bit Rate Real-Time Services (CBR-RT). This QoS class supports real time services such as Voice over IP (VoIP), which have very stringent delay requirements.
- the services belonging to Class 1 typically have a constant bit rate requirement, although this class can also include variable bit rate services such as voice telephony with silence suppression. Most of the applications using this service class have a bit rate requirement of, for example, a few tens of kbps to 200 kbps. Total packet delay through the Access Network for this class is typically less than about 5 milliseconds.
- QoS Class 2 is associated with Variable Bit Rate Real-Time Services (VBR-RT).
- This QoS class supports the large variety of constant and variable rate bearer services that have a relatively less stringent delay requirement.
- Existing and emerging audio and video applications with a variable bit rate requirement typically dominate applications using Class 2.
- the average bandwidth needed for applications using the VBR-RT service class typically ranges from a few hundred kbps to a few Mbps.
- the total packet delay (excluding packetization delay) over the Access Network is typically within 15 milliseconds.
- QoS Class 3 is associated with Variable Bit Rate Non-Real-Time Services (VBR-nRT) with Throughput Guarantees.
- This QoS class supports VBR services with loose delay requirements, but with throughput guarantees. That is, the throughput received by an application using such a service is guaranteed over a suitably long time interval (e.g. 1 or 10 seconds); however, there are no guarantees for individual packet delays.
- Such a service can offer throughputs of several megabits per second, and is useful for applications such as video download, or data connections between offices located at different places.
- QoS Class 4 is associated with Unspecified Bit Rate (UBR) Services.
- This QoS class supports UBR services which have no explicit delay or throughput requirements.
- the services in Class 4 are always available to an end-user, i.e., no call set up is required for an end-user application to be able to send or receive data using the Class 4 - UBR service. In this sense, the UBR service is much like what typical users of the Internet receive from the latter.
- the maximum packet size allowed for this class is made large enough (e.g., around 1600 bytes) to be consistent with packet sizes allowed on typical Ethernet implementations.
- the typical throughput end-users can receive via UBR services is substantially larger (e.g., a few Mbps) than what is available via DOCSIS.
- an Access Network Header 806 (FIG. 20) is inserted by the corresponding NIU.
- a packet belonging to the traffic stream associated with a particular service is identified as belonging to a specific QoS class on the basis of the QoS field 818 in the Access Network Header.
- the treatment received by a packet at all network elements such as DSs and SASs is determined entirely by the value stored in its QoS field.
- the NIU represents the ingress point of the Access Network, and, as such, plays an important role in the overall QoS management.
- the features provided at the NIU for QoS support include packet classification, traffic policing, egress buffer control and transmission scheduling.
- FIG. 28 shows stages of packet flow through an NIU.
- the packet flow stages include packet classifier 1302, per-service instance traffic policing 1304, service-specific packet processing 1306, QoS Class based egress buffer control 1310, transmission scheduler 1312 and modem buffer 1314.
- An incoming upstream packet is first processed through packet classifier 1302, which identifies the service instance to which the packet belongs.
- the next stage the packet passes through is the policing stage 1304. This stage monitors the flow associated with each service instance, and drops all of those packets that do not meet the policing criteria.
- the packets that pass this stage are then handed over to the appropriate service modules 1308 where they undergo service specific processing.
- a packet is ready to be transmitted out. It is now handed to the egress buffer control stage 1310, which places the packet in the egress buffer associated with its QoS class. (Each QoS class has a fixed buffer space allocated to it on the egress side. There is no distinction between different service instances belonging to the same QoS class.)
- a packet that finds insufficient space in the egress buffer is dropped. Those that are accepted await their turn at transmission, which is determined by the transmission scheduler 1312.
- the transmission scheduler takes into account the relative priorities of different QoS classes and the flow control flags it has received from the upstream SAS device in its decisions regarding the time and order of packet transmission.
- a packet selected for transmission by the scheduler is copied into the modem buffer 1314, from where it is transmitted out over the drop.
- the NIU receives packets from end-user applications over the 100BaseT interface. The NIU, as noted above, generates an Access Network header for the packet for use over the Access Network, and fills in the QoS field according to the QoS class associated with the corresponding traffic stream. Also, the NIU needs to police the traffic it receives from the subscriber to protect network resources. All of this processing requires identification of the service instance to which the packet belongs. Once a packet's service instance is determined, its QoS class and other processing requirements (e.g., VLAN tagging) can be determined as a function of the service instance. Consequently, the first major step in the processing of an upstream packet is packet classification, meaning the identification of the service instance to which a packet belongs.
- the NIU uses a filtering table, such as one with the following format:
- Rows of the packet classification table identify the service instances associated with different flows. Not all address fields in Table 16A need to be filled with actual addresses to enable classification. "Wildcards" are also allowed, which match any value in the corresponding field of the packet being classified. For instance, the first row of Table 16A has the "Source MAC Address" field filled with the MAC address (XYZ) of some device, and all other address fields filled with a "*", which is the wildcard. This means that all packets whose source MAC address equals XYZ should be considered as belonging to the service instance S1, regardless of the contents of the other address fields in that packet.
- the second row indicates that for a packet to be identified as belonging to service instance S2, its source IP address and port should be ABC and D, respectively, and the destination IP address and port should be EFG and H, respectively.
- Source and destination MAC addresses are wildcards in this case, meaning that they will be ignored in the identification of a packet as belonging to S2.
- the packet classification table allows multiple rows to correspond to the same service instance. This feature can be particularly useful in the handling of VPN services between office locations. In these situations, all devices in an office location can be communicating with other locations via the same VPN service (with some QoS guarantees). These devices could be identified by their MAC addresses.
- the classification table then, would appear to be composed of multiple rows, each specifying the MAC address of a device in the source MAC address field (with the rest of the address fields filled with wildcards), and all having the identifier of the (same) VPN service in the service instance field.
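The wildcard matching described above might be sketched as follows. The field names and the dictionary representation of table rows are illustrative assumptions, not the patented table format.

```python
WILDCARD = "*"
FIELDS = ("src_mac", "dst_mac", "src_ip", "src_port", "dst_ip", "dst_port")

def classify(packet, table):
    """Return the service instance of the first row whose non-wildcard
    fields all match the packet, or None if no row matches."""
    for row in table:
        if all(row[f] == WILDCARD or row[f] == packet.get(f)
               for f in FIELDS):
            return row["service"]
    return None

table = [
    # Row 1: any packet whose source MAC is XYZ belongs to S1.
    dict(src_mac="XYZ", dst_mac="*", src_ip="*", src_port="*",
         dst_ip="*", dst_port="*", service="S1"),
    # Row 2: S2 is matched on the full IP address/port 4-tuple;
    # the MAC addresses are wildcards and are therefore ignored.
    dict(src_mac="*", dst_mac="*", src_ip="ABC", src_port="D",
         dst_ip="EFG", dst_port="H", service="S2"),
]

assert classify({"src_mac": "XYZ", "dst_ip": "anything"}, table) == "S1"
assert classify({"src_ip": "ABC", "src_port": "D",
                 "dst_ip": "EFG", "dst_port": "H"}, table) == "S2"
assert classify({"src_mac": "other"}, table) is None
```

Multiple rows mapping to the same service instance, as in the VPN case above, fall out naturally: each office device gets its own row, all carrying the same value in the service field.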
- Once the service instance associated with a packet is determined, it provides a pointer to the appropriate row of a "service descriptor" table, an example of which is shown in Table 16B:
- Table 16B shows a Service Descriptor Table. Each row of the service descriptor table corresponds to a unique service instance, and lists all the relevant attributes that define the service. Some of these attributes define the type of service, e.g., L2TP, VPN, which, in turn, define some of the processing that the packet undergoes at the NIU. Some more attributes can define additional actions to be performed on the packet. These actions include attaching a VLAN tag, or setting the TOS byte in a specific manner. The latter is a useful feature that can enable service providers using Diffserv based packet handling in their wide area network to provide preferential treatment to delay sensitive traffic even beyond the Access Network. These attributes are service specific or optional.
- each row of the service descriptor table contains some QoS related attributes which are defined (i.e. assigned a value to) for every service instance established at an NIU.
- QoS related attributes include sustainable bit rate (or throughput), maximum burst size, maximum packet size and QoS class.
- a call or connection that requires any service other than UBR uses an explicit call setup.
- the CAC server ensures that the system has adequate resources to deliver the required quality of service to that call before allowing the call to be established.
- the CAC server grants a call request to an end user device, it also informs the corresponding NIU of the flow identifier information associated with the corresponding call and its QoS class.
- the provisioning server interacts with the CAC server to ensure that the system has adequate resources to deliver the desired QoS. Once this interaction is over and it has been found that the desired service level can be supported, the CAC server informs the concerned NIU about the service being provisioned.
- This information includes the type of the service being provisioned, criteria for identifying packets belonging to the service instance, service specific and optional attributes, and mandatory (QoS related) attributes of the service.
- When the NIU receives this information from the CAC server, it adds one or more rows to the packet classification table to identify packets belonging to that service instance, and adds one row to the service descriptor table to hold information related to packet processing and QoS management associated with the service (instance) being set up.
- When a call or provisioned service is discontinued, the CAC server informs the NIU of the corresponding event. The NIU then deletes from the packet classification and service descriptor tables the entries associated with the call or the provisioned service being discontinued.
- Traffic policing is desirable for QoS management in that without these features, there is no control over the traffic being released into the network at different QoS levels. This control is used for delivering the QoS associated with different service classes since certain quality metrics (e.g., low delays) cannot be achieved unless the total traffic associated with certain QoS classes is held within certain limits. Traffic policing is also used to prevent "cheating" by subscribers.
- traffic policing at the NIU is implemented using "token buckets" which regulate the sustained flow rate of traffic into the network to its negotiated value while allowing some local variability in the traffic inflow around its long-term average.
- the policing of traffic is performed on a "per service instance" basis. That is, there is a token bucket associated with each service instance set up at an NIU, and it regulates the flow of traffic associated with that service instance.
- the traffic policing stage 1304 immediately follows the packet classification stage 1302. That is, traffic policing precedes the service specific packet processing stage 1306, which typically requires a significant amount of NIU processing capacity. This ordering ensures that the NIU processing capacity is not wasted on non-compliant packets that get dropped at the policing stage.
- For each service instance, there is a token bucket, which is characterized by two parameters: token_size and max_burst_size, which respectively determine the average sustainable traffic rate and the largest traffic burst that the token bucket allows into the network.
- the NIU maintains a clock, which is used for updating the state of the token buckets for all service instances. The NIU also maintains a state variable, X, for each token bucket. At the end of each clock period of duration T ms, the NIU updates the state variable, X. It does this using the following update equations:
- the token size parameter (token_size) of a token bucket determines the rate at which the corresponding service instance can transmit data on a sustained basis. Specifically, if token_size is measured in bits and the clock period, T, is measured in milliseconds, then the maximum sustainable data rate for the corresponding service class is given by token_size/T kbps.
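A minimal sketch of the per-service-instance token bucket just described follows. The update rule shown is an assumption: it is the standard token-bucket recursion implied by the surrounding text (credit grows by token_size each clock period, capped at max_burst_size), since the equations themselves are not reproduced in this excerpt. The class structure and names are illustrative.

```python
class TokenBucket:
    """Per-service-instance policer: X is the state variable (credit,
    in bits), replenished every clock period and capped at
    max_burst_size."""

    def __init__(self, token_size, max_burst_size):
        self.token_size = token_size          # bits added per period of T ms
        self.max_burst_size = max_burst_size  # largest admissible burst
        self.X = max_burst_size               # start with a full bucket

    def tick(self):
        # Assumed form of the update equations:
        #   X <- min(X + token_size, max_burst_size)
        self.X = min(self.X + self.token_size, self.max_burst_size)

    def police(self, packet_len):
        """Drop (return False) if X is smaller than the packet length;
        otherwise accept and decrement X by the packet length."""
        if self.X < packet_len:
            return False
        self.X -= packet_len
        return True

# token_size = 200 bits with T = 1 ms gives a sustained rate of
# token_size/T = 200 kbps.
tb = TokenBucket(token_size=200, max_burst_size=12000)
assert tb.police(12000)   # an initial burst up to max_burst_size passes
assert not tb.police(8)   # credit exhausted until the next clock tick
tb.tick()
assert tb.police(200)
```

Because the bucket only caps the average rate and burst size, short bursts up to max_burst_size pass unhindered, which matches the "local variability around the long-term average" behavior described above.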
- Packet handling and token bucket updates are done independently, in an asynchronous manner. Whenever a packet is received by an NIU from its subscriber interface, it is passed through the packet classifier to identify its service instance. Once the service instance of a packet is identified, the packet is handed to the corresponding traffic policing block 1304. The latter compares the length of the packet with the current value of its state variable X. If X is smaller than the packet length, the packet is dropped right away. Otherwise, it is passed on to the next stage (service specific processing 1306) based on its service instance, and the value of X is decremented by the packet length. Service Specific Packet Processing
- Service specific packet processing involves the processing a packet undergoes at the NIU depending on the service instance to which it belongs. For instance, such processing, in the case of VLAN based VPN services, can include attaching a VLAN tag to the packet. It can be much more elaborate in the case of some other services such as L2TP based VPN services.
- the service modules 1308 which carry out service specific processing have FIFO queues where packets awaiting processing are lined up. Limits can be placed on the size of these queues in order to ensure that packets waiting for service modules having temporary problems (e.g. when under malicious attack) do not end up occupying large segments of memory. If a packet handed to a service module for service specific processing finds that the corresponding queue size has reached its limit, it is immediately dropped.
- Packet size restrictions can also be enforced at this stage. For instance, if a service instance is set up with a limit on the maximum packet size, a packet belonging to that service instance can be dropped at this stage if its size exceeds the corresponding limit. (Alternatively, packet size restrictions may be enforced at the traffic policing stage.)
- the packet processing stage 1306 also includes some service independent processing that all packets need to undergo. This service independent processing follows service specific processing, and includes such things as the attachment of an Access Network Header, with the QoS field filled with the QoS class associated with the packet's service instance, and recalculation of the packet's Ethernet check-sum. At the end of this processing, a packet is ready to go out and is handed to the egress buffer control stage 1310.
- Each egress buffer is allocated a fixed amount of space to hold packets.
- the egress buffer control stage 1310 performs a simple operation. When a packet is handed to this stage, it checks if the egress buffer associated with the packet's QoS class (which is a function of its service instance) has adequate space (in terms of byte count) to hold the packet. If it does, the packet is placed in that buffer where it waits for its turn at transmission. Otherwise, it is dropped.
- the transmission scheduler 1312 has the responsibility to hand packets to the modem in a manner that is consistent with the priorities associated with different QoS classes and the flow control restrictions imposed by the SAS device. The latter requirement is important since a packet handed to the hardware for copying into the modem buffer cannot be stopped from being transmitted.
- QoS class 1 (which is associated with CBR-RT services) is at the highest priority, followed by QoS class 2, QoS class 3 and QoS class 4 in that order.
- the transmission scheduler observes absolute, but non-preemptive priorities between these QoS classes.
- the NIU periodically receives flow control flags from the SAS device, which, for each QoS class, indicate whether the NIU has permission to transmit a packet belonging to that class.
- the NIU stores in a register the values of the most recent flow control flags it has received from the SAS.
- the transmission scheduler uses these values in its scheduling decisions. Because packets handed to the hardware (the DMA controller, in particular) for copying into the modem buffer cannot be stopped from being transmitted, tight coordination is required between the transmission scheduler and the hardware to ensure that only a manageable quantity of packets is handed to the hardware at any time. This is achieved as follows.
- the modem buffer 1314 has a limited amount of buffer space, e.g., 3200 bytes. Periodically, e.g., every 100 microseconds, the hardware writes into a designated register the amount of memory space available in the modem buffer at that instant, and sends an interrupt to the CPU. The CPU processes this interrupt at the highest priority, and as part of this interrupt processing calls the function representing the transmission scheduler task.
- the transmission scheduler When the transmission scheduler is called, it reads (from the relevant registers) the values of the most recent flow control flags as well as the memory space (in bytes) available in the modem buffer. It uses this information in its scheduling decisions as shown in FIG. 29.
- the variable Available_Memory stands for the memory space (in bytes) available in the modem buffer.
- When the CPU processes the hardware interrupt, it calls the transmission scheduler task, which reads the variable Available_Memory from a designated register.
- this variable is decremented (by the length of the packet) every time a packet is handed to the DMA controller for copying into the modem buffer.
- the DMA controller can copy the packets independently, without involving the CPU in this process.
- the copying can take at most 10 to 20 microseconds, which is well short of the interrupt interval.
- the total byte count of packets handed to the DMA controller on any one execution of the transmission scheduler never exceeds the available memory in the modem buffer. Also, if the transmission scheduler encounters a packet that cannot be copied into the modem buffer because it would violate the available memory constraint, the scheduler is exited even if there are smaller packets at lower priority levels in their respective egress buffers. This is to prevent small low priority packets from overtaking large higher priority packets because of their size.
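One execution of the transmission scheduler described above can be sketched as follows, under the stated constraints: strict non-preemptive priorities across the four QoS classes, the SAS flow control flags, the Available_Memory budget, and early exit when a packet does not fit. All names are illustrative; packets are modeled by their byte lengths only.

```python
from collections import deque

def run_scheduler(egress, fc_flags, available_memory):
    """One scheduler pass: scan QoS classes 1..4 in priority order and
    hand head-of-line packets to the (hypothetical) DMA controller
    while they fit in the modem buffer. On the first packet that does
    not fit, exit immediately so that smaller lower-priority packets
    cannot overtake it."""
    handed = []
    for qos in (1, 2, 3, 4):
        if not fc_flags[qos]:            # SAS has paused this class
            continue
        queue = egress[qos]
        while queue:
            pkt_len = queue[0]
            if pkt_len > available_memory:
                return handed            # early exit, no skipping ahead
            queue.popleft()
            available_memory -= pkt_len  # decremented per packet handed over
            handed.append((qos, pkt_len))
    return handed

egress = {1: deque([1500]), 2: deque([1700]), 3: deque(), 4: deque([64])}
flags = {1: True, 2: True, 3: True, 4: True}
# With 3100 bytes free, the class-1 packet fits but the class-2 packet
# does not; the scheduler exits without sending the small class-4 packet.
assert run_scheduler(egress, flags, 3100) == [(1, 1500)]
assert egress[4][0] == 64   # still queued
```

The early-exit rule is what keeps the total byte count handed to the DMA controller within Available_Memory while preserving the priority order.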
- In an alternative approach, traffic shaping/policing is done on a per QoS Class basis.
- This alternate traffic shaping/policing approach used at the NEU is described with reference to FIGs. 30-33.
- Queue[1] 862A, Queue[2] 862B, Queue[3] 862C, Queue[4] 862D (referred to generally as 862) are input buffers associated with the four QoS classes.
- the input buffers receive packets on lines 860A, 860B, 860C, 860D from QoS mapping logic 858, which determines the QoS class as noted above for incoming packets received from the end user devices on line 857.
- Flow control (FC) flags Flag 1, Flag 2, Flag 3 and Flag 4, indicated at 864A, 864B, 864C, 864D, respectively, are transmission control flags each of which corresponds to one of the four QoS classes.
- the FC flags are referred to generally as 864.
- Buffers 870A, 870B are transmit buffers, each of which is large enough to store a full sized (e.g., maximum transmission unit or MTU size) packet.
- the NIU has two transmit buffers so that the transceiver can transmit one packet while scheduler 866 copies another packet into the other transmit buffer that is ready for transmission.
- Buffer flags 868A, 868B correspond to the transmit buffers.
- the traffic shaping/policing is managed by the scheduler 866.
- each token bucket is characterized by two parameters: token_size and max_burst_size which respectively determine the average sustainable traffic rate and the largest traffic burst that the token bucket allows into the network.
- the NIU uses a token bucket for each QoS class, rather than for each traffic flow.
- the parameters associated with the token bucket and the size of the input buffer for a QoS class can be determined by the total traffic associated with that class that is expected to pass through the particular NIU.
- the NIU maintains a clock that is used for updating the state of the token buckets for all QoS classes.
- the NIU maintains a state variable, X, for each token bucket.
- the NIU updates the state variable X using the update equations (Eq. 1) and (Eq. 2) noted above for the first embodiment.
- the token size parameter (token_size) of a token bucket determines the rate at which the corresponding service class can transmit data on a sustained basis. Specifically, if token_size is measured in bits and the clock period, T, in milliseconds, then the maximum sustainable data rate for the corresponding service class is expressed as token_size/T kbps.
- FIG. 31 is a flow diagram that represents the packet mapping and input control logic of the NIU.
- After a packet is received by an NIU from its subscriber interface at 904 and has undergone service specific processing, it is passed through the mapping logic 858 (FIG. 30) to identify its QoS class at 906.
- the NIU at step 908 checks if there is adequate space in the input buffer 862 associated with the QoS class of the packet to accommodate the packet.
- the packet is dropped (line 861 in FIG. 30) if the input buffer has insufficient space for the packet.
- the packet is admitted to the proper input buffer and scheduled for transmission by the scheduler 866 in a FIFO order within that class.
- the token buckets 865 regulate the outflow of the corresponding traffic streams.
- If the accumulated credit as indicated by the state variable X for a particular token bucket exceeds the length of the first packet waiting in the corresponding input queue, the packet can be transmitted over the output link without violating the flow constraints imposed on the flow associated with the corresponding QoS class and represented by its token bucket.
- the SAS to which the NIU is directly connected periodically sends ON-OFF flow control signals to the NIU indicating to the latter from which of the four QoS classes the SAS is ready to receive traffic. Specifically, once every period of duration T_F (e.g., 10 us), the SAS sends to the NIU a signal carrying four indicators, one for each QoS class. If the indicator associated with a QoS class is ON (e.g., represented by a flag value of 1), the NIU is free to send the SAS traffic belonging to the particular QoS class. If the indicator value is OFF (e.g., represented by a flag value of 0), the NIU holds traffic belonging to that QoS class until the indicator value is changed to ON.
- the NIU stores the most recent values of the indicators in the four flow control (FC) flag fields 864 (Flag 1, Flag 2, Flag 3 and Flag 4).
- FC flag fields indicate whether the NIU is free to forward traffic belonging to their respective QoS classes.
- the scheduler 866 carries out this task in a manner that obeys the constraints imposed by the token buckets and the flag values, while attempting to minimize the delays for the first three QoS classes (i.e., Classes 1, 2, 3).
- the NIU has two transmit buffers 870A, 870B (FIG. 30) so that the transceiver can transmit one packet while the scheduler 866 is copying another packet into the other transmit buffer that is ready for transmission.
- the pointers Next_Write_Buffer and Next_Read_Buffer and the buffer flags 868A, 868B i.e., Buffer 1 flag and Buffer 2 flag are used.
- the scheduler alternately uses the two transmit buffers to copy packets which are ready for transmission. However, it cannot copy a packet into a buffer unless the corresponding buffer flag 868 is ON, which indicates that the buffer is empty. When the scheduler has copied a packet into a buffer, it turns the buffer flag OFF. The transceiver turns it back ON when it has finished transmitting the corresponding packet so that the transmit buffer is again empty.
- FIGs. 32 and 33 illustrate this operation: FIG. 32 is a flow diagram that illustrates the scheduler logic and FIG. 33 represents the transceiver logic. Referring to FIG. 32, the scheduler uses non-preemptive priorities between the QoS classes, with Class 1 at the highest priority level and Class 4 at the lowest.
- the scheduler selects for transmission the packet belonging to the higher priority class.
- the packet selected for transmission is copied into the transmit buffer indicated by the value of the Next_Write_Buffer provided the corresponding buffer flag 868 is ON.
- when the scheduler copies a packet into a transmit buffer, it decrements the accumulated credit, the variable credit[p], for the corresponding token bucket by the length of the copied packet.
- the variable credit[p] is the same as the variable X in the foregoing equations.
- the scheduler scans the input queues, going from the highest priority queue to the lowest priority queue, to find the highest priority packet that is waiting at the head of its queue and is ready for transmission.
- the credits are decremented by the packet length and the packet is copied into the buffer indicated by Next_Write_Buffer at 928.
- the buffer flag associated with the buffer indicated by Next_Write_Buffer is turned OFF.
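The scheduler's queue scan described above can be sketched as follows. This is an illustrative reading of the FIG. 32 logic with hypothetical data structures: a head-of-line packet in class p is eligible only if the SAS flow-control flag for p is ON and the token-bucket credit for p covers the packet length, and the highest-priority eligible packet is selected.

```python
# Minimal sketch of the priority scan: queues holds per-class FIFO lists
# of packet lengths (index 0 = Class 1), fc_flags holds the most recent
# SAS flow-control indicators, credits holds token-bucket credit[p].

def pick_packet(queues, fc_flags, credits):
    """Return the class index of the packet to transmit, or None."""
    for p, q in enumerate(queues):
        if q and fc_flags[p] and credits[p] >= q[0]:
            return p                      # highest-priority eligible packet
    return None

queues   = [[], [1500], [64], []]         # Class 2 and Class 3 have packets
fc_flags = [True, False, True, True]      # SAS not ready for Class 2
credits  = [0, 2000, 2000, 0]

p = pick_packet(queues, fc_flags, credits)
print(p)            # 2: Class 3 is the highest-priority eligible class
length = queues[p].pop(0)
credits[p] -= length                      # decrement the accumulated credit
print(credits[p])   # 1936
```

Note how the Class 2 packet is skipped despite its higher priority because its flow-control flag is OFF, matching the constraint described in the text.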
- the transceiver logic is illustrated in the flow diagram of FIG. 33.
- the transceiver transmits packets in the order in which the scheduler 866 has copied them into the transmit buffers 870 (FIG. 30).
- the transceiver maintains a pointer Next_Read_Buffer that is initially synchronized with the pointer Next_Write_Buffer maintained by the scheduler.
- the transceiver starts transmitting the contents of the buffer pointed to by Next_Read_Buffer at 938. Note that when the buffer flag is OFF it means that the corresponding transmit buffer has a packet that is ready for transmission.
- the buffer flag associated with that transmit buffer is turned ON at 942.
- the transceiver changes the Next_Read_Buffer to point it to the other transmit buffer and returns to 936 to wait until the corresponding buffer flag is turned OFF.
- the transceiver operates completely oblivious to the FC flags 864 (FIG. 30) sent by the SAS and maintained by the NIU. If there is a packet in the transmit buffer pointed to by Next_Read_Buffer and the corresponding buffer flag is OFF, the transceiver transmits that packet regardless of the state of the FC flag maintained by the NIU. It is the responsibility of the scheduler not to copy packets into the transmit buffers if the SAS is not ready to receive them as indicated by the corresponding FC flags.
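The double-buffer handshake between the scheduler and the transceiver can be sketched as follows. This is an illustrative model, not the device firmware: a buffer flag value of True stands for ON (buffer empty), the scheduler only writes into an ON buffer and turns it OFF, and the transceiver transmits an OFF buffer and turns it back ON.

```python
# Sketch of the two transmit buffers 870A/870B with their flags and the
# Next_Write_Buffer / Next_Read_Buffer pointers (initially synchronized).

buffers = [None, None]
flags = [True, True]          # both transmit buffers start empty (ON)
next_write = 0
next_read = 0

def scheduler_copy(pkt):
    global next_write
    if not flags[next_write]:
        return False          # buffer still being transmitted; hold packet
    buffers[next_write] = pkt
    flags[next_write] = False # mark buffer full (OFF)
    next_write ^= 1           # alternate between the two buffers
    return True

def transceiver_transmit():
    global next_read
    if flags[next_read]:
        return None           # nothing ready in this buffer yet
    pkt = buffers[next_read]
    flags[next_read] = True   # buffer empty again (ON)
    next_read ^= 1
    return pkt

scheduler_copy("A")
scheduler_copy("B")
print(scheduler_copy("C"))        # False: both buffers full
print(transceiver_transmit())     # A (packets leave in copy order)
print(scheduler_copy("C"))        # True: buffer 0 free again
print(transceiver_transmit())     # B
```

Because both sides alternate buffers in the same order, packets are transmitted exactly in the order the scheduler copied them, as the text requires.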
- QoS management features are described that are included at intermediate network elements such as DSs and SASs.
- DSs and SASs are distinct devices, they both include similar features to support QoS management.
- the QoS management features at SASs and DSs include upstream packet handling, upstream flow control and downstream packet handling.
- the upstream packet handling features ensure low latency for the delay sensitive, high priority packets while keeping the overall throughput high and maintaining fairness in the treatment of the traffic belonging to the lowest (UBR) priority class.
- an intelligent transmission scheduling discipline is used at the intermediate network elements. This scheduling discipline, which combines priorities with weighted round robin scheduling, provides for low delays for high priority traffic and fairness in the treatment of UBR traffic.
- the transmission scheduling discipline for intermediate network elements is defined such that for each such device, each of the top three QoS classes (i.e., Classes 1, 2 and 3) has a common queue while there are per link queues for the fourth (UBR) class (i.e., Class 4).
- Traffic streams of packets on the upstream links 950A, 950B, 950C for the top three QoS classes are queued in corresponding common queues 952A, 952B, 952C.
- QoS Class 1 packets of streams 950A-1, 950B-1, 950C-1 are queued in common queue 952A.
- QoS Class 2 packets of streams 950A-2, 950B-2, 950C-2 are queued in common queue 952B and QoS Class 3 packets of streams 950A-3, 950B-3, 950C-3 are queued in common queue 952C.
- packets in stream 950A-4 are queued in link queue 954A
- packets in stream 950B-4 are queued in link queue 954B
- packets in stream 950C-4 are queued in link queue 954C.
- Transmission scheduler 956 manages the queues as described further herein.
- the transmission scheduling discipline observes strict, non-preemptive priorities between the QoS classes, and uses a weighted round robin discipline to allocate the available system capacity (i.e., trunk bandwidth) in a fair manner to UBR traffic carried by different links. Packets in the same queue follow the FIFO order among them. Moreover, it does not schedule a packet for transmission if the flow control flag for the corresponding QoS class is OFF.
- Strict priorities between classes means that if two packets belonging to different QoS classes are ready for transmission in their respective queues and the corresponding flow control flags are on, the scheduler takes up for transmission that packet which belongs to a higher priority class. Thus, after every transmission, the scheduler checks the queues to look for the highest priority packet that is ready for transmission. If the highest priority packet that is ready for transmission belongs to one of the top three QoS classes and if the corresponding flow control flag is ON, the packet is immediately taken up for transmission. If the queues associated with the top three QoS classes are empty, the per-link queues associated with the fourth (UBR) class are inspected for packets awaiting transmission.
- FIGs. 35A, 35B illustrate the logic used by the scheduling discipline that combines priorities with weighted round robin scheduling.
- the loop indicated between blocks 1008 and 1012, 1016 (FIG. 35A) relates to servicing of the priority queues for the top three QoS classes.
- the flow blocks in FIG. 35B relate to servicing of the per-link queues associated with the fourth (UBR) class.
- Every link queue associated with the lowest priority QoS class (class P in the notation of the flow diagram) has a service quantum associated with it.
- the quantum (or weight) associated with link queue J is denoted by Q_J.
- the magnitude of the quantum associated with a link queue is proportional to the share of the available capacity that the scheduler intends to allocate to that queue. In addition, there is a parameter called Max that determines the maximum credit a queue can accumulate.
- the parameter Max as well as the service quanta associated with different links can be downloaded to the intermediate network elements by the NMS at system setup or provisioning time.
- a suitable value for the Max parameter is 1600 bytes which is large enough to accommodate the largest sized packets in the system.
- the system keeps track of the "credit" accumulated by each queue.
- the credit associated with link queue J is denoted by D_J in FIGs. 35A, 35B.
- when the server moves to serve one of the queues, say link queue J as indicated at block 1028, it increments the credit D_J by the amount Q_J as indicated at block 1038. If D_J is found to be greater than the parameter Max, D_J is set equal to Max. The server then looks at the first packet in link queue J at 1042.
- if the length of this packet is less than or equal to D_J, the server decrements D_J by the length of the packet, removes the packet from link queue J and starts transmitting it at 1050.
- after the packet transmission, the server looks at the new packet that is now at the head of link queue J to see if its length is less than or equal to the current value of D_J and repeats the process until either link queue J is empty at 1032 or the length of the first packet at the head of queue J is greater than D_J. If it is the former, the server sets D_J to zero at 1034; otherwise it leaves D_J unchanged. In either case, the server moves on to the next link queue J+1 at 1030 and repeats what it did at link queue J. In case J corresponds to the last queue, the server moves back to the first queue.
- the transmission scheduling discipline works much the same way as the basic weighted round robin discipline.
- the accumulated credit for the former is incremented by its service quantum.
- a packet waiting at the head of a UBR queue cannot be scheduled for transmission unless the accumulated credit for that queue is at least equal to the packet's length when the server comes to that queue.
- the only difference is the presence of high priority queues that have a non-preemptive priority over the UBR queues.
- the high priority queues are inspected at 1008, 1010 (FIG. 35A) to see if any of them have a packet ready for transmission.
- the visit can be broken (after a packet transmission) in order to transmit higher priority packets that may have arrived in the meantime.
- when the server finishes transmitting the high priority packets at 1016, it returns to the same UBR queue it was serving when it had to leave to serve the high priority queues.
- it is considered a continuation of the same visit to the UBR queue so that its credit is not incremented by its service quantum.
- the UBR queue's credit will be incremented the next time the server returns to it after visiting all other UBR queues.
- the variable LP_Active in FIGs. 35A, 35B indicates whether the server's visit to a UBR queue was broken to serve high priority queues.
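One full round of the credit-based weighted round robin over the UBR link queues (FIGs. 35A, 35B) can be sketched as follows. This is a simplified model with illustrative quanta and a Max cap; it omits the high-priority interruption (LP_Active) path and, as one reading of the text, grants credit only on visits to non-empty queues.

```python
# Sketch of the UBR round: on each visit, queue J's credit D_J grows by
# its quantum Q_J (capped at Max); head packets are sent while they fit
# within the credit; an emptied queue forfeits its remaining credit.

MAX_CREDIT = 1600   # the Max parameter (large enough for the largest packet)

def serve_round(queues, quanta, credits):
    """One round over the UBR link queues; returns (queue, length) pairs sent."""
    sent = []
    for j, q in enumerate(queues):
        if not q:
            continue                       # skip empty queues
        credits[j] = min(MAX_CREDIT, credits[j] + quanta[j])
        while q and q[0] <= credits[j]:
            pkt = q.pop(0)
            credits[j] -= pkt
            sent.append((j, pkt))
        if not q:
            credits[j] = 0                 # queue emptied: reset its credit
    return sent

queues  = [[1000, 800], [400]]             # packet lengths per link queue
quanta  = [1000, 500]                      # Q_J: per-queue service quanta
credits = [0, 0]                           # D_J: accumulated credits
print(serve_round(queues, quanta, credits))  # [(0, 1000), (1, 400)]
print(queues)                                # [[800], []]
```

Queue 0's 800-byte packet must wait for the next round because its credit is exhausted, which is how the quanta enforce the intended bandwidth shares.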
- Upstream flow control is an important QoS management feature that is used at all intermediate network elements. Upstream flow control is used since the total rate at which different trunks or feeders can bring traffic to an intermediate element can far exceed the rate at which this traffic can be carried further upstream. Since the total rate at which the traffic can converge to a DS can be several Gbps, to address this problem by providing a large enough buffer space is impractical. Flow control provides a better approach. Upstream flow control in DSs and SASs is now described.
- each network element (a DS or a SAS) has one common queue for each of the top three QoS classes and per link queues (i.e. one for each link) for the UBR class.
- Each queue has a separate (and fixed) allocation of the buffer space set aside for the upstream traffic.
- the network element keeps track of the buffer occupancy and maintains a flag for each of these queues. It also has two parameters - a high threshold TH_H and a low threshold TH_L - for each queue.
- the parameters - buffer space allocation, high and low thresholds - are downloaded at system setup time.
- the flag values for all queues are initialized to ON (i.e. set to 1).
- the flags are turned ON and OFF in an asynchronous manner at packet arrivals and departures.
- the element ensures that it has adequate buffer space in the appropriate queue to accommodate the packet. If the queue has insufficient space, the packet is dropped; otherwise it is placed in the queue.
- the flag does not change for a buffer level BL2 between the high and low thresholds as shown for buffer 1102.
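The two-threshold flag behavior described above is a hysteresis scheme, which can be sketched as follows (the threshold values are illustrative, not from the text):

```python
# Sketch of the flow-control flag update: OFF when occupancy crosses the
# high threshold, back ON when it drops below the low threshold, and
# unchanged in between (hysteresis avoids rapid ON/OFF oscillation).

TH_H, TH_L = 12_000, 4_000    # high and low thresholds, in bytes (illustrative)

def update_flag(flag, buffer_level):
    if buffer_level >= TH_H:
        return 0              # OFF: tell downstream elements to hold traffic
    if buffer_level <= TH_L:
        return 1              # ON: ready to receive again
    return flag               # between thresholds: leave the flag unchanged

flag = 1
flag = update_flag(flag, 13_000); print(flag)  # 0: crossed high threshold
flag = update_flag(flag, 8_000);  print(flag)  # 0: unchanged between thresholds
flag = update_flag(flag, 3_000);  print(flag)  # 1: dropped below low threshold
```

The middle call shows the buffer-level-BL2 case from the text: between the thresholds the flag keeps whatever value it last had.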
- the network element (e.g., A) periodically sends the current values of the relevant flags to all of the network elements that are directly connected to it in the downstream direction.
- the element A will send to element B (which is directly connected to the former in the downstream direction) the current values of the flags associated with the top three priority queues as well as the current value of the flag associated with the queue meant for UBR traffic received by element A from element B.
- the flag values are sent to element B using the special "flow control" bits periodically inserted into the byte stream as described earlier.
- Each network element maintains four flags — one corresponding to each of the four QoS classes - for the traffic that it sends upstream.
- the flag values are updated when the element receives the flow control byte from the element directly connected to it in the upstream direction.
- the former updates the values of the flags it maintains for the four QoS classes.
- a network element can transmit (in the upstream direction) a packet belonging to a given QoS class only if the current flag value for that class is 1.
- element B can transmit a packet belonging to, say Class p, only if the current value of the flag associated Class p is 1.
- Downstream packet handling is rather simple when compared to upstream packet handling. In the downstream direction, each outgoing port is allocated a fixed share of the buffer space (equivalent to 2 to 3 MTUs).
- the latter looks up the routing table to identify the port the packet should be directed to. If the port has adequate buffer space to accommodate the packet, the packet is copied into the port's transmit buffer where it awaits transmission over the corresponding link.
- packets are handled FIFO regardless of their QoS class.
- a QoS feature that is included at the Head-End Router relates to rate control and prioritization of traffic belonging to different QoS classes.
- the rate control is used since there is no flow control and only limited buffering capability within the access network except at the ODS. Consequently, unless the head-end router releases downstream traffic in such a manner that it remains within the capacity of every link that carries it, there could be significant packet losses due to buffer overflows.
- a related feature that is used at the head-end router is buffering and prioritization of traffic.
- traffic from various sources accumulates at the router and the sum total bandwidth of the links bringing downstream traffic to the Access Network can easily exceed the bandwidth of the main access trunk. Consequently, the router buffers traffic to avoid potential packet losses.
- the capability to buffer traffic is complemented by an ability to prioritize traffic according to their QoS class so that high priority traffic does not suffer large delays in case there is a temporary traffic overload.
- Connection admission control is an important part of the overall QoS management scheme in the Access Network. Without connection admission control, the network cannot ensure that the system has adequate resources to deliver the desired QoS to the various kinds of connections that are established over the Access Network. Connection Admission Control (CAC) is exercised via the Connection Admission Control Server (CAC server) 136 (FIG. 3). The features that are provided at the CAC server in order to exercise connection admission control are now described.
- the features provided at the CAC server include call agent interface features, provisioning interface features, NIU interface features, CAC server internal features, and signaling features.
- Within the Access Network, which forms the access portion of the end-to-end connection, it is the CAC server 136 (FIG. 3) that exercises connection admission control.
- the call agent is completely oblivious to the state of the Access Network, which is likely to be dealing with many such call agents handling connection requests from many kinds of applications. Therefore, the call agent needs to interact with the CAC server to see if the Access Network has adequate resources for the call.
- the following protocol and the associated messages are defined to enable a call agent to interact with the CAC server to reserve resources for a connection.
- a simple protocol is defined to enable a call agent to interact with the CAC server. This protocol is intended to identify the features that need to be supported by the signaling protocol between the CAC server and call agents. Since the call agents are non-proprietary entities, the actual signaling protocol is that which is supported by the call agents, e.g., MGCP.
- when a call agent wants to reserve resources for a connection, it sends a Resource_Request message to the CAC server as shown in FIG. 37A. All messages in this protocol begin with a message type field 1112 that is one byte long. Besides the message type field, the Resource_Request message includes an identifier 1114 of the call agent, an identifier 1116 of the connection for which resources are being requested, the IP address and port number 1117 of the end-user device attached to the Access Network that is involved in that connection, the IP address and port number 1118 of the far end device and a traffic descriptor 1120.
- the identifier of the call agent can be its public IP address.
- the identifier of the call is a four-byte integer that has been selected by the call agent to refer to that connection.
- the call agent can use this identifier to refer to this connection throughout its lifetime.
- the identifier of the call agent and the identifier of the connection are together used to identify the connection so that there is no confusion even if two different call agents used the same connection identifiers to refer to two different connections.
- the traffic descriptor 1120 contains five fields: the sustained throughput rate 1122 needed for the connection, the peak rate 1124 at which it is likely to transmit data, the maximum burst size 1126 that it can transmit at the peak rate, the maximum packet size 1127 and the delay requirement parameter 1128.
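One way the Resource_Request message of FIG. 37A might be serialized is sketched below. Only the one-byte message type and the four-byte connection identifier are stated in the text; the type code, field widths, and field order here are assumptions for illustration.

```python
# Sketch of packing a Resource_Request: message type, call agent
# identifier (its public IP address), connection id, end-user and
# far-end IP:port pairs, then the five traffic descriptor fields.

import struct
import ipaddress

MSG_RESOURCE_REQUEST = 1      # hypothetical message type code

def pack_resource_request(agent_ip, conn_id, user_addr, far_addr, tdesc):
    # tdesc: (sustained_rate, peak_rate, max_burst, max_pkt_size, delay_req)
    def addr(ip, port):
        return struct.pack("!4sH", ipaddress.ip_address(ip).packed, port)
    return (struct.pack("!B", MSG_RESOURCE_REQUEST)        # message type (1 byte)
            + ipaddress.ip_address(agent_ip).packed        # call agent identifier
            + struct.pack("!I", conn_id)                   # connection id (4 bytes)
            + addr(*user_addr)                             # end-user IP:port
            + addr(*far_addr)                              # far-end IP:port
            + struct.pack("!5I", *tdesc))                  # traffic descriptor

msg = pack_resource_request("192.0.2.1", 42, ("10.0.0.5", 5060),
                            ("198.51.100.7", 5060),
                            (64_000, 128_000, 1600, 1500, 20))
print(len(msg))   # 41 bytes with these illustrative widths
print(msg[0])     # 1: the message type field
```

The (call agent identifier, connection identifier) pair packed here is what lets the CAC server distinguish connections even when two agents reuse the same connection identifier.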
- when the CAC server receives a Resource_Request message 1110, it inspects its internal resource usage database to see if the system has adequate resources to deliver the QoS needed for the connection being established.
- the CAC server updates its resource usage database to account for the connection being established, and responds to the call agent with a Request_Grant message 1130. If the system does not have adequate resources, the CAC server responds with a Request_Denial message 1134.
- the formats of these two messages are as shown in FIGs. 37B and 37C respectively. Each includes a CAC Server identifier field 1132. The Connection Identifier field 1116 used in both Request_Grant and Request_Denial messages is filled with the same connection identifier used by the call agent in the original Resource_Request message.
- the call agent typically is involved in resource reservation activities with some other entities in the wide area network. If it receives the needed resources from all of these entities, it sends a Resource_Commit message to the CAC server. This message is an indication to the CAC server that the connection has been established. The CAC server responds to the Resource_Commit message with either a Commit_Confirm message or a Release_Confirm message.
- the former is a confirmation that the resources have been committed to that connection; the latter is an indication that although the resource request received earlier from the call agent was granted by the CAC server, the CAC server now wants to tear down the connection. In the former case, the end-users involved in the call can proceed to exchange data.
- the call agent can tear down the connection by sending appropriate messages to the end-users.
- the formats of the Resource_Commit 1136, Commit_Confirm 1138 and Release_Confirm 1140 messages are as shown in FIGs. 37D, 37E and 37F respectively.
- the call agent sends a Resource_Release message 1142 to the CAC server.
- the CAC server releases the resources committed for that connection, updates its resource usage database and sends Release_Confirm message to the call agent.
- the format of the Resource_Release message is as shown in FIG. 37G.
- connection states include no connection 1150, NULL state 1154, RESERVED state 1166 and committed state 1170.
- a similar resource reservation protocol can be used between the Provisioning Server 135 (FIG. 3) and the CAC Server in order to enable the former to establish provisioned services.
- the messages and protocol state transitions in this case can be similar to the messages and state transitions described for the call agent above. However, the format of the messages in this case is more flexible to accommodate the variety of service options and packet classifier information needed to support provisioned services.
- an exemplary message format for Resource_Request and Resource_Commit Messages for Provisioned Services can include the following fields: Message type, Provisioning Server ID, NIU ID, Provisioned Service ID, Service type, Traffic Descriptor, Packet Classifier Information and Service-specific options.
- Request_Grant and Commit_Confirm Messages for Provisioned Services can include the following fields: Message type, CAC Server ID, NIU ID, Provisioned Service ID, Service type, Traffic Descriptor, Packet Classifier Information and Service-specific options.
- Request_Denial and Release_Confirm Messages for Provisioned Services include the fields Message type, CAC Server ID and Provisioned Service ID.
- a Resource_Release Message for Provisioned Services includes Message type, Provisioning Server ID and Provisioned Service ID fields.
- the NIU ID is an identifier of the NIU at which the service is being provisioned. It should be unique among all the NIUs being handled by the CAC server and the provisioning server. One possibility is to
- the messages enable the CAC server to inform the NIU about the setting up, tearing down and modification of connections and provisioned services, and help it to update the filtering (i.e. packet classification) and ingress traffic policing parameters.
- An assumption here is that the CAC server merely informs the NIU about the traffic characteristics of a new connection being established; the NIU locally carries out the token bucket and buffer size computations.
- the messages involved in the interaction between the CAC server and the NIU include Setup Message, Request-Confirmed Message, Request-Denied Message, Teardown Message, Modify-Parameters Message, Get-Parameters Message and Conn-Parameters Message.
- the Setup message is used to inform an NIU about the establishment of a new connection or a provisioned service.
- the CAC server (after receiving a resource request from the concerned call agent or provisioning server and determining that the Access Network has adequate resources to handle the connection) sends a Setup message to the NIU.
- the Setup message has the high-level structure shown in FIG. 39 and includes the following fields:
- Message Type 1404, Message Sequence Number 1406, Connection/Provisioned Service ID 1408, Service Type 1410, Traffic Descriptor 1412, Packet Classifier Information 1414 and Service Specific Options 1416.
- the structure of the Setup message identifies the information it needs to carry.
- the field Message Type identifies this message as a Setup message.
- Message Sequence Number field is used to identify copies of the same message in case multiple copies of the same message are received by the NIU. This could happen, for instance, if the NIU's response to a Setup message is lost so that the CAC server, believing that the original message may not have reached the NIU, retransmits it. However, when such retransmissions occur, the CAC server uses the same Message Sequence Number in the retransmitted message, which enables the NIU to identify it as a copy of the original message (in case that message was received by the NIU).
- the Connection / Provisioned Service Identifier provides an ID associated with the connection or provisioned service. This can be a concatenation of the call agent ID and the connection ID provided by the call agent. This identifier is assigned to the connection / provisioned service by the CAC server at setup time and used by it and the NIU to refer to the connection / provisioned service throughout its life.
- the next high-level field is Service Type. This field identifies the service associated with the connection / provisioned service.
- the service in this context refers to the kind of processing a packet undergoes at the NIU.
- the service type indicates the default service, which involves attaching an Access Network label to the packet with the QoS field filled in accordance with the QoS class associated with the packet and recomputation of its Ethernet check-sum.
- All switched connections, which are set up through some interaction between a call agent and the CAC server, use the default service type.
- All services requiring special handling of packets (e.g., L2TP, Ethernet over IP, VLAN-based VPN services) use other service types.
- the Traffic Descriptor field comes next. This field consists of four subfields as shown in FIG. 40A.
- the traffic descriptor field includes subfields for sustained rate 1418, maximum burst-size 1420, maximum packet size 1422 and QoS class 1424.
- the subfield Sustained Rate refers to the bit rate at which the corresponding connection or provisioned service can transmit data in a sustained manner.
- the Maximum Burst-Size subfield refers to the maximum burst length for which the connection / provisioned service can deliver data to the Access Network at line rate.
- the parameters Sustained Rate and Maximum Burst-Size are used by the NIU to set up a token bucket based policing scheme for the connection / provisioned service.
- the Maximum Packet Size places a restriction on the size of packets the connection / provisioned service can generate.
- the QoS Class refers to the priority class associated with the connection.
- the next high-level field is the Packet Classifier field.
- the information contained in this field enables the NIU to identify packets that belong to the connection / provisioned service being established.
- the packet classifier could be as simple as the source IP address and port ID.
- packets originating from several designated devices may have to be identified as belonging to the same service.
- Future services and applications may require even greater flexibility in defining the characteristics to be used in packet classification. Consequently, a flexible structure has been defined for the Packet Classifier field.
- FIG. 40B shows this structure which includes three fixed subfields (with fixed lengths), followed by a variable number of "Entry" fields.
- the Packet Classifier field begins with the "Number of Entries" subfield 1426, which indicates how many Entry fields have been included in it.
- the next (fixed) subfield is the Source/Destination subfield 1428. It indicates if each Entry field contains source address(es) or destination address(es) or both. If the value contained in this field is 1, it means the entry fields contain source addresses; if it is 2, it means that the entry fields contain destination addresses; and if it is 3, then the entry fields contain source and destination addresses.
- the third fixed subfield is the MAC/IP Address subfield 1430.
- the Packet Classifier field enables the NIU to identify packets belonging to a given connection or service on the basis of source / destination MAC addresses, IP address / Port ID pairs or a combination thereof. It also allows wildcards, which match with any value contained in the corresponding field.
- Table 17 lists all the possible combinations of values contained in the Source/Destination and MAC/IP address subfields and the corresponding contents of the entry subfields. Note that there are as many of these entry subfields as the value specified in the Number of Entries subfield.
- Table 17 Relationship between Source/Destination, MAC/IP Address fields and Entry fields
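The wildcard matching described for the Packet Classifier field can be sketched as follows. The entry layout is illustrative (the text does not fix a concrete encoding); here `None` plays the role of the wildcard that matches any value in the corresponding field.

```python
# Sketch of classifier matching: a packet belongs to the connection or
# service if any classifier entry matches it, with None as a wildcard.

def entry_matches(entry, packet):
    # entry and packet are dicts of field name -> value.
    return all(v is None or packet.get(k) == v for k, v in entry.items())

def classify(entries, packet):
    return any(entry_matches(e, packet) for e in entries)

entries = [
    {"src_ip": "10.0.0.5", "src_port": None},       # any port on this host
    {"src_ip": "10.0.0.9", "src_port": 4000},       # one specific flow
]
print(classify(entries, {"src_ip": "10.0.0.5", "src_port": 1234}))  # True
print(classify(entries, {"src_ip": "10.0.0.9", "src_port": 4001}))  # False
```

The first entry illustrates the "several designated devices / same service" case: with a wildcard port, every flow from that source address is classified into the service.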
- the Service Specific Options field begins with a "Number of Options" subfield 1434.
- the value contained in this field indicates how many Option entries have been included.
- Each Option entry has the "Type-Length-Value" structure 1442, 1444, 1446.
- the significance of Option Type depends on the Service Type defined earlier. For instance, if the Service Type indicates a VPN service based on VLAN tagging, then Option Type 1 could mean that the contents of the corresponding Option Value field indicate the VLAN tag to be used. On the other hand, for some other service type, Option Type 1 could possibly mean something entirely different.
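Parsing the Type-Length-Value option entries can be sketched as follows. The one-byte type and one-byte length are assumptions for illustration (the text does not specify the subfield widths), and the VLAN-tag example follows the Option Type 1 reading given above.

```python
# Sketch of walking the Service Specific Options field: read 'count'
# Type-Length-Value entries (count comes from the Number of Options
# subfield) and return (type, value) pairs.

import struct

def parse_options(data, count):
    options, off = [], 0
    for _ in range(count):
        otype, olen = struct.unpack_from("!BB", data, off)  # type, length
        value = data[off + 2 : off + 2 + olen]              # value bytes
        options.append((otype, value))
        off += 2 + olen
    return options

# Two options: type 1 carrying a 2-byte VLAN tag (100), type 7 one byte.
raw = bytes([1, 2, 0x00, 0x64, 7, 1, 0xFF])
opts = parse_options(raw, count=2)
print(opts)
```

Because the meaning of each Option Type depends on the Service Type, a real parser would hand these raw (type, value) pairs to service-specific handling.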
- An NIU can respond to a Setup message with either a Request-Confirmed or a Request-Denied message. Both acknowledge receipt of the corresponding Setup message to the CAC server. In addition, the Request-Confirmed message informs the CAC server that the NIU is going ahead with allocating its own resources to the connection (or provisioned service as the case may be), whereas the Request-Denied message tells the CAC server that the connection / provisioned service cannot be established. Both messages have identical formats as shown in FIG. 41; they differ only in the Message Type field 1452, which tells whether it is a Request-Confirmed or Request-Denied message.
- the Message-Number field 1454 contains the value stored in the corresponding field of the Setup message in response to which the present Request-Confirmed or Request-Denied message is being sent. This helps the CAC server to relate the response to the correct request.
- a Connection / Provisioned Service Identifier field 1456 is also included.
- the call agent informs the CAC server that the connection is being torn down.
- the CAC server, after releasing the network resources allocated to the connection / provisioned service, sends a Teardown message to the concerned NIU.
- the NIU responds to the CAC server with a Request-Confirmed message and releases its local resources.
- the Teardown message can include a "wildcard" for the connection identifier parameter.
- the CAC server is requesting the NIU to tear down all of the connections / provisioned services it has established.
- the NIU then releases resources allocated to all connections / provisioned services and sends a Request-Confirmed message to the CAC server.
- the Teardown message has a structure similar to that of the messages described above; the message type field identifies the type of message being sent.
- the Modify-Parameters message (FIG. 42) is used by the CAC server to request the NIU to modify the parameters associated with a connection or a provisioned service. Modification of parameters in this case involves changing the traffic descriptor, packet classifier or service specific options field for a connection or a provisioned service.
- the Modify-Parameters message has been designed to enable the CAC server to handle all of these possibilities.
- the Modify-Parameters message includes fields: Message Type 1460, Message Number 1462, Connection/Provisioned Service ID 1464, Modification Type 1466, Modification Details 1468.
- the Message Type field 1460 identifies this message as a Modify-Parameters message, whereas the contents of the Message Number and Connection/Provisioned Service ID fields have the same meaning as the corresponding fields in the Setup message.
- the field Modification Type 1466 specifies the kind of change the CAC server wishes to make to the connection or provisioned service identified in the Connection / Provisioned Service ID field 1464. Table 18 gives the relationship between the contents of the Modification Type field and the corresponding parameter modification being sought.
- the contents of the Modification Details field depend on the action being sought by the CAC server (and specified by the value of the Modification Type field.) If the value of Modification Type is 1 or 2 (i.e. adding or deleting packet classifier entries), the Modification Details field appears as shown in FIG. 40B, with the number of entries subfield indicating the number of packet classifier entries being added or deleted. If the value of Modification Type is 3 (i.e. when the traffic descriptor is being changed), the Modification Details field has a structure as shown in FIG. 40A. This field carries the values of the new traffic descriptor parameters.
- the corresponding subfield in the Modification Details field will be filled with its current value (which is being left unchanged.) If the value of the Modification Type field is 4 (adding or changing one or more service options) or 5 (deleting one or more service options), the structure of the Modification Details fields is as shown in FIG. 40C, with the number of options subfield carrying the number of options being added or changed (in case Modification Type is 4), or the number of options being deleted (in case Modification Type is 5.)
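The dispatch on the Modification Type field described above can be sketched as a simple branch. Names and data shapes here (`apply_modification`, dict-based parameter storage) are assumptions for illustration; the Modification Type values follow Table 18 as described in the text.

```python
# Hypothetical sketch of applying a Modify-Parameters request:
# 1/2 add or delete packet classifier entries, 3 replace the traffic
# descriptor, 4 add-or-change service options, 5 delete service options.

def apply_modification(conn, mod_type, details):
    """conn is a dict holding the connection's current parameters."""
    if mod_type == 1:                        # add packet classifier entries
        conn["classifier"].extend(details["entries"])
    elif mod_type == 2:                      # delete packet classifier entries
        conn["classifier"] = [e for e in conn["classifier"]
                              if e not in details["entries"]]
    elif mod_type == 3:                      # replace the traffic descriptor
        conn["traffic_descriptor"] = details["descriptor"]
    elif mod_type == 4:                      # add or change service options
        conn["options"].update(details["options"])
    elif mod_type == 5:                      # delete service options
        for opt in details["options"]:
            conn["options"].pop(opt, None)
    return conn
```

Per the text, a descriptor replacement (type 3) carries the current value in any subfield being left unchanged, so the whole descriptor can simply be overwritten.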
- the CAC server can send a Get-Parameters message to the NIU to request it to send the parameters associated with a connection.
- the Get-Parameters message also has a structure similar to that shown in FIG. 41.
- when an NIU receives a Get-Parameters message, it inspects its local cache to see if the Connection / Provisioned Service Identifier field in the message matches any of the connection identifiers that it has set up. If a match is found, the NIU responds with a Conn-Parameters message carrying the parameters associated with the connection. If no match is found, the NIU responds with a Conn-Parameters message indicating that the connection identifier was invalid (i.e. with all of the relevant fields set to 0).
- the Conn-Parameters message uses the value contained in the Message Number field of the Get-Parameters message to help the CAC server relate the response (the Conn-Parameters message) to the correct request.
- the CAC server can send a Get-Parameters message to an NIU with the connection identifier parameter set to a wildcard. In this case, the message indicates a request for parameters associated with all connections established at the NIU.
- the NIU then responds with a Conn-Parameters message carrying the parameters associated with all the connections established at the NIU. To allow for the desired flexibility in this message, it has the structure shown in FIG. 43.
- a Connection Parameter Set includes subfields: Connection/Provisioned Services ID 1482, Service Type 1484, Traffic Descriptor 1486, Packet Classifier 1488, Service Specific Options 1490.
- the message format shown in FIG. 43 provides the needed flexibility to allow the NIU to respond to different versions of the Get-Parameters message and also to be able to provide the various pieces of information associated with a connection or a provisioned service that are stored in its local memory.
- when the Get-Parameters message asks for the parameters of a specific connection identified by its ID, if the NIU has a connection with that identifier, it sends a Conn-Parameters message with the field "Number of Conn. Parameter Sets in Message" equal to 1, followed by the parameters associated with that connection. If the NIU does not have a connection with the specified identifier, it responds with a Conn-Parameters message where the "Number of Conn. Parameter Sets in Message" field is set to 1, followed by the Connection Parameters field in which the Connection ID is the same as what was specified in the Get-Parameters message, but the rest of the fields are all set to 0. Finally, if the Get-Parameters message uses a wildcard in the Connection Identifier field, the "Number of Connection Parameter Sets in Message" field in the Conn-Parameters response is set equal to the number of connections that have been established at the NIU, followed by that many sets of connection parameters. The field "Total Number of Connections" in all of these cases is set equal to the number of connections / provisioned service instances established at the NIU. The structure of the connection parameter subfields and their contents are similar to those of the corresponding fields in a Setup message.
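The three Conn-Parameters response variants just described can be sketched in one function. Field names and the `"*"` wildcard encoding are assumptions; the logic follows the text: one set for a known ID, one zero-filled set echoing an unknown ID, and one set per connection for the wildcard.

```python
# Hypothetical sketch of an NIU building a Conn-Parameters response to the
# three Get-Parameters variants (specific ID, unknown ID, wildcard).

WILDCARD = "*"

def conn_parameters_response(established, conn_id, message_number):
    """established maps connection IDs to their parameter sets."""
    total = len(established)
    if conn_id == WILDCARD:                 # parameters of every connection
        sets = list(established.values())
    elif conn_id in established:            # parameters of one connection
        sets = [established[conn_id]]
    else:                                   # unknown ID: echo it, zero the rest
        sets = [{"conn_id": conn_id, "service_type": 0,
                 "traffic_descriptor": 0, "classifier": 0, "options": 0}]
    return {"message_type": "Conn-Parameters",
            "message_number": message_number,  # ties response to the request
            "total_connections": total,
            "num_parameter_sets": len(sets),
            "parameter_sets": sets}
```

Echoing the request's Message Number is what lets the CAC server relate each response to the correct Get-Parameters request.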
- a segment is a portion of a cable that connects a network element to its next upstream neighbor. Because of the constraints on topology and link speeds described above, a critical segment is defined as that portion which brings upstream traffic to an element at a speed which is lower than the speed at which the traffic is going to be carried beyond that element.
- the critical segments are a function of topology and can be identified by processing the topological data that is available at the Tag/Topology server of the Access Network.
- An exemplary topology is shown in FIG. 44.
- the elements include head end router 1200, DSs 1202, SASs 1204 and NIU 1206.
- Segments A, B, C are higher speed (1 Gbps) than segments D, E, F, G, H, I, J, K, L, M (100 Mbps). It can be seen that segments A, D, H, K are critical segments whereas the remaining segments are not.
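The critical-segment rule can be sketched directly from parent-child topology data of the kind the Tag/Topology server supplies. The element names, segment layout, and head-end uplink speed below are assumptions loosely modeled on FIG. 44; the definition applied is the one above: a segment is critical when its speed is lower than the speed at which its parent element carries traffic further upstream.

```python
# Hypothetical sketch: identify critical segments from (segment, parent,
# child, speed) topology records.

HEADEND_UPLINK_MBPS = 10_000   # assumed WAN-side speed above the head-end

# (segment name, parent element, child element, link speed in Mbps)
SEGMENTS = [
    ("A", "HE", "D1", 1000), ("B", "D1", "D2", 1000), ("C", "D2", "D3", 1000),
    ("D", "D1", "S1", 100), ("E", "S1", "S2", 100), ("F", "S2", "S3", 100),
    ("G", "S3", "N1", 100), ("H", "D2", "S4", 100), ("I", "S4", "S5", 100),
    ("J", "S5", "S6", 100), ("K", "D3", "S7", 100), ("L", "S7", "S8", 100),
    ("M", "S8", "N2", 100),
]

def critical_segments(segments, root="HE", uplink=HEADEND_UPLINK_MBPS):
    # Speed at which each element forwards traffic upstream = speed of the
    # segment above it; the head-end forwards at its WAN uplink speed.
    upstream_speed = {child: mbps for _, _, child, mbps in segments}
    upstream_speed[root] = uplink
    return {name for name, parent, _, mbps in segments
            if mbps < upstream_speed[parent]}
```

With this assumed layout the result is the set {A, D, H, K}, matching the segments called out as critical above.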
- CAC Server Data Requirements: the CAC server maintains four sets of data: NIU data, End User - NIU mappings, resource utilization data, and connection data.
- the NIU data identifies for each NIU the critical segments through which data injected into the Access Network by that NIU would traverse. This data is of the form shown in Table 19, which associates each NIU with its list of critical segments.
- the NIU data can easily be gleaned from the topological data received from the Tag/Topology server. This data can be set up at the time of system set up and refreshed periodically.
- End User - NIU mappings are similar to the ARP caches maintained by edge routers and are maintained by the CAC server in a similar manner. These mappings contain the association between end user IP addresses and the identifiers of the corresponding NIU through which data destined for that end user needs to pass. Since these associations need to be learned in the same way ARP caches are built, the CAC server implements an ARP-like protocol in order to learn these mappings as described herein.
- Resource utilization data stores the utilization state for each of the critical segments.
- the parameters needed to define the utilization state of a critical segment depend on the connection admission control algorithm used to decide whether a connection request is to be granted.
- the connection admission control algorithm accepts or denies a connection request on the basis of the total bandwidth utilization due to the top three QoS classes to be supported on the Access Network.
- the utilization state of a critical segment is the three-tuple (U1, U2, U3), where the numbers U1, U2, U3 respectively denote the total bandwidth utilization due to the three QoS classes on that critical segment.
- Connection data represents the information stored by the CAC server for each of the connections / provisioned services set up on the portion of the Access Network being controlled by the CAC server. This information enables the CAC server to respond to call agent (or provisioning server) messages involving these connections and identify the resources being used by each of them so that when one of them is terminated, the CAC server can release the resources that were set aside for that connection and update the utilization state of the critical segments involved in that connection.
- the connection data maintained in the CAC server includes the following fields: call agent / provisioning server ID, connection / provisioned service ID, connection state, service type, NIU ID, critical segments list, original traffic descriptor, derived traffic descriptor, packet classifier information and service specific options.
- in the connection data, the meanings of the fields call agent / provisioning server ID and connection / provisioned service ID are clear from the above description of the Resource_Request and other messages received from the call agent / provisioning server.
- the meanings of the fields NIU ID and critical segment list are also clear. They respectively refer to the NIU through which the connection passes and the list of the critical segments traversed by the connection.
- the field connection state refers to the state of the connection. This field can take one of three values: NULL, RESERVED and COMMITTED. It is NULL when an entry for the connection is created but resources (on its critical segments) have not been reserved by the CAC server for the use of this connection.
- when the CAC server reserves resources for the connection on its critical segments and updates the utilization state of these segments, the connection state is changed to RESERVED.
- when the CAC server receives a Resource_Commit message from the call agent / provisioning server and responds to it with a Commit_Confirm message, it changes the state of the connection to COMMITTED.
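The NULL, RESERVED, COMMITTED progression can be captured in a minimal state machine. The class and method names here are assumptions; the states and their transitions follow the text.

```python
# Hypothetical sketch of the per-connection state kept in the CAC server's
# connection data: NULL -> RESERVED (resources reserved on critical
# segments) -> COMMITTED (Resource_Commit received, Commit_Confirm sent).

class ConnectionRecord:
    def __init__(self, conn_id):
        self.conn_id = conn_id
        self.state = "NULL"   # entry created, no resources reserved yet

    def reserve(self):
        # CAC server has reserved resources on the critical segments
        assert self.state == "NULL"
        self.state = "RESERVED"

    def commit(self):
        # Resource_Commit received and Commit_Confirm sent
        assert self.state == "RESERVED"
        self.state = "COMMITTED"
```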
- the Original Traffic Descriptor field has five subfields - Sustained Rate, Peak Rate, Maximum Burst Size, Maximum Packet Size and Delay Parameter - which store the values of the corresponding parameters associated with a connection or provisioned service.
- the CAC Server receives these values from the Call Agents or Provisioning Servers via the Resource_Request and other messages and uses these values while interacting with these agents.
- the Derived Traffic Descriptor field has four subfields - Sustained Rate, Maximum Burst Size, Maximum Packet Size and QoS Class.
- the Sustained Rate subfield of the Derived Traffic Descriptor field represents the effective bandwidth associated with that connection or provisioned service.
- the QoS Class subfield represents the QoS class accorded to the connection / provisioned service by the CAC server.
- the Maximum Burst Size and Maximum Packet Size subfields have the same interpretation as the corresponding subfields of the Original Traffic Descriptor.
- the CAC Server determines the effective bandwidth and QoS Class associated with a connection / provisioned service as functions of the various subfields of the Original Traffic Descriptor, and uses the Derived Traffic Descriptor in its messages to the NIU. The Original Traffic Descriptor remains hidden from the NIU.
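The construction of a Derived Traffic Descriptor can be sketched as below. The delay thresholds mapping the Delay Parameter onto a QoS class are invented for illustration, and the sustained rate is used directly as the effective bandwidth (the simplification described later in this section); only the subfield list and the hiding of the Original Traffic Descriptor from the NIU come from the text.

```python
# Hypothetical sketch of deriving the NIU-facing traffic descriptor from
# the Original Traffic Descriptor held by the CAC server.

def derive_descriptor(original):
    """original has sustained_rate, peak_rate, max_burst, max_packet,
    delay_ms subfields."""
    delay = original["delay_ms"]
    if delay <= 10:              # assumed threshold for the top QoS class
        qos_class = 1
    elif delay <= 100:           # assumed threshold for the middle class
        qos_class = 2
    else:
        qos_class = 3
    return {
        "sustained_rate": original["sustained_rate"],  # effective bandwidth
        "max_burst": original["max_burst"],
        "max_packet": original["max_packet"],
        "qos_class": qos_class,
        # peak_rate and delay_ms are deliberately absent: the Original
        # Traffic Descriptor remains hidden from the NIU
    }
```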
- CAC Server Algorithms The algorithms that are used at the CAC server are described in relation to the tasks that are performed by these algorithms.
- the raw topology data received from the Tag/Topology server contains a list of all parent-child pairs and the speed of the link connecting the parent to the child. In this terminology, if two devices are immediate neighbors of one another, the upstream device is considered the parent, and the downstream device its child. Once the critical segments and their associated link speeds are identified, the CAC server can build the resource utilization data for these segments.
- for each NIU, this algorithm identifies the critical segments on the path between the NIU and the head-end. This algorithm has much in common with the algorithm for identification of critical segments, so the two can be combined.
- the CAC server maintains mappings that are similar in nature to the IP Address - MAC Address mappings maintained in an ARP cache.
- the CAC server uses an algorithm to maintain these mappings in a cache and to obtain new ones that are missing in the cache. If a mapping has not been used for a certain period of time, it will be deleted from the cache to prevent old, out-of-date mappings from misdirecting connections.
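The idle-time eviction just described is analogous to ARP-cache aging and can be sketched with an injectable clock (the class and parameter names are assumptions):

```python
# Hypothetical sketch of the CAC server's end-user-to-NIU mapping cache
# with idle-time expiry, so stale mappings cannot misdirect connections.

import time

class NiuMappingCache:
    def __init__(self, max_idle_s=300.0, clock=time.monotonic):
        self.max_idle_s = max_idle_s
        self.clock = clock              # injectable for testing
        self._entries = {}              # end-user IP -> (NIU id, last used)

    def learn(self, user_ip, niu_id):
        self._entries[user_ip] = (niu_id, self.clock())

    def lookup(self, user_ip):
        entry = self._entries.get(user_ip)
        if entry is None:
            return None                 # mapping missing: must be learned
        niu_id, last_used = entry
        if self.clock() - last_used > self.max_idle_s:
            del self._entries[user_ip]  # stale mapping: evict it
            return None
        self._entries[user_ip] = (niu_id, self.clock())  # refresh on use
        return niu_id
```

A miss (None) is the point at which the ARP-like discovery protocol described later would be invoked to learn the mapping.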
- using the appropriate mappings, the CAC server first determines the NIU through which the connection would pass and the critical segments on its path.
- the CAC server invokes this algorithm to calculate the effective bandwidth associated with this connection.
- using the delay parameter of the traffic descriptor, the CAC server identifies the QoS class to be assigned to the connection.
- the effective bandwidth computations take into account the sustained throughput rate and the peak rate and the maximum burst size for the connection.
- the effective bandwidth of a connection lies between the sustained throughput rate and the peak rate.
- the effective bandwidth is the same as the (constant) bit-rate associated with that connection.
- the CAC Server constructs the Derived Traffic Descriptor for a connection after determining its effective bandwidth and QoS Class.
- the Sustained Rate parameter of a connection's Derived Traffic Descriptor is the same as its effective bandwidth.
- the shaping parameters associated with the connection namely the token bucket rate and token bucket size, are computed using the sustained throughput rate and maximum burst size parameters of the connection's derived traffic descriptor.
- the Sustained Rate parameter of a connection's Original Traffic Descriptor is used as its effective bandwidth. This eliminates the need for complex algorithms that are typically needed for effective bandwidth computation. Also, with this definition, effective bandwidths become additive, which leads to significant simplification of the connection admission control algorithm.
- the connection admission control algorithm of the CAC server is used for determining whether a connection request can be granted or denied based on the current utilization levels of the critical segments of the Access Network.
- the connection admission control algorithm maintains an "admissible region" for each critical segment.
- the admissible region for a critical segment can be represented by a region in a three dimensional space where each dimension represents bandwidth utilization on that critical segment due to one of the three top QoS classes. In this representation of the admissible region, the effective bandwidth of a connection is its sustained throughput rate.
- after the CAC server has identified the critical segments associated with a connection and its effective bandwidth, it uses the latter in conjunction with the existing utilization levels for the top three QoS classes on each of the critical segments to see if admitting the connection would lead to utilization levels outside of the admissible region on any of the critical segments. If the admissible region is violated in this manner on any of a connection's critical segments, the connection request is denied. Otherwise, the connection request is granted.
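With additive effective bandwidths, the admission check above reduces to a per-segment, per-class sum. The sketch below assumes the simplest admissible region, a per-class bandwidth cap on each segment; the cap values and function signature are illustrative assumptions.

```python
# Hypothetical sketch of the per-critical-segment admission check: deny if
# any segment would leave the admissible region, otherwise grant and update
# the (U1, U2, U3) utilization state of every segment on the path.

def admit(connection_mbps, qos_class, segments, utilization, limits):
    """utilization[s] is the (U1, U2, U3) tuple for segment s;
    limits[s] caps each class's utilization on segment s."""
    idx = qos_class - 1
    for s in segments:                  # check every critical segment first
        if utilization[s][idx] + connection_mbps > limits[s][idx]:
            return False                # admissible region violated: deny
    for s in segments:                  # grant: update utilization state
        u = list(utilization[s])
        u[idx] += connection_mbps
        utilization[s] = tuple(u)
    return True
```

Checking all segments before updating any of them keeps the utilization state untouched on a denial, matching the description of the utilization-update algorithm below.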
- This algorithm updates the utilization data for the critical segments in the Access Network whenever a connection is established or torn down.
- utilization is used in a rather general sense here, and, depending on the effective bandwidth computation being used, may or may not represent true utilization. If sustained throughput rate is used as the effective bandwidth of a connection, the utilization level on a critical segment due to a given QoS class represents the true bandwidth utilization due to that class on that segment.
- the CAC server communicates with three sets of entities: call agents, other servers and network elements.
- the signaling features implemented at the CAC server enable it to communicate with these entities.
- Communication with external entities such as call agents involves traffic flow over the service provider's network, which is non-proprietary, e.g., standard protocols such as TCP or UDP over IP.
- the CAC server implements these protocol stacks to support actual signaling that takes place between it and the call agents.
- the (higher level) signaling protocol is selected such that it is supported by the call agents interacting with the CAC server.
- the resource reservation protocol described above is intended to identify the requirements for the messages that need to be exchanged between the call agents and the CAC server for QoS management purposes. The actual protocol used depends upon what is supported by the call agents.
- Communication between the CAC server and other servers also takes place via the router so that the basic network and transport layer protocols to be used for this communication can be standard, namely TCP or UDP over IP.
- the actual signaling protocol that rides on top of this protocol stack can be proprietary.
- Communications between the CAC server and network elements also take place via the router so that the TCP or UDP over IP protocol stack can be used for transport.
- the NIU is unaware of the user devices hanging off its Home-LAN.
- the CAC server is also unaware of the NIU that serves the device.
- this association is important for determining the state of the network for connection admission, and for provisioning the relevant NIU with QoS and policing parameters and other actions.
- the CAC server may be
- the CAC server sends a DISCOVER_NIU message downstream. This message has the End Device IP Address as the Destination IP Address of the IP packet. The Router constructs a Layer-2 frame.
- the Destination MAC Address is the End Device MAC Address.
- the ODS looks at the source IP Address, identifies this message as a control message and inserts the appropriate control bits and RID corresponding to the end point. The packet is routed through the Access Network (based on the RID) and reaches the NIU.
- the NIU recognizes the control bits and processes the packet (even though the Destination MAC and IP Address belong to the End Device). The frame is parsed and the payload says it is a DISCOVER_NIU message. The NIU responds with an NIU_IDENTIFY message to the CAC server. This message is addressed directly to the CAC server.
- FIG. 45 A second embodiment of a network configuration is shown in FIG. 45, wherein digital transmissions associated with the intelligent network devices are carried over separate optical fibers at rates of about 10 Gbps or higher. This approach has the tremendous advantage of providing very high bandwidth (e.g., 100 Mbps) to each customer.
- a separate optical fiber 111 carries digital transmissions between the intelligent headend 110 and the intelligent ONU 112. This configuration also uses optical fiber 127 between the ONU assembly 312 and trunk amplifier assemblies 314A.
- the trunk amplifier assemblies 314A each include a conventional trunk amplifier 14 and an intelligent optical node 514, also referred to as a mini Fiber Node (mFN), described further below. Because of the use of optical fiber in the feeder, the bandwidths can be increased in both the feeder and
- the mFN is shown in FIG. 46 and provides Gigabit Ethernet and legacy services to 100 homes, relays legacy services to 3 additional ports, and facilitates fiber-optic transmission between the headend and successive mFNs.
- the mFN subsumes the function of legacy Video, DOCSIS, and Telephony RF transmission normally performed by the Distribution Amplifier (DA) found in conventional HFC systems.
- the mFN provides Wavelength Add Drop Multiplexing (WADM) 1220 in both upstream and downstream directions.
- Ethernet traffic from optical transceivers is combined/separated from legacy HFC RF signals traveling to/from the subscriber.
- MAC and QoS operations are performed by an ASIC within the mFN.
- the mFN includes 4 external optical connections 1217 and 5 external RF connections. For simplicity, only one of the 4 downstream RF paths 1219 is shown.
- Optical data flows through WADM structures 1220 in both directions in order to facilitate the adding/dropping of Gigabit Ethernet wavelengths.
- Optical data signals are detected and passed through a GbE circuit 1224 to a media independent interface (MII) circuit 1222 and to an RF modem chip-set 1216 for up-conversion and combination with legacy RF at the triplexing/coupling corporate-feed structure 1214.
- RF data signals are demodulated and passed to the MII 1222, where they are queued and passed to optical transmitters 1218.
- Downstream legacy RF is decoupled from distribution AC via two blocking capacitors, where it is amplified/equalized and passed to the triplexing/coupling corporate-feed structure 1214 for combination with GbE data traffic.
- AC power is fed to the local power supply 1226 for conversion and routing to bias circuitry.
- a very powerful aspect of the Access Network of the present system is that it affords many ancillary monitoring, management, auto-configuration, and fault-recovery capabilities.
- the system provides fault tolerance by virtue of its distributed-gain approach.
- legacy amplification is activated only in select network elements in order to optimize analog video performance with respect to noise and linearity constraints.
- all attenuators are adjusted to provide target RF power levels at every point in the cable plant.
- the attenuators are adaptively controlled as necessary to compensate for temperature variations in amplifiers and coaxial cable plant.
- an activated amplifier fails, it can be bypassed, and an adjacent upstream amplifier is activated in its place to rapidly restore an acceptable quality of analog video to all customers downstream from the failure in question.
- those customers that are otherwise serviced by the failed amplifier will experience temporary degradation in signal quality until a technician is sent for repair work.
- the overall effect of the gain redistribution feature is to drive system availability arbitrarily close to unity.
- Slope equalization is initially set during the legacy bootstrap operation and is also adaptively controlled in order to compensate for drift that can occur from temperature variations of the legacy amplifiers.
- Return-path equalization for ingress control during DOCSIS upstream transmission is initially set during the bootstrap operation using cable loss estimates from within the downstream spectrum, and unless an override option is exercised via CMTS service requests to the EMS, the return-path equalization is adaptively controlled commensurate with downstream adaptations.
- the EMS can take complete control of any and all addressable elements.
- Activation of any given amplifier stage is determined from the measurement of input power at the following SAS, and is chosen so as to meet the minimally acceptable CNR criterion.
- a state machine for legacy RF initialization is shown in FIG. 47.
- the first step is to measure, and then transmit to the parent, the legacy input power level at 1504. This input power data is used by the parent during its through path gain activation decision process.
- the next step at 1506 in the legacy initialization and auto-configuration process involves a restoration of settings from FLASH memory for possible rapid recovery from disruptions such as power hits.
- the channel status monitoring state is entered at 1508, where a comparison of new power and slope data to previous measurement results from FLASH is used to determine whether an adaptation cycle is in order. If there has been a significant change in cable plant characteristics since the last adjustment, a legacy-adjust hold is sent to the child, and another cycle of calibrations is performed.
- the downstream calibration process involves three basic steps.
- the first step 1516 involves determining whether the through path gain is required based on input power telemetry data from the downstream child.
- the second step involves adjustment of attenuator settings 1518 in order to provide 20dBmV at each of the drop ports.
- the last of the three steps involves the adjustment of a
- Upstream return-path adjustments are initiated by the EMS, and require that data from all of the elements in the trunk and branch topology be sent to the EMS prior to onset of the adjustment process.
- EMS initial return-path attenuation settings will be extrapolated from loss data taken during the downstream adjustment phase.
- CMTS initiated service requests via the EMS to fine tune attenuator settings will be accommodated. Otherwise, settings will be adapted in concert with downstream adaptations.
- After completion of legacy bootstrap, non-subscribing ports will be deactivated.
- the gain-redistribution feature of the present system provides a very powerful way to adapt to amplifier-failure events, and therefore facilitate fault tolerance in the legacy channel.
- FIGS. 48A-48C show an exemplary segment of the Access Network that includes a series of network elements 1550A, 1550B, 1550C, 1550D, 1550E.
- Initial amplifier activation choices are driven by the triple-C requirements of the cable-TV industry for analog video quality as indicated by the measurements shown in FIG. 48A for each element.
- the adaptation process is triggered in the event of an otherwise unexplained drop in legacy input power at a given element, e.g., element 1550B, as shown in FIG. 48B.
- the suspect amplifier is bypassed, and the previous upstream amplifier is activated as shown in FIG. 48C. This approach is intuitively correct because the suspect amplifier would not have been originally activated unless absolutely necessary in order to preserve CNR in the first place.
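The gain-redistribution reaction can be sketched as a walk up the chain of elements. The data model (a list of elements with an `amp_active` flag, ordered upstream to downstream) is an assumption for illustration; the action follows FIG. 48: bypass the suspect amplifier and activate the nearest inactive upstream amplifier in its place.

```python
# Hypothetical sketch of gain redistribution after an amplifier failure.

def redistribute_gain(chain, failed_index):
    """chain is an upstream-to-downstream list of dicts with an
    'amp_active' flag; failed_index points at the suspect element."""
    chain[failed_index]["amp_active"] = False   # bypass suspect amplifier
    for i in range(failed_index - 1, -1, -1):   # walk upstream
        if not chain[i]["amp_active"]:
            chain[i]["amp_active"] = True       # activate replacement
            break
    return chain
```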
- the legacy bootstrap automatically ripples downstream as previously explained.
- the Access Network system includes automatic configuration, measurement, adjustment, bootstrapping, provisioning, Built In Self-Test (BIST), and Fault Recovery mechanisms.
- a quadrature nulling based slope equalization technique, combined with sub-ranging AGC capability, adds analog adaptive channel equalization capability to the 16-QAM modem embodiment described above (FIG. 3).
- a BIST feature, when operated in conjunction with a companion modem, includes full analog loop-back BERT capability that can verify full 100 Mb/s and 1 Gb/s functionality of each modem under test at all carrier frequency options.
- an additional bypass switch feature allows for circumventing network elements with failed modems, and therefore, rapidly restoring the Access Network topology. Carrier and Symbol synchronization are performed using traditional PLL and DLL techniques respectively.
- an array of element measurement/adjustment data and keep-alive/heartbeat information is forwarded, via SNMP, to the EMS for storage as objects in a family of Management Information Bases (MIBs).
- the MIBs are also used to perform additional adjustments for return-path equalization of plant ingress in DOCSIS applications.
- the system initialization paradigm involves a ripple-fashion bootstrapping sequence, originating at each ODS and propagating along DS trunks, and in turn, along Tap branches as shown in FIG. 49 which shows a branch or segment 1558.
- BIST is performed between east and west modems of each element (parent 1560, child 1562 and grandchild 1564) to confirm full analog loop-back with acceptable BER performance.
- After completion of BIST, the next step in the bootstrap process involves an upstream handshake operation. Downstream transmissions are not made until a successful upstream handshake is complete. Upstream link establishment includes AGC and slope equalization, carrier recovery, frame recovery, and loop-back transmission to the parent using a complementary carrier frequency. Successful upstream loop-back enables downstream transmissions and invokes upstream link status monitoring. Repeated failed attempts at upstream link establishment, or loss of upstream carrier SYNC, results in a BIST operation being triggered. Failure of BIST, or of the CPU, triggers bypass and manual servicing.
- Downstream link establishment includes complementary transmission at east port modems and default transmission on drop port modems (complementary transmission on all four DS ports).
- carrier and frame SYNC are followed by loop-back through each child.
- East port modem link inactivity timeouts trigger BIST, but drop modem link inactivity does not.
- FLASH memory is used to store element configuration settings for possible rapid recovery after disruptions such as those caused by, e.g., power hits.
- Modem bootstrap is initiated from either power-up, loss of carrier, or EMS reset states 1602.
- a BIST operation is performed 1604, which involves BER testing (BERT) of a full analog loop-back between east and west modems.
- the BERT is performed over the entire carrier frequency and data rate space in both directions in order to ensure full upstream and downstream modem functionality.
- modem bypass mode 1606 is triggered, wherein traffic flows through the device for regeneration at the next modem location.
- legacy gain and slope are set based on input power level and upstream/downstream cable loss data stored in FLASH memory. If the CPU is not functional, then legacy gain is bypassed and equalizer/attenuator settings revert back to hard-wired nominal values.
- the next step in the upstream boot operation involves the range control of an Automatic-Gain Control (AGC) circuit.
- the most sensitive range setting is chosen, and the AGC control voltage is monitored 1608. If the control voltage is within the normal locking range, the most sensitive AGC range setting is appropriate, and carrier recovery can begin at 1614.
- an AGC control voltage outside of the normal locking range may require either a change of AGC range, or a wait for signal return, or both. For example, when the AGC is already in the most sensitive mode (with gain) and the signal power level remains below the AGC-lock limit, then nothing else can be done. In fact, the signal may not be present at all.
- the next step in the process involves recovering carrier.
- previous settings stored in FLASH are used as an initial seed at 1612. If unsuccessful, the local oscillator is cycled through each of the carrier frequency possibilities at 1614, 1616, 1618, 1622 in order to locate the parent's carrier. A correct carrier frequency choice by the local oscillator will enable a PLL to automatically lock onto the incoming signal, which in turn, will cause the Carrier SYNC (CS) flag to be set. On the other hand, failure to recover carrier will eventually result in another BIST operation at 1604.
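The carrier search order just described can be sketched as follows. The function and parameter names are assumptions; the behavior follows the text: try the FLASH-seeded frequency first, then cycle through the remaining candidates, and report failure (which would eventually trigger another BIST pass) if the PLL never locks.

```python
# Hypothetical sketch of carrier recovery: FLASH seed first, then cycle
# through the remaining carrier-frequency options until the PLL locks.

def recover_carrier(flash_seed, candidates, pll_locks):
    """pll_locks(freq) -> True when the PLL locks at that frequency."""
    order = [flash_seed] + [f for f in candidates if f != flash_seed]
    for freq in order:
        if pll_locks(freq):
            return freq   # lock: the Carrier SYNC (CS) flag would be set
    return None           # no lock: fall back to another BIST operation
```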
- slope equalization is performed using a Quadrature Nulling (QN) technique starting at 1620.
- the QN technique involves a four-way handshake, which is facilitated by the parent initially sending only In-Phase data.
- the QN technique involves adaptively adjusting the slope equalizer until Q-Channel integrated data power is at a local minimum. Beyond initial slope adjustment during
- I-Channel data is looped back to the parent on the complementary carrier frequency.
- This upstream transmission mode is sustained until Q-Channel data is received from the parent, which is indicative of successful parental AGC, Carrier SYNC, and slope-equalization operations.
- upon Q-Channel reception from the parent, reciprocation with Q-Channel data loop-back will take place to complete the four-way handshake.
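The four-way QN handshake described above (parent sends I only; child loops I back on the complementary carrier; parent adds Q to signal success; child reciprocates with Q loop-back) can be sketched as a simple state machine. The event names are illustrative, not taken from the specification.

```python
# Minimal state-machine sketch of the four-way QN handshake. Each event
# name below is an assumed label for one leg of the handshake described
# in the text; events arriving out of order do not advance the state.

def qn_handshake(events):
    """Advance through the handshake for a sequence of observed events.
    Returns "complete", or the next expected event if unfinished."""
    order = ["i_from_parent",    # parent sends In-Phase data only
             "i_loopback_sent",  # child loops I back on complementary carrier
             "q_from_parent",    # parent adds Q-Channel data (success signal)
             "q_loopback_sent"]  # child reciprocates with Q loop-back
    state = 0
    for ev in events:
        if state < len(order) and ev == order[state]:
            state += 1
    return "complete" if state == len(order) else order[state]

print(qn_handshake(["i_from_parent", "i_loopback_sent",
                    "q_from_parent", "q_loopback_sent"]))  # complete
```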
- once RF Loop Back (RFLB) is complete (i.e., both I and Q data are present), an automated DLL technique will slide the data in order to align for maximum eye opening, and hence minimum BER.
- framing can take place at 1636. Successful framing will cause the FS flag to be set, at which time a write to FLASH at 1638 will take place in order to store all current settings for possible rapid recovery in the event of power loss or cable cut, for example.
- the steady-state condition involves continuous monitoring of link status at 1642, including CS, FS, slope drift, etc.
- a symbol-SYNC period QN procedure will take place at 1640 until a return to acceptable limits is attained.
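One steady-state monitoring pass can be sketched as below. The drift threshold is an assumption for illustration; the actual acceptable limits are implementation-specific.

```python
# Sketch of one steady-state link-monitor step: while CS and FS remain set
# and slope drift stays within limits, the link is healthy; when drift
# exceeds the limit, a symbol-SYNC period QN procedure runs until the link
# returns to acceptable limits. DRIFT_LIMIT_DB is an assumed bound.

DRIFT_LIMIT_DB = 1.0  # assumed acceptable slope-drift bound (dB)

def monitor_step(cs, fs, slope_drift_db, run_qn):
    """Evaluate link status flags and slope drift for one monitoring pass."""
    if not (cs and fs):
        return "reboot"            # loss of Carrier SYNC or Frame SYNC
    if abs(slope_drift_db) > DRIFT_LIMIT_DB:
        run_qn()                   # symbol-SYNC period QN correction
        return "qn_correction"
    return "steady_state"

print(monitor_step(True, True, 0.2, lambda: None))  # steady_state
```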
- downstream bootstraps begin on the east port and each of the drop ports.
- a downstream bootstrap state machine is shown in FIG. 51.
- the first step at 1706 following the start of the process involves transmitting In-Phase data on a complementary carrier frequency from the east port in order to facilitate slope equalization at the receiving end.
- the first step also involves the restoration of equalizer settings from FLASH in preparation for possible rapid convergence during loop-back from the child.
- the next step involves the same AGC range setting procedure as in the upstream case, and requires a return carrier signal from the downstream modem. As in the upstream case, once the AGC is in lock, it is possible to establish carrier SYNC.
- the correct carrier frequency is known a priori. Therefore, the local oscillator frequency is set to f (which is the same carrier received during the upstream boot operation) in the case of the east modem.
- QN-based slope equalization can be performed. Successful slope equalization will be conveyed to the child by transmitting Q-Channel data as well as I-Channel data.
- the frame recovery process can begin at 1716 as was done in the upstream boot procedure.
- Post boot coaxial cable slope drift will be corrected in exactly the same manner as described in the upstream boot procedure.
- BIST MODEM Built In Self Test
- a BIST capability is included in the present system to facilitate troubleshooting, fault localization, and bypass activation in the event that a given modem is malfunctioning.
- An internal RF coupling path is provided in the network element (FIG. 10) to enable full analog loop-back capability.
- in the BIST mode, BER testing is performed in both directions, at both data rates, and at all four carrier frequencies.
- the BIST operation involves the generation of a Pseudo-Random Bit Sequence (PRBS) in the CPU, which, in turn, is loaded into the ASIC for transmission through the cascade of modems under test. Once received, the data is checked for bit errors, and the process is repeated with several PRBS vectors at each carrier frequency, data rate, and direction setting.
- PRBS Pseudo-Random Bit Sequence
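The BIST sweep described above can be sketched in software as a PRBS generator plus a loop over every direction, data rate, and carrier frequency. This is a sketch under stated assumptions: the PRBS-7 polynomial (x^7 + x^6 + 1), the parameter values, and the `channel` callback are illustrative, not taken from the specification.

```python
# Illustrative BIST sweep: generate a PRBS, send it through a loop-back
# channel, and count bit errors for every combination of direction, data
# rate, and carrier frequency. The LFSR taps implement PRBS-7
# (x^7 + x^6 + 1), chosen here as a representative sequence.

def prbs7(length, state=0x7F):
    """Generate `length` bits of a PRBS-7 sequence from a 7-bit LFSR."""
    bits = []
    for _ in range(length):
        new = ((state >> 6) ^ (state >> 5)) & 1  # feedback from taps 7 and 6
        state = ((state << 1) | new) & 0x7F
        bits.append(new)
    return bits

def bist_sweep(channel, carriers, rates, directions, n=127):
    """Return {(direction, rate, carrier): bit_error_count} over all settings."""
    results = {}
    for d in directions:
        for r in rates:
            for c in carriers:
                tx = prbs7(n)
                rx = channel(tx, d, r, c)   # analog loop-back path under test
                results[(d, r, c)] = sum(a != b for a, b in zip(tx, rx))
    return results

# An ideal loop-back path reports zero bit errors at every setting.
ideal = lambda tx, d, r, c: tx
res = bist_sweep(ideal, carriers=[1, 2, 3, 4],
                 rates=["low", "high"], directions=["up", "down"])
print(all(v == 0 for v in res.values()))  # True
```

A failing modem would surface as a nonzero error count at specific (direction, rate, carrier) settings, which is what enables fault localization and bypass activation.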
- FIG. 52 shows a segment or branch of parent, child, grandchild devices 1750 A, 1750B, 1750C, respectively.
- a computer usable medium can include a readable memory device, such as a hard drive device, a CD-ROM, a DVD-ROM, or a computer diskette, having computer readable program code segments stored thereon.
- the computer readable medium can also include a communications or transmission medium, such as a bus or a communications link, either optical, wired, or wireless, having program code segments carried thereon as digital or analog data signals.
- the programs defined herein are deliverable in many forms, including but not limited to a) information permanently stored on non-writeable storage media such as ROM devices, b) information alterably stored on writeable storage media such as floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media, or c) information conveyed to a computer through communication media, for example using baseband signaling or broadband signaling techniques, as in an electronic network such as the Internet or telephone modem lines.
- the operations and methods may be implemented in software executable by a processor or as a set of instructions embedded in a carrier wave. Alternatively, the operations and methods may be embodied in whole or in part using hardware components, such as Application Specific Integrated Circuits (ASICs), state machines, controllers or other hardware components or devices, or a combination of hardware, software, and firmware components.
- ASICs Application Specific Integrated Circuits
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP01973405A EP1350391A2 (en) | 2000-09-22 | 2001-09-20 | Broadband system with intelligent network devices |
AU9298701A AU9298701A (en) | 2000-09-22 | 2001-09-24 | Broadband system with intelligent network devices |
Applications Claiming Priority (29)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US23468200P | 2000-09-22 | 2000-09-22 | |
US60/234,682 | 2000-09-22 | ||
US27881101P | 2001-03-26 | 2001-03-26 | |
US60/278,811 | 2001-03-26 | ||
US09/952,347 US6867750B2 (en) | 2000-09-12 | 2001-09-12 | Three dimensional display control apparatus and method, and storage medium |
US09/952,482 US7072360B2 (en) | 2000-09-22 | 2001-09-13 | Network architecture for intelligent network elements |
US09/952,479 US20020085589A1 (en) | 2000-09-22 | 2001-09-13 | System and method for assigning network data packet header |
US09/952,207 | 2001-09-13 | ||
US09/952,482 | 2001-09-13 | ||
US09/952,207 US20020105965A1 (en) | 2000-09-22 | 2001-09-13 | Broadband system having routing identification based switching |
US09/952,306 | 2001-09-13 | ||
US09/952,381 US20020075875A1 (en) | 2000-09-22 | 2001-09-13 | Broadband system with transmission scheduling and flow control |
US09/952,480 | 2001-09-13 | ||
US09/952,327 | 2001-09-13 | ||
US09/952,481 US7027394B2 (en) | 2000-09-22 | 2001-09-13 | Broadband system with traffic policing and transmission scheduling |
US09/952,479 | 2001-09-13 ||
US09/952,321 | 2001-09-13 | ||
US09/952,373 US20020124111A1 (en) | 2000-09-22 | 2001-09-13 | System and method for message transmission based on intelligent network element device identifiers |
US09/952,374 US7146630B2 (en) | 2000-09-22 | 2001-09-13 | Broadband system with intelligent network devices |
US09/952,327 US20020097674A1 (en) | 2000-09-22 | 2001-09-13 | System and method for call admission control |
US09/952,322 US20020075805A1 (en) | 2000-09-22 | 2001-09-13 | Broadband system with QOS based packet handling |
US09/952,306 US20020085552A1 (en) | 2000-09-22 | 2001-09-13 | Broadband system having routing identification assignment |
US09/952,321 US7139247B2 (en) | 2000-09-22 | 2001-09-13 | Broadband system with topology discovery |
US09/952,480 US6948000B2 (en) | 2000-09-22 | 2001-09-13 | System and method for mapping end user identifiers to access device identifiers |
US09/952,373 | 2001-09-13 | ||
US09/952,481 | 2001-09-13 | ||
US09/952,381 | 2001-09-13 | ||
US09/952,322 | 2001-09-13 | ||
US09/952,374 | 2001-09-13 |
Publications (3)
Publication Number | Publication Date |
---|---|
WO2002025869A2 true WO2002025869A2 (en) | 2002-03-28 |
WO2002025869A3 WO2002025869A3 (en) | 2002-08-01 |
WO2002025869A8 WO2002025869A8 (en) | 2002-09-26 |
Family
ID=28047108
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2001/029739 WO2002025869A2 (en) | 2000-09-22 | 2001-09-20 | Broadband system with intelligent network devices |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP1350391A2 (en) |
WO (1) | WO2002025869A2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003010925A3 (en) * | 2001-07-10 | 2003-05-15 | Koninkl Philips Electronics Nv | Gateway for interconnecting networks |
NL1027487C2 (en) * | 2004-11-11 | 2006-05-12 | Caiw Netwerken B V | Network distributes radio, television and data signals and comprises one or more stations in tree structure with branches in stations |
CN100463446C (en) * | 2005-08-11 | 2009-02-18 | 中兴通讯股份有限公司 | Method of automatic detection topology, set-up route table and implementing narrow-band service |
CN101287526B (en) * | 2005-05-13 | 2012-05-02 | 微软公司 | Real-time HD TV/video IP streaming to a game console |
US9432187B2 (en) | 2014-04-24 | 2016-08-30 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Data scrambling initialization |
CN114448816A (en) * | 2021-12-30 | 2022-05-06 | 中国航空研究院 | Integrated IP networking method based on heterogeneous data chain |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107959629B (en) * | 2017-11-23 | 2018-09-14 | 林惠平 | Router automatic stand-by system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5774458A (en) * | 1995-12-14 | 1998-06-30 | Time Warner Cable | Multiplex amplifiers for two-way communications in a full-service network |
US5812786A (en) * | 1995-06-21 | 1998-09-22 | Bell Atlantic Network Services, Inc. | Variable rate and variable mode transmission system |
WO1999022528A2 (en) * | 1997-10-24 | 1999-05-06 | Nokia Telecommunications Oy | Intelligent network switching point and control point |
US5963844A (en) * | 1996-09-18 | 1999-10-05 | At&T Corp. | Hybrid fiber-coax system having at least one digital fiber node and increased upstream bandwidth |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5923743A (en) * | 1995-05-15 | 1999-07-13 | Rockwell International Corporation | Single-wire data distribution system and method |
-
2001
- 2001-09-20 EP EP01973405A patent/EP1350391A2/en not_active Ceased
- 2001-09-20 WO PCT/US2001/029739 patent/WO2002025869A2/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5812786A (en) * | 1995-06-21 | 1998-09-22 | Bell Atlantic Network Services, Inc. | Variable rate and variable mode transmission system |
US5774458A (en) * | 1995-12-14 | 1998-06-30 | Time Warner Cable | Multiplex amplifiers for two-way communications in a full-service network |
US5963844A (en) * | 1996-09-18 | 1999-10-05 | At&T Corp. | Hybrid fiber-coax system having at least one digital fiber node and increased upstream bandwidth |
WO1999022528A2 (en) * | 1997-10-24 | 1999-05-06 | Nokia Telecommunications Oy | Intelligent network switching point and control point |
Non-Patent Citations (1)
Title |
---|
See also references of EP1350391A2 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003010925A3 (en) * | 2001-07-10 | 2003-05-15 | Koninkl Philips Electronics Nv | Gateway for interconnecting networks |
NL1027487C2 (en) * | 2004-11-11 | 2006-05-12 | Caiw Netwerken B V | Network distributes radio, television and data signals and comprises one or more stations in tree structure with branches in stations |
CN101287526B (en) * | 2005-05-13 | 2012-05-02 | 微软公司 | Real-time HD TV/video IP streaming to a game console |
CN100463446C (en) * | 2005-08-11 | 2009-02-18 | 中兴通讯股份有限公司 | Method of automatic detection topology, set-up route table and implementing narrow-band service |
US9432187B2 (en) | 2014-04-24 | 2016-08-30 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Data scrambling initialization |
CN114448816A (en) * | 2021-12-30 | 2022-05-06 | 中国航空研究院 | Integrated IP networking method based on heterogeneous data chain |
CN114448816B (en) * | 2021-12-30 | 2023-10-10 | 中国航空研究院 | Integrated IP networking method based on heterogeneous data chain |
Also Published As
Publication number | Publication date |
---|---|
WO2002025869A3 (en) | 2002-08-01 |
WO2002025869A8 (en) | 2002-09-26 |
EP1350391A2 (en) | 2003-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8462626B2 (en) | System and method for mapping end user identifiers to access device identifiers | |
US7027394B2 (en) | Broadband system with traffic policing and transmission scheduling | |
US7835379B2 (en) | Network architecture for intelligent network elements | |
US7146630B2 (en) | Broadband system with intelligent network devices | |
US7139247B2 (en) | Broadband system with topology discovery | |
US20020075805A1 (en) | Broadband system with QOS based packet handling | |
US20020075875A1 (en) | Broadband system with transmission scheduling and flow control | |
US20020105965A1 (en) | Broadband system having routing identification based switching | |
US20020097674A1 (en) | System and method for call admission control | |
US20020124111A1 (en) | System and method for message transmission based on intelligent network element device identifiers | |
US20020085552A1 (en) | Broadband system having routing identification assignment | |
CA2520516C (en) | Methods and devices for regulating traffic on a network | |
US6993050B2 (en) | Transmit and receive system for cable data service | |
US6993353B2 (en) | Cable data service method | |
US7170905B1 (en) | Vertical services integration enabled content distribution mechanisms | |
US20060039380A1 (en) | Very high speed cable modem for increasing bandwidth | |
US20020133618A1 (en) | Tunneling system for a cable data service | |
US20020085589A1 (en) | System and method for assigning network data packet header | |
EP1350391A2 (en) | Broadband system with intelligent network devices | |
Dravida et al. | Broadband access over cable for next-generation services: A distributed switch architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
AK | Designated states |
Kind code of ref document: A3 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
AK | Designated states |
Kind code of ref document: C1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: C1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
CFP | Corrected version of a pamphlet front page | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2001973405 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWP | Wipo information: published in national office |
Ref document number: 2001973405 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: JP |