WO1992021188A1 - Channel utilization method and system for ISDN


Info

Publication number
WO1992021188A1
Authority
WO
WIPO (PCT)
Prior art keywords
data, channels, destination, channel, packet
Application number
PCT/US1991/003613
Other languages
French (fr)
Inventor
Ugo O. Gagliardi
Meichum Hsu
Paul Cummings
Cheng-Hsu Ko
Original Assignee
Bell Atlantic Network Services, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bell Atlantic Network Services, Inc. filed Critical Bell Atlantic Network Services, Inc.
Priority to CA002109632A priority Critical patent/CA2109632C/en
Priority to US08/094,115 priority patent/US5479407A/en
Priority to PCT/US1991/003613 priority patent/WO1992021188A1/en
Priority claimed from CA002109632A external-priority patent/CA2109632C/en
Priority to NZ242531A priority patent/NZ242531A/en
Publication of WO1992021188A1 publication Critical patent/WO1992021188A1/en

Classifications

    • H04Q 11/00 Selecting arrangements for multiplex systems
    • H04Q 11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q 11/0428 Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q 11/0478 Provisions for broadband connections
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H04L 12/56 Packet switching systems
    • H04L 12/5601 Transfer mode dependent, e.g. ATM
    • H04L 2012/5614 User Network Interface
    • H04L 2012/5615 Network termination, e.g. NT1, NT2, PBX
    • H04L 2012/5616 Terminal equipment, e.g. codecs, synch.
    • H04L 2012/5629 Admission control
    • H04L 2012/5631 Resource management and allocation
    • H04L 2012/5632 Bandwidth allocation
    • H04Q 2213/00 Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q 2213/13034 A/D conversion, code compression/expansion
    • H04Q 2213/13141 Hunting for free outlet, circuit or channel
    • H04Q 2213/13176 Common channel signaling, CCS7
    • H04Q 2213/13209 ISDN
    • H04Q 2213/1322 PBX
    • H04Q 2213/13332 Broadband, CATV, dynamic bandwidth allocation
    • H04Q 2213/13384 Inter-PBX traffic, PBX networks, e.g. corporate networks
    • H04Q 2213/13389 LAN, internet
    • H04Q 2213/13396 Signaling in general, in-band signalling

Definitions

  • the present invention relates generally to channel utilization in data and voice communication systems, and more particularly to optimizing channel utilization in Integrated Services Digital Networks (ISDN).
  • ISDN standards are defined by the CCITT (International Telegraph and Telephone Consultative Committee).
  • An ISDN is structured by architecture closely following the OSI Seven Layer Reference Model.
  • the network provides services and the user accesses the services through the user-network interface.
  • a "channel" represents a specified portion of the information carrying capacity of an interface. Interfaces are of two types, Basic Rate ISDN (BRI) and Primary Rate ISDN (PRI).
  • BRI delivers two B-channels, each having a capacity of 64Kbps, capable of transmitting voice and data simultaneously.
  • a 16Kbps D-channel transmits call control messages and user packet data.
  • the PRI provides twenty three B-channels of 64Kbps capacity each for carrying voice, circuit switched data or packet data.
  • the D-channel is a 64Kbps signaling channel.
  • the B and D channels are logically multiplexed together at Layer 1 of the OSI Reference Model.
  • an "intelligent" device such as a digital PBX terminal controller or Local Area Network (LAN)
  • an ISDN terminal TE such as a voice or data terminal
  • NT1 Network Termination
  • Non-ISDN terminals TE may be connected to a Network Termination (NT2) through a Terminal Adapter TA, e.g. over an RS-232 interface.
  • NT2 in turn is connected over an "S/T-Interface", which is a four-wire bus, to a termination NT1 that performs functions such as signal conversion and maintenance of the electrical characteristics of the loop.
  • At the local loop, a two-wire bus, termed the "U-Interface" or "Loop", interconnects NT1 and a Loop Termination (LT) at the central office; a further interface connects the local loop at the carrier end to the exchange switching equipment. Details of this architecture are provided in ISDN: An Overview, Data Pro Research, Concepts & Technologies, MT 20-365, pp. 101-110, published by McGraw Hill Incorporated (December 1988).
  • ISDN can be used to service the needs both of public data telephony and private networks.
  • access to public telephony is performed as shown in Figure 2(a).
  • An initiating host H first issues dialing commands to the public network to set up a connection with the destination host H, and then uses the connection to communicate with the destination host.
  • the connection either is circuit switched or packet switched.
  • Data communication in private networks incorporates routers, generically called Interface Message Processors (IMPs), which may also be used as hosts.
  • a host A communicating with host C will take an appropriate route through an IMP interface.
  • a gateway operating at the upper layers of the OSI reference model accomplishes gateway functions, translating protocols used on dissimilar networks.
  • An ISDN gateway designated by GW in Figure 1, will operate entirely in the digital domain, carrying out all necessary protocol conversion between the host network and ISDN.
  • the gateways GW in Figure 2(b) carry out all necessary protocol conversion to enable the user to transparently access the ISDN.
  • a related commonly assigned application Serial No. to Gagliardi et al. entitled "Method of and System for Accessing Distributed Resources on ISDN", filed on even date herewith, is directed to implementation of the gateway GW to enable computers interconnected to each other and to the ISDN to share resources by issuing operating system commands.
  • One computer can access a storage disk of another computer distributed anywhere in a virtual network established on the ISDN. Each disk throughout the networks has a unique name by which it is accessed. The configuration of the network and number of computers distributed on it are transparent to each user.
  • A copending application entitled "ISDN Interfacing of Personal Computers" describes a gateway which emulates the current interface between a host in the private network and the IMPs that comprise the network, or emulates the interface to a fixed point-to-point communication link, such as a leased T1 line. Users simply replace the connections to IMPs with connections to an ISDN gateway.
  • Figure 3(c) depicts a variation replacing the connection between an IMP or host and the communication line with that to an ISDN gateway; the IMPs and hosts in this configuration are equivalent.
  • Another copending application to Gagliardi et al. entitled "ISDN Interfacing of Local Area Networks", filed on even date herewith and assigned to the common assignee, describes a gateway supporting interconnection among hosts on the ISDN to form a "virtual" LAN.
  • a local area network, or LAN provides a cluster of interconnected hosts (computers), or nodes, on a medium. Each node can communicate with every other node; the network requires no central node or computer.
  • Baseband LAN systems, such as Ethernet, impress data signals directly on the network medium, whereas broadband systems modulate a very high frequency carrier with the data signal before impressing it on the medium.
  • the architecture of each LAN conforms with the OSI reference model.
  • Other physical media standards include Token Ring and Token Bus.
  • FIG. 4 depicts utilization of one type of gateway GW described in copending application entitled "ISDN Interfacing of Local Area Networks" for interconnecting a number of hosts to form a virtual LAN.
  • a variation shown in Fig. 5, provides ISDN gateways GW to interconnect to LANs of a common type or of different types using a common naming convention.
  • a remote host H is connected to a pair of hosts residing on a LAN through first and second gateways GW at the host and LAN branches of the ISDN.
  • the host side gateway GW1 is of a type described in the copending application entitled "ISDN Interfacing of Personal Computers".
  • the LAN side gateway GW2 in the Figure is of a type described in the copending application entitled "ISDN Interfacing of Local Area Networks".
  • Interconnection of hosts with a LAN facility, or formation of a virtual LAN, is based on our discovery that the effective throughput of point-to-point communication on the ISDN is of the same order of magnitude as that of a high speed local area network.
  • This discovery is counterintuitive, as the nominal throughput of an ISDN is 64Kbps whereas that of a local area network is in the range of between 10 and 16Mbps. Accordingly, it would appear that interconnecting a LAN to an ISDN would not be feasible from a performance point of view.
  • one object of the invention is to improve channel utilization for transmission of data through an ISDN.
  • Another object is to optimize utilization of ISDN B-channels to individual computers or a local area computer network.
  • Yet another object is to flexibly allocate bandwidth in an ISDN, avoiding arbitrary limits permitted for various types of data transfer.
  • a B-channel algorithm automatically determines when to allocate B-channels, how many to allocate and for how long. This enables transparent selection of available B-channels to provide the necessary bandwidth for different size file transfers.
  • the channel allocation algorithm comprises two components, a long term allocator and a short term allocator.
  • the long term allocator monitors recent traffic pattern to and from a destination and decides upon the bandwidth and types of channels to be allocated to that destination.
  • the short term allocator monitors the current size of the destination queues and the aging constraints of those queues, and decides whether to temporarily deviate from the decisions of the long term allocator in order to respond to situations that are not well handled by it.
  • An example is temporary or sudden surge of traffic to certain destinations not predicted based on the long term allocator algorithm.
  • long term allocation forms virtual B channels out of a B channel pool to allocate, deallocate or replace a channel so as to dynamically alter bandwidth.
  • Each virtual channel, controlled by a corresponding processor, is operative in activated, deactivated, transient or steady states.
  • in a restricted mode, data to be transmitted is sequentially distributed, one packet at a time in a round-robin fashion, over the activated virtual channels.
  • a far-end virtual processor receives and recovers the data in the same round-robin sequence.
  • in the load-balancing mode, distribution of transmissions over the channels follows a most-empty-channel-first protocol for transmission. The data is recovered by scanning global sequence numbers attached to the data, comparing them, and implementing a trace-back time-out presentation procedure.
  • ISDN traffic is compressed to improve data throughput; typically at least a 2:1 compression of data is carried out by the particular compression technique implemented. Examples in the preferred embodiments are based on run-length encoding and Huffman encoding as well as on variations of the Lempel-Ziv algorithms. Other compression techniques are possible.
  • Another aspect of the invention involves processing of packets by the gateway into "trains" to be shipped to a specific destination over ISDN.
  • trains may be compressed before transmission.
  • Each train has its own header including all packing and compression information to be used by the receiving site to rebuild the original packets and correctly dispatch them.
  • the trains are rebuilt into the original packet sequence as a result of numbering applied to each single packet.
  • any ISDN application is enabled with the ability to add or drop B-channels automatically, based on the amount of traffic encountered.
  • Data throughput between destinations is optimized by packaging variable numbers of packets in trains, compressing the packaged data, and transmitting when a train of data packets becomes available for transmission.
  • Figure 1 is a simplified circuit diagram of an integrated services digital network.
  • Figure 2(a) is a symbolic diagram showing a conventional public circuit or packet switched network.
  • Figure 2(b) is a symbolic diagram of a network functionally equivalent to that shown in Figure 2(a), implemented using ISDN gateways described in a copending application.
  • Figure 3(a) is a symbolic diagram of a conventional private network having private IMP interfaces or routers.
  • Figure 3(b) is a diagram showing a network functionally equivalent to that of Figure 3(a), implemented by ISDN gateways disclosed in a copending application.
  • Figure 3(c) is a variation of Figure 3(b) having a host-IMP interface.
  • Figure 4 is a symbolic diagram showing interconnection of multiple hosts through ISDN gateways to form a virtual network, described in a copending application.
  • FIG. 5 is a symbolic diagram showing interconnection of local area networks through ISDN gateways, described in another copending application.
  • Figure 6 is a symbolic diagram showing interconnection of a remote host to an existing local area network through ISDN gateways.
  • FIG. 7 is a simplified block diagram showing the hardware architecture of an ISDN gateway provided in accordance with the invention.
  • Figure 8 is a diagram of software architecture incorporated in the system master card shown in Figure 7.
  • Figure 9 is a diagram of software architecture shown in one embodiment of a LAN line card of Figure 7.
  • Figure 10 is a diagram showing software architecture incorporated in an ISDN line card shown in Figure 7.
  • Figure 11 is a diagram of software architecture incorporated in the SDLC line card shown in Figure 7.
  • Figure 12 is a diagram showing mapping of software to hardware components within the ISDN gateway.
  • Figure 13a is a diagram showing details of the intelligent ISDN manager and device driver shown in Figure 12.
  • Figure 13b is a symbolic diagram of the communication mode adapter incorporated in the ISDN line card of Figure 12.
  • FIG 14 is a flow chart of algorithms for carrying out LTA bandwidth allocation in accordance with an aspect of the invention.
  • Figure 15 is a flow chart of algorithms for selecting between LTA and STA channel allocation.
  • Figure 16 is a flow chart of channel processes responding to decisions made by both the LTA and STA channel allocation algorithms.
  • Figure 17 is a diagram showing development of a virtual channel and its components.
  • Figure 18 is a diagram of components for carrying out LTA channel allocation in accordance with another aspect of the invention.
  • Figure 19 is a flow chart describing assembly and compression of packet trains for transmission to a new B-channel.
  • Figure 20 is a flow chart showing compression, decompression and resequencing of packets.
  • Figures 21a and 21b are diagrams respectively of a packet and a data frame.
  • Figure 22 depicts a train of packets in accordance with a further aspect of the invention.
  • the invention provides dynamic bandwidth allocation in an ISDN gateway by allocating B-channels automatically in response to traffic monitored at each destination on the ISDN. Packets are packaged into trains, shipped to a destination when a train having a predetermined packet length becomes available. Data embedded in the trains is compressed for efficient bandwidth utilization.
  • an ISDN gateway 100 in accordance with the invention, comprises four hardware elements 102, 104, 106, 108 interconnected by a common bus 110.
  • the elements 102-108 preferably comprise individual circuit cards, although some or all of the elements may be incorporated in a single circuit board.
  • the board or boards preferably reside within a personal computer but alternatively may reside outside the computer.
  • Element 102 is a system master which implements infrequent user functions, such as configuration management and connection requests.
  • the system master 102 furthermore functions as a standard computing platform, emulating an IBM compatible PC or other computer standard.
  • LAN line card 104 implements firmware and hardware for specific IEEE 802 physical and data link protocol together with software carrying out host filtering. Different LAN line cards with corresponding protocols will be required for other LAN connections, e.g., 802.5 Token Ring and 802.3 Ethernet.
  • ISDN line card 106 comprises firmware and hardware implementing the ISDN physical, data link layer (LAPD), and the D-channel layer 3 protocol, and the physical layer of the B-channel.
  • Software incorporated in the ISDN line card 106 implements train protocol and B-channel allocation algorithms as well as data compression algorithms to support a virtual broad band capability of the gateway, also to be described later.
  • SDLC line card 108 includes firmware and hardware to implement the SNA physical and data link (SDLC) layers. This card is optional to the network.
  • system master element 102, shown in Figure 8, is configured with three layers of software underlying the system master layer 102(a) in a stack.
  • the underlying layers comprise call request management, configuration management and monitoring, 102(b), as well as layers 102(c), 102(d) for implementing a computing platform.
  • the software architecture of one embodiment of a LAN line card (LLC) 104 for 802.5 Token Ring comprises a functional layer 104(a) together with a layer 104(b) comprising a host filter, receiving and selectively processing packets addressed to the host, as well as underlying layers 104(c), 104(d) implementing the specific IEEE 802 physical and data link protocol (802.5) for Token Ring.
  • the software architecture of the ISDN line card 106 comprises a functional layer 106(a), together with underlying layers 106(b)-106(f) for carrying out the requirements of the OSI reference model layers associated with the ISDN. These layers include a communication mode adapter at layer 106(b) (see also Figure 13a) and protocol services at layer 106(c).
  • Layer 106d (IIM) carries out the gateway peer protocol and executes buffer allocation algorithms to be described in detail later. The peer protocol, applied on B-channels used by the gateway to implement train packing, compression and error handling, is also described later.
  • Figure 11 depicts the software architecture of the SDLC line card implementing the SNA physical and data link layers for carrying out inter-networking among hosts.
  • a USER software module 110 supports user application software and protocol stack, residing at layer 7 of the OSI Reference Model.
  • Configuration control manager 112 of the system master monitors bandwidth allocation and errors.
  • the system master 102 specifically manages configuration of the gateway, providing status, billing and information tracing on the use of resources in the gateway, as well as access to internal tables used by bridging and routing gateway services. Also provided by the system master are network management services and remote access to other system masters in the same interconnected network. Functions carried out by the system master 102 include updating of records, which are loaded from disk into memory resident data structures when the system is initiated.
  • Configuration control manager 112 controls parameters which define the resources available to the gateway and the access privileges to the system master 102.
  • a connection request manager incorporated in 112, defines how the gateway relates to other gateways for implementing gateway functions to a personal computer or to define an interconnected LAN providing a virtual network, described in the copending applications entitled "ISDN Interfacing of Personal Computers” and “ISDN Interfacing of Local Area Networks", incorporated herein by reference.
  • Configuration control manager 112 maintains a configuration table for parameters controlling the operation of the gateway.
  • the table implemented in software, defines the ISDN configuration including number of channels allocated to the gateway, characteristics of the channels and all directory numbers associated with the gateway. Furthermore stored are characteristics of the types of interconnections of the host side of the network, e.g., RS-232, DMA, etc., and default parameters for incoming calls, e.g., select all non-collect calls, allow collect calls with proper user information and collect calls only from particular numbers. Level of system monitoring, also retained by the control configuration manager tables, controls available options and enables the amount of overhead associated with those options to be limited. Password information for access to the gateway and user profiles determining what kind of activities are allowed to a given user or a class of users also are retained in the manager tables.
  • a connection request manager included in module 112 controls a table containing information required to set up a B-channel to a remote gateway. This information includes at least one remote ISDN number for the connection and any user information required in the layer 3 set-up message.
  • the connection request table defines the interconnected LAN. During operation of the gateway, this table also records whether the connection is established and provides a handle for the connection which can be used to direct packets to a destination manager.
  • Configuration control manager 112 collects statistics for B-channels, including channel utilization, using statistical sampling to determine the percentage of idle capacity on the channel, possibly on a per-connection basis. Since each channel is bi-directional, measurement takes place both on the outgoing and incoming lines. Information also is maintained on any delay for packets in the system, as well as on the "high water mark" in the buffer pool for a given destination and the number of buffers discarded for a destination due to memory constraints. For channels that are connected, the percentage of frames sent or received in error will also be recorded for use, among other things, to determine the type of data link protocol applied to a given destination.
  • Use of the gateway by nodes of a LAN is monitored by a system administrator (not shown in Figure 12) which records the amount of data sent or received by a node on a per-cargo-destination basis, the number of connections initiated due to packets of a given node, and the arrival rate of packets from a particular node.
  • the system administrator further may monitor broadcast or multi-cast transmissions by a particular node, and the number of times a node is not found in tabulated data.
  • the system administrator restricts monitoring to traffic from specific nodes or uses statistical sampling of all traffic in the gateway.
  • LAN card 104 comprises principally a host filter 114.
  • Upon arrival of a packet from another host within a local LAN, the host filter 114 examines the link-level address of the packet. If the packet is addressed to a local host other than the gateway, the packet is discarded. If the packet is addressed to remote hosts covered by a bridging service, it is passed to bridger 118 implemented in ISDN line card 106. If the packet is addressed to the gateway, and the packet indicates a service access point of the gateway protocol services, then it is passed to gateway services module 120. Finally, if the packet is addressed to the gateway, and the packet type is not recognized to be any of the service access points of the gateway protocol services, then it is passed to user module 110 in system master card 102 (LI/B).
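  • The dispatch logic of the host filter can be summarized by the sketch below. It is a hypothetical Python illustration rather than the patent's implementation; the module names (bridger, gateway_services, user_module) and the set of recognized service access points are assumptions.

```python
# Hypothetical sketch of the host filter dispatch described above.
# Module names and the service-access-point set are illustrative assumptions.

GATEWAY_SAPS = {"gw_protocol_services"}   # assumed service access points

def filter_packet(packet, local_hosts, gateway_addr, bridged_remote_hosts):
    dest = packet["link_addr"]
    if dest in local_hosts and dest != gateway_addr:
        return ("discard", None)                 # local host other than the gateway
    if dest in bridged_remote_hosts:
        return ("bridger", packet)               # bridger 118 on the ISDN line card
    if dest == gateway_addr:
        if packet.get("sap") in GATEWAY_SAPS:
            return ("gateway_services", packet)  # gateway services module 120
        return ("user_module", packet)           # user module 110 on the system master
    return ("discard", None)                     # other cases are not specified here
```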
  • IIM: intelligent ISDN manager
  • if the packet is passed to the bridger 118, it is intended to be bridged over to another local area network.
  • if the packet is passed to the gateway services module 120, it is intended to be processed by gateway protocol service programs analogous to those in the user private protocol stack in user module 110.
  • the services provided are those commonly associated with the network or inter-network layer of communication protocols, including name and address resolution and some limited form of routing. An example is a simplified re-implementation of the network layer of an existing communication protocol.
  • These service programs in turn interact with the IIM 116 to pass data to remote hosts (LI/R).
  • a packet passed to the user module 110 is meant to be handled by some application or protocol software resident on the same node as the gateway.
  • An example of such a program is a private protocol router.
  • the user module 110 of system master card 102 interfaces with the communication mode adapter (CMA) 122 of the ISDN line card 106 through a communication mode adapter interface 124.
  • the CMA 122 provides various adaptation modules allowing the user module 110 to interface with destinations, such as a user modem, leased lines or virtual circuit telemetry which adapts existing USER software and interacts with public or private data networks that support virtual circuits.
  • the architecture of the CMA is depicted symbolically in Figure 13a.
  • the gateway services module 120 provides router functions for private protocol stacks. Name and address resolution is performed either at the user module 110, incorporating existing software of the private protocol stacks, or within the gateway services module 120. In the latter case, the gateway services module 120 can be based on an existing private network/inter-network protocol stack, re-implemented as required for particular applications.
  • the intelligent ISDN manager (IIM) 116 receives packets of data and commands to set up virtual circuits.
  • the IIM 116 implements train packaging and B-channel allocation as well as data compression, forming important aspects of the invention. The content and operation of IIM 116 are described in detail hereinafter.
  • Bridger 118 operates at the 802 medium access (MAC) layer between LANs or single nodes. This module is used to create a much larger "virtual LAN” out of an arbitrary number of LANs and single nodes, described in copending application entitled "ISDN Interfacing of Local Area Networks".
  • An 802 MAC level bridge as a method to interconnect level 2 networks in this regard is particularly advantageous for several reasons. First, providing a connection at the MAC level ensures that all higher level protocols available across connected LANs will continue to work in a virtual LAN with no modification, and that network applications based on these higher level protocols will also operate in the same manner. A bridge in this respect supports many types of network systems on different LANs.
  • bridges in general can handle an order of magnitude more packets using the same processing power.
  • MAC level bridges operate in a connectionless mode and depend on high quality transmission lines to minimize retransmissions controlled by higher level protocols on an end-to-end basis.
  • Network routers tend to require much more overhead, because they operate in connection mode and in view of the number of control messages required to allow for dynamic routing. Given the disparity between ISDN bandwidth and LAN bandwidths, the ability to minimize the overhead needed to manage the LAN connection is of considerable value.
  • Peer protocol implemented by IIM 116 ensures that data packets arrive at the remote gateway in the same order as they were received by the local gateway. Peer protocol provides a transport level service to sequence all packets for this purpose.
  • packet sequence refers to an ordered series of packets sharing a common source/destination pair; the series may or may not be continuous. For example, if a gateway on a first LAN receives a series of packets with the same source and destination, after the packets are shipped to a gateway on a second LAN the packets are released as a packet sequence in the same order with or without discarded packets. Packet sequencing does not ensure that all packets from the first LAN to the second will be placed on the second LAN in the original order, only that the order to a particular source/destination pair will be the same.
  • Packet sequences are mapped into cargo destinations, with many packet sequences being designated to a single cargo destination. Mapping of packet sequences to cargo destinations is supported by system master 110. Of particular importance, a cargo destination is divided into many trains of a format shown in Figure 22 in accordance with an aspect of the invention. A train contains an integral number of packets. There is no limit to the size of a train, although the size is predetermined and all gateways preferably use the same maximum train size so that fixed size buffers can be implemented.
  • the content of a train is compressed in accordance with another aspect of the invention, then "framed" by adding start and stop sequences, a checksum and other frame control information.
  • Peer protocol preferably specifies the description of a frame, including details of the frame header and data fields. This model enables the peer protocol to operate at three different levels: PS packets (802 packets having a PS header), trains and frames.
  • Figure 21a shows a PS packet comprising the following fields: PT (packet type), PSN (Packet Sequence Number), LEN (Length of Stripped Packet) and Data.
  • a data frame format shown in Figure 21b as another alternative, comprises the following fields: FS (Frame Start), FT (Frame Type), FSN (Frame Sequence Number), FC (Frame Control), Data, FCS (Frame Checksum) and FE (Frame End).
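  • The field layouts of Figures 21a and 21b can be represented by simple record types as sketched below; the Python types and field widths are illustrative assumptions, since the text does not specify them.

```python
# Illustrative field layouts for the PS packet (Figure 21a) and data frame
# (Figure 21b); widths and types are assumptions.
from dataclasses import dataclass

@dataclass
class PSPacket:
    pt: int        # PT  - packet type
    psn: int       # PSN - packet sequence number (first packet of a sequence is 1)
    length: int    # LEN - length of the stripped packet
    data: bytes    # Data

@dataclass
class DataFrame:
    fs: bytes      # FS  - frame start sequence
    ft: int        # FT  - frame type
    fsn: int       # FSN - frame sequence number
    fc: int        # FC  - frame control (carries, e.g., the "last frame" setting)
    data: bytes    # Data (a packed, optionally compressed train)
    fcs: int       # FCS - frame checksum
    fe: bytes      # FE  - frame end sequence
```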
  • Assuming use of the gateway as a bridge at the 802 medium access control (MAC) level, the following is a description of packet processing as packets are moved from a first LAN to a second LAN.
  • packets addressed to the gateway are forwarded to the system master 102; packets addressed to other nodes on LAN 1 are discarded, and all other packets are passed to the packet sequence manager.
  • Complete 802.5 packets are built by adding start of frame and end of frame fields, completing the host filter 114 functions.
  • Calls from a cargo destination manager determine the cargo destination for the received packet.
  • the size of each queue and the hold time of each queue not being serviced by the train builder are communicated to the B-channel allocation manager (Figure 13(b)).
  • the PS header assigns a packet sequence number to each packet, and the initial packet in a sequence is assigned sequence number 1.
  • the PS header is removed from the 802 packet, checked and may be discarded.
  • the packet is sequenced as follows. If the packet sequence number is 1, the bridge manager is notified that a new conversation with a remote node has been initiated by the remote node, and the bridge manager will update the bridge data base. If the sequence number is not 1, the number is checked to see if there is a gap from the last packet received. If a gap exists, the packet is queued, and a timer is set. If the missing packet arrives before the timer expires, the queue is re-ordered, and the packets are released in order. Otherwise, the sequence number of the received packet is recorded, and the packet is released. If a packet with a lower sequence number subsequently arrives, it is discarded. The 802 packet type is checked and 802 protocol translation is carried out if necessary, i.e., translating an 802.3 packet to an 802.5 packet.
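  • A minimal sketch of this receive-side sequencing logic follows, assuming a per-conversation sequencer object; the timer handling (a simple deadline timestamp) and the bridge_manager interface are assumptions.

```python
# Sketch of the sequencing rules described above; data structures are assumed.
import time

class PacketSequencer:
    def __init__(self, gap_timeout=0.5):
        self.next_expected = 1     # the initial packet in a sequence has PSN 1
        self.pending = {}          # out-of-order packets keyed by sequence number
        self.deadline = None       # when to give up waiting for a missing packet
        self.gap_timeout = gap_timeout

    def receive(self, psn, packet, bridge_manager):
        released = []
        if psn == 1:
            # a new conversation with a remote node: update the bridge data base
            bridge_manager.new_conversation(packet)
        if psn < self.next_expected:
            return released                    # packet with a lower number: discard
        if psn == self.next_expected:
            released.append(packet)
            self.next_expected += 1
            while self.next_expected in self.pending:   # release queued packets in order
                released.append(self.pending.pop(self.next_expected))
                self.next_expected += 1
            if not self.pending:
                self.deadline = None
        else:
            # a gap exists: queue the packet and start the timer if not already running
            self.pending[psn] = packet
            if self.deadline is None:
                self.deadline = time.monotonic() + self.gap_timeout
        return released

    def on_timeout(self):
        # the missing packet never arrived: release the queued packets in order
        # and resume numbering after the highest sequence number seen
        released = [self.pending[k] for k in sorted(self.pending)]
        if self.pending:
            self.next_expected = max(self.pending) + 1
        self.pending.clear()
        self.deadline = None
        return released
```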
  • the train builder requests packets from the cargo destination manager, allocates and fills buffers and maintains the number of bytes used in the buffer. Received buffers are unpacked and passed to the level above after being parsed to ensure that no error in transmission has occurred.
  • a compressor compresses the input buffer using a compression algorithm in a manner to be described later. Incoming data is decompressed and tested to confirm that it will fit in a new buffer; if not, an error is indicated.
  • a data frame is built by adding a frame header and trailer to the active bits in the buffer.
  • the B-channel allocation manager indicates when the frame control bits should be set to "last frame".
  • the frame is gated onto the B-channel for the destination found in the buffer control block.
  • the buffer is freed after being gated to the B-channel.
  • For incoming data a buffer is allocated for the arriving frame.
  • the frame is checked using the FCS, and any frames in error are discarded.
  • Control frames are supplied to the system master.
  • Data frames are supplied to the compression manager, and the B-channel allocation manager receives a "last frame” setting in the frame control (FC) field.
  • An important aspect of the invention is B-channel allocation, implemented by an allocation algorithm to maximize utilization of the channels, minimize response times and the probability of losing data packets due to buffer overflow.
  • the channel allocation algorithm residing within the B-channel manager of IIM 116 ( Figure 12), includes commands to establish user-level connections to destinations and to send and receive data. Data passed to the IIM 116 is queued in destination queues.
  • the channel allocation algorithm monitors the status of the destination queues, and dynamically allocates ISDN channel bandwidth to these queues.
  • the following definitions support the channel allocation algorithm.
  • Associated with each destination d is a "destination queue", denoted as Q(d), consisting of a pool of buffers.
  • the pool of buffers for destination d comprises the messages destined for d.
  • the number of buffers in Q(d) is denoted as b(d); it is assumed that B-channels output from or input to a buffer in parallel, with each buffer having a fixed maximum size.
  • the number of B-channels currently allocated to a destination d is denoted as B(d). If a queue Q(d) becomes too long relative to the number of channels allocated to d, an extra channel is allocated. On the other hand, if Q(d) is relatively short compared to the size of B(d), some channels are released. If no buffers are available for accumulation of a new train at a destination d, a "buffer fault" is created.
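  • The quantities Q(d), b(d) and B(d) can be tracked with bookkeeping of the kind sketched below; the class shape and the buffer-fault signalling are illustrative assumptions.

```python
# Minimal bookkeeping for the quantities defined above; details are assumed.
class DestinationQueue:
    def __init__(self, dest, max_buffers):
        self.dest = dest
        self.buffers = []               # Q(d): pool of buffers holding messages for d
        self.max_buffers = max_buffers
        self.allocated_channels = 0     # B(d): B-channels currently allocated to d

    def b(self):
        return len(self.buffers)        # b(d): number of buffers in Q(d)

    def enqueue(self, train_buffer):
        if len(self.buffers) >= self.max_buffers:
            # no buffer available for a new train: a "buffer fault"
            raise MemoryError("buffer fault for destination %r" % self.dest)
        self.buffers.append(train_buffer)
```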
  • the channel allocation algorithm operates asynchronously to the buffer allocation algorithm described previously.
  • the channel allocation algorithm is partitioned into two components, long term allocation and short term allocation.
  • Long term allocation monitors the recent historical traffic pattern to and from a destination, and decides upon the bandwidth and types of channels to be allocated to that destination.
  • a short term allocation monitors the current size of the destination queues and the aging constraints of these queues, and decides whether to deviate, temporarily, from decisions of the long term allocation algorithm. This enables response to situations that are not well-handled by long term allocation, e.g., situations arising due to a temporary or sudden surge of traffic to certain destinations not predicted based on the long term allocation algorithm.
  • Decisions on bandwidth allocation produced by the long term and short term allocation algorithms preferably are stored in shared memory, and are carried out at convenient intervals by ISDN channel processes to be described later.
  • B-channels can be used in a circuit switched mode or in a packet switched mode.
  • in view of the relatively high set-up/tear-down overhead required, the circuit switched mode preferably is not used to transmit data between low-traffic destinations.
  • the packet switched mode is preferable for transmission to destinations with a low-traffic rate if there is a moderate response time requirement. For destinations that require higher traffic rate or have more stringent response time requirements, use of B-channels in the circuit switched mode is preferable. For destinations that do not have high traffic rate but have very stringent response time requirements, the B-channels should be dedicated in the circuit switched mode and deliberately underutilized.
  • the long term allocation algorithm is summarized in the flow chart of Figure 14.
  • the traffic rate R at each destination is monitored by a software flowmeter, which may be a counter of data quantity arriving at the destination queue within a particular interval of time.
  • Each software flowmeter is polled at fixed intervals (w).
  • the flowmeter is read, then reset, and the reading is used by the long term allocator LTA to determine how many B-channels are to be allocated to that destination.
  • One embodiment of a long term allocation algorithm LTA is as follows. Referring to Figure 14, step 150 reads a traffic rate R for a destination d, wherein R is defined as follows.
  • R = max{ v_in(d, t-w, t)/w, v_out(d, t-w, t)/w }, where t is the time when the meter is read, v_in(d, t-w, t) is the accumulated input traffic volume for destination d between times t-w and t, and v_out(d, t-w, t) is that for the output queue.
  • in step 152 an integer x ≥ 0 is found such that x·hbw_cs ≤ R < (x+1)·hbw_cs, and the residual R − x·hbw_cs is computed, where hbw_cs is the achievable bandwidth, once set up, of a circuit switched mode B-channel. This number in practice is close to 64Kbps.
  • step 154 determines whether the residual is greater than mbw_cs, where mbw_cs is the minimum utilization of a circuit switched B-channel, or whether the response time requirement is more stringent than moderate (step 156).
  • moderate is defined to be the response time that can be offered by a packet switched channel. If either condition is true, then one more B-channel is allocated to destination d (step 158).
  • a fraction f of a packet switched channel is otherwise allocated to destination d, the fraction being determined from the residual and ebw_ps, the effective bandwidth of a packet switched B-channel.
  • in step 160, if destination d is not reachable by packet switched channels, or the packet is not of a nature suitable for transmission through packet switched channels, and the response time requirement is very relaxed (step 162), then a fraction of a circuit switched mode channel is allocated to destination d (step 164).
  • the fraction in this example is determined by the size of the residual.
  • the notation used in the above algorithm is as follows: x and f are decision variables.
  • the algorithm is sensitive to the transmission delay of packet switched channels and the ISDN call set-up and tear down times for circuit switched mode.
  • the former is used to derive the boundary between moderate and stringent response times.
  • the latter is used to calculate the response times obtainable for time-shared fractional allocation of B-channels in circuit switched modes, which in turn determines the value of z which defines the boundary between moderate and very relaxed response time requirements.
  • the basic logic of the algorithm remains unchanged under different performance profiles.
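  • The LTA decision of Figure 14 might be sketched as follows. The inequality defining x, the form f = residual/ebw_ps, the numeric values of mbw_cs and ebw_ps, and the encoding of response time requirements are assumptions reconstructed from the surrounding text, not the patent's exact algorithm.

```python
# Hedged sketch of the long-term allocation (LTA) decision of Figure 14.
HBW_CS = 64_000   # achievable bandwidth of a circuit switched B-channel (bps)
MBW_CS = 16_000   # assumed minimum useful utilization of a circuit switched channel
EBW_PS = 9_600    # assumed effective bandwidth of a packet switched B-channel

def lta_decision(rate_bps, response_time, ps_reachable, ps_suitable):
    """response_time is 'stringent', 'moderate' or 'very_relaxed' (assumed encoding)."""
    x = int(rate_bps // HBW_CS)          # step 152: whole circuit switched channels
    residual = rate_bps - x * HBW_CS
    if residual > MBW_CS or response_time == 'stringent':
        return {'cs_channels': x + 1, 'ps_fraction': 0.0, 'cs_fraction': 0.0}  # step 158
    if ps_reachable and ps_suitable:
        f = residual / EBW_PS            # assumed form of the packet switched fraction
        return {'cs_channels': x, 'ps_fraction': f, 'cs_fraction': 0.0}
    if response_time == 'very_relaxed':
        return {'cs_channels': x, 'ps_fraction': 0.0,
                'cs_fraction': residual / HBW_CS}                              # step 164
    return {'cs_channels': x, 'ps_fraction': 0.0, 'cs_fraction': 0.0}
```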
  • Of importance is the "meter window" w, which is selected such that the algorithm is sufficiently sensitive to short-term fluctuations in traffic intensity but is not too sensitive. If w is too small, a very short-term surge in traffic may result in too many B-channels being allocated and therefore a high set-up/tear-down overhead. If w is too large, the algorithm may not be responsive enough to a short term surge, resulting in fast destination queue build-up; an excessive amount of buffer space may be consumed and response time may be degraded.
  • a method using weighted averaging of traffic in multiple windows is provided. If a window system W with three windows w1, w2 and w3 is used, with weights wt1, wt2 and wt3, where the sum of wt1 through wt3 is unity, the traffic rate can be equated to R(d, t, W) = wt1·R(d, t, w1) + wt2·R(d, t, w2) + wt3·R(d, t, w3).
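  • A short sketch of the weighted-window estimate, assuming R(d, t, w_i) denotes the single-window reading max{v_in/w, v_out/w} defined earlier:

```python
# Weighted multi-window traffic rate; the weights are assumed to sum to one.
def weighted_rate(single_window_rates, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(wt * r for wt, r in zip(weights, single_window_rates))

# e.g. weighted_rate([r_w1, r_w2, r_w3], (0.5, 0.3, 0.2))
```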
  • the long term allocation algorithm functions well in cases where there are sufficient B-channels available and decisions made by the LTA are feasible. This means that the total number of B-channels allocated by the LTA is smaller than the total number of B-channels subscribed, and sufficiently smaller such that the probability that an incoming call request finds all channels busy is very small, and further that the recent past history in fact represents a good basis for allocation. When such conditions are not met, short term allocation (STA) must be implemented.
  • STA makes decisions which override, temporarily, decisions made by LTA.
  • Channel processes, described hereinafter, implement decisions made by LTA under normal circumstances.
  • the B-channel process checks to see if a decision has been made to de-allocate channels from the destination d. If so, it de-allocates itself and finds a new destination for which a decision has been made by LTA to have additional channels allocated.
  • unusual conditions such as a destination queue being ignored for too long, or a B-channel process encountering an empty destination queue, self-adjustment may be performed by the B-channel process to execute decisions rendered by STA.
  • Two examples to which STA responds are (1) any buffer in the destination queue for d is found to exceed an age limit, or (2) a relative queue size quantity, computed from the sizes T_k(d_i) of the trains queued for destination d_i, exceeds a relative queue size limit.
  • step 170 waits until a B-channel is freed from servicing the input buffer and the output buffer, and step 172 then checks whether an STA condition, such as one of the two conditions previously described, has been detected.
  • LTA channel allocation is implemented in step 174 unless an STA condition is detected; otherwise STA channel allocation is carried out in step 176, and the process continues.
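  • The selection between LTA and STA (Figure 15) can be sketched as below; the age limit, the form of the relative-queue-size ratio and the object interfaces are assumptions, since the exact quantities are not fully specified in the text.

```python
# Sketch of steps 170-176: STA overrides LTA only when an STA condition holds.
AGE_LIMIT = 5.0          # assumed age limit (seconds)
REL_QUEUE_LIMIT = 4.0    # assumed relative queue size limit

def sta_condition(queue, now):
    too_old = any(now - buf.timestamp > AGE_LIMIT for buf in queue.buffers)
    relative_size = queue.b() / max(queue.allocated_channels, 1)  # assumed form of the ratio
    return too_old or relative_size > REL_QUEUE_LIMIT

def on_channel_freed(channel, queues, now, lta_decisions):
    flagged = [q for q in queues if sta_condition(q, now)]
    if flagged:
        # step 176: temporary STA override, serve the worst offender first
        target = max(flagged, key=lambda q: q.b())
        channel.serve(target)
    else:
        # step 174: follow the stored LTA decision for this channel
        channel.serve(lta_decisions[channel.id])
```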
  • An important aspect of the invention involves the B-channel processes which respond to decisions made by both the LTA and STA.
  • channel process invocation is triggered in response to end of train transmission on any B-channel for input or output traffic.
  • the first part of the channel process algorithm at 180 prevents locking out trains associated with low traffic rate destinations, or locking out incoming call requests. Executing lockout prevention at the first part of the algorithm eliminates the possibility of lockout. Since this part of the algorithm tends to disturb stable allocations of B-channels to high intensity traffic streams, the probability of invocation preferably is modulated inversely to the number of B-channels assigned to the destination associated with the just-freed B-channel.
  • the probability of invocation is higher if the B-channel is allocated fractionally to a high number of distinct destinations, and lower if more B-channels have been allocated to the same destination. In other words, the higher the traffic rate for a destination d, the lower the probability of causing a "wild" de-allocation of one of its B-channels to handle locked out trains.
  • Sealed trains are time stamped. A check is made by STA over the entire buffer pool for all destinations to see if there are trains whose age exceeds a predetermined age limit. If any are found, the absence of B-channels allocated to their destinations is checked to confirm that the trains are truly locked out. If so, the just-freed B-channel is associated with the respective destinations, originating fractional allocations of B-channels, and the oldest locked out train is selected and transmitted.
  • If the age limit is set at a sufficiently high level and overall traffic does not exceed the capacity of the gateway, this condition is very rare. The condition must be rare to avoid thrashing in the allocation of B-channels, and must be handled as the first priority decision to avoid blocking trains associated with very low packet rates.
  • in step 182 of the flow chart, if a train is waiting for the same destination, the just-freed B-channel is re-used to transmit it. Giving priority to the same destination for re-use of the B-channel minimizes ISDN call overhead.
  • in step 184 two subcases are considered.
  • in the first, the just-freed B-channel is fractionally allocated to n destinations; the remaining n-1 destinations are checked for waiting trains, and if one is found the channel is assigned to it for transmittal. This allows maintenance of "stable" pools of low traffic rate destinations.
  • if no complete train is waiting for destination d_i, the second subcase is executed: Flag Queue Size is checked, and if set, a new destination d_j is computed as the one which contains the largest relative queue size. Thereafter, the channel process remains with decisions made by LTA.
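  • The channel process of Figure 16 might look like the following sketch; the probability modulation, the helper methods on the queue and channel objects, and the tie-breaking rules are illustrative assumptions.

```python
# Sketch of the channel process: lockout prevention (step 180), same-destination
# re-use (step 182) and fractional re-assignment (step 184).  Interfaces assumed.
import random

AGE_LIMIT = 5.0   # assumed age limit for locked-out trains (seconds)

def on_train_complete(channel, dest, queues, lta):
    # step 180: lockout prevention, invoked with probability inversely related
    # to the number of B-channels already serving this destination
    if random.random() < 1.0 / max(dest.allocated_channels, 1):
        locked_out = [q for q in queues
                      if q.oldest_train_age() > AGE_LIMIT and q.allocated_channels == 0]
        if locked_out:
            oldest = max(locked_out, key=lambda q: q.oldest_train_age())
            return channel.transmit_next_train(oldest)
    # step 182: prefer re-using the channel for the same destination
    if dest.has_complete_train():
        return channel.transmit_next_train(dest)
    # step 184: check the other destinations sharing this fractional channel,
    # else pick the destination with the largest relative queue size
    for other in channel.fractional_destinations:
        if other is not dest and other.has_complete_train():
            return channel.transmit_next_train(other)
    busiest = max(queues, key=lambda q: q.b() / max(q.allocated_channels, 1), default=None)
    if busiest is not None and busiest.b() > 0:
        return channel.transmit_next_train(busiest)
    return lta.apply(channel)      # otherwise stay with the LTA decision
```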
  • Run-length encoding refers to replacement of sequences of identical source elements by a triple consisting of a special code indicating a "run" of elements, a count of the run length and the repeated element.
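  • As a concrete illustration only (not the patent's codec), a minimal run-length encoder/decoder of this kind could look as follows; the escape byte and the minimum run length worth encoding are arbitrary choices.

```python
# Minimal run-length codec: runs of identical bytes become (escape, count, value).
ESC = 0x1B
MIN_RUN = 4

def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        run = j - i
        if run >= MIN_RUN or data[i] == ESC:
            out += bytes([ESC, run, data[i]])   # encode the run as a triple
        else:
            out += data[i:j]                    # short runs are copied literally
        i = j
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == ESC:
            out += bytes([data[i + 2]]) * data[i + 1]
            i += 3
        else:
            out.append(data[i])
            i += 1
    return bytes(out)
```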
  • Huffman encoding translates fixed-sized source elements into variable-sized codes, where the size of the output code in bits is approximately the negative logarithm of the probability of the source element. For some kinds of data, the probabilities of elements are well known and fixed translation tables are available.
  • alternatively, a translation table can be computed for a given block of input data and transferred along with the data to allow decompression. This requires two passes over the input during compression, and the table size must be significantly smaller than the data blocks.
  • the Lempel-Ziv algorithms convert variable-sized input elements, or strings, into fixed-sized codes. Allowing long input sequences gives the possibility for higher compression ratios than are possible with fixed-size source element algorithms, such as Huffman encoding.
  • the Lempel-Ziv algorithms are adaptive in that translation tables are built dynamically to reflect characteristics of the input, but require only a single pass during encoding. Decompression can be performed by dynamically building the same translation tables as were used during compression, using information implicit in the encoded data stream such as whether a particular code has been encountered.
  • the typical compression ratios of a compression algorithm selected to compress packet trains vary from about unity, for certain kinds of inherently random data such as floating point data or previously compressed data, to eight for some data bases containing large amounts of redundant or null data.
  • the average compression ratio over mixed input types depends on the chosen samples, but ratios of between two and four for fairly large samples of mixed input found on real computer systems are common; two is proposed as an example in the preferred embodiment.
  • Error detection and correction, as a surrounding protocol on compressed data to allow non-delivery of corrupted blocks, is required since any error leads to catastrophic corruption of the rest of the packet train.
  • ISDN error rates appear to be similar to LAN error rates. No additional error handling mechanisms are required above those that already exist in LAN software.
  • the D-channel has built-in error protection by virtue of using the HDLC protocol which includes error detection, packet sequencing and re-transmission.
  • a controller 170 comprises a virtual B Allocator/Resource Manager 172 which opens a virtual B-channel with control attributes in table I below from the B-channel pool as requested. Manager 172 also assigns a virtual B processor 174 implemented from among a plurality of such processors, and handles related supporting global resources.
  • the virtual B processors 174 at run time or initialization time provide the service of transparent addition and deletion of new or existing component channels in its virtual B-channel based on virtual B attributes.
  • the processors 174 further provide transparent multiplexing of client data.
  • Virtual B-channel monitor 176 carries out traffic and error monitoring of all virtual B-channels and of data traffic flow; allocation, deallocation and replacement of channels to dynamically alter bandwidth are performed by the monitor 176.
  • Call set-up/disconnect processor 178 sets up and disconnects protocol on the D-channel.
  • Processor 178 interfaces to the B-channel hardware to perform the B-channel call set-up or disconnect, and interfaces with the virtual B processors 174, resource manager 172 and the client.
  • HDLC/LAPD links 180 provide the data links for clients, and are optional.
  • Component channels allocated for a virtual B-channel have the following four defined states: activated, deactivated, transient and steady. In the deactivated state, data transfer is forbidden although control protocol exchange is not. Component channels allocated or joined are initially in the deactivated state. A channel is deactivated for a sender immediately after the deactive_b command, described later, is queued for transmission; for a receiver, a channel is deactivated immediately after the command is received. Only a bi-directionally deactivated channel is removed and physically disconnected from the virtual channel of which it was a member.
  • both data and control protocol transfer are permitted.
  • a channel is activated for a sender only after acknowledgement of the transmitted active_b command is received.
  • a channel is activated immediately after this command is received.
  • the transient state is defined by guarding periods before acknowledgements are received from the far-end.
  • a virtual channel is transient when any component channels are transient. No data can be sent over a channel in the transient state.
  • a component channel is in the steady state if it is not in the transient state, that is, if it is activated or deactivated.
  • a virtual channel is in the steady state if all its components are in the steady state.
  • a virtual B-channel is operative in two multiplexing modes, restricted mode and forced load-balancing mode.
  • restricted mode data received for transmission is sequentially distributed, one packet at a time, in a round-robin fashion over the steady activated component channels; deactivated channels are skipped.
  • the far-end virtual B processor 174 receives and recovers data in the same round-robin sequence.
  • in the load balancing mode of operation, the distribution of transmissions over the channels follows a most-empty-channel-first protocol. The receiver recovers the data by the global sequence numbers, by scanning, comparing and carrying out trace-back time-out presentation. During either distribution process, active_b commands, described below, are sent in-line on the deactivated component channels.
  • deactive_b commands are sent in-line over these channels in place of the data in the sequence.
  • Time-fill SYNC_vB packets may be sent over channels in the restricted mode of operation to avoid "holes" in the data stream, and further can be used for time control in the load-balancing mode of operation.
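  • The two multiplexing modes might be sketched as follows, assuming each component channel object exposes a state attribute, a send() method and a queue_depth() reading; these interfaces are assumptions.

```python
# Sketch of the two multiplexing modes described above.
def distribute_restricted(packets, channels):
    active = [c for c in channels if c.state == "activated"]   # deactivated channels are skipped
    if not active:
        raise RuntimeError("no activated component channels")
    for i, pkt in enumerate(packets):
        active[i % len(active)].send(pkt)          # one packet at a time, round-robin

def distribute_load_balancing(packets, channels, next_seq=0):
    active = [c for c in channels if c.state == "activated"]
    for pkt in packets:
        target = min(active, key=lambda c: c.queue_depth())    # most-empty-channel-first
        target.send((next_seq, pkt))               # global sequence number travels with the data
        next_seq += 1
    return next_seq
```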
  • The following command categories, together with detailed formats and bit definitions, are used for the controlling and multiplexing protocol.
  • This protocol assumes the use of flow control to be optional, at least to some degree, depending on the implementation and environment. In the restricted mode, flow control can be handled by the client link level protocol; outstanding transmissions preferably should not result in a difference of 26 generations or more between the receiving loop and the far-end transmitting loop.
  • The protocol formats below are for the restricted mode; the load-balancing mode is identical except that all the numbered command category header-bytes described below are two bytes in length.
  • DATA: 0ssnnnnn - one byte as header of the client's data for transmit.
  • RQST_Bs: 101nnnnn - one byte header of one or multiple control bytes.
  • the first control byte can further specify explicitly whether it is global. If it does not specify, it depends on the address bytes which follow. No address byte for the in-band operation is required, and multiple address bytes imply the in-band line is not explicitly included.
  • the explicit global commands have the leftmost bit on, followed by explicitly specified address bytes. If the leftmost two bits are both on, it is a broadcast, and no address bytes follow. Multiple bytes are useful for off-band control. At present, only one byte of in-band control is employed. The leftmost third bit is reserved, and the fourth bit is for system control point specification. These are external or internal indicated by E and I respectively as follows. The last four bits define the commands of this category. At present, these are:
  • DEACTIV_Bs group uwx1z, the w bit is reserved and u defines two system points.
  • DEL_Bs - (0X00 | 0)
  • DEX_Bs - (DEL_Bs | 1)
  • DEL_Bs, followed by external setup
  • DIX_Bs - (DEX_Bs | 0X10), DIL_Bs, followed by internal setup
  • ACTIV_Bs group uwx0z, the w bit is reserved and u is the same as defined above.
  • ADD_Bs - (0X00 | 2), an external add command.
  • AID_Bs - (ADD_Bs | 0X10), an internal add command.
  • 110nnnnn one byte only, used for time-fill and synchronization.
  • 111nnnnn one byte header, followed by no or multiple optional bytes. This is an acceptance acknowledge in reply to a RQST_Bs, and in-band is implied if no other bytes follow. Similar to the case of RQST_Bs, the first byte which follows can further specify whether it is global. The leftmost two bits have exactly the same meanings, which describe the scope of the possible further succeeding address bytes. The leftmost third bit, if on, turns the whole meaning of the response into a negative acknowledgement, instead. The last five bits are the same as for RQST_Bs.
  • MODE_lbc - in the load-balancing mode, it handshakes with MODE_lbc, with thirteen (13) bit global sequence numbers, when opening a virtual B.
  • the attributes of Table I to follow after these headers are optional.
  • the mode negotiation can proceed only when all component channels are in deactivated states, or UNA will be received. If the optional attributes are not accepted, UNA will also be received. A REJ will be received if the remote end does not support or will not accept this request.
  • MODE_vB resets the generation number or the global sequence number for transmitter and the corresponding variable back to 0 to restart and resets the transmitter and receiver (loop) pointers back to the first component position.
  • This unnumbered positive response means acceptance in the mode negotiation.
  • the same MODE_vB packet will be sent back as bi-direction initialization, following the UA.
  • the UA is also used for acknowledgement to flow control commands RR and RNR. It is used for acceptance acknowledgement to
  • 10010100 a negative response for refusing, used similarly to UA.
  • nnnnn is a modulo 32 correlation generation number, or a zero based global sequence number, assigned to each round of transmissions in the restricted mode of operation, or assigned and incremented by one for each numbered transmission in the load-balancing mode.
  • the "ss" in the DATA command which has four values 11, 10, 01 nd 11, is for frame segmentation.
  • sequence numbers are for data synchronization across the component channels.
  • a receiver variable "nr", used for synchronization, is incremented by one after each round, or each actual numbered reception. This is a recovery process corresponding to the far-end sender's sequence number increment for each round, or each data or numbered command transmission.
  • the selected virtual B processor 174 polls the receiving queue and then steps to the next component channel.
  • in the restricted mode, if nothing arrives in the queue of an activated channel, the processor stays and waits indefinitely.
  • the processor waits indefinitely or steps after a time out period if a trace-back timer has been started, regardless of the channel states.
  • the processor hunts through all following contiguous deactivated channels for any arrival, and does trace-back get, steps or stays until the trace-back timer expires, if arrival time inversion is found.
  • the time-fill SYNC_vB commands are to be received over their preceding channels, if no other commands or data are available through these channels.
  • the receptions in load-balancing mode are always in hunting mode, across the full range of component channels.
  • the processor continues looking for the correct sequence number for presentation. The trace-back time-out presentation scheme applies in either mode. Parameters which characterize a virtual B-channel to be opened and used to monitor the channel hereinafter are listed in Table I.
  • short high_util_thres;        /* high effective % bandwidth threshold */
    short high_busy_util_thres;   /* one high util out of high_busy_util */
    short high_action_thres;      /* act on contig high_busy_util_thres */
    short high_action_permit;     /* allow bandwidth increment actions */
    short min_band_width;         /* minimum allowed bandwidth */
  • short low_util_thres;         /* low effective % bandwidth threshold */
    short low_busy_util_thres;    /* one low util out of low_busy_util */
    short low_action_thres;       /* act on contig low_busy_util_thres */
    short low_action_permit;      /* allow bandwidth decrement actions */
    short set_up_retry_thres;     /* max contig retry before giving up */
    short no_action_thres;        /* no action threshold time */
  • Sensing of traffic for automatic control of bandwidth is carried out as follows (a sketch of this sensing logic follows the end of this list). Assume that the sampling rate is one sample every two seconds, the maximum bandwidth of the virtual channel opened is set to five B-channels, the high_util_thres is 75%, the high_busy_util_thres is 5, the high_action_thres is ten, and the initial channel opened is one B-channel (64Kbps). As the transmission rate increases steadily from an original effective rate of 45Kbps to 55Kbps, a procedure for adding a new line is automatically initiated thirty seconds later. This assumes that high_action_permit is true.
  • a new element channel is added and the new line utilization will be 42.3% if traffic is maintained at 55Kbps.
  • An increase in input data rate thus drives up bandwidth growth . If the input data rate continues to increase, the bandwidth eventually will reach five B-channels as a maximum.
  • Bandwidth of an ISDN line is dynamically allocated by a gateway GW in accordance with the invention which monitors traffic to each destination and in response allocates or deallocates virtual B-channels in accordance with Figure 18.
  • a long term bandwidth allocation algorithm (LTA) shown in Figure 14 determines when and for how long to allocate B-channels to a destination queue under typical traffic conditions; a short term allocation algorithm (STA) allocates B-channels under other conditions (Figure 15).
  • Bandwidth utilization is optimized by packaging and transmitting data packets in "trains". The trains are compressed using a suitable data compression algorithm before transmission from the gateway to the ISDN.
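
To make the threshold-driven bandwidth sensing above concrete, the following C fragment is a minimal sketch only, not part of the disclosed implementation; the helper add_component_channel() and the interpretation of high_busy_util_thres as a sampling window are assumptions introduced solely for illustration.

    /* Minimal sketch of the high-utilization sensing described above.
     * add_component_channel() is a hypothetical hook; the windowing
     * interpretation of high_busy_util_thres is an assumption. */
    extern void add_component_channel(void);

    struct vb_monitor {
        short high_util_thres;      /* high effective % bandwidth threshold */
        short high_busy_util_thres; /* samples forming one busy-utilization window */
        short high_action_thres;    /* contiguous busy windows required to act */
        short high_action_permit;   /* allow bandwidth increment actions */
        short sample_count;         /* samples seen in the current window */
        short busy_count;           /* high-utilization samples in this window */
        short action_count;         /* contiguous busy windows observed */
    };

    /* Called once per sampling interval, e.g., every two seconds. */
    void sense_high_traffic(struct vb_monitor *m, int util_percent)
    {
        if (util_percent >= m->high_util_thres)
            m->busy_count++;
        if (++m->sample_count < m->high_busy_util_thres)
            return;                                 /* window not yet complete */

        /* One high-utilization sample out of high_busy_util_thres marks the
         * window as busy; otherwise contiguity is broken. */
        m->action_count = (m->busy_count > 0) ? m->action_count + 1 : 0;
        m->sample_count = 0;
        m->busy_count = 0;

        if (m->high_action_permit && m->action_count >= m->high_action_thres) {
            add_component_channel();                /* grow the virtual B-channel */
            m->action_count = 0;
        }
    }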

Abstract

An ISDN gateway (GW, 100) dynamically allocates bandwidth by monitoring traffic at each destination queue and in response allocating or deallocating virtual B-channels. A long term allocation (LTA) algorithm is executed during usual traffic conditions to determine when, and for how long, virtual B-channels are to be allocated. During transient conditions, the gateway (GW, 100) executes a short term allocation (STA) algorithm. Bandwidth utilization is optimized by packaging data packets into trains that are transmitted to the destination when the train is completed and upon satisfaction of other conditions. Each train undergoes data compression by execution of a suitable compression algorithm.

Description

CHANNEL UTILIZATION METHOD AND SYSTEM FOR ISDN
Technical Field
The present invention relates generally to channel utilization in data and voice communication systems, and more particularly to optimizing channel utilization in Integrated Services Digital Networks (ISDN).
Background Art
ISDN is a relatively newly developed and emerging field of telecommunications which integrates computer and communications technologies to provide, worldwide, a common, all-digital network. This is based, in part, on standardizing the structure of digital protocols developed by the International Telegraph and Telephone Consultative Committee (CCITT). Despite the implementation of multiple networks within national boundaries, from a user's point of view there is a single, uniformly accessible, worldwide network capable of handling a broad range of telephone, data and other conventional and enhanced services.
A complete description of the architecture of ISDN is beyond the scope of this specification. For details, and for an extensive bibliography of references of ISDN, see Stallings, ISDN, An Introduction, MacMillan Publishing Company, 1989. An ISDN is structured by architecture closely following the OSI Seven Layer Reference Model. Within the framework of ISDN, the network provides services and the user accesses the services through the user-network interface. A "channel" represents a specified portion of the information carrying capacity of an interface. Channels are classified by two types, Basic Rate ISDN (BRI) and Primary Rate ISDN (PRI). BRI delivers two B-channels, each having a capacity of 64Kbps, capable of transmitting voice and data simultaneously. A 16Kbps D-channel transmits call control messages and user packet data. PRI provides twenty three B-channels of 64Kbps capacity each for carrying voice, circuit switched data or packet data. The D-channel is a 64Kbps signaling channel. The B and D channels are logically multiplexed together at Layer 1 of the OSI Reference Model.
With reference to Figure 1, the conventional ISDN interfaces are depicted. At the customer premises, an "intelligent" device, such as a digital PBX terminal controller or Local Area Network (LAN), can be connected to an ISDN terminal TE, such as a voice or data terminal, which is connected to a Network Termination (NT1). Non-ISDN terminals TE may be connected to a Network Termination (NT2) [over the RS-232 Interface] and a Terminal Adapter TA. The NT2 in turn is connected over an "S/T-Interface", which is a four-wire bus, to a termination NT1 that performs functions such as signal conversion and maintenance of the electrical characteristics of the loop.
At the local loop, a two-wire bus, termed the "U-Interface", or "Loop", interconnects NT1 and a Loop Termination (LT) at the central office. Finally, the "V-Interface" is a bus between the local loop at the carrier end and exchange switching equipment. Details of this architecture are provided in ISDN: An Overview, Data Pro Research, Concepts & Technologies, MT 20-365; pp 101-110, published by McGraw Hill Incorporated (December 1988).
ISDN can be used to service the needs both of public data telephony and private networks. In general, access to public telephony is performed as shown in Figure 2(a). An initiating host H first issues dialing commands to the public network to set up a connection with the destination host H, and then uses the connection to communicate with the destination host. The connection either is circuit switched or packet switched. Data communication in private networks incorporates routers, generically called Interface Message Processors (IMPs), which may also be used as hosts. In Figure 3(a), a host A communicating with host C will take an appropriate route through an IMP interface.
To establish communications between a host and the ISDN, a gateway operating at the upper layers of the OSI reference model accomplishes gateway functions, translating protocols used on dissimilar networks. An ISDN gateway, designated by GW in Figure 1, will operate entirely in the digital domain, carrying out all necessary protocol conversion between the host network and ISDN.
Copending application Serial No. to
Gagliardi et al. entitled "ISDN Interfacing of Personal Computers", filed on even date herewith and assigned to the assignee of the present invention, describes a gateway interconnecting personal computers and the ISDN in a manner shown symbolically in Figure 2(b). The gateways GW in Figure 2(b) carry out all necessary protocol conversion to enable the user to transparently access the ISDN.
A related commonly assigned application Serial No. to Gagliardi et al., entitled "Method of and System for Accessing Distributed Resources on ISDN", filed on even date herewith, is directed to implementation of the gateway GW to enable computers interconnected to each other and to the ISDN to share resources by issuing operating system commands. One computer can access a storage disk of another computer distributed anywhere in a virtual network established on the ISDN. Each disk throughout the networks has a unique name by which it is accessed. The configuration of the network and number of computers distributed on it are transparent to each user.
In Figure 3(b), support for existing private networks is transparently accomplished by providing an ISDN gateway GW of a type described in copending application to Gagliardi et al., entitled
"ISDN Interfacing of Personal Computers", which emulates the current interface between a host in the private network and the IMPs that comprise the network, or by emulating the interface to a fixed point-to-point communication link, such as a leased Tl line. Users simply replace the connections to IMPs to connect to an ISDN gateway. Figure 3(c) depicts a variation replacing the connection between an IMP or host and the communication line with that to an ISDN gateway; the IMPs and hosts in this configuration are equivalent.
Copending application Serial No. to
Gagliardi et al., entitled "ISDN Interfacing of Local Area Networks", filed on even date herewith and assigned to the common assignee, describes a gateway supporting interconnection among hosts on the ISDN to form a "virtual LAN". A local area network, or LAN, provides a cluster of interconnected hosts (computers), or nodes, on a medium. Each node can communicate with every other node; the network requires no central node or computer. Base band LAN systems, such as Ethernet, impress data signals directly on the network medium, whereas broad band systems modulate a very high frequency carrier with the data signal before impressing it on the medium. The architecture of each LAN conforms with the OSI reference model. Other physical media standards include Token Ring and Token Bus. Several different network operating systems have been adopted by the industry.
Figure 4 depicts utilization of one type of gateway GW described in copending application entitled "ISDN Interfacing of Local Area Networks" for interconnecting a number of hosts to form a virtual LAN. A variation, shown in Fig. 5, provides ISDN gateways GW to interconnect to LANs of a common type or of different types using a common naming convention. In Fig. 6, a remote host H is connected to a pair of hosts residing on a LAN through first and second gateways GW at the host and LAN branches of the ISDN. The host side gateway GW1 is of a type described in the copending application entitled "ISDN Interfacing of Personal Computers". The LAN side gateway GW2 in the Figure is of a type described in the copending application entitled "ISDN Interfacing of Local Area Networks".
Interconnection of hosts with a LAN facility or forming a virtual LAN is based on our discovery that point-to-point communications on the ISDN is of the same order of magnitude as that of a high speed local area network. This discovery is counterintuitive, as the nominal throughput of an ISDN is 64Kbps whereas that of a local area network is in the range of between 10 and 16Mbps. Accordingly, it would appear that interconnecting a LAN to an ISDN would not be feasible from a performance point of view.
This surprising "natural" performance adaptation between Mbps LANs and Kbps ISDN is due to a number of factors. One factor is the very significant difference between actual point-to-point LAN throughput and LAN rated bandwidth. For example, the throughput of a 10Mbps Ethernet-based LAN typically ranges from 56 to 79Kbps, less than the capacity of a single ISDN B-channel. Our observations indicate the surprising result that while the very large bandwidth can accommodate many simultaneous point-to-point sessions, it is entirely feasible for the ISDN to provide long distance interlinking for such sessions.
In general, utilization of bandwidth between hosts on any network is very inefficient. In a file transfer operation, a large amount of data typically is transferred from one computer to another in a single continuous operation. This can occur when a local computer requires a local copy of remote data or updates a remote centralized data bank with fresh local data. Data transfer may be in the form of a data call lasting at least several hours, with bursts of from 100,000 to 1,000,000 bytes being transferred every ten to twenty minutes. Such an application utilizes the communication link with a duty cycle which is in the range of 30 to 2 seconds, assuming that the operating system of the LAN is capable of transferring 30 to 50Kbps. In this example, file transfer results in very low channel utilization, on the order of less than one percent to slightly greater than five percent. Channel utilization for data base query and transaction processing similarly is on the order of much less than one percent.
Disclosure of the Invention
Accordingly, one object of the invention is to improve channel utilization for transmission of data through an ISDN.
Another object is to optimize utilization of ISDN B-channels to individual computers or a local area computer network.
Yet another object is to flexibly allocate bandwidth in an ISDN, avoiding arbitrary limits permitted for various types of data transfer.
The above and other objects are satisfied by dynamically expanding the bandwidth allocated to an end user connection by using multiple ISDN B-channels simultaneously, and by adapting the allocated bandwidth to meet user requirements. A B-channel algorithm automatically determines when to allocate B-channels, how many to allocate and for how long. This enables transparent selection of available B-channels to provide the necessary bandwidth for different size file transfers.
In accordance with another aspect of the invention, the channel allocation algorithm comprises two components, a long term allocator and a short term allocator. The long term allocator monitors recent traffic patterns to and from a destination and decides upon the bandwidth and types of channels to be allocated to that destination. The short term allocator monitors the current size of the destination queues and the aging constraints of those queues, and decides whether to temporarily deviate from the decisions of the long term allocator in order to respond to situations that are not well handled by it. An example is a temporary or sudden surge of traffic to certain destinations that is not predicted by the long term allocation algorithm.
In a preferred embodiment, long term allocation forms virtual B channels out of a B channel pool to allocate, deallocate or replace a channel so as to dynamically alter bandwidth. Each virtual channel, controlled by a corresponding processor, is operative in activated, deactivated, transient or steady states.
There furthermore are two multiplexing modes, a restricted mode and a forced load-balancing mode. In the restricted mode, data to be transmitted is sequentially distributed, one packet at a time in a round-robin fashion over the activated virtual channels. A far-end virtual processor receives and recovers the data in the same round-robin sequence. In the load-balancing mode, distribution of transmissions over the channels follows a most-empty-channel-first protocol for transmission. The data is recovered by scanning global sequence numbers attached to the data, comparing and implementing a trace-back time-out presentation procedure.
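
By way of illustration only, the following C sketch shows how a transmitter might select the next component channel in each multiplexing mode; the channel structure, its state field and its queue-depth field are assumptions and do not appear in the specification.

    /* Sketch of component-channel selection in the two multiplexing modes.
     * The struct layout is assumed for illustration only. */
    #include <stddef.h>

    enum chan_state { DEACTIVATED, ACTIVATED, TRANSIENT };

    struct component {
        enum chan_state state;       /* steady activated channels carry data */
        size_t queued_bytes;         /* data queued for transmission */
    };

    /* Restricted mode: round-robin over activated channels, skipping
     * deactivated (and transient) ones. */
    int next_restricted(const struct component *c, int n, int last)
    {
        for (int step = 1; step <= n; step++) {
            int i = (last + step) % n;
            if (c[i].state == ACTIVATED)
                return i;
        }
        return -1;                   /* no usable component channel */
    }

    /* Forced load-balancing mode: most-empty-channel-first. */
    int next_load_balanced(const struct component *c, int n)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (c[i].state != ACTIVATED)
                continue;
            if (best < 0 || c[i].queued_bytes < c[best].queued_bytes)
                best = i;
        }
        return best;
    }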
In accordance with a further aspect of the invention, ISDN traffic is compressed to improve data throughput; typically at least a 2:1 compression of data is carried out by the particular compression technique implemented. Examples in the preferred embodiments are based on run-length encoding and Huffman encoding as well as on variations of the Lempel-Ziv algorithms. Other compression techniques are possible.
Another aspect of the invention involves processing of packets by the gateway into "trains" to be shipped to a specific destination over ISDN. These trains may be compressed before transmission. Each train has its own header including all packing and compression information to be used by the receiving site to rebuild the original packets and correctly dispatch them. At the receiving sites, the trains are rebuilt into the original packet sequence as a result of numbering applied to each single packet.
Accordingly, any ISDN application is enabled with the ability to add or drop B-channels automatically, based on the amount of traffic encountered. Data throughput between destinations is optimized by packaging variable numbers of packets in trains, compressing the packaged data, and transmitting when a train of data packets becomes available for transmission.
Still other objects and advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein only the preferred embodiment of the invention is shown and described, simply by way of illustration of the best mode contemplated of carrying out the invention. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the invention. Accordingly, the drawing and description are to be regarded as illustrative in nature, and not as restrictive.
Brief Description of Drawings
Figure 1 is a simplified circuit diagram of an integrated services digital network.
Figure 2(a) is a symbolic diagram showing a conventional public circuit or packet switched network.
Figure 2(b) is a symbolic diagram of a network functionally equivalent to that shown in Figure 2(a), implemented using ISDN gateways described in a copending application.
Figure 3(a) is a symbolic diagram of a conventional private network having private IMP interfaces or routers.
Figure 3(b) is a diagram showing a network functionally equivalent to that of Figure 3(a), implemented by ISDN gateways disclosed in a copending application.
Figure 3(c) is a variation of Figure 3(b) having a host-IMP interface.
Figure 4 is a symbolic diagram showing interconnection of multiple hosts through ISDN gateways to form a virtual network, described in a copending application.
Figure 5 is a symbolic diagram showing interconnection of local area networks through ISDN gateways, described in another copending application.
Figure 6 is a symbolic diagram showing interconnection of a remote host to an existing local area network through ISDN gateways.
Figure 7 is a simplified block diagram showing the hardware architecture of an ISDN gateway provided in accordance with the invention.
Figure 8 is a diagram of software architecture incorporated in the system master card shown in Figure 7.
Figure 9 is a diagram of software architecture shown in one embodiment of a LAN line card of Figure 7.
Figure 10 is a diagram showing software architecture incorporated in an ISDN line card shown in Figure 7.
Figure 11 is a diagram of software architecture incorporated in the SDLC line card shown in Figure 7.
Figure 12 is a diagram showing mapping of software to hardware components within the ISDN gateway.
Figure 13a is a diagram showing details of the intelligent ISDN manager and device driver shown in Figure 12.
Figure 13b is a symbolic diagram of the communication mode adapter incorporated in the ISDN line card of Figure 12.
Figure 14 is a flow chart of algorithms for carrying out LTA bandwidth allocation in accordance with an aspect of the invention.
Figure 15 is a flow chart of algorithms for selecting between LTA and STA channel allocation.
Figure 16 is a flow chart of channel processes responding to decisions made by both the LTA and STA channel allocation algorithms.
Figure 17 is a diagram showing development of a virtual channel and its components.
Figure 18 is a diagram of components for carrying out LTA channel allocation in accordance with another aspect of the invention.
Figure 19 is a flow chart describing assembly and compression of packet trains for transmission to a new B-channel.
Figure 20 is a flow chart showing compression, decompression and resequencing of packets.
Figures 21a and 21b are diagrams respectively of a packet and a data frame.
Figure 22 depicts a train of packets in accordance with a further aspect of the invention.
BEST MODE FOR PRACTICING THE INVENTION
The invention provides dynamic bandwidth allocation in an ISDN gateway by allocating B-channels automatically in response to traffic monitored at each destination on the ISDN. Packets are packaged into trains, which are shipped to a destination when a train having a predetermined packet length becomes available. Data embedded in the trains is compressed for efficient bandwidth utilization.
Network Overview
Referring to Figure 7, an ISDN gateway 100, in accordance with the invention, comprises four hardware elements 102, 104, 106, 108 interconnected by a common bus 110. The elements 102-108 preferably comprise individual circuit cards, although some or all of the elements may be incorporated in a single circuit board. The board or boards preferably reside within a personal computer but alternatively may reside outside the computer.
Element 102 is a system master which implements infrequent user functions, such as configuration management and connection requests. The system master 102 furthermore functions as a standard computing platform, emulating an IBM compatible PC or other computer standard.
LAN line card 104 implements firmware and hardware for specific IEEE 802 physical and data link protocol together with software carrying out host filtering. Different LAN line cards with corresponding protocols will be required for other LAN connections, e.g., 802.5 Token Ring and 802.3 Ethernet.
ISDN line card 106 comprises firmware and hardware implementing the ISDN physical, data link layer (LAPD), and the D-channel layer 3 protocol, and the physical layer of the B-channel. Software incorporated in the ISDN line card 106 implements train protocol and B-channel allocation algorithms as well as data compression algorithms to support a virtual broad band capability of the gateway, also to be described later.
SDLC line card 108 includes firmware and hardware to implement the SNA physical and data link (SDLC) layers. This card is optional to the network.
The contents of elements 102-108 are described in more detail with reference to Figures 8-20. Hardware and software incorporating important aspects of the invention are integrated in ISDN line card 106.
Software Architecture
The software architecture of system master element 102, shown in Figure 8, is configured with three layers of software underlying the system master 102(a) in a stack. The underlying layers comprise call request management, configuration management and monitoring, 102(b), as well as layers 102(c), 102(d) for implementing a computing platform.
In Figure 9, the software architecture of one embodiment of a LAN line card (LLC) 104 for 802.5 Token Ring comprises a functional layer 104(a) together with a layer 104(b) comprising a host filter, receiving and selectively processing packets addressed to the host, as well as underlying layers 104(c), 104(d) implementing the specific IEEE 802 physical and data link protocol (802.5) for Token Ring.
In Figure 10, the software architecture of the ISDN line card 106 comprises a functional layer 106(a), together with underlying layers 106(b)-106(f) for carrying out the requirements of the OSI reference model layers associated with the ISDN. These layers include a communication mode adapter at layer 106(b) (see also Figure 13a) and protocol services at layer 106(c). Layer 106(d) (IIM) carries out gateway peer protocol and executes buffer allocation algorithms to be described in detail later. The peer protocols, applied on B-channels used by the gateway to implement train packing, compression and error handling, are also described later.
Figure 11 depicts the software architecture of the SDLC line card implementing the SNA physical and data link layers for carrying out inter-networking among hosts.
Mapping of software architecture shown in Figures 8-11 to hardware components of Figure 7 is symbolized in Figure 12. Within the system master card 102, a USER software module 110 supports user application software and protocol stack, residing at layer 7 of the OSI Reference Model. Configuration control manager 112 of the system master monitors bandwidth allocation and errors. The system master 102 specifically manages configuration of the gateway, providing status, billing and information tracing on the use of resources in the gateway, as well as access to internal tables used by bridging and routing gateway services. Also provided by the system master are network management services and remote access to other system masters in the same interconnected network. Functions carried out by the system master 102 include updating of records, which are loaded from disk into memory resident data structures when the system is initiated.
When these tables are updated, the disk is automatically updated as well.
Configuration control manager 112 controls parameters which define the resources available to the gateway and the access privileges to the system master 102. A connection request manager, incorporated in 112, defines how the gateway relates to other gateways for implementing gateway functions to a personal computer or to define an interconnected LAN providing a virtual network, described in the copending applications entitled "ISDN Interfacing of Personal Computers" and "ISDN Interfacing of Local Area Networks", incorporated herein by reference.
Configuration control manager 112 maintains a configuration table for parameters controlling the operation of the gateway. The table, implemented in software, defines the ISDN configuration including number of channels allocated to the gateway, characteristics of the channels and all directory numbers associated with the gateway. Furthermore stored are characteristics of the types of interconnections of the host side of the network, e.g., RS-232, DMA, etc., and default parameters for incoming calls, e.g., select all non-collect calls, allow collect calls with proper user information and collect calls only from particular numbers. Level of system monitoring, also retained by the control configuration manager tables, controls available options and enables the amount of overhead associated with those options to be limited. Password information for access to the gateway and user profiles determining what kind of activities are allowed to a given user or a class of users also are retained in the manager tables.
A connection request manager, included in module 112, controls a table containing information required to set up a B-channel to a remote gateway. This information includes at least one remote ISDN number for the connection and any user information required in the layer 3 set-up message. For LAN interconnections, the connection request table defines the interconnected LAN. During operation of the gateway, this table also records whether the connection is established and provides a handle for the connection which can be used to direct packets to a destination manager.
Configuration control manager 112 collects statistics for B-channels, including channel utilization, using statistical sampling to determine the percentage of idle capacity on the channel, possibly on a per-connection basis. Since each channel is bi-directional, measurement takes place both on the outgoing and incoming lines. Information also is maintained on any delay for packets in the system, as well as on the "high water mark" in the buffer pool for a given destination and the number of buffers discarded for a destination due to memory constraints. For channels that are connected, the percentage of frames sent or received in error will also be recorded and used, among other things, to determine the type of data link protocol applied to a given destination.
Use of the gateway by nodes of a LAN is monitored by a system administrator (not shown in Figure 12) which records the amount of data sent or received by a node on a per-cargo destination basis, the number of connections initiated due to packets of a given node, and the arrival rate of packets from a particular node. The system administrator further may monitor broadcast or multi-cast transmissions by a particular node, and the number of times a node is not found in tabulated data. To reduce the overhead of monitoring the bridging operation, the system administrator restricts monitoring to traffic from specific nodes or uses statistical sampling of all traffic in the gateway.
Whereas the system master card 102 performs the necessary configuration and reconfiguration functions for supporting gateway operations carried out by intelligent ISDN manager (IIM) 116 in ISDN line card 106, LAN card 104 comprises principally a host filter 114. Upon arrival of a packet from another host within a local LAN, the host filter 114 examines the link-level address of the packet. If the packet is addressed to a local host other than the gateway, the packet is discarded. If the packet is addressed to remote hosts covered by a bridging service, it is passed to bridger 118 implemented in ISDN line card 106. If the packet is addressed to the gateway, and the packet indicates a service access point of the gateway protocol services, then it is passed to gateway services module 120. Finally, if the packet is addressed to the gateway, and the packet type is not recognized to be any of the service access points of the gateway protocol services, then it is passed to user module 110 in system master card 102 (LI/B) .
More specifically, if the packet is passed to the bridger 118, the packet is intended to be bridged over to another local area network. The link-level address is resolved by the bridger 118 using pre-configured tables. On the other hand, if the packet is passed to gateway services module 120, the packet is intended to be processed by some gateway protocol service programs analogous to those in the user private protocol stack in user module 110. The services provided are those commonly associated with the network or inter-network layer of communication protocols, including name and address resolution and some limited form of routing. An example is a simplified re-implementation of the network layer of an existing communication protocol. These service programs in turn interact with the IIM 116 to pass data to remote hosts (LI/R).
A packet passed to the user module 110 is meant to be handled by some application or protocol software resident on the same node as the gateway. An example of such a program is a private protocol router. The user module 110 of system master card 102 interfaces with communication mode adapter (CMA) 122 of the ISDN line card 106 through a communication mode adaptor interface 124. The CMA 122 provides various adaptation modules allowing the user module 110 to interface with destinations, such as a user modem, leased lines or virtual circuit telemetry which adapts existing USER software and interacts with public or private data networks that support virtual circuits.
The architecture of the CMA is depicted symbolically in Figure 13a. The gateway services module 120 provides router functions for private protocol stacks. Name and address resolutions are performed either at the user module 110, incorporating existing software of the private protocol stacks, or within gateway services module 120. In the latter case, the gateway services module 120 can be based on an existing private network/inter-network protocol stack, re-implemented as required for particular applications. The intelligent ISDN manager (IIM) 116 receives packets of data and commands to set up virtual circuits. The IIM 116 implements train packaging and B-channel allocation as well as data compression, forming important aspects of the invention. The content and operation of IIM 116 are described in detail hereinafter.
Bridger 118 operates at the 802 medium access (MAC) layer between LANs or single nodes. This module is used to create a much larger "virtual LAN" out of an arbitrary number of LANs and single nodes, described in copending application entitled "ISDN Interfacing of Local Area Networks". An 802 MAC level bridge as a method to interconnect level 2 networks in this regard is particularly advantageous for several reasons. First, providing a connection at the MAC level ensures that all higher level protocols available across connected LANs will continue to work in a virtual LAN with no modification, and that network applications based on these higher level protocols will also operate in the same manner. A bridge in this respect supports many types of network systems on different LANs.
Furthermore, once the bridge has determined that a remote conversation is underway, forwarding of remote 802 packets is achieved on the order of 100 instructions, whereas network level routing generally requires on the order of several thousand instructions. This means that bridges in general can handle an order of magnitude more packets using the same processing power.
Finally, MAC level bridges operate in a connectionless mode and depend on high quality transmission lines to minimize retransmissions controlled by higher level protocols on an end-to-end basis. Network routers tend to require much more overhead, because they operate in connection mode and in view of the number of control messages required to allow for dynamic routing. Given the disparity between ISDN bandwidth and LAN bandwidths, the ability to minimize the overhead needed to manage the LAN connection is considerable.
Peer protocol implemented by IIM 116 ensures that data packets arrive at the remote gateway in the same order as they were received by the local gateway. Peer protocol provides a transport level service to sequence all packets for this purpose.
The term "packet sequence", as used herein, refers to an ordered series of packets sharing a common source/destination pair; the series may or may not be continuous. For example, if a gateway on a first LAN receives a series of packets with the same source and destination, after the packets are shipped to a gateway on a second LAN the packets are released as a packet sequence in the same order with or without discarded packets. Packet sequencing does not ensure that all packets from the first LAN to the second will be placed on the second LAN in the original order, only that the order to a particular source/destination pair will be the same.
To support packet sequences, a header is added to each 802 packet identifying packet type, the sequence number for the packet and the packet size. Although 802.3 packets already have a length field, 802.5 packets do not, so the peer protocol as implemented herein must provide one.
Train Packaging
Packet sequences are mapped into cargo destinations, with many packet sequences being designated to a single cargo destination. Mapping of packet sequences to cargo destinations is supported by system master 110. Of particular importance, a cargo destination is divided into many trains of a format shown in Figure 22 in accordance with an aspect of the invention. A train contains an integral number of packets. There is no limit to the size of a train, although the size is predetermined and all gateways preferably use the same maximum train size so that fixed size buffers can be implemented.
The content of a train is compressed in accordance with another aspect of the invention, then "framed" by adding start and stop sequences, a checksum and other frame control information. Peer protocol preferably specifies the description of a frame, including details of the frame header and data fields. This model enables the peer protocol to operate at three different levels, PS packets (802 packets having a PS header), trains and frames. The following diagram shows the transformations that a packet undergoes both as it arrives at a gateway from a local area network and as it is transferred back to a different local area network:

    802.5 LAN 1
        (full 802.5 frame)
    GW LAN Card
        (802.5 packet stripped of fields not covered by the checksum)
    Packet Sequence Builder
        (PS header + stripped 802.5)
    Train Builder
        (buffer of PS packets)
    Destination Known
    Frame Builder
        (Frame Header + compressed Train (bit array) + Frame Trailer)
    B Channel
        (64Kbps bit stream)
    ISDN
Figure 21a shows a PS packet comprising the following fields: PT (packet type), PSN (Packet Sequence Number), LEN (Length of Stripped Packet) and Data. A data frame format, shown in Figure 21b as another alternative, comprises the following fields: FS (Frame Start), FT (Frame Type), FSN (Frame Sequence Number), FC (Frame Control), Data, FCS (Frame Checksum) and FE (Frame End). The following example, which assumes use of the gateway as a bridge at the 802 medium access control (MAC) level, describes packet processing as packets are moved from a first LAN to a second LAN. With reference to Figures 12 and 13(b), packets addressed to the gateway are forwarded to the system master 102; packets addressed to other nodes on LAN 1 are discarded, and all other packets are passed to the packet sequence manager. Complete 802.5 packets are built by adding start of frame and end of frame fields, completing the host filter 114 functions.
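
By way of illustration, the PS packet and data frame headers just described might be represented as follows; the field widths are assumptions, since the specification names the fields but does not fix their sizes.

    /* Possible representations of the PS packet header (Figure 21a) and
     * the data frame header (Figure 21b). Field widths are assumptions. */
    #include <stdint.h>

    struct ps_header {
        uint8_t  pt;   /* PT  - packet type */
        uint16_t psn;  /* PSN - packet sequence number (first packet is 1) */
        uint16_t len;  /* LEN - length of the stripped 802 packet */
        /* stripped 802 packet data follows */
    };

    struct frame_header {
        uint8_t  fs;   /* FS  - frame start sequence */
        uint8_t  ft;   /* FT  - frame type */
        uint16_t fsn;  /* FSN - frame sequence number */
        uint8_t  fc;   /* FC  - frame control, including the "last frame" bit */
        /* compressed train (bit array) follows, then FCS and FE trailer */
    };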
Calls from a cargo destination manager determine the cargo destination for the received packet. The size of each queue and the hold time of each queue not being serviced by the train builder are communicated to the B-channel allocation manager (Figure 13(b)). The PS header assigns a packet sequence number to each packet, and the initial packet in a sequence is assigned sequence number 1. The PS header is removed from the 802 packet, checked and may be discarded.
Assuming the packet is valid, it is sequenced as follows. If the packet sequence number is 1, the bridge manager is notified that a new conversation with a remote node has been initiated by the remote node, and the bridge manager will update the bridge data base. If the sequence number is not 1, the number is checked to see if there was a gap from the last packet received. If a gap exists, the packet is queued, and a timer is set. If the missing packet arrives before the timer expires, the queue is re-ordered, and the packets are released in order. Otherwise, the number of the received packet is recorded, and the packet is released. If a packet with a lower sequence number subsequently arrives, it is discarded. The 802 packet type is checked and 802 protocol translation is carried out if necessary, i.e., translating an 802.3 packet to an 802.5 packet.
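
The sequencing rules just described may be sketched as follows; all helper functions are hypothetical stand-ins for the queue, timer and bridge manager interactions, and the sketch is illustrative rather than a definitive implementation.

    /* Sketch of the receive-side sequencing rules above. The helpers are
     * hypothetical placeholders for the mechanisms described in the text. */
    extern void notify_bridge_manager(void *pkt);
    extern void release_in_order(void *pkt);
    extern void hold_out_of_order(void *pkt);
    extern void start_gap_timer(void);
    extern void discard(void *pkt);

    void sequence_ps_packet(void *pkt, unsigned psn, unsigned *expected)
    {
        if (psn == 1) {
            notify_bridge_manager(pkt);   /* new conversation from a remote node */
            *expected = 2;
            release_in_order(pkt);
        } else if (psn == *expected) {
            *expected = psn + 1;          /* in order: release immediately */
            release_in_order(pkt);
        } else if (psn > *expected) {
            hold_out_of_order(pkt);       /* gap detected: queue the packet */
            start_gap_timer();            /* on expiry, record the number and release */
        } else {
            discard(pkt);                 /* late packet with a lower sequence number */
        }
    }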
The train builder requests packets from the cargo destination manager, allocates and fills buffers and maintains the number of bytes used in the buffer. Received buffers are unpacked and passed to the level above after being parsed to ensure that no error in transmission has occurred. A compressor compresses the input buffer using a compression algorithm in a manner to be described later. Incoming data is decompressed and tested to confirm that it will fit in a new buffer; if not, an error is indicated.
In frame management, a data frame is built by adding a frame header and trailer to the active bits in the buffer. The B-channel allocation manager indicates when the frame control bits should be set to "last frame". The frame is gated onto the B-channel for the destination found in the buffer control block. The buffer is freed after being gated to the B-channel. For incoming data, a buffer is allocated for the arriving frame. The frame is checked using the FCS, and any frames in error are discarded. Control frames are supplied to the system master. Data frames are supplied to the compression manager, and the B-channel allocation manager receives a "last frame" setting in the frame control (FC) field.
An important aspect of the invention is B-channel allocation, implemented by an allocation algorithm to maximize utilization of the channels, minimize response times and the probability of losing data packets due to buffer overflow. The channel allocation algorithm, residing within the B-channel manager of IIM 116 (Figure 12), includes commands to establish user-level connections to destinations and to send and receive data. Data passed to the IIM 116 is queued in destination queues. The channel allocation algorithm monitors the status of the destination queues, and dynamically allocates ISDN channel bandwidth to these queues. The following definitions support the channel allocation algorithm.
A "destination", which is a connection at the gateway level, typically coincides with a physical gateway. If peer protocol is compatible, all user-level connections of various types, such as virtual circuit, modem connections, LAN, packets, etc., can multiplex within the same gateway connection. If the peer gateways are incompatible, multiple gateway-level connections to a single physical gateway are necessary.
Associated with each destination is a "destination queue", denoted as Q(d), consisting of a pool of buffers. The pool of buffers for destination d comprises the messages destined for d. The number of buffers in Q(d) is denoted as b(d); it is assumed that B-channels output from or input to a buffer in parallel, with each buffer having a fixed maximum size.
The number of B-channels currently allocated to a destination d is denoted as B(d). If a queue Q(d) becomes too long relative to the number of channels allocated to d, an extra channel is allocated. On the other hand, if Q(d) is relatively short compared to the size of B(d), some channels are released. If no buffers are available for accumulation of a new train at a destination d, a "buffer fault" is created.
Events that define the length of a packet train trigger invocation of the buffer allocation algorithm; these events comprise packet arrivals at ISDN line card 106 for transmission to the public network. There are four conditions that must be considered (a sketch of this logic follows the list):
1. If a destination d is known, and Q(d) contains an "open" buffer, that is, a buffer that is not filled to the maximum length, the packet is accumulated into the train.
2. If the destination d is known, but no "open" buffer is available in Q(d), a new buffer is created and a new train is started.
3. If the destination d is known, but an "open" buffer is about to fill up, the packet is accumulated in the current open buffer. Thereafter, the buffer is closed to seal the train.
4. If destination d is unknown, a new buffer pool is started for a new train.
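
A minimal sketch of this buffer handling, assuming hypothetical helpers open_buffer(), append() and seal_train(), is as follows; it is illustrative only and does not distinguish the per-destination buffer pools of condition 4.

    /* Sketch of the four train-accumulation conditions listed above.
     * The buffer bookkeeping is an assumption for illustration. */
    #include <stddef.h>

    struct train_buf {
        size_t used;   /* bytes accumulated in the train so far */
        size_t max;    /* fixed maximum buffer (train) size */
    };

    extern struct train_buf *open_buffer(void);   /* start a new train */
    extern void append(struct train_buf *b, const void *pkt, size_t len);
    extern void seal_train(struct train_buf *b);  /* close the buffer */

    struct train_buf *accumulate(struct train_buf *open_buf,
                                 const void *pkt, size_t len)
    {
        if (open_buf == NULL)            /* conditions 2 and 4: no open buffer */
            open_buf = open_buffer();    /* a new buffer starts a new train */

        append(open_buf, pkt, len);      /* condition 1: accumulate into the train */
        open_buf->used += len;

        if (open_buf->used >= open_buf->max) {   /* condition 3: about to fill up */
            seal_train(open_buf);
            open_buf = NULL;             /* the next packet starts a new train */
        }
        return open_buf;
    }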
The channel allocation algorithm, in accordance with the invention, operates asynchronously to the buffer allocation algorithm described previously. As one important aspect of the invention, the channel allocation algorithm is partitioned into two components, long term allocation and short term allocation.
Long Term Channel Allocation
Long term allocation monitors the recent historical traffic pattern to and from a destination, and decides upon the bandwidth and types of channels to be allocated to that destination. A short term allocation monitors the current size of the destination queues and the aging constraints of these queues, and decides whether to deviate, temporarily, from decisions of the long term allocation algorithm. This enables response to situations that are not well-handled by long term allocation, e.g., situations arising due to a temporary or sudden surge of traffic to certain destinations not predicted based on the long term allocation algorithm. Decisions on bandwidth allocation produced by the long term and short term allocation algorithms preferably are stored in shared memory, and are carried out at convenient intervals by ISDN channel processes to be described later.
Channel allocation is described with reference to B-channels, although similar allocation can be carried out for D-channels as well. In general, B-channels can be used in a circuit switched mode or in a packet switched mode. The circuit switched mode preferably is not used to transmit data between low-traffic destinations in view of the relatively high set-up/tear-down overhead required. The packet switched mode is preferable for transmission to destinations with a low-traffic rate if there is a moderate response time requirement. For destinations that require higher traffic rate or have more stringent response time requirements, use of B-channels in the circuit switched mode is preferable. For destinations that do not have high traffic rate but have very stringent response time requirements, the B-channels should be dedicated in the circuit switched mode and deliberately underutilized.
The long term allocation algorithm (LTA) is summarized in the flow chart of Figure 14. The traffic rate r at each destination is monitored by a software flowmeter, which may be a counter of data quantity arriving at the destination queue within a particular interval of time. Each software flowmeter is polled at fixed intervals (w). The flowmeter is read, then reset, and the reading is used by the long term allocator LTA to determine how many B-channels are to be allocated to that destination. One embodiment of a long term allocation algorithm LTA, in accordance with the invention, is as follows. Referring to Figure 14, step 150 reads a traffic rate r for a destination d, wherein r is defined as follows:

r = Max {v_in(d, t-w, t)/w, v_out(d, t-w, t)/w},

where t is the time when the meter is read, v_in(d, t-w, t) is the accumulated input traffic volume for destination d between times t-w and t, and v_out(d, t-w, t) is that for the output queue.
In step 152, an integer x, greater than or equal to zero, is found such that:

x * hbw_cs < r < (x+1) * hbw_cs

and x B-channels in circuit switched mode are allocated to destination d, where hbw_cs is the achievable bandwidth, once set up, of a circuit switched mode B-channel. This number in practice is close to 64Kbps.
Assuming that residual = r - x * hbw_cs, step 154 determines whether residual is greater than mbw_cs, where mbw_cs is the minimum utilization of a circuit switched B-channel, or whether the response time is more stringent than moderate (step 156). The term "moderate" is defined to be the response time that can be offered by a packet switched channel. If either condition is true, then one more B-channel is allocated to destination d (step 158).
On the other hand, if 0 ≤ residual ≤ mbw_cs and the response time requirement is moderate or more relaxed than moderate, and traffic is suitable for packet switched channels, a fraction of a packet switched channel is allocated to destination d, wherein the fraction is determined as f = residual / ebw_ps, where ebw_ps is the effective bandwidth of a packet switched B-channel.
In step 160, if destination d is not reachable by packet switched channels, or the packet is not of a nature that is suitably transmitted through packet switched channels, and the response time requirement is very relaxed (step 162), then a fraction of a circuit switched mode channel is allocated to destination d (step 164). The fraction in this example is determined by the size of the residual. The notation used in the above algorithm is as follows. x, f : decision variables
t, r, residual: state variables
ebw_ps, hbw_cs: ISDN performance profile
mbw_cs, w: algorithm control parameters
response time requirements for packet transmission:
tight: ≤ 260ms
moderate: between z seconds and 260ms
very relaxed: ≥ z seconds. In addition, the algorithm is sensitive to the transmission delay of packet switched channels and the ISDN call set-up and tear down times for circuit switched mode. The former is used to derive a boundary between tight and moderate response times. The latter is used to calculate the response times obtainable for time-shared fractional allocation of B-channels in circuit switched modes, which in turn determines the value of z which defines the boundary between moderate and very relaxed response time requirements. The basic logic of the algorithm remains unchanged under different performance profiles.
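
A simplified reading of steps 150 through 164 may be sketched in C as follows; the fractional packet switched allocation f = residual/ebw_ps reflects the assumption noted above, and the collapse of the response time classes into an enum, as well as the handling of the fractional cases, is an interpretation for illustration only.

    /* Sketch of one LTA decision (Figure 14, steps 150-164), using the
     * notation above. A simplified interpretation, not a definitive
     * statement of the algorithm. */
    enum resp_time { TIGHT, MODERATE, VERY_RELAXED };

    struct lta_decision {
        int    cs_channels;   /* whole circuit switched B-channels (x, step 152) */
        double ps_fraction;   /* fraction of a packet switched channel (f) */
        double cs_fraction;   /* time-shared fraction of a circuit switched channel */
    };

    struct lta_decision lta_allocate(double r, double hbw_cs, double mbw_cs,
                                     double ebw_ps, int ps_suitable,
                                     enum resp_time req)
    {
        struct lta_decision d = { 0, 0.0, 0.0 };

        d.cs_channels = (int)(r / hbw_cs);             /* step 152 */
        double residual = r - d.cs_channels * hbw_cs;

        if (residual > mbw_cs || req == TIGHT) {       /* steps 154-158 */
            d.cs_channels++;
        } else if (ps_suitable) {                      /* fractional packet switched */
            d.ps_fraction = residual / ebw_ps;
        } else if (req == VERY_RELAXED) {              /* steps 160-164 */
            d.cs_fraction = residual / hbw_cs;
        }
        return d;
    }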
An important algorithm control parameter is the "meter window" w, which is selected such that the algorithm is sufficiently sensitive to short-term fluctuation in traffic intensity but is not too sensitive. If w is too small, a very short-termed surge in traffic may result in too many B-channels being allocated and therefore will incur a high set-up/tear down overhead. If w is too large, the algorithm may not be responsive enough to a short term surge, resulting in a fast destination queue build-up. An excessive amount of buffer space may be consumed and response time may be degraded.
To attenuate sensitivity, a method using weighted averaging of traffic in multiple windows is provided. If a window system W with three windows w1, w2 and w3 is used, with weights wt1, wt2 and wt3, where the sum of wt1, wt2 and wt3 is unity, the traffic rate r can be equated to R(d, W, t), computed as follows:

R(d, W, t) = v(d, t-w1, t) / w1 * wt1
           + v(d, t-w1-w2, t-w1) / w2 * wt2
           + v(d, t-w1-w2-w3, t-w1-w2) / w3 * wt3
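
This computation may be sketched as follows for an arbitrary number of windows; the array-based representation is an assumption for illustration.

    /* Weighted multi-window traffic rate, per the formula above. v[i] is
     * the traffic volume accumulated in window i (most recent first),
     * w[i] its width and wt[i] its weight; the weights sum to one. */
    double weighted_rate(const double *v, const double *w,
                         const double *wt, int nwin)
    {
        double r = 0.0;
        for (int i = 0; i < nwin; i++)
            r += wt[i] * v[i] / w[i];
        return r;
    }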
With this generalization, to allow the algorithm to be more sensitive to short term fluctuation and less to the long term pattern, wt1 should be increased and wt3 decreased, and vice versa. Multiple counters are maintained for each destination queue using this strategy.
Short Term Channel Allocator
The long term allocation algorithm functions well in cases where there are sufficient B-channels available and decisions made by the LTA are feasible. This means that the total number of B-channels allocated by the LTA is smaller than the total number of B-channels subscribed, and sufficiently smaller such that the probability that an incoming call request finds all channels busy is very small, and further that the recent past history in fact represents a good basis for allocation. When such conditions are not met, short term allocation (STA) must be implemented.
In general, STA makes decisions which override, temporarily, decisions made by LTA. Channel processes, described hereinafter, implement decisions made by LTA under normal circumstances . When a B-channel is just "freed" from servicing an input and output buffer, the B-channel process checks to see if a decision has been made to de-allocate channels from the destination d. If so, it de-allocates itself and finds a new destination for which a decision has been made by LTA to have additional channels allocated. However, when unusual conditions are detected, such as a destination queue being ignored for too long, or a B-channel process encountering an empty destination queue, self-adjustment may be performed by the B-channel process to execute decisions rendered by STA. Two examples to which STA responds are (1) if any buffer in the destination queue d is found to exceed an age limit, or (2) if the following quantity exceeds a relative queue size limit:
    Σk Tk(di) / B(di)

Where:

    Tk(di) = size of the kth train for the ith destination
    B(di)  = number of B-channels assigned to the ith destination
The above relationship is satisfied when the quantity of data in all trains to a particular destination exceeds the number of B-channels assigned to that destination.
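
A minimal sketch of these two triggers, assuming a hypothetical per-destination record of sealed train sizes and time stamps, is as follows.

    /* Sketch of the two STA triggers described above. The per-destination
     * bookkeeping is an assumption used only to make the conditions concrete. */
    #include <stddef.h>
    #include <time.h>

    struct dest_queue {
        int           num_trains;
        const size_t *train_bytes;   /* Tk(di): size of each sealed train */
        const time_t *sealed_at;     /* time stamp applied when the train is sealed */
        int           b_channels;    /* B(di): channels currently assigned */
    };

    int sta_condition(const struct dest_queue *q, time_t now,
                      double age_limit, double rel_queue_limit)
    {
        size_t total = 0;
        for (int k = 0; k < q->num_trains; k++) {
            if (difftime(now, q->sealed_at[k]) > age_limit)
                return 1;                      /* (1) a train exceeds the age limit */
            total += q->train_bytes[k];
        }
        /* (2) the sum of Tk(di) over B(di) exceeds the relative queue size limit */
        return q->b_channels > 0 &&
               (double)total / q->b_channels > rel_queue_limit;
    }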
Logic implementing execution of the STA is shown in Figure 15, wherein step 170 waits until a B-channel is freed from servicing the input buffer and the output buffer, and then waits for an STA condition, such as the two conditions previously described, to be detected (step 172). LTA is implemented in step 174 unless the STA condition is detected. In response to an STA condition, STA channel allocation is carried out in step 176, and the process continues.
An important aspect of the invention involves the B-channel processes which respond to decisions made by both the LTA and STA. Referring to Figure 16, channel process invocation is triggered in response to end of train transmission on any B-channel for input or output traffic. There are three different cases in reallocating a recently freed B-channel to release an emptied buffer to a free buffer pool:
1. The first part of the channel process algorithm at 180 prevents locking out trains associated with low traffic rate destinations, or locking out incoming call requests. Executing lockout prevention at the first part of the algorithm eliminates the possibility of lockout. Since this part of the algorithm tends to disturb stable allocations of B-channels to high intensity traffic streams, the probability of invocation preferably is modulated inversely with the number of B-channels assigned to the destination associated with the just-freed B-channel.
The probability of invocation is higher if the B-channel is allocated fractionally to a high number of distinct destinations. The probability of invocation is lower if more B-channels have been allocated to the same destination. In other words, the higher the traffic rate for a destination d, the lower the probability of causing a "wild" de-allocation of one of its B-channels to handle locked-out trains.
Sealed trains are time stamped. The STA checks the entire buffer pool for all destinations to see whether there are trains whose age exceeds a predetermined age limit. If any are found, the absence of B-channels allocated to their destinations is checked to confirm that the trains are truly locked out. If such a condition is found, the just-freed B-channel is associated with all of the respective destinations, originating fractional allocations of B-channels, and the oldest locked-out train is selected and transmitted. Provided the age limit is set at a sufficiently high level and overall traffic does not exceed the capacity of the gateway, this condition is very rare. The condition must be rare to avoid thrashing in the allocation of B-channels, and it must be handled as the first-priority decision to avoid blocking trains associated with very low packet rates.
2. If any complete train is waiting for transmission to the same destination, in step 182 of the flow chart, the B-channel is re-used to transmit it. Giving priority to same-destination re-use of the B-channel minimizes ISDN call overhead.
3. If no complete train is waiting for transmission toward the same destination, step 184, two subcases are considered. In the first subcase, the just-freed B-channel is fractionally allocated to n destinations. The remaining n-1 destinations are checked for waiting trains, and if one is found the channel is assigned to it for transmittal. This allows maintenance of "stable" pools of low traffic rate destinations. If none are found, the following second subcase is executed. If no complete train is waiting for destination di, Flag Queue Size is checked, and if set, a new destination dj is computed as the one which contains the largest relative queue size. Thereafter, the channel process remains with decisions made by LTA.
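A compact way to see the three cases as a single decision is sketched below. This is illustrative only: the enum, the function and its boolean inputs are invented names, and the inputs are assumed to have been computed beforehand (for example, the lockout check of case 1 is assumed to have been invoked by the caller with the probability, inverse to the number of assigned B-channels, described above).

/* Illustrative sketch of the channel-process decision on a freed B-channel. */
typedef enum {
    SERVE_LOCKED_OUT,     /* case 1: transmit the oldest locked-out train     */
    REUSE_SAME_DEST,      /* case 2: re-use the channel for the same dest.    */
    SERVE_FRACTIONAL,     /* case 3a: serve another fractional destination    */
    SERVE_LARGEST_QUEUE,  /* case 3b: move to the largest relative queue      */
    FOLLOW_LTA            /* otherwise: stay with the LTA decision            */
} channel_decision;

static channel_decision reallocate_freed_channel(
        int lockout_detected,          /* aged train found with no channel    */
        int same_dest_train_waiting,   /* complete train for same destination */
        int fractional_dest_waiting,   /* train at one of the other n-1 dests */
        int flag_queue_size_set)       /* Flag Queue Size set by the STA      */
{
    if (lockout_detected)          return SERVE_LOCKED_OUT;   /* first priority */
    if (same_dest_train_waiting)   return REUSE_SAME_DEST;    /* avoids call set-up */
    if (fractional_dest_waiting)   return SERVE_FRACTIONAL;
    if (flag_queue_size_set)       return SERVE_LARGEST_QUEUE;
    return FOLLOW_LTA;
}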
Data Compression
Another aspect of the invention provides data compression of packet trains, such as implementation of algorithms based on run-length encoding, Huffman encoding, and variations of the Lempel-Ziv algorithms. Run-length encoding refers to replacement of sequences of identical source elements by a triple consisting of a special code indicating a "run" of elements, a count of the run length and the repeated element. Huffman encoding translates fixed-sized source elements into variable-sized codes, where the size of the output code, in bits, is approximately the negative logarithm of the probability of the source element. For some kinds of data, the probabilities of elements are well known and fixed translation tables are available. In more general cases, where the distribution of source elements is not known, a translation table can be computed for a given block of input data and transferred along with the data to allow decompression. This requires two passes over the input during compression, and the table size must be significantly smaller than the data blocks.
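As an illustration of the run-length scheme described above, the following minimal encoder replaces runs of identical bytes with the triple (escape code, count, byte). The escape value, the minimum run length of three, and the handling of literal escape bytes are choices made for this sketch rather than details taken from the patent; the output buffer is assumed to hold up to three times the input length in the worst case.

/* Minimal run-length encoder: runs of three or more identical bytes (and any
 * literal escape byte) become the triple (ESC, count, byte).                */
#include <stddef.h>

#define RLE_ESC 0x90u   /* assumed escape code marking a "run" triple */

static size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255)
            run++;
        if (run >= 3 || in[i] == RLE_ESC) {      /* emit (ESC, count, byte)  */
            out[o++] = RLE_ESC;
            out[o++] = (unsigned char)run;
            out[o++] = in[i];
        } else {                                  /* short run: copy through */
            for (size_t k = 0; k < run; k++)
                out[o++] = in[i];
        }
        i += run;
    }
    return o;   /* number of bytes written to out */
}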
The Lempel-Ziv algorithms convert variable-sized input elements, or strings, into fixed-sized codes. Allowing long input sequences gives the possibility of higher compression ratios than are possible with fixed-size source element algorithms, such as Huffman encoding. The Lempel-Ziv algorithms are adaptive in that translation tables are built dynamically to reflect characteristics of the input, yet require only a single pass during encoding. Decompression can be performed by dynamically building the same translation tables as were used during compression, using information implicit in the encoded data stream, such as whether a particular code has been encountered.
The typical compression ratios of a compression algorithm selected to compress packet trains vary from about unity, for certain kinds of inherently random data such as floating point data or previously compressed data, to about eight for some databases containing large amounts of redundant or null data. The average compression ratio over mixed input types depends on the chosen samples, but ratios of between two and four for fairly large samples of mixed input found on real computer systems are common; two is proposed as an example in the preferred embodiment.
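As a rough numerical illustration (not taken from the patent text): with a compression ratio of two, a 4096-byte train is reduced to about 2048 bytes, so its transmission time over a single 64Kbps B-channel falls from roughly 512 ms (4096 x 8 / 64000) to roughly 256 ms, effectively doubling the useful throughput of the channel.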
Error detection and correction is required as a surrounding protocol on compressed data, allowing non-delivery of corrupted blocks, since any error leads to catastrophic corruption of the rest of the packet train. ISDN error rates appear to be similar to LAN error rates, so no additional error handling mechanisms are required above those that already exist in LAN software. The D-channel has built-in error protection by virtue of using the HDLC protocol, which includes error detection, packet sequencing and re-transmission.
In accordance with an aspect of the invention, shown in Figure 18, a controller 170 comprises a virtual B Allocator/Resource Manager 172 which opens a virtual B-channel, with the control attributes listed in Table I below, from the B-channel pool as requested. Manager 172 also assigns a virtual B processor 174, implemented from among a plurality of such processors, and handles related supporting global resources. The virtual B processors 174, at run time or initialization time, provide the service of transparent addition and deletion of new or existing component channels in their virtual B-channels based on the virtual B attributes. The processors 174 further provide transparent multiplexing of client data. Virtual B-channel monitor 176 carries out traffic and error monitoring of all virtual B-channels and of data traffic flow; allocation, de-allocation and replacement of channels to dynamically alter bandwidth are also performed by the monitor 176.
Call set-up/disconnect processor 178 carries out the call set-up and disconnect protocol on the D-channel. Processor 178 interfaces to the B-channel hardware to perform B-channel call set-up or disconnect, and interfaces with the virtual B processors 174, resource manager 172 and the client. HDLC/LAPD links 180 provide the links for clients, and are optional.
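Purely as a structural sketch, the components of Figure 18 described above might be grouped in C as follows; the struct and field names are invented for this illustration, with the reference numerals noted in the comments, and MAX_VB_PROCESSORS is an assumed pool size.

/* Structural sketch only: the controller components of Figure 18 in C.      */
#define MAX_VB_PROCESSORS 8                      /* assumed pool size */

typedef struct vb_processor   vb_processor;      /* virtual B processor, 174        */
typedef struct vb_monitor     vb_monitor;        /* virtual B-channel monitor, 176  */
typedef struct call_processor call_processor;    /* call set-up/disconnect, 178     */
typedef struct lapd_link      lapd_link;         /* optional HDLC/LAPD links, 180   */
typedef struct resource_mgr   resource_mgr;      /* allocator/resource manager, 172 */

typedef struct {                                 /* controller, 170 */
    resource_mgr   *manager;                     /* opens virtual B-channels and
                                                    assigns a virtual B processor   */
    vb_processor   *processors[MAX_VB_PROCESSORS]; /* add/delete component channels,
                                                      multiplex client data         */
    vb_monitor     *monitor;                     /* traffic/error monitoring and
                                                    dynamic bandwidth adjustment    */
    call_processor *call_setup;                  /* D-channel call set-up/disconnect */
    lapd_link      *links;                       /* optional client links           */
} vb_controller;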
Component channels allocated for a virtual B-channel have the following four defined states: activated, deactivated, transient and steady. In the deactivated state, data transfer is forbidden although control protocol exchange is not. Component channels allocated or joined are initially in the deactivated state. A channel is deactivated for a sender immediately after the deactive_b command, described later, is queued for transmission. For a receiver, a channel is deactivated immediately after the command is received. Only a bi-directionally deactivated channel is removed and physically disconnected from the virtual channel of which it was a member.
In the activated state, both data and control protocol transfer are permitted. A channel is activated for a sender only after acknowledgement of the transmitted active_b command is received. For a receiver, a channel is activated immediately after this command is received.
The transient state is defined by guarding periods before acknowledgements are received from the far-end. A virtual channel is transient when any component channels are transient. No data can be sent over a channel in the transient state.
A component channel is in the steady state if it is not in the transient state, that is, if it is activated or deactivated. A virtual channel is in the steady state if all its components are in the steady state.
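A minimal sketch of this state model, assuming a C enumeration, is shown below: steady is represented not as a fourth stored value but as the derived property of not being transient, matching the definition above. The names are illustrative.

/* Sketch of the component-channel states and the steady-state rule.         */
typedef enum {
    CH_DEACTIVATED,   /* control protocol allowed, data transfer forbidden   */
    CH_ACTIVATED,     /* data and control protocol transfer permitted        */
    CH_TRANSIENT      /* guard period: awaiting far-end acknowledgement      */
} component_state;

/* A component is steady when it is not transient.                           */
static int component_is_steady(component_state s)
{
    return s != CH_TRANSIENT;
}

/* A virtual B-channel is steady only when all of its components are steady. */
static int virtual_b_is_steady(const component_state *components, int n)
{
    for (int i = 0; i < n; i++)
        if (!component_is_steady(components[i]))
            return 0;
    return 1;
}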
A virtual B-channel operates in two multiplexing modes, a restricted mode and a forced load-balancing mode. In the restricted mode, data received for transmission is sequentially distributed, one packet at a time, in a round-robin fashion over the steady activated component channels; deactivated channels are skipped. The far-end virtual B processor 174 receives and recovers data in the same round-robin sequence. In the load-balancing mode of operation, the distribution of transmissions over the channels follows a most-empty-channel-first protocol. The receiver recovers the data by means of the global sequence numbers, by scanning, comparing and carrying out trace-back time-out presentation. During either distribution process, active_b commands, described below, are sent in-line on the deactivated component channels. For component channels to be deactivated, deactive_b commands are sent in-line over these channels in place of the data in the sequence. Time-fill SYNC_vB packets may be sent over channels in the restricted mode of operation to avoid "holes" in the data stream, and can further be used for time control in the load-balancing mode of operation.
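Building on the state sketch above, the restricted-mode distribution rule can be illustrated as a small selection function: packets are handed out round-robin over steady activated components, with deactivated (and transient) components skipped. The function name and the cursor convention are assumptions for this example.

/* Returns the index of the component channel that should carry the next
 * packet, resuming the round-robin scan at *cursor, or -1 if no steady
 * activated component is currently available (send time-fill SYNC_vB or
 * wait in that case).                                                       */
static int next_restricted_channel(const component_state *ch, int n, int *cursor)
{
    for (int scanned = 0; scanned < n; scanned++) {
        int i = (*cursor + scanned) % n;
        if (ch[i] == CH_ACTIVATED) {      /* steady and activated            */
            *cursor = (i + 1) % n;        /* next scan starts after channel i */
            return i;
        }                                  /* deactivated/transient: skipped  */
    }
    return -1;
}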
By way of example, the following are the command categories, together with detailed formats and bit definitions, used for the controlling and multiplexing protocol; they are set out under the heading "Control and Multiplexing" below.
Control and Multiplexing
The following are the command categories and their detailed formats with bit definitions. This protocol assumes the use of flow control to be optional, at least to some degree, depending on the implementation and environment. In the restricted mode, flow control can be handled by the client link level protocol; outstanding transmissions preferably should not result in a difference of 26 generations or more between the receiving loop and the far-end transmitting loop. The load-balancing mode has the same format except that all the numbered command category header-bytes described below are two bytes in length.
* DATA
0ssnnnnn: one byte as header of the client's data for transmission.
* RQST_Bs
101nnnnn: a one-byte header followed by one or more control bytes. The first control byte can further specify explicitly whether the command is global. If it does not so specify, the scope depends on the address bytes which follow. No address byte is required for in-band operation, and multiple address bytes imply that the in-band line is not explicitly included. The explicit global commands have the leftmost bit on, followed by explicitly specified address bytes. If the leftmost two bits are both on, the command is a broadcast, and no address bytes follow. Multiple bytes are useful for off-band control; at present, only one byte of in-band control is employed. The leftmost third bit is reserved, and the fourth bit is for system control point specification, external or internal, indicated by E and I respectively. The last four bits define the commands of this category. At present, these are:
DEACTIV_Bs group: uwx1z, where the w bit is reserved and u defines the two system points.
DEL_Bs - (0X00 | 0), an external delete command
DEX_Bs - (DEL_Bs | 1), a DEL_Bs followed by external setup
DIL_Bs - (DEL_Bs | 0X10), an internally initiated delete command
DIX_Bs - (DEX_Bs | 0X10), a DIL_Bs followed by internal setup
ACTIV_Bs group: uwx0z, where the w bit is reserved and u is the same as defined above.
ADD_Bs - (0X00 | 2), an external add command
AID_Bs - (ADD_Bs | 0X10), an internally initiated ADD_Bs
* SYNC_vB
110nnnnn: one byte only, used for time-fill and synchronization.
* RA_Bs
111nnnnn: a one-byte header, followed by zero or more optional bytes. This is an acceptance acknowledgement in reply to a RQST_Bs, and in-band operation is implied if no other bytes follow. As in the case of RQST_Bs, the first byte which follows can further specify whether the command is global. The leftmost two bits have exactly the same meanings, describing the scope of any further succeeding address bytes. The leftmost third bit, if on, instead turns the whole meaning of the response into a negative acknowledgement. The last five bits are the same as for RQST_Bs.
* MODE_vB
1000000m: for virtual channel operation and format specification. This has two members, MODE_rst with m = 0 and MODE_lbc with m = 1; MODE_rst defines the nnnnn five-bit generation number in the restricted mode. In the load-balancing mode, it handshakes with MODE_lbc, with thirteen (13) bit global sequence numbers, when opening a virtual B. The attributes of Table I, which follow after these headers, are optional. The mode negotiation can proceed only when all component channels are in the deactivated state, or UNA will be received. If the optional attributes are not accepted, UNA will also be received. A REJ will be received if the remote end does not support or will not accept this request. In addition to this role in mode negotiation, MODE_vB resets the generation number or the global sequence number for the transmitter, and the corresponding receiver variable, back to 0 to restart, and resets the transmitter and receiver (loop) pointers back to the first component position.
* RR
10000010: informs the far end that this end's receiver is ready.
* RNR
10000011: informs the far end that this end's receiver is not ready.
* ATTRB_vB
10000100: the parameters in Table I follow this command for far-end negotiation.
* UA
10010000: this unnumbered positive response means acceptance in the mode negotiation. The same MODE_vB packet will be sent back, following the UA, as bi-directional initialization. The UA is also used for acknowledgement of the flow control commands RR and RNR, and for acceptance acknowledgement of ATTRB_vB as well as other unnumbered commands.
* UNA
10010100: a negative response used for refusal, in the same manner as UA is used for acceptance.
* REJ
10011100: as explained in the MODE_vB description.
All these unnumbered commands are in-band, although MODE_vB is global. The "nnnnn" is a modulo 32 correlation generation number, or a zero-based global sequence number, assigned to each round of transmissions in the restricted mode of operation, or assigned and incremented by one for each numbered transmission in the load-balancing mode. The "ss" in the DATA command, which takes the four two-bit values 00, 01, 10 and 11, is for frame segmentation.
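For the restricted-mode DATA header just described, the 0ssnnnnn layout can be packed and unpacked with a few macros. This is a sketch under the stated format; the macro names are invented, and the two-byte load-balancing header with its 13-bit global sequence number is not covered.

/* Sketch: pack/unpack the one-byte restricted-mode DATA header 0ssnnnnn.    */
#define DATA_HDR(ss, gen)  ((unsigned char)((((ss) & 0x3u) << 5) | ((gen) & 0x1Fu)))
#define HDR_IS_DATA(b)     (((b) & 0x80u) == 0)      /* leading 0 bit            */
#define HDR_SEG(b)         (((b) >> 5) & 0x3u)       /* ss: frame segmentation   */
#define HDR_GEN(b)         ((b) & 0x1Fu)             /* nnnnn: modulo 32 number  */

/* Example: generation 17 with segmentation value 2 packs to 0x51,
 * i.e. binary 0 10 10001.                                                    */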
Processing of these commands is independent of their arrival timing phase with respect to other channels, once they are dequeued. The sequence numbers are for data synchronization across the component channels. A receiver variable "nr", used for synchronization, is incremented by one after each round, or after each actual numbered reception. This is a recovery process corresponding to the far-end sender's sequence number increment for each round, or for each data or numbered command transmission.
Once mode negotiation is done, sequence number comparison is started. The selected virtual B processor 174 polls the receiving queue and then steps to the next component channel. In the restricted mode, if nothing arrives in the queue of an activated channel, the processor stays and waits indefinitely. In the case of a MAC implementation, the processor waits indefinitely, or steps after a time-out period if a trace-back timer has been started, regardless of the channel states. At a deactivated component channel, the processor hunts through all following contiguous deactivated channels for any arrival and, if arrival time inversion is found, performs a trace-back get, stepping or staying until the trace-back timer expires. The time-fill SYNC_vB commands are to be received over their preceding channels if no other commands or data are available through these channels. Reception in the load-balancing mode is always in hunting mode, across the full range of component channels. The processor continues looking for the correct sequence number for presentation. The trace-back time-out presentation scheme applies in either mode. The parameters which characterize a virtual B-channel to be opened, and used hereinafter to monitor the channel, are listed in Table I.
typedef struct {
    long  err_threshold;          /* one out of err_threshold frames */
    short err_action_thres;       /* act on contig err_action_thres */
    short err_action_permit;      /* allow deactivation of errored channel */
    short max_band_width;         /* max allowed bandwidth */
    short high_util_thres;        /* high effective % bandwidth threshold */
    short high_busy_util_thres;   /* one high util out of high_busy_util */
    short high_action_thres;      /* act on contig high_busy_util_thres */
    short high_action_permit;     /* allow bandwidth increment actions */
    short min_band_width;         /* minimum allowed bandwidth */
    short low_util_thres;         /* low effective % bandwidth threshold */
    short low_busy_util_thres;    /* one low util out of low_busy_util */
    short low_action_thres;       /* act on contig low_busy_util_thres */
    short low_action_permit;      /* allow bandwidth decrement actions */
    short set_up_retry_thres;     /* max contig retry before giving up */
    short no_action_thres;        /* no action threshold time */
} VB_THRESHOLD;
Table I - Virtual B Channel Attributes (in C language representation)
Traffic Sensing
Sensing of traffic for automatic control of bandwidth is carried out as follows. Assume that the sampling rate is one sample every two seconds, the maximum bandwidth of the virtual channel opened is set to five B-channels, the high_util_thres is 75%, the high_busy_util_thres is 5, the high_action_thres is ten and the initial channel opened is one B-channel (64Kbps). As the transmission rate increases steadily from an original effective rate of 45Kbps to 55Kbps, a procedure for adding a new line is automatically initiated thirty seconds later. This assumes that high_action_permit is true. A new element channel is added, and the new line utilization will be 42.3% if traffic is maintained at 55Kbps. An increase in the input data rate thus drives bandwidth growth. If the input data rate continues to increase, the bandwidth eventually will reach the maximum of five B-channels.
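A sketch of how the high-traffic side of this sensing might be expressed against the Table I attributes is shown below. The counting rule (acting after high_action_thres consecutive high-utilization samples) is only one possible reading of the interaction between high_util_thres, high_busy_util_thres and high_action_thres; the state structure and names are assumptions for this example.

/* Sketch: decide whether to add a component channel, using VB_THRESHOLD.    */
typedef struct {
    int contig_high;   /* consecutive samples above high_util_thres */
} vb_sense_state;

/* Returns non-zero when a new component channel should be added, given the
 * effective utilization (in percent) measured in the latest sample and the
 * current number of component channels (max_band_width assumed to be in
 * channels, per the worked example above).                                  */
static int sense_high_traffic(vb_sense_state *s, const VB_THRESHOLD *t,
                              int util_percent, int current_channels)
{
    s->contig_high = (util_percent > t->high_util_thres) ? s->contig_high + 1 : 0;
    return t->high_action_permit &&
           s->contig_high >= t->high_action_thres &&
           current_channels < t->max_band_width;
}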
When traffic decreases, assuming that the low_util_thres is 30%, and the low_busy_util_thres and low_action_thres are five and ten, respectively, no line will be deleted from the virtual B if the driving traffic does not exhibit a utilization of less than 30% over any continuous ten seconds, or if such a pattern does not repeat consecutively ten or more times. If low_action_permit is false, no reduction in virtual B-channels is permitted. The attributes of the virtual B thus define its bandwidth control behavior.
If the line error rate is above a predefined threshold for a predetermined period of time, line replacement operations are introduced in addition to bandwidth control.
Conclusion
Bandwidth of an ISDN line is dynamically allocated by a gateway GW in accordance with the invention, which monitors traffic to each destination and in response allocates or deallocates virtual B-channels in accordance with Figure 18. A long term bandwidth allocation algorithm (LTA), shown in Figure 14, determines when and for how long to allocate B-channels to a destination queue under typical traffic conditions; a short term allocation algorithm (STA) allocates B-channels under other conditions (Figure 15). Bandwidth utilization is optimized by packaging and transmitting data packets in "trains". The trains are compressed using a suitable data compression algorithm before transmission from the gateway to the ISDN.
In this disclosure, there is shown and described only the preferred embodiment of the invention, but, as aforementioned, it is to be understood that the invention is capable of use in various other combinations and environments and is capable of changes or modifications within the scope of the inventive concept as expressed herein.

Claims

1. In a communication network characterized by a multiplicity of channels for carrying packets of data on a medium between data servers and destinations, channel allocation apparatus, comprising:
means for establishing input and output destination queues;
data flowmeter means for measuring quantity of data arriving at each said input and output destination queue during a particular time interval;
means for polling each said data flowmeter means at fixed intervals;
long term channel allocation means responsive to said polling means for allocating or deallocating transmission channels to a particular destination on said medium depending upon data flow traffic to said destination and particular channel parameters; and
short term channel allocation means, overriding said long term channel allocation means in response to predetermined conditions, for allocating or deallocating transmission channels on said medium.
2. The apparatus of claim 1, including means for assembling pluralities of packets into trains, each consisting of a predetermined number of packets, and means for transmitting said trains onto said medium.
3. The apparatus of claim 1, wherein said predetermined conditions include inactivity at a destination queue for a time period greater than a predetermined period.
4. The apparatus of claim 1, wherein said predetermined conditions include an empty destination queue.
5. The apparatus of claim 1, wherein said predetermined conditions include a condition wherein a quantity of data directed to a particular destination exceeds the capacity of available channels assigned to that destination.
6. The apparatus of claim 5, wherein said data is arranged in trains consisting of a predetermined number of packets and said quantity of data is an accumulation of data comprising each said train.
7. The apparatus of claim 6, wherein each said train is provided with a header.
8. The apparatus of claim 6, wherein a unique number is assigned to each packet comprising a train.
9. The apparatus of claim 6, including means for receiving and rebuilding packets in accordance with said header.
10. The apparatus of claim 1, wherein said long term channel allocation means comprises
means for sampling data on said virtual channel at a predetermined sampling rate;
means for transmitting sampled data, one packet at a time, in a round-robin fashion over activated component channels, skipping deactivated channels; and
means for receiving and recovering said data in the same round-robin sequence.
11. The apparatus of claim 8, wherein said long term channel allocation means comprises
means for sampling data on said virtual channel at a predetermined sampling rate;
means for transmitting sampled data, one packet at a time, in a most-empty-channel-first transmission sequence; and
means for recovering said sampled data using said sequence numbers and a comparing and trace-back time-out procedure.
PCT/US1991/003613 1991-05-24 1991-05-24 Channel utilization method and system for isdn WO1992021188A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CA002109632A CA2109632C (en) 1991-05-24 1991-05-24 Channel utilization method and system for isdn
US08/094,115 US5479407A (en) 1991-05-24 1991-05-24 Channel utilization method and system for ISDN
PCT/US1991/003613 WO1992021188A1 (en) 1991-05-24 1991-05-24 Channel utilization method and system for isdn
NZ242531A NZ242531A (en) 1991-05-24 1992-04-28 Isdn short and long term channel allocation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CA002109632A CA2109632C (en) 1991-05-24 1991-05-24 Channel utilization method and system for isdn
PCT/US1991/003613 WO1992021188A1 (en) 1991-05-24 1991-05-24 Channel utilization method and system for isdn

Publications (1)

Publication Number Publication Date
WO1992021188A1 true WO1992021188A1 (en) 1992-11-26

Family

ID=25676797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/003613 WO1992021188A1 (en) 1991-05-24 1991-05-24 Channel utilization method and system for isdn

Country Status (2)

Country Link
NZ (1) NZ242531A (en)
WO (1) WO1992021188A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4556972A (en) * 1983-12-27 1985-12-03 At&T Bell Laboratories Arrangement for routing data packets through a circuit switch
US4677616A (en) * 1985-09-11 1987-06-30 At&T Company Flow control scheme for a switching network
US4799219A (en) * 1986-08-27 1989-01-17 International Business Machines Corp. Pseudo synchronous transport mechanism in a communication network
US5012469A (en) * 1988-07-29 1991-04-30 Karamvir Sardana Adaptive hybrid multiple access protocols

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5555377A (en) * 1993-12-20 1996-09-10 International Business Machines Corporation System for selectively compressing data transferred in network in response to produced first output when network utilization exceeds first threshold and data length over limit
GB2299240A (en) * 1995-03-24 1996-09-25 Northern Telecom Ltd Digital communications system
DE19625142C2 (en) * 1996-04-04 1998-10-29 Siemens Ag Method for assigning telecommunication channels of different channel capacity in a hybrid telecommunication system, in particular an "ISDN <--> DECT-specific RLL / WLL" system
GB2393355A (en) * 1999-10-22 2004-03-24 Virtual Access Ireland Ltd Multichannel communication bandwidth control
GB2393355B (en) * 1999-10-22 2004-06-30 Virtual Access Ireland Ltd Multi channel communication control system and method
EP1786169A1 (en) * 1999-10-22 2007-05-16 Virtual Access Technology Limited Multi channel communication control system and method
EP1115258A2 (en) * 1999-12-30 2001-07-11 Nortel Networks Limited Adaptively maintaining quality of service (QoS) in distributed PBX networks
EP1115258A3 (en) * 1999-12-30 2001-10-04 Nortel Networks Limited Adaptively maintaining quality of service (QoS) in distributed PBX networks
EP1447998A1 (en) * 1999-12-30 2004-08-18 Nortel Networks Limited Adaptively maintaining quality of service (Qos) in distributed PBX networks
US7990882B1 (en) 1999-12-30 2011-08-02 Avaya Inc. Adaptively maintaining quality of service (QoS) in distributed PBX networks
US8477602B2 (en) 1999-12-30 2013-07-02 Avaya Inc. Adaptively maintaining quality of service (QoS) in distributed PBX networks

Also Published As

Publication number Publication date
NZ242531A (en) 1995-04-27

Similar Documents

Publication Publication Date Title
US5479407A (en) Channel utilization method and system for ISDN
US5442630A (en) ISDN interfacing of local area networks
US5444703A (en) ISDN interfacing of personal computers
US5574861A (en) Dynamic allocation of B-channels in ISDN
EP0503207B1 (en) Adaptation device and method for efficient interconnection of data processing devices and networks
EP0965204B1 (en) Access to telecommunications networks in a multiservice environment
EP0605349B1 (en) Switched circuit connection management over public data networks for wide area networks
US5594732A (en) Bridging and signalling subsystems and methods for private and hybrid communications systems including multimedia systems
US8126009B2 (en) Method and apparatus for providing ringing timeout disconnect supervision in remote telephone extensions using voice over packet-data-network systems (VOPS)
US6763017B1 (en) Method and apparatus for voice port hunting of remote telephone extensions using voice over packet-data-network systems (VOPS)
US6977898B1 (en) Method for supporting high priority calls on a congested WAN link
WO1992021188A1 (en) Channel utilization method and system for isdn
US6009093A (en) Apparatus and method for interfacing private exchange to integrated services digital network
US6714516B1 (en) Congestion control mechanism for SSCOP protocol
US7009962B1 (en) Method and apparatus for providing forwarding on ring-no-answer for remote telephone extensions using voice over packet-data-network systems (VOPS)
WO1992021185A1 (en) Isdn interfacing of personal computers
WO1992021189A1 (en) Isdn interfacing of local area networks
CA2109632C (en) Channel utilization method and system for isdn
CA2109634C (en) Isdn interfacing of personal computers
CA2109633C (en) Isdn interfacing of local area networks
Cisco Configuration Guide
Cisco Configuration Guide
Cisco Configuration Guide
Cisco Configuration Guide
Cisco Configuration Guide

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR CA CH DE DK ES FI GB HU JP KP KR LK LU MC MG MW NL NO PL RO SD SE SU US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE BF BJ CF CG CH CI CM DE DK ES FR GA GB GR IT LU ML MR NL SE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 2109632

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 08094115

Country of ref document: US

122 Ep: pct application non-entry in european phase
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642
