CA1254984A - Interconnection of broadcast networks - Google Patents

Interconnection of broadcast networks

Info

Publication number
CA1254984A
CA1254984A (application CA000504239A)
Authority
CA
Canada
Prior art keywords
packet
gateway
address
packets
trees
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000504239A
Other languages
French (fr)
Inventor
Walter D. Sincoskie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Iconectiv LLC
Original Assignee
Bell Communications Research Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed. "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License (https://patents.darts-ip.com/?family=25085802).
Application filed by Bell Communications Research Inc
Application granted
Publication of CA1254984A
Legal status: Expired

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/48Routing tree calculation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4604LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L12/462LAN interconnection over a bridge based backbone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/02Topology update or discovery
    • H04L45/04Interdomain routing, e.g. hierarchical routing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/16Multipoint routing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/48Routing tree calculation
    • H04L45/484Routing tree calculation using multiple routing trees

Abstract

Abstract of the Disclosure A process is disclosed for effecting transmission over a generally cyclic communication system comprising numerous networks interconnected by gateway pairs. Each gateway of each pair implements a store-and-forward protocol whereby each gateway forwards message packets propagated over its associated network except for any packets that are destined for a device which has previously appeared in the sending address of another packet.
In order to utilize this protocol as a basis for transmission, the system is covered with a set of spanning trees that satisfy capacity and reliability requirements.
Each spanning tree is assigned a unique identifier, and each packet traversing the system is assigned to a specific spanning tree and conveys that tree's identifier. Each gateway parses the packet to determine the assigned spanning tree and forwards the packet accordingly.
To mitigate system flooding by a newly connected device, the protocol may also incorporate a delay to allow the gateways to learn the location of the new device.

Description

Field of the Invention

This invention relates generally to a communication system formed by interconnecting a plurality of independent networks and, more particularly, to a methodology and arrangements for effecting interconnection of the separate networks without imposing topological constraints.
Background of the Invention

Oftentimes it is required to expand the coverage of local area networks (LANs) by interconnecting these LANs to create a geographically dispersed metropolitan area network. Also, an organization such as a university or a company that operates a distinct set of LANs, say within a building complex, may find it necessary to interconnect these LANs to effect a large LAN serving the entire complex.
If a device (e.g., computer video terminal, telephone set) connected to one LAN has only one path to a device on another LAN after interconnection, thereby exhibiting a loop-free topology, interconnection may be achieved by connecting pairs of networks with a so-called gateway that executes a store-and-forward protocol. Such an interconnection arrangement is described as a transparent interconnection since the existence of the gateway is not visible to any of the devices in the linked system and, consequently, no modifications to the devices or to the messages or packets propagated by the devices are required.
Recently, a number of references have discussed the methodology and associated circuitry for the transparent interconnection arrangement. These include:
(1) "Local ~rea Network Applications", Telecommunications, September, 198~ by ~. Hawe and B. Stewart; (2) "An ~rchitecture for Transparently Interconnecting IEEE 802 ~' Loc~31 .~ea Networks", a Digital Equipment Corporation Technical paper submitted to ~he IEEE ~02 Stand~rds ~ommittee in October, 1934; and (3) "Transparent Interconnection of Local Networks with Bridges", Journal of Telecommunications Net~orks, October, 1934 by B. Ha~e, A. Kirby and B. Stewart. These references stress that the physical topology of the interconnected networks must be that of a branching tree. Gateways cannot transoarently interconnect local area networks t~at support alternative paths between local networks resulting in loo?s. In fact, in reference (1), a technique is suggested for transformirlg a general mesh topology to a loop-free topology so that the gateways may be utilized.
The requirement that the system topology be loop-free is a severe one in general, and ultimately restricts the practical application of the conventional gateway arrangement. In order to satisfy channel capacity demands or to provide a degree of reliability, an interconnected system will contain some loops in some portions of the topology. The conventional gateways always detect and remove these loops, preventing any improved redundancy or reliability. The problem of interconnecting loop or cyclic topologies at the physical or link layer has not been addressed by prior art references.
Summary of the Invention

This restriction of requiring loop-free topologies for the transparent interconnection of local area networks with store-and-forward gateways is obviated, in accordance with the present invention, by a method that utilizes routing information conveyed by the message packets as well as time delay within the gateways.
Broadly speaking, the overall system topology is represented by an undirected, connected graph wherein networks map into vertices and gateways into edges. A set of spanning trees is defined for the graph to provide the required capacity and necessary redundancy. Each spanning tree is uniquely identified. Each message that traverses the overall system is assigned to a specific spanning tree so the packet travels between nodes along edges contained in the specific spanning tree. Each gateway, with an expanded store-and-forward protocol, parses the packet to determine the assigned spanning tree and forwards the message accordingly. The device originating the packet specifies the spanning tree identifier. To reduce numerous potentially unnecessary transmissions through the system by a device newly connected to one of the networks, the protocol incorporates a delay to allow gateways to learn the location of the new device. The augmented protocol then implements a store-delay-forward algorithm.
The organization and operation of this invention will be better understood from a consideration of the detailed description thereof, which follows, when taken in conjunction with the accompanying drawing.
Brief Description of the Drawing

FIG. 1 is a block diagram depicting the gateway pair arrangement for interconnecting two local area networks;
FIG. 2 is an exemplary loop-free three-level hierarchical system depicting the interconnection of a plurality of local area networks with numerous gateway pair arrangements;
FIG. 3 illustrates the manner in which the first packet from a given local area network propagates throughout the system of FIG. 2 after network initialization and depicts the capability of the network to learn of the location of the source of the first packet;
FIG. 4 depicts the propagation of a return packet through the system of FIG. 2 and illustrates that the network has "learned" of the location of the source of the reply packet;
FIG. 5 shows the propagation of the second packet from the given local area network of FIG. 2 and demonstrates that the gateways have "learned" the identification of the packet source and are now optimally routing all future packets from a given device;
FIGS. 6 and 7 are graphical representations of the system of FIG. 2 illustrating two different spanning trees for the system;
FIG. 8 shows a cyclic graph extension for a portion of the system of FIG. 2 as well as two acyclic graphs that cover the extension;
FIG. 9 is a graph representing the system of FIG. 2 with two spanning tree overlays to enhance system throughput;
FIG. 10 is a graph of a portion of FIG. 9 showing a replicated node to reduce traffic congestion in the single node; and FIG. 11 is a flow diagram illustrative of the steps for propagating a packet through a general cyclic system.
Detailed Description

For clarity of exposition, it is helpful initially to describe a conventional gateway arrangement, focusing on methodology, so as to provide a basis for the fundamental concepts of the present invention. This basis introduces foundational concepts and terminology that facilitate the later presentation of the embodiments illustrative of the departure from the art.

1. Fundamental Basis

With reference to FIG. 1, two networks 101 and 201, labelled NETWORK A and NETWORK B, respectively, are interconnected by a pair of unidirectional gateways 102 and 202 to form overall communication system 100. Gateways 102 and 202 are linked in a manner such that gate 202 ignores transmissions by gate 102 on NETWORK B. Gate 102 also ignores transmissions by gate 202 on NETWORK A.
Each separate network typically serves a plurality of devices or hosts. In FIG. 1, hosts 111 and 112 and hosts 211 and 212 are connected to and communicate over networks 101 and 201, respectively. Each host is presumed to have a unique address, and each network has broadcast capability; that is, if one host is transmitting a packet, all other hosts and gateways on the common network receive the packet.
Each packet propagated over a network includes a sending host address and a destination host address as well as the actual message data. One of the gateways interconnecting the two networks is arranged to receive every packet transmitted on a given network. For instance, in FIG. 1, gateway 102 is the receiver for all packets transmitted on network 101.
A store-and-forward protocol implemented by each gateway is as follows: a gateway forwards each packet propagated over its associated network except for a packet that is destined for a host which has previously appeared in the sending address of another packet. Thus, whenever a host first transmits a packet, each gateway the packet passes through "learns" of the location of the host. When a second host later sends a packet to the first host, the packet automatically proceeds over the optimal path through the system.
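This drop-filter behaviour can be captured in a few lines. The following Python fragment is a minimal sketch, not code from the patent: the class name, field names and dictionary packet format are assumptions made for illustration.

```python
# Minimal sketch of the store-and-forward "drop filter" described above.
# One Gateway object models one unidirectional gateway (e.g., G73).

class Gateway:
    def __init__(self, name):
        self.name = name
        self.drop_list = set()   # Dij: hosts known to lie on this gateway's own network

    def handle(self, packet):
        """Return True if the packet should be forwarded onto the far network."""
        # Learn: the sender is evidently reachable on this gateway's own network.
        self.drop_list.add(packet["src"])
        # Forward unless the destination has already been seen here as a sender.
        return packet["dst"] not in self.drop_list


g73 = Gateway("G73")
print(g73.handle({"src": "E", "dst": "C"}))   # True: C unknown, packet is forwarded
print(g73.handle({"src": "C", "dst": "E"}))   # False: E was learned, packet is dropped
```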
To illustrate this protocol in some detail, system 300 of FIG. 2 is considered. This arrangement comprises numerous networks connected by pairs of gateways, with each of the gateways executing the above algorithm.

In order to define the protocol precisely, the following notation is utilized:

Ni : Network number i (elements 30i), i = 1,...,7
Gij : Gateway from network i to network j (elements 3ij)
A, B, C, D, E : Host computers (elements 381, 382, ..., 385, respectively)
Pxy : Packet transmitted from host X to host Y
→ : "is retransmitted on"
Dij : Set of hosts stored in Gij for which transmission is blocked through Gij (Dij is called the "drop filter").

It is assumed that the arrangement of FIG. 2 has been initialized, that is, Dij = ∅ (the empty set) for all i,j, and now there is a transmission from host E to host C (PEC):

1. PEC traverses N7
2. D73 = {E}, and PEC → N3 by G73
3. D36 = {E}, and PEC → N6 by G36
   D31 = {E}, and PEC → N1 by G31
4. D12 = {E}, and PEC → N2 by G12
5. D24 = {E}, and PEC → N4 by G24
   D25 = {E}, and PEC → N5 by G25

PEC reaches its destination in step 3, but it continues and floods the network. This is diagrammed by the dotted lines in FIG. 3.


Now, a return packet PCE is considered with reference to FIG. 4:

1. PCE traverses N6
2. D63 = {C}, and PCE → N3 by G63
3. D31 = {C,E}, and PCE is dropped by G31
   D37 = {C}, and PCE → N7 (and host E) by G37.

Since the location of host E was "known" from the transmission of PEC, the return packet took the optimum route through the network.
As a final consideration, it is supposed that a second packet PEC is to be transmitted through the network, as depicted in FIG. 5:

1. PEC traverses N7
2. D73 = {E}, and PEC → N3 by G73
3. D31 = {C,E}, and PEC is dropped by G31
   D36 = {E}, and PEC → N6 (and host C) by G36.

In fact, for any k ≠ E, PkE will only traverse the necessary networks. Since PCE was only received by G63, G31 and G37, only part of the overall network "knows" the location of host C. However, the rest of the network will learn about host C if and when it transmits to other hosts. Once steady state is achieved, each Gij has a so-called "block" or "drop" list Dij that provides optimal routing and inhibits extraneous transmissions.
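The three traces above (the flooding first PEC, the optimally routed reply PCE, and the pruned second PEC) can be reproduced with a small simulation. The sketch below is illustrative only: the tree of FIG. 2 is reduced to its network adjacencies, each direction of an edge gets one Gate object, and the rule that a gateway ignores retransmissions by its own partner is approximated by never re-entering a network that has already been reached, which is equivalent on a loop-free topology. All names are invented.

```python
# Simulate the learning-gateway flooding of system 300 (FIG. 2).
EDGES = [("N1", "N2"), ("N1", "N3"), ("N2", "N4"),
         ("N2", "N5"), ("N3", "N6"), ("N3", "N7")]

class Gate:
    """One unidirectional gateway; Gate('N7', 'N3') plays the role of G73."""
    def __init__(self, src_net, dst_net):
        self.src_net, self.dst_net = src_net, dst_net
        self.drop = set()                        # the drop filter Dij

    def relay(self, packet):
        self.drop.add(packet["src"])             # learn the sender's location
        return packet["dst"] not in self.drop    # forward unless dst is on our side

def send(packet, start_net, gates):
    """Flood a packet from start_net; return the networks it reaches."""
    reached, frontier = {start_net}, [start_net]
    while frontier:
        net = frontier.pop()
        for g in gates:
            if g.src_net == net and g.dst_net not in reached and g.relay(packet):
                reached.add(g.dst_net)
                frontier.append(g.dst_net)
    return sorted(reached)

gates = [Gate(a, b) for a, b in EDGES] + [Gate(b, a) for a, b in EDGES]
print(send({"src": "E", "dst": "C"}, "N7", gates))  # first PEC floods all seven networks
print(send({"src": "C", "dst": "E"}, "N6", gates))  # reply PCE reaches only N3, N6, N7
print(send({"src": "E", "dst": "C"}, "N7", gates))  # second PEC is pruned to N3, N6, N7
```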

2. Extensions to Protocol Arrangement

It can be deduced from the above description that system 300 must have only one path between any pair of hosts in order to prevent a packet from "looping" between two gateways forever. More specifically, if a system is mapped onto an undirected graph by mapping networks onto vertices and gateway pairs onto edges, the resulting graph must be loop-free to function properly, that is, the topology of system 300 is restricted to that of an acyclic graph. FIG. 6 is a depiction of the topology of system 300 in graph form. Pairs of gateways from FIG. 2 have been grouped and assigned the indicia now used in FIG. 6 to define the graph edges (solid lines).
This graph is acyclic since each pair of nodes is connected by only one path. If, for instance, nodes 306 and 307 were directly connected (shown as dashed edge 375 in FIG. 6), then the graph would be cyclic.
Systems that are cyclic cannot directly utilize the previously discussed store-and-forward protocol or algorithm. For example, from FIG. 6 with nodes 306 and 307 also directly connected, a packet sent from a host on node 303 to a host on node 306 would loop forever in the path 303-307-306-303-307..., and the same packet would loop in another path 303-306-307-303-306.... The looping packets would saturate the gateways and networks on their loops and inhibit normal communications.
The solid-line structure of FIG. 6 is called a spanning tree, that is, in a spanning tree every pair of nodes is connected by only one path. The graph of FIG. 7 depicts another spanning tree for the node arrangement of FIG. 6. In general, an arbitrary graph will have a plurality of spanning trees.
To implement the improvement in the gateway-protocol arrangement in accordance with one aspect of the present invention, a set of spanning trees is selected for the cyclic graph according to predetermined guidelines.
Each spanning tree is assigned a unique identifier or number, and each message traversing the system is assigned to a unique spanning tree via its identifier. Any gateway receiving this message determines the tree number and then routes the message over the specified spanning tree and drops all packets of other spanning trees. Typically, the device originating the message specifies the spanning tree number, either explicitly or implicitly. For instance, with the explicit approach, a "tree number" field could be added to the packet specifications, say as an extra bit in the header of the packet. With the implicit approach, a spanning tree number could be generated from fields normally occurring in the packet, such as the source and destination addresses. An appropriate example function might be spanning tree number = (source 'exclusive or' destination) modulo N, where N is the number of spanning trees in the network.
This has the benefit that all traffic between a pair of hosts will travel on only one spanning tree, thus minimizing the occupied drop lists across the system.
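A hedged sketch of this implicit selection rule follows; representing host addresses as integers is an assumption made for brevity. Because exclusive-or is symmetric, both directions of a host pair's traffic select the same tree, which is the property noted above.

```python
def spanning_tree_number(source_addr: int, destination_addr: int, n_trees: int) -> int:
    """Implicit tree selection: (source XOR destination) modulo N."""
    return (source_addr ^ destination_addr) % n_trees

# Traffic in either direction between the same two hosts lands on the same tree.
assert spanning_tree_number(0x1A, 0x2B, 2) == spanning_tree_number(0x2B, 0x1A, 2)
```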
The graph in the center of FIG. 8 is a reconstruction of both the solid and dashed line portions of the graph of FIG. 6 involving nodes 303, 306 and 307 as well as edges 364, 374 and 375. The graph of FIG. 8 is cyclic. To arrive at this new configuration, it may be presumed, for example, that system 300 of FIG. 2 was modified to include a gateway pair (edge 375) interconnecting nodes 306 and 307. Edge 375 may provide for increased message traffic from node 306 to 307 that edges 364 and 374 can no longer handle without overload.
Since the graph of FIG. 8 is cyclic, a set of spanning trees is selected. Two spanning trees that cover this graph are depicted in the left and right diagrams of FIG. 8 as graphs 401 and 402, respectively. If the notation M(S;D;T) is used, where S is the source node, D is the set of destination nodes and T is the spanning tree number, then one possible message routing assignment algorithm for nodes 303, 306 and 307 is as follows:

M(303; 306,307,301,...; 2), M(306; 303,307,301,...; 1), and M(307; 303,306,301,...; 2).

This particular assignment utilizes edge 375 only for messages originating from node 306, presumably for load-balance purposes. If edge 375 becomes disabled, the devices associated with node 306 could be notified to change the assignment to M(306; 303,307,301,...; 2), thereby maintaining communication within the system. The plurality of spanning trees in a cyclic network provides redundancy between certain nodes, although there may be some loss in performance during outages of corresponding gateways.
In general, gateways that appear in different spanning trees must maintain a drop list for each spanning tree. For instance, from FIG. 8, if D(k)ij represents the drop list for the kth spanning tree, then packets transmitted from host C to host E (PCE) on both trees yield, upon start-up for edge 374:

D(1)37 = ∅, D(2)37 = {C}, D(1)73 = {C}, and D(2)73 = ∅.
The necessity of maintaining multiple drop lists may be mitigated on an arbitrary graph by selecting most of the spanning trees in a set so that no edge is contained in more than one spanning tree.
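As an illustration only, the per-tree drop lists for the gateway pair on edge 374 could be held as a small table keyed by tree number; the dictionary layout below is an assumption, while the values simply restate the start-up condition just quoted.

```python
# Per-tree drop lists D(k)ij for the gateway pair on edge 374 after PCE has
# been sent on both trees (layout is illustrative, values follow the text).
drop_lists = {
    "G37": {1: set(),  2: {"C"}},   # D(1)37 = empty, D(2)37 = {C}
    "G73": {1: {"C"},  2: set()},   # D(1)73 = {C},   D(2)73 = empty
}
print("C" in drop_lists["G37"][2])  # True: on tree 2, G37 blocks packets destined for C
```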
Even when a system is acyclic, it is oftentimes necessary to utilize, in effect, multiple spanning trees so as to provide sufficient communication capacity.
Another aspect of the present invention encompasses this situation. Illustrative of this case is a tree-shaped system, such as a public or private telephone network hierarchy, that covers a large geographic area. These systems tend to bottleneck at or near the root. Instead of using disjoint spanning trees, wherein no edge is contained in more than one spanning tree, capacity considerations require essentially identical spanning trees to be overlaid on the graph. This is demonstrated with reference to the graph of FIG. 9, which is FIG. 6 redrawn to show two spanning trees (one is solid, the other is dashed) for a three-level hierarchy.
In this case, only one gateway pair implements edge 343 in each of the two spanning trees. However, edge 322 is implemented with two gateway pairs, one for each spanning tree. Similarly, edges 353, 364 and 374 deploy one gateway pair for both trees, whereas edge 332 comprises two gateway pairs. Because of this strategic arrangement, the drop list for edge 343 (also edges 353, 364 and 374) is the same for each spanning tree, so only one list must be maintained. Similarly, the drop list of edge 322 (edge 332 also) is the same, but this is less significant since two separate gateway pairs are utilized.
The network in FIG. 9 replicates the gateways between levels 1 and 2 (edges 322 and 332) to reduce congestion. However, congestion may also occur in the nodes. FIG. 10 shows a portion of the network of FIG. 9 with node 301 replicated (node 308) to reduce the traffic passing through node 301. This network will work properly as long as no hosts are connected to nodes 301 or 308. If hosts are connected to nodes 301 and 308, gateway pairs must be added at edges 384 and 394 to provide complete connectivity.
The strategy of adding parallel edges and nodes throughout a system has no limitation on the number of levels that are replicated, or on the number of parallel edges and nodes that are added. Because of this property, networks with arbitrarily large throughput can be built.
These networks have the desirable property that unreplicated edges can be implemented with only a single drop filter, regardless of the number of spanning trees used.
FIG. 11 is a flow diagram illustrative of the steps for propagating each packet throughout a system employing spanning trees. Block 401 indicates that a set of spanning trees for the system is preselected to satisfy capacity and reliability considerations and then each tree is assigned a unique identifier. As is indicated by block 411, for each packet originated and then propagated over its associated source network by a device, a preselected spanning tree identifier is embedded in the packet. Each gateway that receives the propagated packet determines the identifier embedded in the packet as well as the source and destination devices originating and receiving the packet; this is depicted by block 421. As shown by block 422, the source address of the packet is inserted in the drop list for the particular spanning tree identifier if the address is not already present. Also, as indicated by decision block 423, if this gateway is not processing the particular spanning tree found in the packet, the packet is dropped. In addition, as shown by decision block 425, if the destination of the packet is found in the drop list for the spanning tree, then the packet is also dropped; otherwise the packet is broadcast by the gateway, as depicted by block 426. At the destination device, the packet is detected and an acknowledgement packet is generally returned to the source device, as indicated by block 431. The return packet may not necessarily traverse the same spanning tree as the original packet, but for efficiency reasons, it should traverse the same tree whenever feasible.
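The per-packet procedure of blocks 421 through 426 might be sketched as follows. The class, its fields and the packet layout are assumptions made for illustration; the comments map each step to the block numbers used above, and a drop list is kept per spanning-tree identifier as the description requires.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    src: str
    dst: str
    tree_id: int                    # spanning-tree identifier carried by the packet

@dataclass
class TreeGateway:
    handled_trees: set                                 # trees this gateway belongs to
    drop_lists: dict = field(default_factory=dict)     # tree_id -> set of source hosts

    def process(self, pkt: Packet) -> bool:
        """Return True if the packet is rebroadcast on the outgoing network."""
        drops = self.drop_lists.setdefault(pkt.tree_id, set())
        drops.add(pkt.src)                             # block 422: record the source
        if pkt.tree_id not in self.handled_trees:      # block 423: tree not handled here
            return False
        if pkt.dst in drops:                           # block 425: destination is behind us
            return False
        return True                                    # block 426: broadcast onward

g = TreeGateway(handled_trees={1, 2})
print(g.process(Packet("E", "C", tree_id=1)))   # True: forwarded on tree 1
print(g.process(Packet("C", "E", tree_id=1)))   # False: E already in the tree-1 drop list
print(g.process(Packet("E", "C", tree_id=3)))   # False: this gateway ignores tree 3
```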
In any system, but particularly a large system, when a new host is placed into service, messages to the new host from any LAN may flood the system until the system learns the location of the new host. To illustrate this point, which is yet another aspect of the present invention, the right-hand portion of FIG. 2, namely networks 303, 306 and 307 (N3, N6 and N7), gateways 337 and 373 (G37 and G73) and hosts 384 (D) and 385 (E), is considered. It is supposed that host 384 is placed into service and there is a packet PED traversing network 307.
Since gateway G73 has not learned of host 384 yet, that is, the drop filter does not have host D in its list, normally this packet would flood the system. However, if gateway G73 delays repeating packet PED for some time period (TD) that is greater than the average acknowledge response time of host 384, then gateway G73 will add host 384 to its drop list (D73 = {E,D,...}). Upon reconsideration of packet PED by gateway G73, retransmission to network 303 (N3), as well as the propagation of this packet throughout the system, is avoided. This is a particularly important consideration if host 384 is generalized so as to describe any communications device, such as a telephone set or data set, and the networks 306, 307,... are local switching networks.
This delay-before-forwarding protocol may also be combined with the spanning tree protocol to yield still another aspect of the present invention. With reference to FIG. 11, a store-and-delay block followed by another decision block would be interposed between blocks 425 and 426. After a predetermined delay in the first new block, the new decision block would test for the same condition as does block 425. If the destination address is now in the drop list, the packet is dropped; otherwise, the packet is processed by block 426.
In yet another aspect of the present invention, to further mitigate network flooding and reduce delays, the gateway pairs can be arranged to communicate their drop lists to each other. Then, if host D is contained in the drop list for gateway G37, indicating that host D is not on network 307 (D37 = {...,D,...}), PED is repeated immediately. The complete algorithm for gateway G73 for packet PED, combining both communication and delay, becomes:

1. If D is contained in D73, packet PED is not repeated;

2. If D is contained in D37, packet PED → N3;

3. If D is not contained in D37 and D is not contained in D73, packet PED is delayed or stored in G73 for a time TD, and then resubmitted.
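A minimal sketch of this three-rule procedure for G73, assuming each gateway of the pair can read its partner's drop list, is given below. The function name, its arguments and the use of a blocking sleep to stand in for the storage interval TD are all illustrative assumptions.

```python
import time

def g73_decision(dst, d73, d37, delay_seconds=0.0):
    """Decide whether G73 repeats onto N3 a packet destined for `dst`."""
    if dst in d73:                 # rule 1: dst already known to be on the N7 side
        return "drop"
    if dst in d37:                 # rule 2: partner G37 has seen dst on the N3 side
        return "forward"
    # rule 3: neither gateway has seen dst; hold the packet for TD, during which a
    # reply from dst on N7 would have placed dst in d73, then re-test.
    time.sleep(delay_seconds)
    return "drop" if dst in d73 else "forward"

d73 = {"E"}            # sources G73 has seen on N7
d37 = {"A", "B"}       # sources G37 has seen on N3 (readable by G73)
print(g73_decision("E", d73, d37))   # drop     (rule 1)
print(g73_decision("A", d73, d37))   # forward  (rule 2)
print(g73_decision("D", d73, d37))   # rule 3: delayed, then forwarded in this toy run
```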

For a gateway pair that incorporates a time delay, block 425 of FIG. 11 may be expanded to execute a test to decide if the destination device is in the drop list for the tree in the reverse propagation direction whenever the test is not successful in the forward direction. If this additional test is successful, the packet is propagated on the associated spanning tree. If the test is not successful, the packet is stored for a preselected interval and is then resubmitted to block 425 for final disposition.
It is to be further understood that the methodologies and associated arrangements described are not limited to the specific forms disclosed, but may assume other embodiments limited only by the scope of the appended claims.

Claims (6)

WHAT IS CLAIMED IS:
1. A method for transmitting packets over a system comprising a plurality of networks interconnected by gateways, each of said packets having a sending address and a destination address, said method characterized by the steps of configuring each of said gateways to implement the routing algorithm of forwarding any of said packets received by said each gateway except for said any of said packets destined for one of said networks having a sending address appearing in any previously counted ones of said packets, said sending appearances being stored in a drop list within said each gateway, selecting a set of spanning trees for said system, conveying by each of said packets an identifier indicative of one of said trees, and, for said each packet, extracting said sending address and said destination address from said each packet within each of said gateways, if said sending address is not in said drop list of said each gateway, adding said sending address to said drop list, if said destination address is not in said drop list, storing said each packet for a predetermined time interval wherein subsequent ones of said packets are processed by said each gateway, and routing of said each packet by said gateways through said system on one of said spanning trees in correspondence with said identifier whenever said destination address is not in said drop list after said time interval; otherwise, no longer storing said each packet within said each gateway.
2. The method as recited by claim 1 further comprising the step of returning an acknowledgement packet over said one of said trees.
3. A method for propagating packets by a gateway, each of said packets comprising a source address and a destination address, said method characterized by the steps of configuring said gateway to implement the routing algorithm of forwarding any of said packets received by said gateway except for any of said packets having a destination address appearing as the source address in any previously routed ones of said packets, said source appearance being stored in a drop list; and, for each packet, extracting said source address and said destination address from said packet within said gateway;
if said source address is not in said drop list, adding said source address to said drop list, if said destination address is not in said drop list, storing said packet for a predetermined time interval wherein subsequent ones of said packets are processed by said gateway;
and routing said packet if said destination address is not in said drop list after said time interval; otherwise, no longer storing said packet within said gateway.
4. A method for processing a packet by a first gateway from a gateway pair wherein said first gateway has a first drop list readable by the second gateway and said second gateway has a second drop list readable by said first gateway, said packet comprising a source address and a destination address, each said drop list comprising source addresses of previous packets processed by the corresponding gateway, and said method characterized by the steps of extracting said source address and said destination address from said packet within said first gateway, if said source address is not in said first drop list, inserting said source address in said first drop list, reading said second drop list and, if said destination address is in said second drop list, forwarding said packet by said first gateway, if said source address is not originally in said first drop list and said destination address is not in said second drop list, storing said packet by said first gateway for a predetermined time interval wherein subsequent packets are processed by said gateway pair, and forwarding said packet if said destination address is not in said first drop list after said interval; otherwise, no longer storing said packet by said first gateway.
5. A method for transmitting a packet over a system comprising a plurality of networks interconnected by gateways, said packet originated by a source device connected to one of said networks and destined for a destination device connected to one of said networks, said packet including a source address and a destination address, and said method comprising the steps of defining an undirected graph representative of the system wherein said networks map onto graph nodes and said gateways map onto graph paths, defining a spanning tree on said graph such that every pair of said nodes is connected by only one of said paths and selecting a plurality of spanning trees for said graph according to predetermined system guidelines, configuring each gateway with source address lists in correspondence to the number of trees having said each gateway comprising one of said paths, assigning, by said source device, one of said trees to broadcast said packet and associating with said packet an identifier indicative of said one of said trees, broadcasting said packet by said source device through the system on said one of said trees, for each gateway receiving said packet, (i) determining for each said packet said source address, said destination address and said packet identifier, (ii) if said receiving gateway does not process packets having said identifier, inhibiting forwarding of said packet; otherwise, inserting said source address in the corresponding one of said lists associated with said identifier, (iii) inhibiting forwarding of said packet if said destination address is in said corresponding one of said lists; otherwise, storing said packet for a prescribed time interval and then submitting said packet for processing by step (iv), and, (iv) inhibiting forwarding of said packet if said destination address is in said corresponding list;
otherwise, forwarding said packet by said receiving gateway, and acknowledging the reception of said packet by said destination device by broadcasting a return packet over said one of said trees.
6. A method for transmitting a packet over a system comprising a plurality of networks interconnected by gateways, said packet originated by a source device connected to one of said networks and destined for a destination device connected to one of said networks, said packet including a source address and a destination address, and said method comprising the steps of defining an undirected graph representative of the system wherein said networks comprise graph nodes and said gateways comprise graph paths, defining a spanning tree on said graph such that every pair of said nodes is connected by only one of said paths and selecting a plurality of spanning trees for said graph according to predetermined system guidelines, configuring each gateway with source address lists in correspondence to the number of trees having said each gateway comprising one of said paths, wherein said lists reduce to a common list whenever said selection of spanning trees results in identical ones of said lists for said each gateway, assigning, by said source device, one of said trees to broadcast said packet and associating with said packet an identifier indicative of said one of said trees, broadcasting said packet by said source device through the system on said one of said trees, and for each gateway receiving said packet, (i) determining for each said packet said source address, said destination address and said packet identifier, (ii) if said receiving gateway does not process packets having said identifier, inhibiting forwarding of said packet; otherwise, inserting said source address in the corresponding one of said lists associated with said identifier, and (iii) inhibiting forwarding of said packet if said destination address is in said corresponding list;
otherwise, forwarding said packet by said receiving gateway.

CA000504239A 1985-08-26 1986-03-17 Interconnection of broadcast networks Expired CA1254984A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US06/769,555 US4706080A (en) 1985-08-26 1985-08-26 Interconnection of broadcast networks
US769,555 1991-10-01

Publications (1)

Publication Number Publication Date
CA1254984A true CA1254984A (en) 1989-05-30

Family

ID=25085802

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000504239A Expired CA1254984A (en) 1985-08-26 1986-03-17 Interconnection of broadcast networks

Country Status (6)

Country Link
US (1) US4706080A (en)
EP (1) EP0233898B1 (en)
JP (1) JPH0652899B2 (en)
CA (1) CA1254984A (en)
DE (1) DE3686254T2 (en)
WO (1) WO1987001543A1 (en)

Families Citing this family (125)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355371A (en) * 1982-06-18 1994-10-11 International Business Machines Corp. Multicast communication tree creation and control method and apparatus
JPH0648811B2 (en) * 1986-04-04 1994-06-22 株式会社日立製作所 Complex network data communication system
US5189414A (en) * 1986-09-30 1993-02-23 Kabushiki Kaisha Toshiba Network system for simultaneously coupling pairs of nodes
US4941089A (en) * 1986-12-12 1990-07-10 Datapoint Corporation Input/output network for computer system
US5133053A (en) * 1987-02-13 1992-07-21 International Business Machines Corporation Interprocess communication queue location transparency
US4835673A (en) * 1987-04-27 1989-05-30 Ncr Corporation Method and apparatus for sharing resources among multiple processing systems
JPH0787461B2 (en) * 1987-06-19 1995-09-20 株式会社東芝 Local Area Network System
US4969147A (en) * 1987-11-10 1990-11-06 Echelon Systems Corporation Network and intelligent cell for providing sensing, bidirectional communications and control
AU619514B2 (en) * 1987-11-10 1992-01-30 Echelon Corporation Network for providing sensing communications and control
US4955018A (en) * 1987-11-10 1990-09-04 Echelon Systems Corporation Protocol for network having plurality of intelligent cells
US4918690A (en) * 1987-11-10 1990-04-17 Echelon Systems Corp. Network and intelligent cell for providing sensing, bidirectional communications and control
DE3838945A1 (en) * 1987-11-18 1989-06-08 Hitachi Ltd NETWORK SYSTEM WITH LOCAL NETWORKS AND WITH A HIERARCHICAL CHOICE OF PATH
US5055999A (en) * 1987-12-22 1991-10-08 Kendall Square Research Corporation Multiprocessor digital data processing system
US5761413A (en) * 1987-12-22 1998-06-02 Sun Microsystems, Inc. Fault containment system for multiprocessor with shared memory
US5226039A (en) * 1987-12-22 1993-07-06 Kendall Square Research Corporation Packet routing switch
US5282201A (en) * 1987-12-22 1994-01-25 Kendall Square Research Corporation Dynamic packet routing network
US5341483A (en) * 1987-12-22 1994-08-23 Kendall Square Research Corporation Dynamic hierarchial associative memory
US5822578A (en) * 1987-12-22 1998-10-13 Sun Microsystems, Inc. System for inserting instructions into processor instruction stream in order to perform interrupt processing
US5251308A (en) * 1987-12-22 1993-10-05 Kendall Square Research Corporation Shared memory multiprocessor with data hiding and post-store
US4811337A (en) * 1988-01-15 1989-03-07 Vitalink Communications Corporation Distributed load sharing
US4868558A (en) * 1988-02-22 1989-09-19 Telefind Corp. Paging system lata switch
US4868860A (en) * 1988-02-22 1989-09-19 Telefind Corp. Paging system for entering pages by local telephone call
US4870410A (en) * 1988-02-22 1989-09-26 Telefind Corp. Paging system local switch
US4868562A (en) * 1988-02-22 1989-09-19 Telefind Corp. Paging system
US5121115A (en) * 1988-02-22 1992-06-09 Telefind Corporation Method of transmitting information using programmed channels
US4866431A (en) * 1988-02-22 1989-09-12 Telefind Corp. Paging system hub switch
US4876538A (en) * 1988-02-22 1989-10-24 Telefind Corp. Paging system sublocal switch
JPH01255340A (en) * 1988-04-05 1989-10-12 Hitachi Ltd Multinetwork system
WO1989012861A1 (en) * 1988-06-20 1989-12-28 United States Department Of Energy Interconnection networks
NL8802132A (en) * 1988-08-30 1990-03-16 Philips Nv LOCAL COMMUNICATION BUS SYSTEM, STATION FOR USE IN SUCH A SYSTEM, AND GATE CONNECTING ELEMENT FOR USE IN SUCH A SYSTEM, AND DEVICE CONTAINING SUCH A GATE CONNECTING ELEMENT.
US4912656A (en) * 1988-09-26 1990-03-27 Harris Corporation Adaptive link assignment for a dynamic communication network
US5115495A (en) * 1988-10-18 1992-05-19 The Mitre Corporation Communications network system using full-juncture and partial-juncture station status information for alternate-path distance-vector routing
US4922503A (en) * 1988-10-28 1990-05-01 Infotron Systems Corporation Local area network bridge
US5383179A (en) * 1988-12-15 1995-01-17 Laboratoire Europeen De Recherches Electroniques Avancees Message routing method in a system having several different transmission channels
US4947390A (en) * 1989-03-22 1990-08-07 Hewlett-Packard Company Method for data transfer through a bridge to a network requiring source route information
US5175765A (en) * 1989-05-09 1992-12-29 Digital Equipment Corporation Robust data broadcast over a distributed network with malicious failures
US5455865A (en) * 1989-05-09 1995-10-03 Digital Equipment Corporation Robust packet routing over a distributed network containing malicious failures
US5086428A (en) * 1989-06-09 1992-02-04 Digital Equipment Corporation Reliable broadcast of information in a wide area network
US5860136A (en) * 1989-06-16 1999-01-12 Fenner; Peter R. Method and apparatus for use of associated memory with large key spaces
US5138615A (en) * 1989-06-22 1992-08-11 Digital Equipment Corporation Reconfiguration system and method for high-speed mesh connected local area network
US5088091A (en) * 1989-06-22 1992-02-11 Digital Equipment Corporation High-speed mesh connected local area network
JPH0777375B2 (en) * 1989-09-29 1995-08-16 日本電気株式会社 Bus connection method
DE69031538T2 (en) * 1990-02-26 1998-05-07 Digital Equipment Corp System and method for collecting software application events
US5150360A (en) * 1990-03-07 1992-09-22 Digital Equipment Corporation Utilization of redundant links in bridged networks
US5128926A (en) * 1990-03-21 1992-07-07 Digital Equipment Corporation Updating link state information in networks
US5153595A (en) * 1990-03-26 1992-10-06 Geophysical Survey Systems, Inc. Range information from signal distortions
US5309437A (en) * 1990-06-29 1994-05-03 Digital Equipment Corporation Bridge-like internet protocol router
US6847611B1 (en) 1990-12-10 2005-01-25 At&T Corp. Traffic management for frame relay switched data service
US5404461A (en) * 1991-03-29 1995-04-04 International Business Machines Corp. Broadcast/switching apparatus for executing broadcast/multi-cast transfers over unbuffered asynchronous switching networks
US5250943A (en) * 1991-03-29 1993-10-05 International Business Machines Corporation GVT-NET--A Global Virtual Time Calculation Apparatus for Multi-Stage Networks
US5365228A (en) * 1991-03-29 1994-11-15 International Business Machines Corporation SYNC-NET- a barrier synchronization apparatus for multi-stage networks
US5424724A (en) * 1991-03-27 1995-06-13 International Business Machines Corporation Method and apparatus for enhanced electronic mail distribution
US5341372A (en) * 1991-04-10 1994-08-23 California Institute Of Technology Protocol for multiple node network
CA2078310A1 (en) * 1991-09-20 1993-03-21 Mark A. Kaufman Digital processor with distributed memory system
CA2078312A1 (en) 1991-09-20 1993-03-21 Mark A. Kaufman Digital data processor with improved paging
US5258999A (en) * 1991-10-03 1993-11-02 Motorola, Inc. Circuit and method for receiving and transmitting control and status information
DE69127198T2 (en) * 1991-10-14 1998-02-12 Ibm Routing of messages in a network consisting of local network segments connected by bridges
US5502726A (en) * 1992-01-31 1996-03-26 Nellcor Incorporated Serial layered medical network
US5398242A (en) * 1992-04-07 1995-03-14 Digital Equipment Corporation Automatically configuring LAN numbers
US5400333A (en) * 1992-04-07 1995-03-21 Digital Equipment Corporation Detecting LAN number misconfiguration
US5327424A (en) * 1992-04-07 1994-07-05 Digital Equipment Corporation Automatically configuring parallel bridge numbers
US5323394A (en) * 1992-04-07 1994-06-21 Digital Equipment Corporation Selecting optimal routes in source routing bridging without exponential flooding of explorer packets
US5844902A (en) * 1992-04-07 1998-12-01 Cabletron Systems, Inc. Assigning multiple parallel bridge numbers to bridges
JPH0653965A (en) * 1992-07-29 1994-02-25 Sony Corp Network system
JP3281043B2 (en) * 1992-08-06 2002-05-13 マツダ株式会社 Multiplex transmission equipment
US5630173A (en) * 1992-12-21 1997-05-13 Apple Computer, Inc. Methods and apparatus for bus access arbitration of nodes organized into acyclic directed graph by cyclic token passing and alternatively propagating request to root node and grant signal to the child node
DE69331705T2 (en) * 1992-12-21 2002-12-19 Apple Computer METHOD AND DEVICE FOR CONVERTING ANY TOPOLOGY FROM A NODE COLLECTION IN AN ACYCLIC DIRECTED GRAPH
US5862335A (en) * 1993-04-01 1999-01-19 Intel Corp. Method and apparatus for monitoring file transfers and logical connections in a computer network
US5434850A (en) 1993-06-17 1995-07-18 Skydata Corporation Frame relay protocol-based multiplex switching scheme for satellite
US6771617B1 (en) 1993-06-17 2004-08-03 Gilat Satellite Networks, Ltd. Frame relay protocol-based multiplex switching scheme for satellite mesh network
AU677393B2 (en) * 1993-07-08 1997-04-24 E-Talk Corporation Method and system for transferring calls and call-related data between a plurality of call centres
US5937051A (en) * 1993-07-08 1999-08-10 Teknekron Infoswitch Corporation Method and system for transferring calls and call-related data between a plurality of call centers
US5530808A (en) * 1993-12-10 1996-06-25 International Business Machines Corporation System for transferring ownership of transmitted data packet from source node to destination node upon reception of echo packet at source node from destination node
JP3542159B2 (en) * 1994-03-17 2004-07-14 株式会社日立製作所 Bridge with multiprocessor structure
US5555374A (en) * 1994-08-26 1996-09-10 Systech Computer Corporation System and method for coupling a plurality of peripheral devices to a host computer through a host computer parallel port
US8799461B2 (en) 1994-11-29 2014-08-05 Apple Inc. System for collecting, analyzing, and transmitting information relevant to transportation networks
US6029195A (en) * 1994-11-29 2000-02-22 Herz; Frederick S. M. System for customized electronic identification of desirable objects
US9832610B2 (en) 1994-11-29 2017-11-28 Apple Inc. System for collecting, analyzing, and transmitting information relevant to transportation networks
US6460036B1 (en) 1994-11-29 2002-10-01 Pinpoint Incorporated System and method for providing customized electronic newspapers and target advertisements
US5758257A (en) 1994-11-29 1998-05-26 Herz; Frederick System and method for scheduling broadcast of and access to video programs and other data using customer profiles
US20020178051A1 (en) 1995-07-25 2002-11-28 Thomas G. Scavone Interactive marketing network and process using electronic certificates
US5915010A (en) 1996-06-10 1999-06-22 Teknekron Infoswitch System, method and user interface for data announced call transfer
US6567410B1 (en) 1996-08-08 2003-05-20 Enterasys Networks, Inc. Assigning multiple parallel bridge numbers to bridges having three or more ports
US6029162A (en) * 1997-01-31 2000-02-22 Lucent Technologies, Inc. Graph path derivation using fourth generation structured query language
US6081524A (en) 1997-07-03 2000-06-27 At&T Corp. Frame relay switched data service
US6154462A (en) 1997-08-21 2000-11-28 Adc Telecommunications, Inc. Circuits and methods for a ring network
US6331985B1 (en) * 1997-08-21 2001-12-18 Adc Telecommunications, Inc. Telecommunication network with variable address learning, switching and routing
NO326260B1 (en) 1997-09-29 2008-10-27 Ericsson Telefon Ab L M Method of routing calls from a terminal in a first telecommunications network to a terminal in a second telecommunications network
US6049824A (en) * 1997-11-21 2000-04-11 Adc Telecommunications, Inc. System and method for modifying an information signal in a telecommunications system
JP3665460B2 (en) * 1997-12-05 2005-06-29 富士通株式会社 Route selection system, method, and recording medium by response time tuning of distributed autonomous cooperation type
US6202114B1 (en) 1997-12-31 2001-03-13 Cisco Technology, Inc. Spanning tree with fast link-failure convergence
CA2287304C (en) 1998-03-03 2003-10-21 Itron, Inc. Method and system for reading intelligent utility meters
US6442171B1 (en) * 1998-05-26 2002-08-27 Qualcomm Incorporated Logical topology and address assignment for interconnected digital networks
US6539546B1 (en) 1998-08-21 2003-03-25 Adc Telecommunications, Inc. Transport of digitized signals over a ring network
US6389030B1 (en) 1998-08-21 2002-05-14 Adc Telecommunications, Inc. Internet access over a ring network
US6570880B1 (en) 1998-08-21 2003-05-27 Adc Telecommunications, Inc. Control data over a ring network
US6711163B1 (en) 1999-03-05 2004-03-23 Alcatel Data communication system with distributed multicasting
US7315510B1 (en) 1999-10-21 2008-01-01 Tellabs Operations, Inc. Method and apparatus for detecting MPLS network failures
US7298693B1 (en) 1999-10-21 2007-11-20 Tellabs Operations, Inc. Reverse notification tree for data networks
AU1340201A (en) * 1999-10-21 2001-04-30 Tellabs Operations, Inc. Reverse notification tree for data networks
US7804767B1 (en) 1999-10-25 2010-09-28 Tellabs Operations, Inc. Protection/restoration of MPLS networks
US7630986B1 (en) 1999-10-27 2009-12-08 Pinpoint, Incorporated Secure data interchange
US6937576B1 (en) * 2000-10-17 2005-08-30 Cisco Technology, Inc. Multiple instance spanning tree protocol
US20030051007A1 (en) * 2001-07-14 2003-03-13 Zimmel Sheri L. Apparatus and method for optimizing telecommunication network design using weighted span classification for low degree of separation demands
US20030035379A1 (en) * 2001-07-14 2003-02-20 Zimmel Sheri L. Apparatus and method for optimizing telecommunication network design using weighted span classification
US20030023706A1 (en) * 2001-07-14 2003-01-30 Zimmel Sheri L. Apparatus and method for optimizing telecommunications network design using weighted span classification and rerouting rings that fail to pass a cost therehold
US20030046378A1 (en) * 2001-07-14 2003-03-06 Zimmel Sheri L. Apparatus and method for existing network configuration
US20030223379A1 (en) * 2002-05-28 2003-12-04 Xuguang Yang Method and system for inter-domain loop protection using a hierarchy of loop resolving protocols
KR100876780B1 (en) * 2002-06-05 2009-01-07 삼성전자주식회사 Method and apparatus for sharing a single internet protocol address, without converting network address in an internet access gateway for local network
US8867335B2 (en) * 2002-11-12 2014-10-21 Paradyne Corporation System and method for fault isolation in a packet switching network
GB0315745D0 (en) * 2003-07-04 2003-08-13 Novartis Ag Organic compounds
JP4370999B2 (en) * 2004-07-30 2009-11-25 日本電気株式会社 Network system, node, node control program, and network control method
US7889681B2 (en) * 2005-03-03 2011-02-15 Cisco Technology, Inc. Methods and devices for improving the multiple spanning tree protocol
FR2882939B1 (en) * 2005-03-11 2007-06-08 Centre Nat Rech Scient FLUIDIC SEPARATION DEVICE
EP1915585A2 (en) * 2005-07-29 2008-04-30 Automotive Systems Laboratory Inc. Magnetic crash sensor
JP4334534B2 (en) * 2005-11-29 2009-09-30 株式会社東芝 Bridge device and bridge system
US8077709B2 (en) 2007-09-19 2011-12-13 Cisco Technology, Inc. Redundancy at a virtual provider edge node that faces a tunneling protocol core network for virtual private local area network (LAN) service (VPLS)
US20090307503A1 (en) * 2008-06-10 2009-12-10 Condel International Technologies Inc. Digital content management systems and methods
JP5370017B2 (en) * 2009-06-15 2013-12-18 富士通株式会社 Relay system and relay method
WO2011044174A1 (en) * 2009-10-05 2011-04-14 Callspace, Inc Contextualized telephony message management
US8775245B2 (en) 2010-02-11 2014-07-08 News America Marketing Properties, Llc Secure coupon distribution
US8364700B2 (en) * 2010-05-21 2013-01-29 Vonage Network Llc Method and apparatus for rapid data access and distribution using structured identifiers
US8634419B2 (en) 2010-12-01 2014-01-21 Violin Memory Inc. Reliable and fast method and system to broadcast data
US8650285B1 (en) 2011-03-22 2014-02-11 Cisco Technology, Inc. Prevention of looping and duplicate frame delivery in a network environment
US20190306129A1 (en) * 2018-03-27 2019-10-03 Lenovo (Singapore) Pte. Ltd. Secure communication in a nondeterministic network

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CH584488A5 (en) * 1975-05-05 1977-01-31 Ibm
US4384322A (en) * 1978-10-31 1983-05-17 Honeywell Information Systems Inc. Asynchronous multi-communication bus sequence
US4384327A (en) * 1978-10-31 1983-05-17 Honeywell Information Systems Inc. Intersystem cycle control logic
US4231086A (en) * 1978-10-31 1980-10-28 Honeywell Information Systems, Inc. Multiple CPU control system
US4433376A (en) * 1978-10-31 1984-02-21 Honeywell Information Systems Inc. Intersystem translation logic system
US4234919A (en) * 1978-10-31 1980-11-18 Honeywell Information Systems Inc. Intersystem communication link
AT361726B (en) * 1979-02-19 1981-03-25 Philips Nv DATA PROCESSING SYSTEM WITH AT LEAST TWO MICROCOMPUTERS
US4307446A (en) * 1979-05-02 1981-12-22 Burroughs Corporation Digital communication networks employing speed independent switches
US4347498A (en) * 1979-11-21 1982-08-31 International Business Machines Corporation Method and means for demand accessing and broadcast transmission among ports in a distributed star network
FR2476349A1 (en) * 1980-02-15 1981-08-21 Philips Ind Commerciale DISTRIBUTED DATA PROCESSING SYSTEM
US4412285A (en) * 1981-04-01 1983-10-25 Teradata Corporation Multiprocessor intercommunication system and method
US4449182A (en) * 1981-10-05 1984-05-15 Digital Equipment Corporation Interface between a pair of processors, such as host and peripheral-controlling processors in data processing systems
US4597078A (en) * 1983-10-19 1986-06-24 Digital Equipment Corporation Bridge circuit for interconnecting networks

Also Published As

Publication number Publication date
EP0233898B1 (en) 1992-07-29
DE3686254T2 (en) 1993-03-11
JPS62502303A (en) 1987-09-03
DE3686254D1 (en) 1992-09-03
WO1987001543A1 (en) 1987-03-12
JPH0652899B2 (en) 1994-07-06
US4706080A (en) 1987-11-10
EP0233898A1 (en) 1987-09-02

Similar Documents

Publication Publication Date Title
CA1254984A (en) Interconnection of broadcast networks
US6717950B2 (en) Method and apparatus for priority-based load balancing for use in an extended local area network
US6963926B1 (en) Progressive routing in a communications network
Oran OSI IS-IS intra-domain routing protocol
Cheng et al. A loop-free extended Bellman-Ford routing protocol without bouncing effect
US6377543B1 (en) Path restoration of networks
US7096251B2 (en) Calculation of layered routes in a distributed manner
Maxemchuk Routing in the Manhattan street network
CN100559770C (en) Accelerate the method and apparatus of border gateway protocol convergence
KR101593349B1 (en) An ip fast reroute scheme offering full protection
US5265092A (en) Synchronization mechanism for link state packet routing
US5142531A (en) Data communications network
US7035227B2 (en) On-demand loop-free multipath routing (ROAM)
US20020181402A1 (en) Adaptive path discovery process for routing data packets in a multinode network
US7145878B2 (en) Avoiding overlapping segments in transparent LAN services on ring-based networks
Perlman A comparison between two routing protocols: OSPF and IS-IS
Murthy et al. Loop-free internet routing using hierarchical routing trees
US5245607A (en) Data network message broadcast arrangement
AU5939098A (en) MA alternate routeing for ISO 10589
Vutukury et al. A distributed algorithm for multipath computation
Khayou et al. A hybrid distance vector link state algorithm: distributed sequence number
Kartalopoulos A Manhattan fiber distributed data interface architecture
Song et al. Dynamic rerouting for ATM virtual path restoration
CA2212933C (en) Path restoration of networks
Schwartz et al. Routing Protocols

Legal Events

Date Code Title Description
MKEX Expiry