CA2192793A1 - A method and device for partitioning physical network resources - Google Patents
A method and device for partitioning physical network resources
- Publication number
- CA2192793A1
- Authority
- CA
- Canada
- Prior art keywords
- logical
- route
- physical
- network
- links
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
- 238000000034 method Methods 0.000 title claims abstract description 37
- 238000000638 solvent extraction Methods 0.000 title claims abstract description 22
- 230000000903 blocking effect Effects 0.000 claims abstract 19
- 230000005540 biological transmission Effects 0.000 claims abstract 13
- 239000000872 buffer Substances 0.000 claims description 3
- 238000010586 diagram Methods 0.000 description 5
- 238000000926 separation method Methods 0.000 description 3
- 238000007726 management method Methods 0.000 description 2
- 238000005192 partition Methods 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 230000002457 bidirectional effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000000593 degrading effect Effects 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/32—Specific management aspects for broadband networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/142—Network analysis or design using statistical or mathematical methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J2203/00—Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
- H04J2203/0001—Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
- H04J2203/0057—Operations, administration and maintenance [OAM]
- H04J2203/0058—Network management, e.g. Intelligent nets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J2203/00—Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
- H04J2203/0001—Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
- H04J2203/0064—Admission Control
- H04J2203/0067—Resource management and allocation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J2203/00—Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
- H04J2203/0001—Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
- H04J2203/0064—Admission Control
- H04J2203/0067—Resource management and allocation
- H04J2203/0071—Monitoring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J2203/00—Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
- H04J2203/0001—Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
- H04J2203/0073—Services, e.g. multimedia, GOS, QOS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Mathematical Analysis (AREA)
- Algebra (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Mathematical Optimization (AREA)
- Mathematical Physics (AREA)
- Probability & Statistics with Applications (AREA)
- Pure & Applied Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Small-Scale Networks (AREA)
- Telephonic Communication Services (AREA)
Abstract
A method for partitioning physical transmission resources of a physical network is provided. At first, a set of logical networks is established on top of the physical network. The logical networks comprise nodes and logical links extending between the nodes so as to form the logical networks. The logical links are used by routes. Next, the capacities of the logical links of the logical networks are determined such that the route blocking probability on each individual route in each one of the logical networks is less than or equal to a maximum allowed blocking probability, given for each individual route. This is realized by distributing, for each individual route, the route blocking evenly among the logical links used by the individual route. Finally, the physical transmission resources are allocated among the logical links of the logical networks according to the determination. In addition, a device for partitioning physical transmission resources of a physical network is also disclosed.
Description
WO 95/34973
A METHOD AND DEVICE FOR PARTITIONING
PHYSICAL NETWORK RESOURCES
TECHNICAL FIELD OF THE INVENTION
The present invention relates to telecommunication networks and in particular to the partitioning of physical network resources.
BACKGROUND ART
A main characteristic of a modern telecommunication network is its ability to provide different services. One efficient way of providing said services is to logically separate the resources of a physical network - resource separation (see Fig. 1). On top of a physical network PN there is established a number of logical networks LN, also referred to as logical or virtual subnetworks, each of which comprises nodes N and logical links LL interconnecting the nodes. Each logical network forms a logical view of parts of the physical network or of the complete physical network. In particular, a first logical network LN1 comprises one view of parts of the physical network and a second logical network LN2 comprises another view, different from that of the first logical network. The logical links of the various logical networks share the capacities of physical links present in said physical network.
A physical network comprises switches S (physical nodes) or equivalents, physical links interconnecting said switches, and various auxiliary devices. A physical link utilizes transmission equipment, such as fiber optic conductors, coaxial cables or radio links. In general, physical links are grouped into trunk groups TG which extend between said switches. There are access points to the physical network, to which access points access units such as telephone sets and computer modems are connected. Each physical link has limited transmission capacity.
Figure 2 is a simple schematic drawing explaining the relationship between physical links, logical links and also routes. A simple underlying physical network with physical switches S and trunk groups TG, i.e. physical links, interconnecting the switches is illustrated. On top of this physical network a number of logical networks are established, only one of which is shown in the drawing. The logical networks can be established by a network manager, a network operator or other organization. In our Swedish Patent Application 9403035-0, incorporated herein by reference, there is described a method of creating and configuring logical networks. The single logical network shown comprises logical nodes N1, N2, N3 corresponding to physical switches S1, S2 and S3 respectively. Further, the logical network comprises logical links LL interconnecting the logical nodes N1-N3. A physical link is logically subdivided into one or more logical links, each logical link having an individual traffic capacity referred to as logical link capacity. It is important to note that each logical link may use more than one physical link or trunk group. To each node in each logical network there is usually associated a routing table, which is used to route a connection from node to node in the particular logical network, starting from the node associated with the terminal that originates the connection and ending at the node associated with the terminal which terminates said connection. Said nodes together form an origin-destination pair. A node pair with two routes is also illustrated. One of the routes is a direct route DR while the other one is an alternative route AR. In general, the links and the routes should be interpreted as being bidirectional.
In order to avoid misconceptions the following definitions will be used: A route is a subset of logical links which belong to the same logical network, i.e. a route has to live in a single logical network. Note that it can be an arbitrary subset that is not necessarily a path in the graph theoretic sense. Nevertheless, for practical purposes, routes are typically conceived as simple paths. The conception of a route is used to define the way a connection follows between nodes in a logical network. A node pair in a logical network, the nodes of which are associated with access points, is called an origin-destination (O-D) pair. In general, all node pairs in a logical network are not O-D pairs, but instead some nodes in a logical network may be intermediate nodes to which no access points are associated. A logical link is a subset of physical links.
Information, such as voice, video and data, is transported in logical networks by means of different bearer services. Examples of bearer services are STM 64 (Synchronous Transmission Mode with standard 64 kbit/s), STM 2 Mb (Synchronous Transmission Mode with 2 Mbit/s) and ATM (Asynchronous Transfer Mode). From a service network, such as PSTN (Public Switched Telephone Network) and B-ISDN (Broadband Integrated Services Digital Network), a request is sent to a logical network that a connection should be set up in the corresponding logical network.
Although the physical network is given, it is necessary to decide how to define a set of logical networks on top of the physical network and how to distribute or partition said physical network resources among the logical networks by subdividing physical link capacities into logical link capacities associated with said logical networks. Since the logical networks share the same given physical capacities, there is a trade-off between their quality: GoS (Grade of Service) parameters, call blocking probabilities etc. can be improved in one of the logical networks only at the price of degrading the quality in other logical networks. When considering a large and complex physical telecommunication network a considerable number of logical links will exist, said logical links sharing the capacities of the physical network. It is not at all an easy task to design a method for partitioning physical network resources among logical networks which does not require substantial computational power. In accordance with the present invention there is proposed a strikingly simple and straightforward method for resource partitioning, the computational complexity of which is very small.
SUMMARY OF THE INVENTION
On top of a physical network a number of logical networks are established in which logical links, used by routes, share the same physical transmission and switching resources. There are several reasons for logically separating physical resources. Logical resource separation for offering different Grade of Service classes, virtual leased networks with guaranteed resources and peak rate allocated virtual paths are some examples of interesting features in the design, dimensioning and management of physical networks. However, it is still necessary to decide how to distribute or partition said physical network resources among the logical networks. In general, the determination of this resource partitioning requires substantial computational power.
In accordance with a main aspect of the present invention there is provided a computationally very simple method for partitioning physical network resources among logical networks.
In accordance with a first aspect of the invention there is provided a method for resource partitioning, in which a set of logical networks is established on top of a physical network comprising physical transmission and switching resources, said logical networks comprising nodes and logical links extending between the nodes so as to define the topology of said logical networks. The logical links are used by routes interconnecting the nodes of node pairs in the logical networks. Logical link capacities are determined such that the route blocking probability on each individual route in each one of the logical networks is less than or possibly equal to a maximum allowed blocking probability, given for each individual route, by distributing the route blocking evenly among the logical links used by the respective route. Finally, the physical transmission resources are allocated among the logical links of the logical networks according to the determined logical link capacities.
In accordance with a second aspect of the invention there is provided a device for partitioning physical transmission resources among logical networks.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as other features and advantages thereof, will be best understood by reference to the detailed description of the specific embodiments which follows, when read in conjunction with the accompanying drawings, wherein:
Figure 1 shows a physical network, on top of which a number of logical networks are established, and an operation and support system (OSS) which controls the operation of the overall network,
Figure 2 is a schematic drawing explaining the relationship between physical links and switches, logical links and nodes, and also routes,
Figure 3 is a schematic drawing of a B-ISDN network from the viewpoint of the Stratified Reference Model,
Figure 4 is a schematic flow diagram illustrating a method in accordance with a general inventive concept of the present invention,
Figure 5 is a flow diagram illustrating, in more detail, a method in accordance with a first preferred embodiment of the invention,
Figure 6 is a schematic flow diagram illustrating how the method in accordance with a first preferred embodiment of the present invention flexibly adapts the overall network system to changing traffic conditions, but also to facility failures and demands for new logical network topologies.

PREFERRED EMBODIMENTS OF THE INVENTION
An important tool in network management, particularly the management and dimensioning of large ATM networks, is the distribution of resources of a physical network among logical networks that share the capacity of the physical network. There are several advantages of logical resource separation:
- It has gradually been recognized in the last couple of years that it is not at all easy to integrate services with very different demands on e.g. bandwidth, grade of service or congestion control functions. In some cases it turns out to be better to support different services by offering separate logical networks, and limiting the degree of integration to only partial rather than complete sharing of physical transmission and switching resources. Network management can be simplified if service classes are arranged into groups in such a way that only those of similar properties are handled together in a logical network. For example, delay sensitive and loss sensitive service classes can possibly be managed and switched more easily if the two groups are handled separately in different logical subnetworks, rather than all mixed on a complete sharing basis. Moreover, in this way they can be safely handled on call level without going down to cell level as e.g. in priority queues. Of course, within a logical network statistical multiplexing, priority queuing and other mechanisms can still be applied among service classes that already have not too different characteristics;
- Important structures such as virtual leased networks, required by large business users, and virtual LANs are much easier to implement;
- A Virtual Path (VP), a standardized element of ATM network architecture, can be considered as a special logical network;
- The physical network operates more safely.
A physical network, e.g. a large telecommunication network, with physical resources is considered. In Fig. 1 there is illustrated a physical network PN on top of which a set of logical networks LN1, LN2, ..., LNX (assuming there are X logical networks) is established. Each logical network comprises nodes N and logical links LL interconnecting the nodes. The topology of these logical or virtual networks will in general differ from the topology of the underlying physical network.
The network system is preferably controlled by an operation and support system OSS. An operation and support system OSS usually comprises a processor system PS, terminals T and a control program module CPM with a number of control programs CP along with other auxiliary devices. The architecture of the processor system is usually that of a multiprocessor system with several processors working in parallel. It is also possible to use a hierarchical processor structure with a number of regional processors and a central processor. In addition, the switches themselves can be equipped with their own processor units in a not completely distributed system, where the control of certain functions is centralized. Alternatively, the processor system may consist of a single processor, often a large capacity processor. Moreover, a database DB, preferably an interactive database, comprising e.g. a description of the physical network, traffic information and other useful data about the telecommunication system, is connected to the OSS. Special data links, through which a network manager/operator controls the switches, connect the OSS with those switches which form part of the network system. The OSS contains e.g. functions for monitoring and controlling the physical network and the traffic.
From this operation and support system OSS the network manager establishes a number of logical networks on top of the physical network by associating different parts of the traffic with different parts of the transmission and switching resources of the physical network. This can e.g. be realized by controlling the port assignment of the switches and cross connect devices of the physical network, or by call admission control procedures.
The process of establishing logical networks means that the topology of each one of the logical networks is defined. In other words, the structure of the nodes and logical links in each logical network is determined.
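The topology definition described above can be sketched with a minimal data structure: each logical network is just a set of nodes plus a set of logical links (node pairs) laid on top of the physical switches. The network and node names below are illustrative, not taken from the patent.

```python
# Sketch: logical networks defined purely by their topology (nodes + logical
# links), as in the establishment step described in the text. Names invented.
logical_networks = {
    "LN1": {
        "nodes": {"N1", "N2", "N3"},
        "logical_links": {("N1", "N2"), ("N2", "N3")},
    },
    "LN2": {
        "nodes": {"N1", "N3"},
        "logical_links": {("N1", "N3")},
    },
}

def topology(ln):
    """Return the node and logical-link structure defining logical network ln."""
    net = logical_networks[ln]
    return sorted(net["nodes"]), sorted(net["logical_links"])

nodes, links = topology("LN1")
print(nodes)  # ['N1', 'N2', 'N3']
```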
Conveniently, traffic classes are arranged into groups in such a way that those with similar demands on bandwidth are handled together in a separate logical network. By way of example, all traffic types requiring more than a given amount of bandwidth can be integrated in one logical network, and those traffic types that require less bandwidth than this given amount can be integrated in another logical network. In other words, the two traffic groups are handled separately in different logical subnetworks. In particular, this is advantageous for an ATM network carrying a wide variety of traffic types. However, in one embodiment of the present invention, each individual traffic type is handled in a separate logical network.
Preferably, the present invention is applied in the B-ISDN (Broadband Integrated Services Digital Network) network environment. A fully developed B-ISDN network will have a very complex structure with a number of overlaid networks. One conceptual model suitable for describing overlaid networks is the Stratified Reference Model as described in "The Stratified Reference Model: An Open Architecture to B-ISDN" by T. Hadoung, B. Stavenow, J. Dejean, ISS'90, Stockholm. In Fig. 3 a schematic drawing of a B-ISDN network from the viewpoint of the Stratified Reference Model is illustrated (the protocol viewpoint to the left and the network viewpoint to the right). Accordingly, the B-ISDN will consist of the following strata: a transmission stratum based on SDH (Synchronous Digital Hierarchy) or equivalent (SONET) at the bottom, and a cross connect stratum based on either SDH or ATM (Asynchronous Transfer Mode) on top of that, which acts as an infrastructure for the ATM VP/VC stratum with switched connections. Finally, the large set of possible applications uses the cross connect stratum as an infrastructure. In one particular embodiment of the present invention it is the infrastructure network modelling the cross connect stratum in a B-ISDN overlaid network that is considered. In general, this infrastructure network is referred to as a physical network.
Of course, it is to be understood that the present invention can be applied to any physical telecommunication network.
The physical transmission resources, i.e. the transmission capacities of the physical links, have to be partitioned or distributed among the logical links of said logical networks in some way. Since ATM has similarities with both packet switched and circuit switched networks it is not a priori obvious which properties should have the greatest impact on a partitioning or dimensioning model. In the data transfer phase the similarities to packet switched networks are the largest. However, at the connection setup phase the similarities to circuit switching dominate, especially if a preventive connection control concept with small ATM switch buffers has been adopted together with the equivalent bandwidth concept. In an approach which models the call scale behaviour, it is natural to view an ATM network as a multirate circuit switched network in which the most important quality of service parameter is the connection blocking probability, i.e. the route blocking probability. In this context, there is provided a method in accordance with the present invention which designs the capacity values of the logical links of the various logical networks such that the route blocking probability on any route in any of the logical networks does not exceed a maximum allowed blocking value, given in advance for each route.
Fig. 4 shows a schematic flow diagram illustrating a method in accordance with a general inventive concept of the present invention. In accordance with the present invention a set of logical networks is established on top of a physical network comprising physical transmission and switching resources, said logical networks comprising nodes and logical links extending between the nodes so as to define the topology of said logical networks. Preferably, the logical networks are completely separated from each other. The logical links are used by routes interconnecting the nodes of node pairs in the logical networks. Logical link capacities are determined such that the route blocking probability on each individual route in each one of the logical networks is less than or equal to a maximum allowed blocking probability, given for each individual route, by distributing the actual route blocking evenly among the logical links used by the respective route. Finally, the physical transmission resources are allocated among the logical links of the logical networks according to the determined logical link capacities.
As indicated in Fig. 3 the cross connect stratum can be realized by either SDH or ATM. If the cross connect stratum is based on SDH and the infrastructure network is realizing e.g. different quality of service classes by resource separation, the partitioning can only be performed in integer portions of the STM modules of the SDH structure. On the other hand, if the cross connect is realized by ATM virtual paths then no integrality restriction exists and the partitioning can be performed in any real portions. Therefore, whether the cross connect stratum is based on SDH or ATM will have important implications for the partitioning of the physical network resources. The SDH cross connect solution gives rise to a model that is discrete with regard to the logical link capacities, while the ATM cross connect solution gives rise to a continuous model. The continuous model requires that the ATM switches support partitioning on the individual input and output ports. For example, this is realized by multiple logical buffers at the output ports. In a preferred embodiment of the invention an infrastructure network modelling the ATM cross connect stratum is considered, while in an alternative embodiment an infrastructure modelling the SDH cross connect is considered, as can be seen in Fig. 1.
At first glance it might appear that partitioning, as opposed to complete sharing, is a reduction of the full flexibility of ATM. This is however not the case if the partitioning is considered on a general level. On a conceptual level the complete sharing schemes, e.g. priority queuing, Virtual Spacing etc., tell us how to realize resource sharing on the cell level, while the partitioning approach seeks the call scale characteristics, e.g. how to assign rates to various logical links, that are then to be realized on the cell level. In this sense the complete partitioning approach complements, rather than excludes, the complete sharing approaches.
Mathematical framework and dimensioning model

Consider a fixed physical network with N nodes and K physical links, on top of which a number of logically separated logical networks are established. If the total number of logical links over all logical networks is denoted by J, and the capacity of an individual logical link j is denoted C_j, then the vector of logical link capacities over all logical networks can be written as C = (C_1, C_2, ..., C_J). These logical link capacities are not known in advance. In fact it is desired to dimension the logical links of the logical networks with respect to capacity.
The incidence of physical links and logical links is expressed by a K x J matrix S in which the j:th entry in the k:th row is equal to 1 if logical link j needs capacity on the k:th physical link; otherwise said entry is 0. Naturally, the sum of logical link capacities on the same physical link cannot exceed the capacity of the physical link. This physical constraint can be expressed as

SC ≤ C_phys,

where C is defined above, and C_phys refers to the vector of given physical link capacities. In addition it is required that C ≥ 0.
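The incidence matrix S and the constraint SC ≤ C_phys can be illustrated with a small sketch; the link counts and capacities below are invented for the example, not taken from the patent.

```python
# Toy instance: K = 2 physical links, J = 3 logical links. S[k][j] = 1 if
# logical link j needs capacity on physical link k, as defined in the text.
S = [
    [1, 1, 0],   # physical link 0 carries logical links 0 and 1
    [0, 1, 1],   # physical link 1 carries logical links 1 and 2
]
C_phys = [155.0, 155.0]    # given physical link capacities (illustrative)
C = [60.0, 80.0, 50.0]     # candidate logical link capacities

def satisfies_physical_constraint(S, C, C_phys):
    """Check C >= 0 and (S C)_k <= C_phys_k for every physical link k."""
    if any(c < 0 for c in C):
        return False
    return all(
        sum(s_kj * c_j for s_kj, c_j in zip(row, C)) <= c_k
        for row, c_k in zip(S, C_phys)
    )

print(satisfies_physical_constraint(S, C, C_phys))  # True: 140 and 130 both <= 155
```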
Assume that I traffic types are carried in the complete network. The role of these traffic types is primarily to handle different bandwidth requirements, but traffic types can be distinguished also with respect to different holding times or even priorities (trunk reservation). By convention, each route carries only a single type of traffic. This means that if several traffic types are to be carried, they are represented by parallel routes.

Let R be the total set of routes over all logical networks, that is,

R = ∪_v ∪_p ∪_i R(v,p,i)    (1)

where R(v,p,i) is the set of routes in logical network v realizing communication between node pair p regarding traffic type i. It is important to understand that a route is not associated with more than one logical network. Each logical network is assumed to operate under fixed non-alternate routing.
Let λ_r be the Poissonian call arrival rate to route r, let 1/μ_r be the average holding time of calls on route r and let v_r = λ_r/μ_r be the offered traffic to route r. Let v(v,p,i) be the aggregated offered traffic of type i to node pair p in logical network v. In a preferred embodiment the offered traffic for each route in each logical network is given, while in another preferred embodiment of the invention the above aggregated offered traffic is given for all logical networks, node pairs and traffic types. In the latter case, the load is e.g. distributed on alternate paths.
Let B_j be the blocking probability of logical link j. Further, let L(r) be the set of logical links used by route r and denote by l(r) the length of route r, i.e. the number of logical links on route r.
In addition, there is assumed for each route r in each one of the logical networks a maximum allowed route blocking probability B(r) to be given.
The dimensioning task is to design the logical link capacities C_j for all j such that the route blocking requirements are satisfied, i.e. such that the route blocking on any route r does not exceed B(r).
In accordance with a preferred embodiment of the present invention the dimensioning is performed based on equivalent link blocking (ELB). The idea is to distribute, for each individual route, the route blocking probability evenly among the logical links used by the individual route. Of course, a route may comprise a single logical link. In this case, the link blocking and the route blocking are the same.
Adopting the equivalent link blocking assumption, the probability that a call on route r is not blocked can now be expressed as (1 - Bj)^l(r). Considering the route blocking requirements, the minimum probability that a call on route r is not blocked is equal to 1 - B(r). If the route blocking requirements or constraints as defined above are to be satisfied, then the following must hold for each route r and each logical link j ∈ L(r):

1 - B(r) ≤ (1 - Bj)^l(r)   (1)

If Rj denotes the set of routes that use logical link j and if the B(r) values are different for these routes, then the lowest value of B(r), r ∈ Rj, is taken into account. In other words, the strictest requirement on route blocking is considered. Now, the following condition is obtained:

max_{r ∈ Rj} (1 - B(r)) ≤ (1 - Bj)^l(r)   (2)

This can also be expressed as:
1 - Bj ≥ max_{r ∈ Rj} (1 - B(r))^(1/l(r))   (3)

or as:

Bj ≤ 1 - max_{r ∈ Rj} (1 - B(r))^(1/l(r))   (4)

This means that the maximum possible value for the blocking probability on logical link j, under the assumption of evenly distributed blocking, can be expressed as follows:

Bj* = 1 - max_{r ∈ Rj} (1 - B(r))^(1/l(r))   (5)

Once the maximum possible value for the link blocking probability is calculated for each logical link in each one of the logical networks, the offered traffic to logical link j can be approximated as:
ρj = Σ_{r ∈ Rj} Ajr νr Π_{i ∈ L(r), i ≠ j} (1 - Bi*)^Air   (6)

where Ajr is the amount of bandwidth that route r requires on logical link j. If route r does not traverse logical link j then Ajr is equal to zero.
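Expressions (5) and (6) can be sketched as follows. This is a minimal illustration only; the data structures (dicts keyed by route and link identifiers) and the function names are assumptions for the example, not part of the patent:

```python
def max_link_blocking(R_j, B_req, length):
    """Expression (5): Bj* = 1 - max over r in Rj of (1 - B(r))^(1/l(r))."""
    return 1.0 - max((1.0 - B_req[r]) ** (1.0 / length[r]) for r in R_j)

def offered_link_traffic(j, R_j, links_of, A, nu, B_star):
    """Expression (6): offered traffic to link j, each route's demand
    thinned by the blocking on the other links it traverses."""
    rho = 0.0
    for r in R_j:
        thinning = 1.0
        for i in links_of[r]:
            if i != j:
                thinning *= (1.0 - B_star[i]) ** A[i, r]
        rho += A[j, r] * nu[r] * thinning
    return rho
```

For instance, a single two-link route with B(r) = 0.02 gives Bj* = 1 - 0.98^(1/2), roughly one percent blocking per link.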
Since the value of Bj* and the corresponding value of ρj are known for all j, the capacity Cj of logical link j can be calculated for all j by inverting numerically a blocking function:

Bj* = E(ρj, Cj)   (7)

Preferably, the simple analytic extension of Erlang's B-formula to any non-negative real value is used as blocking function. However, to preserve generality, any blocking function is allowed that is jointly smooth in all variables.
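The numerical inversion of expression (7) might be sketched as below. The patent prefers the analytic extension of Erlang's B-formula to real-valued capacities; for brevity this illustration uses the classical integer recursion and searches for the smallest integer capacity meeting the blocking target, which is an assumption, not the patent's exact procedure:

```python
def erlang_b(rho, c):
    """Erlang B blocking E(rho, c) via the standard recursion
    E(rho, n) = rho*E(rho, n-1) / (n + rho*E(rho, n-1))."""
    b = 1.0
    for n in range(1, c + 1):
        b = rho * b / (n + rho * b)
    return b

def invert_blocking(rho, b_max):
    """Smallest integer capacity c with E(rho, c) <= b_max."""
    c = 0
    while erlang_b(rho, c) > b_max:
        c += 1
    return c
```

For example, an offered load of 10 erlangs with a 1% blocking target requires 18 capacity units.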
Having obtained the logical link capacities Cj from the above model, it is necessary to normalize them such that they satisfy the physical capacity constraints, S·C ≤ C^phys. If the capacity of the physical link k is Ck^phys, and the capacities of the logical links that need capacity on the k:th physical link are Ck1, ..., Ckn, then the normalized logical link capacities associated with physical link k are:

Ĉki = (Ck^phys / (Ck1 + ... + Ckn)) · Cki,   i = 1, ..., n   (8)

This normalization procedure is performed for all k.
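Expression (8) amounts to a proportional rescaling per physical link; a small sketch with illustrative argument names:

```python
def normalize_capacities(c_phys_k, caps):
    """Expression (8): scale the logical link capacities Ck1..Ckn that
    share physical link k so that their sum equals Ck_phys."""
    total = sum(caps)
    return [c_phys_k * c / total for c in caps]
```

For example, two logical links dimensioned to 30 and 90 units on a 100-unit physical link are normalized to 25 and 75 units.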
The normalized logical link capacities satisfy the requirements on route blocking for each route in each one of the various logical networks. In other words, if the physical transmission resources are allocated among the logical links of the logical networks in accordance with the above normalized logical link capacities, then the blocking probability of any route r does not exceed B(r).
An efficient way of handling the co-existence of many different bandwidth demands (traffic types) is to model a non-unity bandwidth call by a sequence of independent unity bandwidth calls. In the article "Blocking Probabilities in Multitraffic Loss Systems: Insensitivity, Asymptotic Behavior and Approximations" by Labourdette and Hart in IEEE Trans. Communications, 40 (1992/8), pp. 1355-1366, it is proven that this approximation is correct in the asymptotic sense.
For a better understanding of the present invention a method in accordance with a preferred embodiment of the invention will be described with reference to the flow diagram of Fig. 5. First, a set of logical networks is established on top of a physical network by associating different parts of the traffic with different parts of the physical transmission and switching resources. Next, the maximum possible value for the blocking probability on each logical link in each one of the logical networks is calculated, given the maximum allowed blocking probability B(r) for each individual route in each one of the logical networks, by distributing the route blocking evenly among the logical links that are used by the respective route (expression (5)). Then, the offered traffic corresponding to the calculated maximum link blocking probability is calculated for each logical link (expression (6)). Following this, a first set of logical link capacities associated with the various logical networks is determined by inverting numerically a continuous link blocking function (expression (7)) using the results of the previous steps as input variables. This first set of logical link capacities is normalized (expression (8)) such that the physical capacity constraints are satisfied. Finally, the physical transmission resources of the physical network are allocated among the logical links of the logical networks in accordance with the normalized logical link capacities.
In general, the method and device according to the invention are utilized to dimension the logical links of each one of the logical networks by considering route blocking requirements. The present invention does not optimize the network operation, the total carried traffic or network revenue; it only dimensions the logical networks taking into account the requirements on route blocking probabilities. If the overall blocking in a network system is low, then implicitly the total carried traffic will be high. Thereby, the invention considers the total carried traffic or network revenue in an indirect way.
It should be understood by those skilled in the art that it is equally possible to dimension only one of the logical networks. If e.g. only one logical network among the set of logical networks established on top of the physical network is associated with the requirement that the route blocking probability on the routes in the logical network should not exceed a maximum value given for each route, then, in one embodiment, the capacities of only those logical links that belong to this particular logical network are determined.
If the cross connect is based on SDH, the partitioning can only be performed in integer portions of the STM modules of the SDH structure, as mentioned above. In this particular case, the real capacity values obtained from the method according to the first preferred embodiment of the invention are preferably rounded into integer values such that the physical constraints and the quality of service constraints are satisfied. In one embodiment of the invention this is realized by independently repeated random rounding trials.
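The independently repeated random rounding trials could, for instance, look as follows. This is a sketch under the assumption that a feasibility predicate covering the physical and quality of service constraints is supplied by the caller; the patent does not spell the procedure out in detail:

```python
import random

def round_by_trials(caps, feasible, trials=1000):
    """Round real capacities to integers by repeated random trials:
    each value is rounded up with probability equal to its fractional
    part, and the first rounding that passes `feasible` is kept."""
    for _ in range(trials):
        candidate = [int(c) + (1 if random.random() < c - int(c) else 0)
                     for c in caps]
        if feasible(candidate):
            return candidate
    return None  # no feasible rounding found within the trial budget
```

Rounding up with probability equal to the fractional part keeps the expected value of each rounded capacity equal to the real-valued input.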
The method according to the first preferred embodiment of the invention is preferably performed by one or more control programs CP of the control program module CPM of the operation and support system OSS. These control programs, in turn, are executed by one or more of the processors in the processor system PS described above. The operation and support system OSS collects the required information from the network system and uses this information together with the database DB and control program CP information as input to the respective control programs CP. Furthermore, the OSS controls the network switches through the data links so as to partition the physical link capacities among the logical links of the logical networks.
Accordingly, the network manager can flexibly and very quickly adapt the overall network system to changing traffic conditions, such as changes in offered traffic, but also to facility failures and new demands on the logical network topology from e.g. business users, as is illustrated in the schematic flow diagram of Fig. 6. Once the method or device according to the invention has been applied to a physical network, then a set of logical networks is established and the logical links of these logical networks are dimensioned such that the route blocking requirement is satisfied for each route in each one of the logical networks.
However, if, at a later time, the topology of one or more logical networks has to be changed for some reason (facility failure or demands for a new topology), or additional logical networks are requested, then the complete set of steps according to the first preferred embodiment of the present invention has to be performed in order to reconfigure the overall network. If no changes regarding the topology of the logical networks are necessary, but e.g. the offered traffic varies, then only the determining and allocating steps of the present invention have to be carried out. That is, the determining step and the allocating step are repeated in response to changing traffic conditions so as to change the logical link capacities of the various logical networks such that the route blocking on any route r is at most B(r). This alteration of the logical link capacities is realized by the switches and cross connect devices of the physical network in a very short period of time. Thus, the realization of the present invention renders the operation of the complete physical network both safe and flexible.
Since the method according to a preferred embodiment of the invention does not involve iterative calculations, the computational complexity is very small. Of course, there is a trade-off between the accuracy of the result and the required computational power.

The method according to the present invention offers a fast solution to the otherwise very complicated resource partitioning problem. The solution can be repeated easily so as to dynamically follow changing network conditions.
Note that the accompanying drawings are simple illustrative examples illustrating the inventive concept of the present invention. In practice, the physical network and the logical networks are, in general, very extensive, with e.g. intermediate logical nodes which are not directly associated with access points, and logical links using more than one physical link.
The embodiments described above are merely given as examples, and it should be understood that the present invention is not limited thereto. It is of course possible to embody the invention in specific forms other than those described without departing from the spirit of the invention. Further modifications and improvements which retain the basic underlying principles disclosed and claimed herein are within the scope and spirit of the invention.
Experimental results

The present invention has been tested on various networks. In particular, the invention was tried on a 6-node physical network on top of which five different logical networks, each with four traffic classes, were established. The distribution of traffic among the traffic classes and the inhomogeneity of bandwidth demands were varied, and the total carried traffic or network revenue was measured. For not too inhomogeneous traffic conditions the approach was satisfactory. In addition, even for an unbalanced distribution of traffic among the traffic classes the performance was good.
A METHOD AND DEVICE FOR PARTITIONING
PHYSICAL NETWORK RESOURCES

TECHNICAL FIELD OF THE INVENTION

The present invention relates to telecommunication networks and in particular to the partitioning of physical network resources.
BACKGROUND ART

A main characteristic of a modern telecommunication network is its ability to provide different services. One efficient way of providing said services is to logically separate the resources of a physical network - resource separation (see Fig. 1). On top of a physical network PN there is established a number of logical networks LN, also referred to as logical or virtual subnetworks, each of which comprises nodes N and logical links LL interconnecting the nodes. Each logical network forms a logical view of parts of the physical network or of the complete physical network. In particular, a first logical network LN1 comprises one view of parts of the physical network and a second logical network LN2 comprises another view, different from that of the first logical network. The logical links of the various logical networks share the capacities of physical links present in said physical network.
A physical network comprises switches S (physical nodes) or equivalents, physical links interconnecting said switches, and various auxiliary devices. A physical link utilizes transmission equipment, such as fiber optic conductors, coaxial cables or radio links. In general, physical links are grouped into trunk groups TG which extend between said switches. There are access points to the physical network, to which access points access units such as telephone sets and computer modems are connected. Each physical link has limited transmission capacity.
Figure 2 is a simple schematic drawing explaining the relationship between physical links, logical links and also routes. A simple underlying physical network with physical switches S and trunk groups TG, i.e. physical links, interconnecting the switches is illustrated. On top of this physical network a number of logical networks are established, only one of which is shown in the drawing. The logical networks can be established by a network manager, a network operator or other organization. In our Swedish Patent Application 9403035-0, incorporated herein by reference, there is described a method of creating and configuring logical networks. The single logical network shown comprises logical nodes N1, N2, N3 corresponding to physical switches S1, S2 and S3 respectively. Further, the logical network comprises logical links LL interconnecting the logical nodes N1-N3. A physical link is logically subdivided into one or more logical links, each logical link having an individual traffic capacity referred to as logical link capacity. It is important to note that each logical link may use more than one physical link or trunk group. To each node in each logical network there is usually associated a routing table, which is used to route a connection from node to node in the particular logical network, starting from the node associated with the terminal that originates the connection and ending at the node associated with the terminal which terminates said connection. Said nodes together form an origin-destination pair. A node pair with two routes is also illustrated. One of the routes is a direct route DR while the other one is an alternative route AR. In general, the links and the routes should be interpreted as being bidirectional.
In order to avoid misconceptions the following definitions will be used: A route is a subset of logical links which belong to the same logical network, i.e. a route has to live in a single logical network. Note that it can be an arbitrary subset that is not necessarily a path in the graph theoretic sense. Nevertheless, for practical purposes, routes are typically conceived as simple paths. The conception of a route is used to define the way a connection follows between nodes in a logical network. A node pair in a logical network, the nodes of which are associated with access points, is called an origin-destination (O-D) pair. In general, all node pairs in a logical network are not O-D pairs; instead some nodes in a logical network may be intermediate nodes to which no access points are associated. A logical link is a subset of physical links.
Information, such as voice, video and data, is transported in logical networks by means of different bearer services. Examples of bearer services are STM 64 (Synchronous Transmission Mode with standard 64 kbit/s), STM 2 Mb (Synchronous Transmission Mode with 2 Mbit/s) and ATM (Asynchronous Transfer Mode). From a service network, such as PSTN (Public Switched Telephone Network) and B-ISDN (Broadband Integrated Services Digital Network), a request is sent to a logical network that a connection should be set up in the corresponding logical network.
Although the physical network is given, it is necessary to decide how to define a set of logical networks on top of the physical network and how to distribute or partition said physical network resources among the logical networks by subdividing physical link capacities into logical link capacities associated with said logical networks. Since the logical networks share the same given physical capacities, there is a trade-off between their quality: GoS (Grade of Service) parameters, call blocking probabilities etc. can be improved in one of the logical networks only at the price of degrading the quality in other logical networks. When considering a large and complex physical telecommunication network a considerable number of logical links will exist, said logical links sharing the capacities of the physical network. It is not at all an easy task to design a method for partitioning physical network resources among logical networks which does not require substantial computational power. In accordance with the present invention there is proposed a strikingly simple and straightforward method for resource partitioning, the computational complexity of which method is very small.
SUMMARY OF THE INVENTION

On top of a physical network a number of logical networks are established in which logical links, used by routes, share the same physical transmission and switching resources. There are several reasons for logically separating physical resources. Logical resource separation for offering different Grade of Service classes, virtual leased networks with guaranteed resources and peak rate allocated virtual paths are some examples of interesting features in the design, dimensioning and management of physical networks. However, it is still necessary to decide how to distribute or partition said physical network resources among the logical networks. In general, the determination of this resource partitioning requires substantial computational power.
In accordance with a main aspect of the present invention there is provided a computationally very simple method for partitioning physical network resources among logical networks.
In accordance with a first aspect of the invention there is provided a method for resource partitioning, in which a set of logical networks is established on top of a physical network comprising physical transmission and switching resources, said logical networks comprising nodes and logical links extending between the nodes so as to define the topology of said logical networks. The logical links are used by routes interconnecting the nodes of node pairs in the logical networks. Logical link capacities are determined such that the route blocking probability on each individual route in each one of the logical networks is less than or possibly equal to a maximum allowed blocking probability, given for each individual route, by distributing the route blocking evenly among the logical links used by the respective route. Finally, the physical transmission resources are allocated among the logical links of the logical networks according to the determined logical link capacities.
In accordance with a second aspect of the invention there is provided a device for partitioning physical transmission resources among logical networks.
BRIEF DESCRIPTION OF THE DRAWINGS

The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as other features and advantages thereof, will be best understood by reference to the detailed description of the specific embodiments which follows, when read in conjunction with the accompanying drawings, wherein:
Figure 1 shows a physical network, on top of which a number of logical networks are established, and an operation and support system (OSS) which controls the operation of the overall network,

Figure 2 is a schematic drawing explaining the relationship between physical links and switches, logical links and nodes, and also routes,

Figure 3 is a schematic drawing of a B-ISDN network from the viewpoint of the Stratified Reference Model,

Figure 4 is a schematic flow diagram illustrating a method in accordance with a general inventive concept of the present invention,

Figure 5 is a flow diagram illustrating, in more detail, a method in accordance with a first preferred embodiment of the invention,

Figure 6 is a schematic flow diagram illustrating how the method in accordance with a first preferred embodiment of the present invention flexibly adapts the overall network system to changing traffic conditions, but also to facility failures and demands for new logical network topologies.

PREFERRED EMBODIMENTS OF THE INVENTION
An important tool in network management, particularly the management and dimensioning of large ATM networks, is the distribution of resources of a physical network among logical networks that share the capacity of the physical network. There are several advantages of logical resource separation:

- It has gradually been recognized in the last couple of years that it is not at all easy to integrate services with very different demands on e.g. bandwidth, grade of service or congestion control functions. In some cases it turns out to be better to support different services by offering separate logical networks, and limiting the degree of integration to only partial rather than complete sharing of physical transmission and switching resources. Network management can be simplified if service classes are arranged into groups in such a way that only those of similar properties are handled together in a logical network. For example, delay sensitive and loss sensitive service classes can possibly be managed and switched easier if the two groups are handled separately in different logical subnetworks, rather than all mixed on a complete sharing basis. Moreover, in this way they can be safely handled on call level without going down to cell level as e.g. in priority queues. Of course, within a logical network statistical multiplexing, priority queuing and other mechanisms can still be applied among service classes that already have not too different characteristics;
- Important structures such as virtual leased networks, required by large business users, and virtual LANs are much easier to implement;

- A Virtual Path (VP), a standardized element of ATM network architecture, can be considered as a special logical network;

- The physical network operates more safely.
A physical network, e.g. a large telecommunication network, with limited physical resources is considered. In Fig. 1 there is illustrated a physical network PN on top of which a set of logical networks LN1, LN2, ..., LNX (assuming there are X logical networks) is established. Each logical network comprises nodes N and logical links LL interconnecting the nodes. The topology of these logical or virtual networks will in general differ from the topology of the underlying physical network.
The network system is preferably controlled by an operation and support system OSS. An operation and support system OSS usually comprises a processor system PS, terminals T and a control program module CPM with a number of control programs CP, along with other auxiliary devices. The architecture of the processor system is usually that of a multiprocessor system with several processors working in parallel. It is also possible to use a hierarchical processor structure with a number of regional processors and a central processor. In addition, the switches themselves can be equipped with their own processor units in a not completely distributed system, where the control of certain functions is centralized. Alternatively, the processor system may consist of a single processor, often a large capacity processor. Moreover, a database DB, preferably an interactive database, comprising e.g. a description of the physical network, traffic information and other useful data about the telecommunication system, is connected to the OSS. Special data links, through which a network manager/operator controls the switches, connect the OSS with those switches which form part of the network system. The OSS contains e.g. functions for monitoring and controlling the physical network and the traffic.
From this operation and support system OSS the network manager establishes a number of logical networks on top of the physical network by associating different parts of the traffic with different parts of the transmission and switching resources of the physical network. This can e.g. be realized by controlling the port assignment of the switches and cross connect devices of the physical network, or by call admission control procedures. The process of establishing logical networks means that the topology of each one of the logical networks is defined. In other words, the structure of the nodes and logical links in each logical network is determined.
Conveniently, traffic classes are arranged into groups in such a way that those with similar demands on bandwidth are handled together in a separate logical network. By way of example, all traffic types requiring more than a given amount of bandwidth can be integrated in one logical network, and those traffic types that require less bandwidth than this given amount can be integrated in another logical network. In other words, the two traffic groups are handled separately in different logical subnetworks. In particular, this is advantageous for an ATM network carrying a wide variety of traffic types. However, in one embodiment of the present invention, each individual traffic type is handled in a separate logical network.
Preferably, the present invention is applied in the B-ISDN (Broadband Integrated Services Digital Network) network environment. A fully developed B-ISDN network will have a very complex structure with a number of overlaid networks. One conceptual model suitable for describing overlaid networks is the Stratified Reference Model as described in "The Stratified Reference Model: An Open Architecture to B-ISDN" by T. Hadoung, B. Stavenow, J. Dejean, ISS'90, Stockholm. In Fig. 3 a schematic drawing of a B-ISDN network from the viewpoint of the Stratified Reference Model is illustrated (the protocol viewpoint to the left and the network viewpoint to the right). Accordingly, the B-ISDN will consist of the following strata: a transmission stratum based on SDH (Synchronous Digital Hierarchy) or equivalent (SONET) at the bottom, and a cross connect stratum based on either SDH or ATM (Asynchronous Transfer Mode) on top of that, which acts as an infrastructure for the ATM VP/VC stratum with switched connections. Finally, the large set of possible applications uses the cross connect stratum as an infrastructure. In one particular embodiment of the present invention it is the infrastructure network modelling the cross connect stratum in a B-ISDN overlaid network that is considered. In general, this infrastructure network is referred to as a physical network.
Of course, it is to be understood that the present invention can be applied to any physical telecommunication network.
The physical transmission resources, i.e. the transmission capacities of the physical links, have to be partitioned or distributed among the logical links of said logical networks in some way. Since ATM has similarities with both packet switched and circuit switched networks it is not a priori obvious which properties should have the greatest impact on a partitioning or dimensioning model. In the data transfer phase the similarities to packet switched networks are the largest. However, at the connection setup phase the similarities to circuit switching dominate, especially if a preventive connection control concept with small ATM switch buffers has been adopted together with the equivalent bandwidth concept. In an approach which models the call scale phenomena, it is natural to view an ATM network as a multirate circuit switched network in which the most important quality of service parameter is the connection blocking probability, i.e. the route blocking probability. In this context, there is provided a method in accordance with the present invention which designs the capacity values of the logical links of the various logical networks such that the route blocking probability on any route in any of the logical networks does not exceed a maximum allowed blocking value, given in advance for each route.
Fig. 4 shows a schematic flow diagram illustrating a method in accordance with a general inventive concept of the present invention. In accordance with the present invention a set of logical networks is established on top of a physical network comprising physical transmission and switching resources, said logical networks comprising nodes and logical links extending between the nodes so as to define the topology of said logical networks. Preferably, the logical networks are completely separated from each other. The logical links are used by routes interconnecting the nodes of node pairs in the logical networks. Logical link capacities are determined such that the route blocking probability on each individual route in each one of the logical networks is less than or equal to a maximum allowed blocking probability, given for each individual route, by distributing the actual route blocking evenly among the logical links used by the respective route. Finally, the physical transmission resources are allocated among the logical links of the logical networks according to the determined logical link capacities.
As indicated in Fig. 3 the cross connect stratum can be realized by either SDH or ATM. If the cross connect stratum is based on SDH and the infrastructure network is realizing e.g. different quality of service classes by resource separation, the partitioning can only be performed in integer portions of the STM modules of the SDH structure. On the other hand, if the cross connect is realized by ATM virtual paths then no integrality restriction exists and the partitioning can be performed in any real portions. Therefore, whether the cross connect stratum is based on SDH or ATM will have important implications for the partitioning of the physical network resources. The SDH cross connect solution gives rise to a model that is discrete with regard to the logical link capacities, while the ATM cross connect solution gives rise to a continuous model. The continuous model requires that the ATM switches support partitioning on the individual input and output ports. For example, this is realized by multiple logical buffers at the output ports. In a preferred embodiment of the invention an infrastructure network modelling the ATM cross connect stratum is considered, while in an alternative embodiment an infrastructure modelling the SDH cross connect is considered, as can be seen in Fig. 1.

At first glance it might appear that partitioning, as opposed to complete sharing, is a reduction of the full flexibility of ATM. This is however not the case if the partitioning is considered on a general level. On a conceptual level the complete sharing schemes, e.g. priority queuing, Virtual Spacing etc., tell us how to realize resource sharing on the cell level, while the partitioning approach seeks the call scale characteristics, e.g. how to assign rates to various logical links, that are then to be realized on the cell level. In this sense the complete partitioning approach complements, rather than excludes, the complete sharing approaches.
Mathematical framework and dimensioning model

Consider a fixed physical network with N nodes and K physical links, on top of which a number of logically separated logical networks are established. If the total number of logical links over all logical networks is denoted by J, and the capacity of an individual logical link j is denoted Cj, then the vector of logical link capacities over all logical networks can be written as C = (C1, C2, ..., CJ). These logical link capacities are not known in advance. In fact it is desired to dimension the logical links of the logical networks with respect to capacity.

The incidence of physical links and logical links is expressed by a K x J matrix S in which the j:th entry in the k:th row is equal to 1 if logical link j needs capacity on the k:th physical link; otherwise said entry is 0. Naturally, the sum of logical link capacities on the same physical link cannot exceed the capacity of the physical link. This physical constraint can be expressed as

S·C ≤ C^phys

where C is defined above, and C^phys refers to the vector of given physical link capacities. In addition it is required that C > 0.
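The physical capacity constraint S·C ≤ C^phys can be checked directly from the incidence matrix; a small sketch with assumed names, using nested lists for S:

```python
def satisfies_capacity_constraint(S, C, C_phys):
    """Componentwise check of S*C <= C_phys: for every physical link k,
    the logical link capacities routed over it must fit its capacity."""
    for row, limit in zip(S, C_phys):
        if sum(s * c for s, c in zip(row, C)) > limit:
            return False
    return True
```

This is the test the normalization step (expression (8) below) is designed to make pass for the dimensioned capacities.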
w095l34973 21 927 ~3 r~ . /u~
Assume that I traffic types are oarried in the complete network.
The role of these traffic types is prlmarily to handle different bandwidth re~uirements, but traffic types can be distinguished also with respect to different holding times or even priorities (trunk ~eseLvaLlon~. By convention, each route carries only a single type of traffic. Thls means that if several trafflc types are to be carried, they are ce~l~s~.Led by parallel routes.
Let R be the total set of routes over all logical networks, that is,

R = ∪v ∪p ∪i R(v,p,i)   (1)

where R(v,p,i) is the set of routes in logical network v realizing communication between node pair p regarding traffic type i. It is important to understand that a route is not associated with more than one logical network. Each logical network is assumed to operate under fixed non-alternate routing.
Let λr be the Poissonian call arrival rate to route r, let 1/μr be the average holding time of calls on route r and let vr = λr/μr be the offered traffic to route r. Let v(v,p,i) be the aggregated offered traffic of type i to node pair p in logical network v.
In a preferred embodiment the offered traffic for each route in each logical network is given, while in another preferred embodiment of the invention the above aggregated offered traffic is given for all logical networks, node pairs and traffic types. In the latter case, the load is e.g. distributed on alternate paths.
Let Bj be the blocking probability of logical link j. Further, let L(r) be the set of logical links used by route r, and denote by l(r) the length of route r, i.e. the number of logical links on route r.
In addition, a maximum allowed route blocking probability B(r) is assumed to be given for each route r in each one of the logical networks.
The dimensioning task is to design the logical link capacities Cj for all j such that the route blocking requirements are satisfied, i.e. such that the route blocking on any route r does not exceed B(r).
In accordance with a preferred embodiment of the present invention the dimensioning is performed based on equivalent link blocking (ELB). The idea is to distribute, for each individual route, the route blocking probability evenly among the logical links used by the individual route. Of course, a route may comprise a single logical link. In this case, the link blocking and the route blocking are the same.
Adopting the equivalent link blocking assumption, the probability that a call on route r is not blocked can now be expressed as (1 - Bj)^l(r). Considering the route blocking requirements, the minimum probability that a call on route r is not blocked is equal to 1 - B(r). If the route blocking requirements or constraints as defined above are to be satisfied, then the following must hold for each route r and each logical link j ∈ L(r):
1 - B(r) ≤ (1 - Bj)^l(r)   (1)

If Rj denotes the set of routes that use logical link j and if the B(r) values are different for these routes, then the lowest value of B(r), r ∈ Rj, is taken into account. In other words, the strictest requirement on route blocking is considered. Now, the following condition is obtained:
max_{r∈Rj} (1 - B(r))^(1/l(r)) ≤ 1 - Bj   (2)

This can also be expressed as:
1 - Bj ≥ max_{r∈Rj} (1 - B(r))^(1/l(r))   (3)

or as:
Bj ≤ 1 - max_{r∈Rj} (1 - B(r))^(1/l(r))   (4)

This means that the maximum possible value for the blocking probability on logical link j, under the assumption of evenly distributed blocking, can be expressed as follows:
Bj^max = 1 - max_{r∈Rj} (1 - B(r))^(1/l(r))   (5)

Once the maximum possible value for the link blocking probability is calculated for each logical link in each one of the logical networks, the offered traffic to logical link j can be approximated as:
ρj = Σ_{r∈Rj} Ajr vr Π_{i∈L(r), i≠j} (1 - Bi^max)^Air   (6)

where Ajr is the amount of bandwidth that route r requires on logical link j. If route r does not traverse logical link j then Ajr is equal to zero.
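Expressions (5) and (6) can be sketched as follows in Python. The two routes, their bandwidth demands Ajr and offered traffics vr are invented for illustration, and the thinning product over the other links of each route is our reading of the partly garbled expression (6).

```python
# Toy route data: for each route, the logical links L(r) it uses, the
# maximum allowed route blocking B(r), the offered traffic v_r, and
# the bandwidth demand A_jr per link.
routes = {
    "r1": {"links": [0, 1], "B": 0.02, "v": 10.0, "A": {0: 1, 1: 1}},
    "r2": {"links": [1, 2], "B": 0.01, "v": 5.0,  "A": {1: 2, 2: 2}},
}
J = 3  # total number of logical links

# Expression (5): B_j^max = 1 - max_{r in R_j} (1 - B(r))^(1/l(r)),
# i.e. the strictest route requirement through link j, spread evenly.
B_max = []
for j in range(J):
    terms = [(1.0 - r["B"]) ** (1.0 / len(r["links"]))
             for r in routes.values() if j in r["links"]]
    B_max.append(1.0 - max(terms))

# Expression (6): offered traffic to link j; each route's contribution
# is weighted by its bandwidth demand on j and thinned by the maximum
# blocking on the other links i of the route.
rho = []
for j in range(J):
    total = 0.0
    for r in routes.values():
        if j in r["links"]:
            thin = 1.0
            for i in r["links"]:
                if i != j:
                    thin *= (1.0 - B_max[i]) ** r["A"][i]
            total += r["A"][j] * r["v"] * thin
    rho.append(total)
```

Note how a two-link route with B(r) = 0.02 yields a per-link budget of 1 - (0.98)^(1/2), roughly half the route budget, as the even distribution intends.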
Since the value of Bj^max and the corresponding value of ρj are known for all j, the capacity Cj of logical link j can be calculated for all j by inverting numerically a blocking function:
Bj^max = E(ρj, Cj)   (7)

Preferably, the simple analytic extension of Erlang's B-formula to any non-negative real value is used as blocking function.
However, to preserve generality, any blocking function is allowed that is jointly smooth in all variables.
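A minimal numerical sketch of the inversion in expression (7), assuming Erlang's B-formula as the blocking function. The integral representation 1/E(ρ, c) = ∫₀^∞ e^(-u) (1 + u/ρ)^c du extends the formula to real capacities; the quadrature order and bisection bounds are our own choices, not taken from the patent.

```python
import numpy as np

# Gauss-Laguerre nodes and weights for the integral representation.
_U, _W = np.polynomial.laguerre.laggauss(100)
_LOG_W = np.log(_W)

def erlang_b(rho, c):
    """Analytic extension of Erlang's B-formula to real capacity c >= 0,
    via 1/B = integral_0^inf exp(-u) * (1 + u/rho)**c du, evaluated in
    log space to avoid overflow for large c."""
    log_terms = _LOG_W + c * np.log1p(_U / rho)
    m = log_terms.max()
    return float(np.exp(-(m + np.log(np.sum(np.exp(log_terms - m))))))

def invert_blocking(rho, b_max, c_hi=1.0e4):
    """Expression (7): find the capacity C with E(rho, C) = B^max by
    bisection; the blocking is strictly decreasing in the capacity."""
    lo, hi = 0.0, c_hi
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if erlang_b(rho, mid) > b_max else (lo, mid)
    return 0.5 * (lo + hi)
```

For example, invert_blocking(10.0, 0.01) returns the real-valued capacity at which ten erlangs of offered traffic see one per cent blocking; no iteration over the whole network is needed, each link is inverted on its own.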
Having obtained the logical link capacities Cj from the above model, it is necessary to normalize them such that they satisfy the physical capacity constraints, SC ≤ Cphys. If the capacity of the physical link k is Ck^phys, and the capacities of the logical links that need capacity on the k:th physical link are Ck1, ..., Ckn, then the normalized logical link capacities associated with physical link k are

Cki^norm = (Ck^phys / (Ck1 + ... + Ckn)) · Cki,   i = 1, ..., n   (8)

This normalization procedure is performed for all k.
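The normalization of expression (8) amounts to one scaling per physical link; a sketch with invented numbers for a single physical link k:

```python
import numpy as np

# Expression (8) for one physical link k: scale the logical link
# capacities that share link k by C_k^phys / sum(C_ki), so that
# together they exactly fill the physical capacity.
C_phys_k = 100.0                      # capacity of physical link k
C_k = np.array([40.0, 50.0, 30.0])    # capacities C_k1..C_kn from (7)

C_k_norm = (C_phys_k / C_k.sum()) * C_k   # normalized capacities
```

The relative proportions between the logical links are preserved; only their absolute scale changes.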
The normalized logical link capacities satisfy the requirements on route blocking for each route in each one of the various logical networks. In other words, if the physical transmission resources are allocated among the logical links of the logical networks in accordance with the above normalized logical link capacities, then the blocking probability of any route r does not exceed B(r).
An efficient way of handling the co-existence of many different bandwidth demands (traffic types) is to model a non-unity bandwidth call by a sequence of independent unity bandwidth calls. In the article "Blocking Probabilities in Multitraffic Loss Systems: Insensitivity, Asymptotic Behavior and Approximations" by Labourdette and Hart in IEEE Trans. Communications, 40 (1992/8) pp. 1355-1366, it is proven that this approximation is correct in the asymptotic sense.
For a better understanding of the present invention a method in accordance with a preferred embodiment of the invention will be described with reference to the flow diagram of Fig. 5. First, a set of logical networks is established on top of a physical network by associating different parts of the traffic with different parts of the physical transmission and switching resources. Next, the maximum possible value for the blocking probability on each logical link in each one of the logical networks is calculated, given the maximum allowed blocking probability B(r) for each individual route in each one of the logical networks, by distributing the route blocking evenly among the logical links that are used by the respective route (expression (5)). Then, the offered traffic corresponding to the calculated maximum link blocking probability is calculated for each logical link (expression (6)). Following this, a first set of logical link capacities associated with the various logical networks is determined by inverting numerically a continuous link blocking function (expression (7)) using the results of the previous steps as input variables. This first set of logical link capacities is normalized (expression (8)) such that the physical capacity constraints are satisfied. Finally, the physical transmission resources of the physical network are allocated among the logical links of the logical networks in accordance with the normalized logical link capacities.
In general, the method and device according to the invention are utilized to dimension the logical links of each one of the logical networks by considering route blocking requirements. The present invention does not optimize the network operation, the total carried traffic or network revenue; it only dimensions the logical networks taking into account the requirements on route blocking probabilities. If the overall blocking in a network system is low, then implicitly the total carried traffic will be high. Thereby, the invention considers the total carried traffic or network revenue in an indirect way.
It should be understood by those skilled in the art that it is equally possible to dimension only one of the logical networks. If e.g. only one logical network among the set of logical networks established on top of the physical network is associated with the requirement that the route blocking probability on the routes in the logical network should not exceed a maximum value given for each route, then, in one embodiment, the capacities of only those logical links that belong to this particular logical network are determined.
If the cross connect is based on SDH, the partitioning can only be performed in integer portions of the STM modules of the SDH structure, as mentioned above. In this particular case, the real capacity values obtained from the method according to the first preferred embodiment of the invention are preferably rounded into integer values such that the physical constraints and the quality of service constraints are satisfied. In one embodiment of the invention this is realized by independently repeated random rounding trials.
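The patent does not spell out the rounding procedure; one plausible reading of "independently repeated random rounding trials", simplified here to a single aggregate capacity limit, is sketched below. The function name, trial count and selection rule are our own assumptions.

```python
import random

def random_round(C, C_phys, trials=1000, seed=0):
    """Repeat independent random rounding trials of the real-valued
    capacities C (each value rounds up with probability equal to its
    fractional part) and keep the feasible candidate that uses the
    most capacity. Simplified to one aggregate limit C_phys."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        cand = [int(c) + (rng.random() < c - int(c)) for c in C]
        if sum(cand) <= C_phys and (best is None or sum(cand) > sum(best)):
            best = cand
    return best
```

Each trial is unbiased (the expected rounded value equals the real value), and repeating trials makes it very likely that at least one candidate satisfies the constraint.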
The method according to the first preferred embodiment of the invention is preferably performed by one or more control programs CP of the control program module CPM of the operation and support system OSS. These control programs, in turn, are executed by one or more of the processors in the processor system PS described above. The operation and support system OSS collects the required information from the network system and uses this information together with the database DB and control program CP information as input to the respective control program CP. Furthermore, the OSS controls the network switches through the data links so as to partition the physical link capacities among the logical links of the logical networks.
Accordingly, the network manager can flexibly and very quickly adapt the overall network system to changing traffic conditions, such as changes in offered traffic, but also to facility failures and new demands on the logical network topology from e.g. business users, as is illustrated in the schematic flow diagram of Fig. 6. Once the method or device according to the invention has been applied to a physical network, then a set of logical networks is established and the logical links of these logical networks are dimensioned such that the route blocking requirement is satisfied for each route in each one of the logical networks.
However, if, at a later time, the topology of one or more logical networks has to be changed for some reason (facility failure or demands for new topology) or additional logical networks are requested, then the complete set of steps according to the first preferred embodiment of the present invention has to be performed in order to reconfigure the overall network. If no changes regarding the topology of the logical networks are necessary, but e.g. the offered traffic varies, then only the determining and allocating steps of the present invention have to be carried out.
That is, the determining step and the allocating step are repeated in response to changing traffic conditions so as to change the logical link capacities of the various logical networks such that the route blocking on any route r is at most B(r). This alteration of the logical link capacities is realized by the switches and cross connect devices of the physical network in a very short period of time. Thus, the realization of the present invention renders the operation of the complete physical network both safe and flexible.
Since the method according to a preferred embodiment of the invention does not involve iterative calculations, the computational complexity is very small. Of course, there is a trade-off between the accuracy of the result and the required computational power.
The method according to the present invention offers a fast solution to the otherwise very complicated resource partitioning problem. The solution can be recomputed easily so as to dynamically follow changing network conditions.
Note that the accompanying drawings are simple illustrative diagrams illustrating the inventive concept of the present invention. In practice, the physical network and the logical networks are, in general, very extensive, with e.g. intermediate logical nodes which are not directly associated with access points, and logical links using more than one physical link.
The embodiments described above are merely given as examples, and it should be understood that the present invention is not limited thereto. It is of course possible to embody the invention in specific forms other than those described without departing from the spirit of the invention. Further modifications and improvements which retain the basic underlying principles disclosed and claimed herein are within the scope and spirit of the invention.
Experimental results

The present invention has been tested on various networks. In particular, the invention was tried on a 6-node physical network on top of which five different logical networks, each with four traffic classes, were established. The distribution of traffic among the traffic classes and the heterogeneity of bandwidth demands were varied, and the total carried traffic or network revenue was measured. For not too inhomogeneous traffic conditions the approach was satisfactory. In addition, even for unbalanced distribution of traffic among the traffic classes the performance was good.
Claims (15)
1. In a physical network, comprising physical transmission and switching resources, a method for partitioning said physical transmission resources among logical networks, said method comprising the steps of:
establishing a set of logical networks on top of said physical network, said logical networks comprising nodes and logical links, wherein said logical links are used by routes interconnecting the nodes of node pairs in the logical networks;
determining capacities, hereinafter referred to as logical link capacities, of said logical links; and allocating said physical transmission resources among said logical links of said logical networks according to said determined logical link capacities, c h a r a c t e r i z e d in that said step of determining logical link capacities is performed under the distribution, for each individual route, of the route blocking probability evenly among the logical links used by the individual route such that the route blocking probability on each individual route in each one of said logical networks is less than or equal to a maximum allowed blocking probability given for each individual route.
2. A method in accordance with claim 1, wherein said determining step and said allocating step are repeated in response to changing traffic conditions so as to adapt the partitioning of said physical transmission resources to the prevailing traffic.
3. A method in accordance with claim 1 or 2, c h a r a c t e r i z e d in that said step of determining logical link capacities is performed, for each individual logical link, with consideration to the lowest value of the given maximum allowed route blocking probabilities that are associated with the routes that use the individual logical link.
4. A method in accordance with claim 1, c h a r a c t e r i z e d in that said establishing step comprises the step of controlling the port assignment of said physical switching resources.
5. A method in accordance with claim 1, c h a r a c t e r i z e d in that said step of allocating comprises the step of using logical buffers at output ports of said physical switching resources.
6. A method in accordance with claim 1, wherein said physical network is an infrastructure network modelling the ATM cross connect stratum in a B-ISDN overlaid network.
7. A method in accordance with claim 1, c h a r a c t e r i z e d in that said determining step comprises the step of calculating, for each individual logical link, a maximum possible value for the blocking probability on said individual logical link taking the lowest value of the given maximum allowed blocking probabilities that are associated with the routes that use said individual logical link into account, said calculating step being performed under said even distribution of route blocking.
8. A method in accordance with claim 7, c h a r a c t e r i z e d in that said determining step further comprises the step of calculating, for each logical link, a value representative of the offered traffic to said logical link given said calculated maximum possible link blocking probability values.
9. A method in accordance with claim 7 or 8, c h a r a c t e r i z e d in that information, for each route, about the logical links that are used by said route are given as input data.
10. A method in accordance with claim 8, c h a r a c t e r i z e d in that route offered traffic values and values representative of the bandwidth that each route requires on each logical link are given as input data when calculating said offered traffic representing values.
11. A method in accordance with claim 8, c h a r a c t e r i z e d in that said determining step further comprises the step of inverting numerically a link blocking function using said calculated maximum possible link blocking probability values and said calculated offered traffic representing values as input variables so as to generate first logical link capacities.
12. A method in accordance with claim 11, c h a r a c t e r i z e d in that said determining step further comprises the step of normalizing said first logical link capacities such that they satisfy physical capacity constraints so as to generate said logical link capacities.
13. In a physical network, comprising physical transmission and switching resources, a method for partitioning said physical transmission resources among logical links, said method comprising the steps of:
establishing logical nodes and said logical links on top of said physical network, wherein said logical links are used by routes interconnecting the nodes of node pairs;
determining logical link capacities; and allocating said physical transmission resources among said logical links according to said determined logical link capacities, c h a r a c t e r i z e d in that said step of determining logical link capacities is performed under the distribution, for each individual route, of the route blocking probability evenly among the logical links used by the individual route such that the route blocking probability on each individual route in each one of said logical networks is less than or equal to a maximum allowed blocking probability given for each individual route.
14. In a physical network, comprising physical transmission resources, a device for partitioning said physical transmission resources among logical networks, said device comprising:
means for establishing a set of logical networks on top of said physical network, said logical networks comprising nodes and logical links, wherein said logical links are used by routes interconnecting nodes;
means for determining logical link capacities; and means for allocating said physical transmission resources among said logical links of said logical networks according to said determined logical link capacities, c h a r a c t e r i z e d in that said determining means determines said logical link capacities under the distribution, for each individual route, of the route blocking probability evenly among the logical links used by the individual route such that the route blocking probability on each individual route in each one of said logical networks is less than or equal to a maximum allowed blocking probability given for each individual route.
15. A device in accordance with claim 14, c h a r a c t e r i z e d in that said establishing means comprises means for controlling the port assignment of said physical switching resources.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SE9402059-1 | 1994-06-13 | ||
SE9402059A SE9402059D0 (en) | 1994-06-13 | 1994-06-13 | Methods and apparatus for telecommunications |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2192793A1 true CA2192793A1 (en) | 1995-12-21 |
Family
ID=20394357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002192793A Abandoned CA2192793A1 (en) | 1994-06-13 | 1995-06-12 | A method and device for partitioning physical netword resources |
Country Status (9)
Country | Link |
---|---|
US (1) | US6104699A (en) |
EP (2) | EP0765554B1 (en) |
JP (2) | JPH10504426A (en) |
CN (2) | CN1080501C (en) |
AU (2) | AU692884B2 (en) |
CA (1) | CA2192793A1 (en) |
DE (2) | DE69533064D1 (en) |
SE (1) | SE9402059D0 (en) |
WO (2) | WO1995034973A2 (en) |
US8259720B2 (en) | 2007-02-02 | 2012-09-04 | Cisco Technology, Inc. | Triple-tier anycast addressing |
US8149710B2 (en) | 2007-07-05 | 2012-04-03 | Cisco Technology, Inc. | Flexible and hierarchical dynamic buffer allocation |
US8121038B2 (en) | 2007-08-21 | 2012-02-21 | Cisco Technology, Inc. | Backward congestion notification |
EP2582092A3 (en) | 2007-09-26 | 2013-06-12 | Nicira, Inc. | Network operating system for managing and securing networks |
US7860012B2 (en) * | 2007-12-18 | 2010-12-28 | Michael Asher | Employing parallel processing for routing calls |
US8194674B1 (en) | 2007-12-20 | 2012-06-05 | Quest Software, Inc. | System and method for aggregating communications and for translating between overlapping internal network addresses and unique external network addresses |
US8195774B2 (en) | 2008-05-23 | 2012-06-05 | Vmware, Inc. | Distributed virtual switch for virtualized computer systems |
KR101460848B1 (en) | 2009-04-01 | 2014-11-20 | 니시라, 인크. | Method and apparatus for implementing and managing virtual switches |
US8842679B2 (en) | 2010-07-06 | 2014-09-23 | Nicira, Inc. | Control system that elects a master controller instance for switching elements |
US9525647B2 (en) | 2010-07-06 | 2016-12-20 | Nicira, Inc. | Network control apparatus and method for creating and modifying logical switching elements |
US8964528B2 (en) | 2010-07-06 | 2015-02-24 | Nicira, Inc. | Method and apparatus for robust packet distribution among hierarchical managed switching elements |
US9680750B2 (en) | 2010-07-06 | 2017-06-13 | Nicira, Inc. | Use of tunnels to hide network addresses |
US10103939B2 (en) | 2010-07-06 | 2018-10-16 | Nicira, Inc. | Network control apparatus and method for populating logical datapath sets |
US20140036726A1 (en) * | 2011-04-13 | 2014-02-06 | Nec Corporation | Network, data forwarding node, communication method, and program |
US9043452B2 (en) | 2011-05-04 | 2015-05-26 | Nicira, Inc. | Network control apparatus and method for port isolation |
US8964767B2 (en) | 2011-08-17 | 2015-02-24 | Nicira, Inc. | Packet processing in federated network |
US8958298B2 (en) | 2011-08-17 | 2015-02-17 | Nicira, Inc. | Centralized logical L3 routing |
US9203701B2 (en) | 2011-10-25 | 2015-12-01 | Nicira, Inc. | Network virtualization apparatus and method with scheduling capabilities |
US9178833B2 (en) | 2011-10-25 | 2015-11-03 | Nicira, Inc. | Chassis controller |
US9137107B2 (en) | 2011-10-25 | 2015-09-15 | Nicira, Inc. | Physical controllers for converting universal flows |
US9288104B2 (en) | 2011-10-25 | 2016-03-15 | Nicira, Inc. | Chassis controllers for converting universal flows |
EP2748714B1 (en) | 2011-11-15 | 2021-01-13 | Nicira, Inc. | Connection identifier assignment and source network address translation |
WO2013158920A1 (en) | 2012-04-18 | 2013-10-24 | Nicira, Inc. | Exchange of network state information between forwarding elements |
US9231892B2 (en) | 2012-07-09 | 2016-01-05 | Vmware, Inc. | Distributed virtual switch configuration and state management |
US9471385B1 (en) * | 2012-08-16 | 2016-10-18 | Open Invention Network Llc | Resource overprovisioning in a virtual machine environment |
US9432215B2 (en) | 2013-05-21 | 2016-08-30 | Nicira, Inc. | Hierarchical network managers |
US9571386B2 (en) | 2013-07-08 | 2017-02-14 | Nicira, Inc. | Hybrid packet processing |
US10218564B2 (en) | 2013-07-08 | 2019-02-26 | Nicira, Inc. | Unified replication mechanism for fault-tolerance of state |
US9602312B2 (en) | 2013-07-08 | 2017-03-21 | Nicira, Inc. | Storing network state at a network controller |
US9282019B2 (en) | 2013-07-12 | 2016-03-08 | Nicira, Inc. | Tracing logical network packets through physical network |
US9197529B2 (en) | 2013-07-12 | 2015-11-24 | Nicira, Inc. | Tracing network packets through logical and physical networks |
US9407580B2 (en) | 2013-07-12 | 2016-08-02 | Nicira, Inc. | Maintaining data stored with a packet |
US9952885B2 (en) | 2013-08-14 | 2018-04-24 | Nicira, Inc. | Generation of configuration files for a DHCP module executing within a virtualized container |
US9887960B2 (en) | 2013-08-14 | 2018-02-06 | Nicira, Inc. | Providing services for logical networks |
US9973382B2 (en) | 2013-08-15 | 2018-05-15 | Nicira, Inc. | Hitless upgrade for network control applications |
US9432204B2 (en) | 2013-08-24 | 2016-08-30 | Nicira, Inc. | Distributed multicast by endpoints |
US9577845B2 (en) | 2013-09-04 | 2017-02-21 | Nicira, Inc. | Multiple active L3 gateways for logical networks |
US9503371B2 (en) | 2013-09-04 | 2016-11-22 | Nicira, Inc. | High availability L3 gateways for logical networks |
US9674087B2 (en) | 2013-09-15 | 2017-06-06 | Nicira, Inc. | Performing a multi-stage lookup to classify packets |
US9602398B2 (en) | 2013-09-15 | 2017-03-21 | Nicira, Inc. | Dynamically generating flows with wildcard fields |
US9596126B2 (en) | 2013-10-10 | 2017-03-14 | Nicira, Inc. | Controller side method of generating and updating a controller assignment list |
US10063458B2 (en) | 2013-10-13 | 2018-08-28 | Nicira, Inc. | Asymmetric connection with external networks |
US9785455B2 (en) | 2013-10-13 | 2017-10-10 | Nicira, Inc. | Logical router |
US9967199B2 (en) | 2013-12-09 | 2018-05-08 | Nicira, Inc. | Inspecting operations of a machine to detect elephant flows |
US9548924B2 (en) | 2013-12-09 | 2017-01-17 | Nicira, Inc. | Detecting an elephant flow based on the size of a packet |
US9996467B2 (en) | 2013-12-13 | 2018-06-12 | Nicira, Inc. | Dynamically adjusting the number of flows allowed in a flow table cache |
US9569368B2 (en) | 2013-12-13 | 2017-02-14 | Nicira, Inc. | Installing and managing flows in a flow table cache |
US9602392B2 (en) | 2013-12-18 | 2017-03-21 | Nicira, Inc. | Connectivity segment coloring |
US9602385B2 (en) | 2013-12-18 | 2017-03-21 | Nicira, Inc. | Connectivity segment selection |
US9590901B2 (en) | 2014-03-14 | 2017-03-07 | Nicira, Inc. | Route advertisement by managed gateways |
US9313129B2 (en) | 2014-03-14 | 2016-04-12 | Nicira, Inc. | Logical router processing by network controller |
US9419855B2 (en) | 2014-03-14 | 2016-08-16 | Nicira, Inc. | Static routes for logical routers |
US9225597B2 (en) | 2014-03-14 | 2015-12-29 | Nicira, Inc. | Managed gateways peering with external router to attract ingress packets |
US9647883B2 (en) | 2014-03-21 | 2017-05-09 | Nicira, Inc. | Multiple levels of logical routers |
US9503321B2 (en) | 2014-03-21 | 2016-11-22 | Nicira, Inc. | Dynamic routing for logical routers |
US9413644B2 (en) | 2014-03-27 | 2016-08-09 | Nicira, Inc. | Ingress ECMP in virtual distributed routing environment |
US9893988B2 (en) | 2014-03-27 | 2018-02-13 | Nicira, Inc. | Address resolution using multiple designated instances of a logical router |
US9385954B2 (en) | 2014-03-31 | 2016-07-05 | Nicira, Inc. | Hashing techniques for use in a network environment |
US9985896B2 (en) | 2014-03-31 | 2018-05-29 | Nicira, Inc. | Caching of service decisions |
US10193806B2 (en) | 2014-03-31 | 2019-01-29 | Nicira, Inc. | Performing a finishing operation to improve the quality of a resulting hash |
US9794079B2 (en) | 2014-03-31 | 2017-10-17 | Nicira, Inc. | Replicating broadcast, unknown-unicast, and multicast traffic in overlay logical networks bridged with physical networks |
US9602422B2 (en) | 2014-05-05 | 2017-03-21 | Nicira, Inc. | Implementing fixed points in network state updates using generation numbers |
US9379956B2 (en) | 2014-06-30 | 2016-06-28 | Nicira, Inc. | Identifying a network topology between two endpoints |
US9553803B2 (en) | 2014-06-30 | 2017-01-24 | Nicira, Inc. | Periodical generation of network measurement data |
US9742881B2 (en) | 2014-06-30 | 2017-08-22 | Nicira, Inc. | Network virtualization using just-in-time distributed capability for classification encoding |
US9858100B2 (en) | 2014-08-22 | 2018-01-02 | Nicira, Inc. | Method and system of provisioning logical networks on a host machine |
US10250443B2 (en) | 2014-09-30 | 2019-04-02 | Nicira, Inc. | Using physical location to modify behavior of a distributed virtual network element |
US9768980B2 (en) | 2014-09-30 | 2017-09-19 | Nicira, Inc. | Virtual distributed bridging |
US10020960B2 (en) | 2014-09-30 | 2018-07-10 | Nicira, Inc. | Virtual distributed bridging |
US11178051B2 (en) | 2014-09-30 | 2021-11-16 | Vmware, Inc. | Packet key parser for flow-based forwarding elements |
US10511458B2 (en) | 2014-09-30 | 2019-12-17 | Nicira, Inc. | Virtual distributed bridging |
US10469342B2 (en) | 2014-10-10 | 2019-11-05 | Nicira, Inc. | Logical network traffic analysis |
US10079779B2 (en) | 2015-01-30 | 2018-09-18 | Nicira, Inc. | Implementing logical router uplinks |
US10038628B2 (en) | 2015-04-04 | 2018-07-31 | Nicira, Inc. | Route server mode for dynamic routing between logical and physical networks |
US9923760B2 (en) | 2015-04-06 | 2018-03-20 | Nicira, Inc. | Reduction of churn in a network control system |
US10225184B2 (en) | 2015-06-30 | 2019-03-05 | Nicira, Inc. | Redirecting traffic in a virtual distributed router environment |
CN105162716A (en) * | 2015-07-28 | 2015-12-16 | 上海华为技术有限公司 | Flow control method and apparatus under NFV configuration |
US10230629B2 (en) | 2015-08-11 | 2019-03-12 | Nicira, Inc. | Static route configuration for logical router |
US10057157B2 (en) | 2015-08-31 | 2018-08-21 | Nicira, Inc. | Automatically advertising NAT routes between logical routers |
US10204122B2 (en) | 2015-09-30 | 2019-02-12 | Nicira, Inc. | Implementing an interface between tuple and message-driven control entities |
US10095535B2 (en) | 2015-10-31 | 2018-10-09 | Nicira, Inc. | Static route types for logical routers |
US10333849B2 (en) | 2016-04-28 | 2019-06-25 | Nicira, Inc. | Automatic configuration of logical routers on edge nodes |
US10484515B2 (en) | 2016-04-29 | 2019-11-19 | Nicira, Inc. | Implementing logical metadata proxy servers in logical networks |
US11019167B2 (en) | 2016-04-29 | 2021-05-25 | Nicira, Inc. | Management of update queues for network controller |
US10841273B2 (en) | 2016-04-29 | 2020-11-17 | Nicira, Inc. | Implementing logical DHCP servers in logical networks |
US10091161B2 (en) | 2016-04-30 | 2018-10-02 | Nicira, Inc. | Assignment of router ID for logical routers |
US10205651B2 (en) | 2016-05-13 | 2019-02-12 | 128 Technology, Inc. | Apparatus and method of selecting next hops for a session |
US10560320B2 (en) | 2016-06-29 | 2020-02-11 | Nicira, Inc. | Ranking of gateways in cluster |
US10153973B2 (en) | 2016-06-29 | 2018-12-11 | Nicira, Inc. | Installation of routing tables for logical router in route server mode |
US10454758B2 (en) | 2016-08-31 | 2019-10-22 | Nicira, Inc. | Edge node cluster network redundancy and fast convergence using an underlay anycast VTEP IP |
US10341236B2 (en) | 2016-09-30 | 2019-07-02 | Nicira, Inc. | Anycast edge service gateways |
US10212071B2 (en) | 2016-12-21 | 2019-02-19 | Nicira, Inc. | Bypassing a load balancer in a return path of network traffic |
US10237123B2 (en) | 2016-12-21 | 2019-03-19 | Nicira, Inc. | Dynamic recovery from a split-brain failure in edge nodes |
US10742746B2 (en) | 2016-12-21 | 2020-08-11 | Nicira, Inc. | Bypassing a load balancer in a return path of network traffic |
US10616045B2 (en) | 2016-12-22 | 2020-04-07 | Nicira, Inc. | Migration of centralized routing components of logical router |
US10200306B2 (en) | 2017-03-07 | 2019-02-05 | Nicira, Inc. | Visualization of packet tracing operation results |
US10637800B2 (en) | 2017-06-30 | 2020-04-28 | Nicira, Inc | Replacement of logical network addresses with physical network addresses |
US10681000B2 (en) | 2017-06-30 | 2020-06-09 | Nicira, Inc. | Assignment of unique physical network addresses for logical network addresses |
US10608887B2 (en) | 2017-10-06 | 2020-03-31 | Nicira, Inc. | Using packet tracing tool to automatically execute packet capture operations |
US10511459B2 (en) | 2017-11-14 | 2019-12-17 | Nicira, Inc. | Selection of managed forwarding element for bridge spanning multiple datacenters |
US10374827B2 (en) | 2017-11-14 | 2019-08-06 | Nicira, Inc. | Identifier that maps to different networks at different datacenters |
US10999220B2 (en) | 2018-07-05 | 2021-05-04 | Vmware, Inc. | Context aware middlebox services at datacenter edge |
US11184327B2 (en) | 2018-07-05 | 2021-11-23 | Vmware, Inc. | Context aware middlebox services at datacenter edges |
CN108958940A (en) * | 2018-07-09 | 2018-12-07 | 苏州浪潮智能软件有限公司 | A kind of computer processing method and device |
US10931560B2 (en) | 2018-11-23 | 2021-02-23 | Vmware, Inc. | Using route type to determine routing protocol behavior |
US10735541B2 (en) | 2018-11-30 | 2020-08-04 | Vmware, Inc. | Distributed inline proxy |
US10797998B2 (en) | 2018-12-05 | 2020-10-06 | Vmware, Inc. | Route server for distributed routers using hierarchical routing protocol |
US10938788B2 (en) | 2018-12-12 | 2021-03-02 | Vmware, Inc. | Static routes for policy-based VPN |
US10778457B1 (en) | 2019-06-18 | 2020-09-15 | Vmware, Inc. | Traffic replication in overlay networks spanning multiple sites |
US11095480B2 (en) | 2019-08-30 | 2021-08-17 | Vmware, Inc. | Traffic optimization using distributed edge services |
US11641305B2 (en) | 2019-12-16 | 2023-05-02 | Vmware, Inc. | Network diagnosis in software-defined networking (SDN) environments |
US11283699B2 (en) | 2020-01-17 | 2022-03-22 | Vmware, Inc. | Practical overlay network latency measurement in datacenter |
US11616755B2 (en) | 2020-07-16 | 2023-03-28 | Vmware, Inc. | Facilitating distributed SNAT service |
US11606294B2 (en) | 2020-07-16 | 2023-03-14 | Vmware, Inc. | Host computer configured to facilitate distributed SNAT service |
US11611613B2 (en) | 2020-07-24 | 2023-03-21 | Vmware, Inc. | Policy-based forwarding to a load balancer of a load balancing cluster |
US11902050B2 (en) | 2020-07-28 | 2024-02-13 | VMware LLC | Method for providing distributed gateway service at host computer |
US11451413B2 (en) | 2020-07-28 | 2022-09-20 | Vmware, Inc. | Method for advertising availability of distributed gateway service and machines at host computer |
US11558426B2 (en) | 2020-07-29 | 2023-01-17 | Vmware, Inc. | Connection tracking for container cluster |
US11570090B2 (en) | 2020-07-29 | 2023-01-31 | Vmware, Inc. | Flow tracing operation in container cluster |
US11196628B1 (en) | 2020-07-29 | 2021-12-07 | Vmware, Inc. | Monitoring container clusters |
US11736436B2 (en) | 2020-12-31 | 2023-08-22 | Vmware, Inc. | Identifying routes with indirect addressing in a datacenter |
US11336533B1 (en) | 2021-01-08 | 2022-05-17 | Vmware, Inc. | Network visualization of correlations between logical elements and associated physical elements |
US11784922B2 (en) | 2021-07-03 | 2023-10-10 | Vmware, Inc. | Scalable overlay multicast routing in multi-tier edge gateways |
US11687210B2 (en) | 2021-07-05 | 2023-06-27 | Vmware, Inc. | Criteria-based expansion of group nodes in a network topology visualization |
US11711278B2 (en) | 2021-07-24 | 2023-07-25 | Vmware, Inc. | Visualization of flow trace operation across multiple sites |
US11677645B2 (en) | 2021-09-17 | 2023-06-13 | Vmware, Inc. | Traffic monitoring |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4669113A (en) * | 1985-04-26 | 1987-05-26 | At&T Company | Integrated network controller for a dynamic nonhierarchical routing switching network |
US4713806A (en) * | 1986-03-14 | 1987-12-15 | American Telephone And Telegraph Company, At&T Bell Laboratories | Communication system control arrangement |
US5297137A (en) * | 1991-01-30 | 1994-03-22 | International Business Machines Corporation | Process for routing data packets around a multi-node communications network |
GB2253970A (en) * | 1991-03-15 | 1992-09-23 | Plessey Telecomm | Data network management |
JPH05207068A (en) * | 1992-01-27 | 1993-08-13 | Nec Corp | Packet network design system |
US5381404A (en) * | 1992-07-14 | 1995-01-10 | Mita Industrial Co., Ltd. | Packet-switching communication network and method of design |
US5289303A (en) * | 1992-09-30 | 1994-02-22 | At&T Bell Laboratories | Chuted, optical packet distribution network |
US5345444A (en) * | 1992-09-30 | 1994-09-06 | At&T Bell Laboratories | Chuted, growable packet switching arrangement |
JPH0793645B2 (en) * | 1993-01-11 | 1995-10-09 | 日本電気株式会社 | Signal connection controller |
CA2124974C (en) * | 1993-06-28 | 1998-08-25 | Kajamalai Gopalaswamy Ramakrishnan | Method and apparatus for link metric assignment in shortest path networks |
JP3672341B2 (en) * | 1993-07-21 | 2005-07-20 | 富士通株式会社 | Communication network separation design method and its management method |
SE9500838L (en) * | 1994-06-13 | 1995-12-14 | Ellemtel Utvecklings Ab | Device and method for allocating resources of a physical network |
US5872918A (en) * | 1995-07-14 | 1999-02-16 | Telefonaktiebolaget Lm Ericsson (Publ) | System and method for optimal virtual path capacity dimensioning with broadband traffic |
US5764740A (en) * | 1995-07-14 | 1998-06-09 | Telefonaktiebolaget Lm Ericsson | System and method for optimal logical network capacity dimensioning with broadband traffic |
- 1994
- 1994-06-13 SE SE9402059A patent/SE9402059D0/en unknown
- 1995
- 1995-06-12 DE DE69533064T patent/DE69533064D1/en not_active Expired - Fee Related
- 1995-06-12 AU AU27587/95A patent/AU692884B2/en not_active Ceased
- 1995-06-12 WO PCT/SE1995/000703 patent/WO1995034973A2/en active IP Right Grant
- 1995-06-12 DE DE69534216T patent/DE69534216D1/en not_active Expired - Fee Related
- 1995-06-12 EP EP95922842A patent/EP0765554B1/en not_active Expired - Lifetime
- 1995-06-12 JP JP8502037A patent/JPH10504426A/en active Pending
- 1995-06-12 EP EP95922843A patent/EP0765552B1/en not_active Expired - Lifetime
- 1995-06-12 JP JP8502038A patent/JPH10506243A/en active Pending
- 1995-06-12 CN CN95194459A patent/CN1080501C/en not_active Expired - Fee Related
- 1995-06-12 AU AU27586/95A patent/AU688917B2/en not_active Ceased
- 1995-06-12 US US08/765,159 patent/US6104699A/en not_active Expired - Lifetime
- 1995-06-12 WO PCT/SE1995/000704 patent/WO1995034981A2/en active IP Right Grant
- 1995-06-12 CN CN95194460.6A patent/CN1104120C/en not_active Expired - Fee Related
- 1995-06-12 CA CA002192793A patent/CA2192793A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US6104699A (en) | 2000-08-15 |
SE9402059D0 (en) | 1994-06-13 |
WO1995034981A3 (en) | 1996-02-08 |
AU2758795A (en) | 1996-01-05 |
AU692884B2 (en) | 1998-06-18 |
AU2758695A (en) | 1996-01-05 |
DE69533064D1 (en) | 2004-06-24 |
EP0765552B1 (en) | 2004-05-19 |
EP0765554B1 (en) | 2005-05-18 |
JPH10504426A (en) | 1998-04-28 |
JPH10506243A (en) | 1998-06-16 |
EP0765552A2 (en) | 1997-04-02 |
EP0765554A2 (en) | 1997-04-02 |
DE69534216D1 (en) | 2005-06-23 |
WO1995034973A2 (en) | 1995-12-21 |
CN1080501C (en) | 2002-03-06 |
CN1154772A (en) | 1997-07-16 |
WO1995034973A3 (en) | 1996-02-01 |
CN1155360A (en) | 1997-07-23 |
WO1995034981A2 (en) | 1995-12-21 |
AU688917B2 (en) | 1998-03-19 |
CN1104120C (en) | 2003-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2192793A1 (en) | A method and device for partitioning physical network resources | |
Ohta et al. | Dynamic bandwidth control of the virtual path in an asynchronous transfer mode network | |
US5764740A (en) | System and method for optimal logical network capacity dimensioning with broadband traffic | |
CA2202542C (en) | Virtual private network | |
EP0166765B1 (en) | Arrangement for routing data packets through a circuit switch | |
Friesen et al. | Resource management with virtual paths in ATM networks | |
CA2226480A1 (en) | System and method for optimal virtual path capacity dimensioning with broadband traffic | |
US20030067653A1 (en) | System and method for slot deflection routing | |
US5309430A (en) | Telecommunication system | |
JPH10173710A (en) | Exchange, exchange system for communication network and route-designating method | |
Gupta et al. | On Routing in ATM Networks. | |
Hwang | LLR routing in homogeneous VP-based ATM networks | |
EP0832529B1 (en) | Atm local access | |
Addie et al. | Bandwidth switching and new network architectures | |
Lee et al. | An efficient near-optimal algorithm for the joint traffic and trunk routing problem in self-planning networks | |
Gersht et al. | Dynamic bandwidth-allocation and path-restoration in SONET self-healing networks | |
Erfani et al. | An expert system-based approach to capacity allocation in a multiservice application environment | |
Filipiak | Shaping interworking MANs into an evolving B-ISDN | |
Medhi et al. | Dimensioning and computational results for wide-area broadband networks with two-level dynamic routing | |
JP3786371B6 | System and method for optimal logical network capacity dimensioning with broadband traffic |
Onvural et al. | On the amount of bandwidth allocated to virtual paths in ATM networks | |
Lee et al. | QoS restoration using a disjoint path group in ATM networks | |
CA2192794C (en) | Enhancement of network operation and performance | |
Hwang | Adaptive routing in VP-based ATM networks | |
Aydemir et al. | Virtual path assignment in ATM networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| EEER | Examination request | |
| FZDE | Discontinued | |