WO2015106795A1 - Methods and systems for selecting resources for data routing - Google Patents

Methods and systems for selecting resources for data routing

Info

Publication number
WO2015106795A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
flow
network
optionally
flows
Prior art date
Application number
PCT/EP2014/050565
Other languages
French (fr)
Inventor
Hayim Porat
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Priority to CN201480036854.3A priority Critical patent/CN105379204B/en
Priority to PCT/EP2014/050565 priority patent/WO2015106795A1/en
Publication of WO2015106795A1 publication Critical patent/WO2015106795A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/302 Route determination based on requested QoS
    • H04L45/308 Route determination based on user's profile, e.g. premium users
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/70 Admission control; Resource allocation
    • H04L47/82 Miscellaneous aspects
    • H04L47/822 Collecting or measuring resource availability data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/70 Admission control; Resource allocation
    • H04L47/82 Miscellaneous aspects
    • H04L47/823 Prediction of resource usage
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/70 Admission control; Resource allocation
    • H04L47/83 Admission control; Resource allocation based on usage prediction

Definitions

  • the present application relates to methods and systems for selecting routes for data flows in a communication network and to methods and systems for selecting network resource requirements for transmission of data flows within a data communication network.
  • Communication networks, for example, cloud and/or data center networks, are required to provision large numbers and/or sizes of flows in the network while adhering to an SLA (Service Level Agreement) associated with each of the flows.
  • the SLA may define fines for failing to provide a level of service, for example, failure to deliver data packets.
  • In practice, networks are oversubscribed.
  • Oversubscription is the act of provisioning the same resource several times over. For example, two flows each requiring 100 Mbps (megabits per second) may be provisioned over a link with a nominal bandwidth of 100 Mbps. This provisioning amounts to 2 times oversubscription.
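The oversubscription factor described above can be sketched as a simple ratio. This is a minimal illustration; the function name and values are hypothetical, not from the application:

```python
# Illustrative sketch (not from the application): the oversubscription factor
# of a link is the total provisioned bandwidth over the nominal capacity.

def oversubscription_factor(provisioned_mbps, nominal_capacity_mbps):
    """Ratio of total provisioned flow bandwidth to nominal link capacity."""
    return sum(provisioned_mbps) / nominal_capacity_mbps

# Two flows of 100 Mbps each over a 100 Mbps link -> 2x oversubscription.
print(oversubscription_factor([100, 100], 100))  # 2.0
```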
  • Statistical Multiplexing is a data transmission method which is based on the premise that packet based flows transmit intermittently and independently, and therefore may be interleaved over the same resource.
  • Another possible solution is to intentionally fail to adhere to the SLA, and pay the fines for SLA breaches.
  • This solution may be implemented, for example, in datacenters, where the flows are relatively short lived. Paying fines may be less costly than underutilizing the network resources.
  • Service providers try to balance these two contradictory solutions. On the one hand, if links are only partially used, the network resources may be wasted without proper return on investment. On the other hand, if the SLA is breached, then fines are paid out, which may cause a financial loss for the service provider.
  • a method of classifying flows of data through a data communication network for selecting routes comprising: monitoring data flows in a data communication network; generating a statistical classifier based on the monitored flows; receiving a request for a route in a data communication network for transmission of a flow of data packets; classifying the flow based on the generated statistical classifier to predict network resource requirements for transmission of the flow through the network; selecting the route for the classified flow; and generating a signal indicative of the selected route so that the flow is routed in the data communication network through the selected route.
  • the classifying further comprises predicting a certainty of the prediction of actual usage of network resources by the flow.
  • the method further comprises receiving a request for a prediction of the network resource routing requirements for transmission of the flow of data in the data communication network and predicting the network resource requirements based on the statistical classifier.
  • the predicted network resource requirements are calculated as a function of nominal network resource reservations of the flow.
  • classifying further comprises classifying to predict at least one of the risk and the cost of failing to adhere to a service level agreement of the flow.
  • classifying further comprises adjusting the predicted network resource requirements in view of a selected risk of non-adherence to a service level agreement having associated fines due to non-adherence.
  • the monitoring and generating the statistical classifier are performed using big-data analytics.
  • the monitoring and generating the statistical classifier are performed asynchronously with respect to the receiving, classifying, selecting, and generating the signal.
  • the statistical classifier is based on a Collaborative Filtering system.
  • the method further comprises monitoring adherence to a service level agreement defined by nominal resource requirements of the flow, during transmission of the flow over the selected route that utilizes the predicted network resource requirements.
  • the monitoring and generating the statistical classifier are continuously performed in an iterative manner.
  • the method further comprises recalibrating nominal network resource reservations of the flow to the predicted network resource requirements.
  • monitoring data flows comprises identifying user context data of the data flows
  • generating the statistical classifier comprises generating the statistical classifier based on the identified user context data
  • selecting the route comprises accessing a routing dataset storing a plurality of different routing parameters per link.
  • the method further comprises collecting data from the data communication network, and updating the routing dataset using the collected data, the updating performed asynchronously with respect to accessing the routing dataset for route selection.
  • the method according to any of the implementation forms of the first aspect or the first aspect as such being carried out by a predictive analysis unit programmed to carry out the steps of the method.
  • a flow of data is classified using a statistical classifier to predict required network resources for transmission of data flows within a data communication network.
  • the statistical classifier is based on current and/or previous patterns of data flows through the network.
  • a route in the network may be selected for the classified flow of data.
  • Implementation forms of the method and/or system of the present invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of implementation forms of the method and/or system of the present invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
  • a data processor such as a computing platform for executing a plurality of instructions.
  • the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
  • a network connection is provided as well.
  • a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
  • FIG. 1 is a flow chart of a method of classifying data flows, in accordance with some embodiments of the present invention.
  • FIG. 2 is a block diagram of a system for classifying data flows, in accordance with some embodiments of the present invention.
  • FIG. 3 is a flowchart of a method of selecting routes for transmission of data flows based on classified data flows, using the method of FIG. 1, in accordance with some embodiments of the present invention.
  • FIG. 4 is a block diagram of a system for selecting routes for transmission of data flows based on classified data flows, using the system of FIG. 2, in accordance with some embodiments of the present invention.
  • FIG. 5 is a schematic diagram of an exemplary design of a system for selecting routes for transmission of data flows, according to the system of FIG. 4.
  • the present invention relates to methods and systems for selecting routes for data flows in a communication network and to methods and systems for selecting network resource requirements for transmission of data flows within a data communication network.
  • An aspect of some embodiments of the present invention relates to systems and/or methods for classifying flows of data (e.g., packets) through a data communication network, based on multiparameter monitoring of current and/or previous network data flow patterns.
  • the classification of the new data flow is performed by a statistical classifier.
  • the statistical classifier is constructed from data collected from multiple-parameter monitoring of current and/or previous data flow patterns for the same client requesting the new data flow.
  • the statistical classifier is constructed from data collected from multiple-parameter monitoring of current and/or previous data flow patterns of other clients.
  • the phrase "predicted network resource requirements" means the network resources that are predicted to be needed for transmission of the dataflow through the network using a selected route.
  • classifying the flow based on the generated statistical classifier predicts the network resource requirements.
  • network resource requirements include: bandwidth to accommodate the data flow request, latency, error rate, jitter, packet loss rate, and/or other transmission related requirements.
  • the predicted network resource requirements may or may not match the actual network resources used during the data flow transmission.
  • the statistical certainty and/or statistical error between the predicted network resource requirements and the actual network resources used is selected and/or estimated. The prediction may be performed to be within the statistical error.
  • a certainty of the actual usage of network resources by the flow is predicted. For example, the predicted bandwidth is about 30 Mbps with a certainty of about 90%.
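One simple way to realize a prediction of the form "about 30 Mbps with a certainty of about 90%" is an empirical quantile over historical peak usage of similar flows. This is an illustrative sketch under that assumption, not an estimator the application prescribes:

```python
# Hypothetical sketch: reserve the bandwidth level that covered a target
# fraction of past peak observations for similar flows. The estimator and
# the data are illustrative; the application does not prescribe this formula.
import math

def predict_with_certainty(history_mbps, certainty=0.9):
    """Smallest reservation covering `certainty` of the observed peaks."""
    ordered = sorted(history_mbps)
    index = max(0, math.ceil(certainty * len(ordered)) - 1)
    return ordered[index]

history = [12, 18, 20, 22, 25, 26, 27, 28, 29, 30]
print(predict_with_certainty(history, 0.9))  # 29: 9 of 10 peaks at or below
```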
  • the predicted network resource requirements may increase utilization of network infrastructure resources, and may reduce the risk of paying fines, for example, due to failure to adhere to service level agreements. Utilization of network resources may be increased by allocating and reserving only the actual required resources. The risk of paying fines due to uncalculated over provisioning of network resources may be reduced.
  • predicting network resource requirements in view of the risk of missing the SLA and/or the corresponding fines may control risk of fine payouts, for example, by taking into account the confidence level of required resource estimation.
  • the predicted network resource requirements may increase adherence to the service level agreement, which may improve user experience.
  • the network resource requirements are predicted in reference to nominal network resources of the flow of data.
  • the nominal required bandwidth for a new flow might be about 100 megabits per second (Mbps), while the predicted bandwidth is about 30 Mbps.
  • the predicted network resource requirement is less than the nominal network resource reservation associated with the request.
  • the difference between the predicted and nominal values is used to transmit other data flows.
  • a statistical classifier is generated based on the monitored data flows within the data communication network.
  • the prediction of the network resource requirements is performed using the statistical classifier.
  • Generating the statistical classifier and/or prediction using the statistical classifier is, for example, based on a learning classifier system, based on a recommender system (e.g., Collaborative Filtering), and/or other suitable systems.
  • the monitoring and the corresponding generation of data sets are performed in an iterative manner, for example, based on a predefined rate and/or as network conditions change.
  • values associated with the monitored data flows are collected and/or analyzed using big-data analytics.
  • the term "big-data" means a data set that is too large to capture and/or process within an acceptable range of time, for example, too large to monitor the data flows and generate the statistical classifier in response to the received request for predicting the network resource use.
  • Examples of big-data analytics which may be used include the MapReduce methodology.
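A MapReduce-style aggregation over monitored flow records can be sketched in plain Python, showing the map, shuffle, and reduce phases; the record fields are illustrative assumptions:

```python
# Plain-Python sketch of a MapReduce-style aggregation over monitored flow
# records: map to (user, peak) pairs, shuffle by key, reduce to the maximum.
# Record fields and values are illustrative.
from collections import defaultdict

records = [
    {"user": "a", "peak_mbps": 30},
    {"user": "a", "peak_mbps": 25},
    {"user": "b", "peak_mbps": 80},
]

# Map phase: emit key/value pairs.
pairs = [(r["user"], r["peak_mbps"]) for r in records]

# Shuffle phase: group values by key.
groups = defaultdict(list)
for key, value in pairs:
    groups[key].append(value)

# Reduce phase: aggregate each group.
peak_per_user = {user: max(values) for user, values in groups.items()}
print(peak_per_user)  # {'a': 30, 'b': 80}
```

In a real deployment the map and reduce phases would run distributed over the monitoring data set; the structure of the computation is the same.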
  • the nominal network resource reservation request is defined by a routing policy, for example, a Service Level Agreement (SLA).
  • the risk of failing to adhere to the SLA when reserving the predicted network resource requirements is estimated.
  • the cost of failing to adhere to the SLA is estimated, for example, total fines.
  • the statistical classifier is generated based on user context information associated with the data flows.
  • the term "user context information" means details about the dataflow associated with the requesting user and/or other users.
  • the received request is associated with a certain user context.
  • the network resources are predicted in view of the certain user context. For example, the behavior of data flows from one user as compared to data flows from other users.
  • the nominal network resource reservation is recalibrated according to the predicted network resource requirements. For example, predicted future requests for nominal network resources for the client are adjusted. In another example, future predicted SLAs for clients with similar user profiles are designed according to the prediction.
  • FIG. 1 is a method of classifying data flows for route selection, in accordance with some embodiments of the present invention.
  • FIG. 2 is a block diagram of a system 200 for classifying data flows for route selection, in accordance with some embodiments of the present invention.
  • the method of FIG. 1 may be performed by system 200 of FIG. 2.
  • system 200 is a predictive analysis unit programmed to carry out the steps of the method.
  • System 200 and/or the method select routes for a prospective new flow by classifying the data flow to predict usage of network resources.
  • the route is selected for the classified flow.
  • Implementing the predicted usage of network resources, by reservation of the predicted resources, may reduce or prevent non-adherence to the SLA associated with the new flow.
  • Network resource use may be optimized by the method and/or system.
  • the predictions are made using a statistical classifier, for example, a predictive model, data mining techniques, or other methods.
  • Prediction algorithms may be based on machine learning techniques.
  • the predicted usage of network resources by the new flow may denote the minimum network resource reservation needed.
  • data of past and/or existing flows within the network are analyzed to form the statistical classifier for predicting the resources needed by new flows.
  • System 200 comprises a hardware processor 208 in electrical communication with one or more non-transitory memories 210 storing one or more program modules and/or databases containing instructions for execution by processor 208.
  • one or more memories 210 are designed to be suitable for big-data analytics, for example, direct-attached storage such as solid state drives using high capacity serial advanced technology attachment.
  • a predictive analysis module 228 monitors a data communication network 202.
  • module 228 monitors data flows within network 202.
  • monitoring is performed at the network provider level, for example, by the internet service provider (ISP).
  • Module 228 monitors, for example, data within the packets themselves (e.g., payload, type of packet, length of packet), transmission data of network flows (e.g., average transmission time per packet, number of hops, packet loss, jitter, length of time packets were transmitted), user profiles (e.g., identity of user sending the data, SLA of the user, type of application), or other data.
  • data monitoring is continuous. All data flows may be monitored, or a selected subset may be monitored (e.g., per user).
  • network 202 is an autonomous system that presents a commonly defined routing policy to external networks, for example, the internet.
  • Network 202 may be owned by a single entity (for example, an internet service provider, a telecommunication company, or other organizations), or by multiple entities that may connect different networks together to form the single autonomous system.
  • network 202 is a packet switching network.
  • module 228 gathers data from network 202, for example, from network elements (e.g., bandwidth and/or latency for the flow through a router), and/or from the data packets themselves (e.g., reading header information). Alternately or additionally, module 228 gathers data from a user database 230. Optionally, data is gathered per flow within network 202. Each user may have multiple flows within the network. Data may be gathered for the overall transmission of the data flow, for example, total transmission time from end terminal to end terminal. Data may be gathered per link between two nodes, for example, transmission time between the nodes using the link.
  • user database 230 contains parameters denoting user context information.
  • database 230 stores big-data.
  • the user context data may be associated with current flows within network 202, with previous flows within network 202, may not be related to current and/or previous flows, and/or may be related to potential flows having associated flow requests and/or associated prediction requests.
  • Examples of the gathered data based on the monitored flows include: the user (e.g., profile, ID), the application (e.g., associated with the dataflow), source (e.g., IP address), destination (e.g., IP address), requested resources (e.g., BW, latency), actual usage of the resource (e.g., minimum, maximum, average), lifespan (e.g., time duration of the flow), fines for SLA breaches, and/or other variables.
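The gathered per-flow variables listed above could be modeled, for example, as a record type; all field names here are hypothetical, not taken from the application:

```python
# Hypothetical record type for the per-flow variables listed above; all field
# names are illustrative, not taken from the application.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    user_id: str            # user profile / ID
    application: str        # application associated with the dataflow
    source_ip: str
    destination_ip: str
    requested_mbps: float   # nominal (e.g., SLA) bandwidth request
    used_mbps_avg: float    # actual average usage observed
    lifespan_s: float       # time duration of the flow, in seconds
    sla_fine: float         # fine for breaching the SLA

record = FlowRecord("user-42", "video", "10.0.0.1", "10.0.0.2",
                    100.0, 28.5, 3600.0, 1000.0)
# The gap between the nominal request and actual usage motivates prediction.
print(record.requested_mbps - record.used_mbps_avg)  # 71.5
```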
  • data may be collected, for example, per link, per device, and/or per interface.
  • the collected data may be combined, for example, for all links in the transmission route, for all devices encountered in the transmission route, and for all interfaces in the transmission route.
  • the gathered data is analyzed, for example, by module 228.
  • a statistical classifier is constructed based on the gathered data.
  • the analysis is performed by a suitable algorithm, module and/or system, for example, a learning algorithm, a predictive modeling algorithm, a recommender system, and/or other suitable algorithms.
  • an example of a recommender algorithm is a Collaborative Filtering algorithm.
  • the analysis is performed for one or more existing flows, for example, each flow in network 202.
  • the analysis is performed for one or more previous flows, for example, each flow in network 202 during the last 4 hours, last 24 hours, last week, similar day of the week, similar date of the year, or other periods in time.
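In the spirit of the recommender-style analysis above, a toy collaborative-filtering sketch might predict a new flow's usage from the flows of other users running the same application. The data, field names, and the plain averaging rule are illustrative assumptions, not the application's algorithm:

```python
# Toy collaborative-filtering sketch: predict a new flow's bandwidth from
# observed usage of other users running the same application. The data and
# the plain averaging rule are illustrative assumptions.

usage_by_user = {                 # observed average usage (Mbps) per application
    "u1": {"video": 28, "backup": 80},
    "u2": {"video": 32, "backup": 75},
    "u3": {"video": 5},
}

def predict_usage(target_app, peers):
    """Average the usage of peers that ran the same application."""
    values = [apps[target_app] for apps in peers.values() if target_app in apps]
    return sum(values) / len(values) if values else None

print(predict_usage("video", usage_by_user))  # mean of the three "video" values
```

A production recommender would weight peers by profile similarity rather than averaging uniformly; the sketch only shows the shape of the inference.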
  • the statistical classifier and/or analyzed data are stored in a dataflow dataset 226, for example, a database, a table, a hash-table, a tree, a directed graph, a record, an array, a linked list, or other suitable data structures.
  • dataset 226 stores big-data.
  • data from other associated databases are stored in dataflow dataset 226, for example, routing table data related to the analyzed flows, and/or other data related to the analyzed flows.
  • dataflow dataset 226 is updated, by iterations of monitoring (e.g., block 102) and/or generating the statistical classifier (e.g., block 104). Iterations may be performed continuously.
  • dataset 226 is maintained in an updated state, for example, according to best-efforts and/or resource availability.
  • the data within dataset 226 is used for the prediction process even when the update of dataset 226 lags behind the actual current network conditions.
  • the current data within dataset 226 may be sufficiently up to date to allow for accurate predictions, for example, within a margin of error.
  • the updates are decoupled from the rest of the process of predicting the network resource requirements (e.g., blocks 108-122).
  • the requests do not trigger a corresponding update of dataset 226.
  • the updates do not trigger a corresponding response to pending requests.
  • reading from dataset 226 (e.g., to predict the network requirements) and writing to dataset 226 may be performed asynchronously and/or independently.
  • the asynchronous reading and writing processes may be performed by separate entities and/or processes.
  • the updates may occur at a preset rate (e.g., user defined), and/or at preset intervals (e.g., automatically set by software). Updates may be performed continuously. Updates may be performed at dynamic rates, for example, changing according to network conditions and/or available resources.
  • the decoupling of the processes of updating dataset 226 and predicting the network resource requirements may allow the use of big-data analytics.
  • the big-data analytics may improve the accuracy of the prediction, using additional available information associated with the data flows and/or users.
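The decoupled read/write access to dataset 226 described above can be sketched as a snapshot guarded by a lock: updates replace the snapshot on their own schedule, and reads return whatever snapshot is current, possibly slightly stale. This is a minimal illustration; the class and field names are assumptions:

```python
# Minimal sketch of the decoupling described above: a writer updates the
# dataset snapshot on its own schedule, while readers (the prediction path)
# get whatever snapshot is current. Names are illustrative assumptions.
import threading

class DataflowDataset:
    def __init__(self):
        self._lock = threading.Lock()
        self._snapshot = {}

    def update(self, new_stats):
        """Writer path: replace the snapshot; never answers pending requests."""
        with self._lock:
            self._snapshot = dict(new_stats)

    def read(self):
        """Reader path: return the current snapshot; never triggers an update."""
        with self._lock:
            return dict(self._snapshot)

dataset = DataflowDataset()
dataset.update({"flow-1": {"predicted_mbps": 30}})
print(dataset.read()["flow-1"]["predicted_mbps"])  # 30
```

Because a read never blocks on a fresh update, the heavy big-data analytics can run in the background without delaying prediction requests.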
  • a request is received for a prediction of network resource requirements for transmission of a flow of data within network 202.
  • the request for prediction is triggered by a request for a route within network 202 for the flow.
  • the request may originate from a requesting entity 206, for example, a server within system 200 (e.g., to route data between two nodes and/or terminals within the network) and/or a server external to system 200 (e.g., to route data entering network 202 across and/or out of network 202, for example, the terminals being located outside of network 202).
  • network 202 is centrally managed by a network management system.
  • the network management system may act as requesting entity 206 to issue prediction requests.
  • requesting entity 206 issues a request for a route for a new dataflow.
  • the request is for a change in route in an existing dataflow.
  • the request is for re-instatement of a previous data flow, for example, an expired dataflow and/or an occasional dataflow.
  • the request for the route may be received, for example, by a routing module 234 for selecting routes through network 202.
  • Routing module 234 may be an off-the-shelf system such as a router, a route selection module 412 described with reference to FIG. 4, or other software and/or hardware for selecting data routes.
  • Routing module 234 may issue a request for prediction of network utilization resources associated with the new dataflow.
  • the prediction request may be received, for example, by a flow parameter transformation module 232.
  • one or both of the requests for the route selection and prediction are received by predictive analysis module 228.
  • nominal network resource requirements for the new dataflow are identified, for example, by accessing user database 230 and/or other sources of data.
  • a routing policy of the new dataflow defines the network resource requirements.
  • the routing policy is, for example, a SLA between the client and the service provider, a policy internal to the service provider itself, a policy based on the profile of the client, or other policies.
  • there are different levels of the routing policy, for example, for the same client but for different data, for different periods of time, and/or for other defined variables. Different levels of the routing policy may define different values for the nominal network resource requirements.
  • flow parameter transformation module 232 identifies the nominal network resource requirements and sends the identified nominal values to predictive analysis module 228.
  • predictive analysis module 228 identifies the nominal network resource requirements.
  • nominal values are not identified, for example, when there is no SLA for the client, when the data is classified as low-priority, and/or due to other factors. In such a case, the request for prediction may be based on a best-effort transmission of the data using available resources, without interfering with other data flows that have higher priorities and/or an SLA.
  • the required network resources are predicted for the dataflow.
  • the prediction is performed by predictive analysis module 228.
  • the dataflow is classified by the statistical classification to predict the network resource requirements.
  • the predicted value is calculated as a function of the nominal value, for example, a percentage of the nominal value.
  • the received nominal values may be modified per the function, and returned as the predicted values.
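The "predicted value as a function of the nominal value" case above can be sketched with a fixed fraction; the 30% figure mirrors the earlier 100 Mbps to about 30 Mbps example and is purely illustrative:

```python
# Minimal sketch: predict resources as a fixed percentage of the nominal
# reservation. The 30% fraction mirrors the 100 Mbps -> ~30 Mbps example
# given earlier and is purely illustrative.

def predict_from_nominal(nominal_mbps, fraction=0.3):
    """Predicted requirement as a percentage of the nominal reservation."""
    return nominal_mbps * fraction

print(predict_from_nominal(100))  # about 30 Mbps from a 100 Mbps nominal request
```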
  • the predicted values are stored in dataflow dataset 226.
  • the predicted values are stored in association with the stored identified nominal values.
  • dataflow dataset 226 is represented as a table, for example, a flow analysis table 514 described with reference to FIG. 5.
  • the table contains a row for Known or identified nominal values, and another row for the Predicted values.
  • the table contains one or more columns Param1, Param2, Param3, ... ParamN, for storing values associated with different network resource requirements and/or routing parameters.
  • the table may be multidimensional, for example, with another dimension for the Flow ID of the requesting new flow, current and/or previous flows of the same client, current and/or previous flows of other clients, and/or other flows.
  • the nominal and/or predicted values may represent requirements for overall data transport, or for partial data transport, for example, different requirements for different links and/or other internal network divisions.
  • dataflow dataset 226 is represented by other suitable data structures, for example, records, trees, graphs, objects, linked lists, and/or other suitable structures.
  • the prediction of the resource requirements is performed in view of a selected risk of non-adherence to the SLA and/or associated fines due to the non-adherence.
  • the predicted resource requirements (e.g., obtained using the statistical classifier) are adjusted according to the level of selected risk and/or associated fines.
  • a risk analysis algorithm is used to calculate risks using multiple parameters.
  • the adjusted predicted resource requirements denote an optimal solution, for example, higher network resource utilization in view of lower risks and lower total fine payout, while increasing revenue. For example, a 90% pre-selected risk of paying a fine may result in relatively higher resource requirement reservations (e.g., higher BW allocation) than a 50% pre-selected risk of paying the fine.
  • an 80% pre-selected risk of paying a predetermined fine of $10000 may result in relatively higher resource requirement reservations than an 80% pre-selected risk of paying a predetermined fine of $1000.
  • the risk may be selected, for example, automatically by software (e.g., according to risk analysis algorithms), manually by the network operator, and/or pre-set by the manufacturer.
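Following the examples above (a higher pre-selected risk figure and a larger predetermined fine both push the reservation upward), a hypothetical risk-adjustment rule might look like this; the scoring formula is an assumption for illustration only, not the application's risk analysis algorithm:

```python
# Hypothetical risk-adjustment rule (an illustrative assumption, not the
# application's algorithm): the reservation grows with the pre-selected risk
# figure and with the size of the predetermined fine.

def risk_adjusted_reservation(predicted_mbps, selected_risk, fine):
    """Scale the predicted reservation by a risk- and fine-dependent margin."""
    margin = selected_risk * min(fine / 10000.0, 1.0)
    return predicted_mbps * (1.0 + margin)

# A 90% pre-selected risk reserves more than a 50% one (same fine)...
print(risk_adjusted_reservation(30.0, 0.9, 10000) >
      risk_adjusted_reservation(30.0, 0.5, 10000))  # True
# ...and an 80% risk with a $10000 fine reserves more than with a $1000 fine.
print(risk_adjusted_reservation(30.0, 0.8, 10000) >
      risk_adjusted_reservation(30.0, 0.8, 1000))   # True
```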
  • the predicted network resource requirements are provided, for example, as a generated signal, as one or more data packets, and/or using other information transfer methods.
  • the predicted resource requirements are provided by predictive analysis module 228 to flow parameter transformation module 232, and/or to routing module 234.
  • the predicted resource requirements are reserved for the requesting dataflow, for example, by flow parameter transformation module 232.
  • the nominal values have already been reserved for the requesting dataflow, and are recalibrated according to the predicted network resource requirements.
  • a route is selected for the classified dataflow, optionally based on the predicted network resource requirements.
  • routing module 234 selects the route using a routing table 236, optionally based on the predicted resource requirements.
  • Routing table 236 may be a standard routing table associated with a router for selecting routes, a multi-tier routing dataset 404 described with reference to FIG. 4, and/or other databases storing information for selecting data routes.
  • Additional links may be available for selecting the route based on the predicted network resource requirements, as compared to links available for selecting the route based on the nominal network resource requirements. For example, for predicted BW values significantly less than nominal BW values, many more links may be available to accommodate the lower (i.e., predicted) BW than links available to accommodate the higher (i.e., nominal) BW.
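The point above, that a lower predicted bandwidth qualifies more candidate links than the nominal bandwidth, can be sketched with hypothetical link capacities:

```python
# Hypothetical link capacities (free Mbps) illustrating that the lower
# predicted bandwidth qualifies more candidate links than the nominal one.

links = {"A-B": 40, "A-C": 60, "B-C": 120, "A-D": 200}

def usable_links(links, required_mbps):
    """Links with enough free capacity for the requirement."""
    return {name for name, free in links.items() if free >= required_mbps}

print(len(usable_links(links, 30)))   # 4 links can carry the predicted 30 Mbps
print(len(usable_links(links, 100)))  # only 2 can carry the nominal 100 Mbps
```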
  • the data packets are transmitted within network 202 using the selected route.
  • adherence to the routing policy is monitored during transmission of the data packets with implementation of the predicted network resource requirements.
  • Adherence to the SLA may be monitored for the new data flow and/or for other data flows through network 202.
  • adherence to the nominal values within the SLA is monitored during implementation of data routing using the predicted values.
  • some instances of failure to meet the SLA are allowed and may be expected (e.g., statistical variation), for example, when overall profits are increased in view of increased optimized utilization of network resources.
  • overall revenue and/or profits are monitored when routing data using the predicted network resources.
  • the revenue and/or profits are compared to routing data using the nominal requested resources.
  • the process of classifying the dataflow is repeated (e.g., one or more of blocks 108, 110, 112, 114, 116, 118 and/or 120).
  • the process is repeated for each new dataflow request. For example, for requests by the same client for several different data flows, and/or for requests by different clients.
  • the process is adjusted.
  • the process is adjusted in view of the monitoring of adherence to the SLA. For example, if a certain data flow does not adhere to the SLA using the current predicted requirements, another classification may be made so that routing using the new predicted values improves adherence to the SLA.
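One possible shape of such an adjustment, sketched here with an invented margin step and monitoring interface (the patent does not specify either), is to scale the predicted requirement back toward the nominal reservation after each observed SLA miss:

```python
def adjust_prediction(predicted_bw, nominal_bw, sla_met, step=1.2):
    """After an SLA miss, widen the predicted requirement toward the nominal value."""
    if sla_met:
        return predicted_bw
    return min(predicted_bw * step, nominal_bw)  # never exceed the nominal reservation

bw = 30.0
for sla_met in (False, False, True):  # two observed SLA misses, then adherence
    bw = adjust_prediction(bw, nominal_bw=100.0, sla_met=sla_met)
print(round(bw, 1))  # 43.2
```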
  • system 200 has an interface 218 for electrical communication between processor 208 and requesting entity 206 and/or the network management system.
  • system 200 has an interface 220 for electrical communication between processor 208 and network 202.
  • system 200 is sold as a box.
  • Interface 218 is connected to the network management system.
  • Interface 220 is connected to the communication network.
  • at least some parts of system 200 are sold as software, for example, loaded and run as part of the network management system.
  • system 200 is in electrical communication with one or more input elements 222 for a user to enter input into processor 208, for example, a touchscreen, a keyboard, a mouse, voice recognition, and/or other elements.
  • the user may enter, for example, the routing policies.
  • system 200 is in electrical communication with one or more output elements 224 for a user to view data from processor 208, for example, a screen, a mobile device (e.g., Smartphone), a printer, a laptop, a remote computer, or other devices.
  • Output element 224 may be used, for example, to view routing table 236, to upgrade software, to view configurations, and/or to debug the system.
  • FIG. 3 is a method of selecting routes for classified data flows, in accordance with some embodiments of the present invention.
  • the method of FIG. 3 incorporates the method of classifying data flows described with reference to FIG. 1.
  • FIG. 4 is a block diagram of a system 400 for selecting data routes for classified flows, in accordance with some embodiments of the present invention.
  • System 400 is a combination of elements from system 200 of FIG. 2, with route selection elements.
  • the method of FIG. 3 may be performed by system 400 of FIG. 4.
  • System 400 and/or the method of FIG. 3 may improve data routing, for example, by improving utilization of network resources, lowering the risk of paying SLA fines, lowering the total fines paid, and/or selecting better routes for data flows. Big-data analytical methods may be used to improve the data routing.
  • the system and/or method select and/or calculate data routes based on user context information.
  • the network resource requirements are predicted based on the user context information.
  • the data routes may be selected based on the predicted requirements.
  • the system and/or method select and/or calculate data routes in view of tiered routing policies, for example, a tiered SLA.
  • the network resource requirements are predicted in view of the tiered SLA, as applied to the requesting dataflow.
  • the route is selected based on the requirements and/or according to multi-tier routing dataset 404 storing multiple different routing parameters per link (e.g., between two network nodes).
  • One or more of the routing parameters may denote tiered routing policies, for example, each routing parameter denotes a different tier of the policy.
  • multi-tier routing dataset 404 is updated.
  • dataflow dataset 226 is updated.
  • Multi-tier routing dataset 404 and/or dataflow dataset 226 are updated in an asynchronous manner with respect to the rest of the route selection process (one or more of blocks 304-312).
  • multi-tier routing dataset 404 is updated with data from the classification of the prospective data flow (e.g., block 112 and/or block 306).
  • routing parameters of dataset 404 represent different costs for each link between two nodes in the network.
  • the costs may be updated based on the classified data flow results.
  • costs may be updated to reflect predicted network resource requirements, instead of nominal network resource requirements.
  • the statistical classifier is constructed based on data within multi-tier routing dataset 404, for example the different costs for the different links may be used to classify the prospective new flow.
  • data collected from network 202 is stored in a network database 416.
  • the data may be, for example, key performance indicators, metrics and/or other values.
  • Data may be collected by a route analysis module 414, other modules, other systems, and/or databases.
  • the stored data may be processed to populate parameters within dataflow dataset 226 and/or multi-tier routing dataset 404.
  • the data collection and/or processing may be performed using big-data analytics.
  • Multi-tier routing dataset 404 may correspond to routing table 236 of FIG. 2, with additional functionality.
  • Dataset 404 contains links between nodes in network 202.
  • Each link is associated with multiple routing parameters, for example, actual monetary cost of the link, bandwidth of the link, latency of the link, link utilization (e.g., real time), user defined parameters, or other parameters.
  • the routing parameters may be, for example, cost associated parameters, where each parameter represents different criteria for cost.
  • the multiple routing parameters per link allow for multi-constraint routing.
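A minimal illustration of such a multi-parameter link dataset, with invented field names and values (the patent does not prescribe a concrete layout), might look as follows:

```python
from dataclasses import dataclass

@dataclass
class LinkEntry:
    monetary_cost: float   # actual monetary cost of using the link
    bandwidth_mbps: float  # link bandwidth
    latency_ms: float      # link latency
    utilization: float     # real-time utilization, 0.0-1.0

dataset = {
    ("A", "B"): LinkEntry(monetary_cost=2.0, bandwidth_mbps=100, latency_ms=5, utilization=0.4),
    ("B", "C"): LinkEntry(monetary_cost=1.0, bandwidth_mbps=50, latency_ms=12, utilization=0.7),
}

# A latency-sensitive policy consults only the latency parameter of each link.
best = min(dataset, key=lambda link: dataset[link].latency_ms)
print(best)  # ('A', 'B')
```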
  • system 400 receives a request for routing one or more data packets through network 202.
  • the request may be issued by requesting entity 206.
  • a routing policy associated with the received request is identified, for example, a SLA.
  • a best-effort approach may be used.
  • the request is received by route selection module 412 for selecting a route.
  • Module 412 may correspond to routing module 234 of FIG. 2 having additional functionality to select routes according to subsets of multiple parameters.
  • the flow is classified based on the statistical classifier to predict the actual network resource requirements.
  • the prediction may be performed, for example, as described with reference to the method of FIG. 1 and/or system 200 of FIG. 2.
  • a route for the classified data flow is selected, for example, by module 412.
  • module 412 selects the route based on the predicted network resource requirements. For example, the route is selected to satisfy the modified network requirements instead of the nominal requirements. Different paths may be selected using the modified requirements than would be selected using the nominal requirements.
  • route selection module 412 accesses multi-tier routing dataset 404.
  • dataset 404 is accessed as defined by the identified routing policy. For example, a subset of the routing parameters within dataset 404 that correspond to the identified routing policy are accessed. The accessed parameters may be used in selecting routes, for example, calculating a least cost route. Alternatively or additionally, the routing parameters represent raw values for calculating one or more metrics, for example, using a function. The metric calculations may be performed on-the-fly according to the received routing request. For example, different routing policies may define different equations for calculating metrics using different subsets of routing parameters as variables.
  • the route is selected by selecting each potential link in the route based on a subset of the multiple different routing parameters from each potential link, the subset defined by the routing policy.
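For example, a policy-defined cost function over a subset of the per-link parameters could be sketched as follows (the policy names, weights, and parameter values are assumptions for illustration, not taken from the patent):

```python
policies = {
    # premium tier: minimize latency, penalized by current utilization
    "premium": lambda p: p["latency_ms"] * (1 + p["utilization"]),
    # economy tier: minimize monetary cost only
    "economy": lambda p: p["monetary_cost"],
}

link_params = {
    "A-B": {"latency_ms": 5, "utilization": 0.4, "monetary_cost": 2.0},
    "B-C": {"latency_ms": 12, "utilization": 0.7, "monetary_cost": 1.0},
}

def least_cost_link(policy_name):
    """Select the cheapest link under the cost function defined by the policy."""
    cost = policies[policy_name]
    return min(link_params, key=lambda link: cost(link_params[link]))

print(least_cost_link("premium"))  # A-B
print(least_cost_link("economy"))  # B-C
```

Each policy reads only the subset of parameters named in its cost function, so different tiers can rank the same links differently.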
  • the selected route is provided, for example, as a signal, as one or more data packets, and/or using other information transfer methods.
  • the selected route is provided to requesting entity 206.
  • the data packets are transmitted within network 202 using the selected route.
  • transmission of the data packets is performed while adhering to the SLA, optionally according to the pre-selected level of risk and/or according to the pre-selected fine payouts.
  • FIG. 5 is an exemplary design of the system of FIG. 4, in accordance with some embodiments of the present invention.
  • a routing system 500 for selection of routes based on classified prospective data flows is in electrical communication with a data communication network 502 under central management by a network control 504. Routing system 500 receives requests for route selection issued by control 504. System 500 selects the route, and provides the selected route back to control 504.
  • System 500 contains flow analysis table 514 for storing nominal and predicted values associated with dataflows in network 502. Additional details of table 514 are provided herein.
  • System 500 contains a multi-tier routing table 506, having multiple cost columns associated with each link.
  • Each cost column represents different criteria for cost.
  • each column may represent cost per CoS (e.g., in systems where flows are classified to a predetermined class of service).
  • a metric is calculated from the cost column values, at the time of path calculation, on a per flow basis (e.g., in systems without pre-determined CoS).
  • a path computation engine 508 accesses table 506. Access may be performed according to CoS groups and/or by a per-flow policy. Path computation engine 508 selects data routes in view of the predicted requirements, for example, using a flow parameter transformation module.
  • a big-data analysis engine 510 and/or a big-data database 512 collect data from network 502. Based on the collected data (stored within database 512), engine 510 calculates values for the cost columns of table 506 (e.g., single metrics per parameter and/or a cost function) and/or classifies the data flow to calculate values for the predicted parameters within flow analysis table 514 (e.g., using a predictive routing analysis module).
  • a user information database 516 stores collected network data used to calculate the predicted values of table 514, and/or user associated details (e.g., SLA, user profile, and/or other data).
  • a path request is sent by control 504 to engine 508.
  • Engine 508 sends a prediction request to big data analysis engine 510.
  • Engine 510 accesses flow analysis table 514 and classifies the data flow to calculate the requested predicted requirement values (e.g., modified from the associated nominal values).
  • Engine 510 returns the predicted values to path calculation engine 508.
  • Engine 508 accesses table 506. The access may be performed in one of two modes. In a first class of service mode, the path request is classified into one of several classes according to predefined rules. The cost column corresponding to the class is used to select the least cost route. In a second per-flow SLA mode, the columns represent raw metrics. Engine 508 creates a cost function on-the-fly by creating a temporary cost column that combines together several metric columns according to a predefined cost function. Engine 508 calculates the least cost route in view of the predicted values, and returns the path to control 504.
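The two access modes may be sketched as follows (the column names and the example cost function are illustrative assumptions, not taken from the patent):

```python
table = {  # per-link cost columns and raw metric columns
    "A-B": {"cos_gold": 3.0, "cos_bronze": 1.0, "latency_ms": 5, "loss_pct": 0.1},
    "B-C": {"cos_gold": 2.0, "cos_bronze": 4.0, "latency_ms": 12, "loss_pct": 0.05},
}

# Mode 1 (class of service): use the pre-computed cost column for the flow's class.
def least_cost_cos(cos_column):
    return min(table, key=lambda link: table[link][cos_column])

# Mode 2 (per-flow SLA): build a temporary cost column on the fly by combining
# raw metric columns according to a cost function defined for the flow.
def least_cost_per_flow(cost_fn):
    temporary_column = {link: cost_fn(cols) for link, cols in table.items()}
    return min(temporary_column, key=temporary_column.get)

print(least_cost_cos("cos_gold"))  # B-C
print(least_cost_per_flow(lambda c: c["latency_ms"] + 100 * c["loss_pct"]))  # A-B
```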
  • Table 506 and/or table 514 are updated asynchronously from the path selection process described in the previous paragraph. Key performance indicators and/or other metrics are gathered periodically from network 502 and stored in database 512 and/or database 516. Big-data engine 510 queries database 512, and calculates values for the cost and/or metric columns (depending on the model of CoS and/or on-the-fly model) of table 506. Big-data engine 510 updates routing table 506. Big-data engine 510 queries databases 512 and/or 516, and calculates values for the predicted parameters of table 514. Big-data engine 510 updates flow analysis table 514.
  • compositions, methods or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
  • a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
  • range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the present invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.

Abstract

There is provided a method of classifying flows of data through a data communication network for selecting routes, the method comprising: monitoring data flows in a data communication network; generating a statistical classifier based on the monitored flows; receiving a request for a route in a data communication network for transmission of a flow of data packets; classifying the flow based on the generated statistical classifier to predict network resource requirements for transmission of the flow through the network; selecting the route for the classified flow; and generating a signal indicative of the selected route so that the flow is routed in the data communication network through the selected route.

Description

METHODS AND SYSTEMS FOR SELECTING RESOURCES FOR DATA ROUTING
TECHNICAL FIELD
The present application relates to methods and systems for selecting routes for data flows in a communication network and to methods and systems for selecting network resource requirements for transmission of data flows within a data communication network.
BACKGROUND
Communication networks, for example, cloud and/or data center networks are required to provision large numbers and/or sizes of flows in the network while adhering to a SLA (Service Level Agreement) associated with each of the flows. The SLA may define fines for failing to provide a level of service, for example, failure to deliver data packets. In contrast, to economically and efficiently use network resources, for example, in order to gain from the statistical multiplexing nature of modern packet based communication networks, networks are oversubscribed.
Oversubscription is the act of provisioning the same resource several times over. For example, two flows each requiring 100 Mbps (megabits per second) may be provisioned over a link with a nominal bandwidth of 100 Mbps. This provisioning amounts to 2 times oversubscription.
Statistical Multiplexing is a data transmission method which is based on the premise that packet based flows transmit intermittently and independently, and therefore may be interleaved over the same resource.
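The 2x oversubscription example and the statistical-multiplexing premise can be made concrete with a small calculation (the 30% duty cycle is a hypothetical figure chosen for illustration):

```python
link_capacity = 100.0    # Mbps
flows = [100.0, 100.0]   # nominal bandwidth of each provisioned flow

oversubscription = sum(flows) / link_capacity
print(oversubscription)  # 2.0

# If each flow transmits only 30% of the time, independently, the chance that
# both peak at once (congesting the link) is 0.3 * 0.3 = 9%.
duty_cycle = 0.3
p_both_active = duty_cycle ** 2
print(round(p_both_active, 2))  # 0.09
```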
When oversubscribing flows associated with SLAs, there is a risk of failing to adhere to the SLA, which may result in fines to the service provider. One possible solution is to guarantee BW (bandwidth) availability to the data flow. The guaranteed BW should not be oversubscribed, to avoid SLA related fines for failing to deliver the data. However, the reserved BW may be wasted, as packet flows transmit intermittently, leaving the BW unused during the non-transmission periods. Leaving the links underutilized to reserve the BW for the flows may be implemented, for example, in carrier networks. As the flows are long lived (e.g., continuous and/or transmitted for long periods of time), the fines for breaching the SLA are larger than the cost of underutilizing the network infrastructure.
Another possible solution is to intentionally fail to adhere to the SLA, and pay the fines for SLA breaches. This solution may be implemented, for example, in datacenters, where the flows are relatively short lived. Paying fines may be less costly than underutilizing the network resources.
Service providers try to balance these two contradictory solutions. On the one hand, if links are only partially used, the network resources may be wasted without proper return on investment. On the other hand, if the SLA is breached, then fines are paid out, which may cause a financial loss for the service provider.
SUMMARY
It is an object of the invention to provide systems and/or methods that classify flows of data through a data communication network, for selecting routes for the data flow.
The foregoing and other objects are achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
According to a first aspect, a method of classifying flows of data through a data communication network for selecting routes comprising: monitoring data flows in a data communication network; generating a statistical classifier based on the monitored flows; receiving a request for a route in a data communication network for transmission of a flow of data packets; classifying the flow based on the generated statistical classifier to predict network resource requirements for transmission of the flow through the network; selecting the route for the classified flow; and generating a signal indicative of the selected route so that the flow is routed in the data communication network through the selected route.
In a first possible implementation of the method according to the first aspect, the classifying further comprises a certainty of the prediction of actual usage of network resources by the flow.
In a second possible implementation form of the method according to the first aspect as such or according to the first implementation form of the first aspect, the method further comprises receiving a request for a prediction of the network resource routing requirements for transmission of the flow of data in the data communication network and predicting the network resource requirements based on the statistical classifier.
In a third possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the predicted network resource requirements are calculated as a function of nominal network resource reservations of the flow.
In a fourth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, classifying further comprises classifying to predict at least one of the risk and the cost of failing to adhere to a service level agreement of the flow.
In a fifth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, classifying further comprises adjusting the predicted network resource requirements in view of a selected risk of non-adherence to a service level agreement having associated fines due to non-adherence.
In a sixth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the monitoring and generating the statistical classifier are performed using big-data analytics.
In a seventh possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the monitoring and generating the statistical classifier are performed asynchronously with respect to the receiving, classifying, selecting, and generating the signal.
In an eighth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the statistical classifier is based on a Collaborative Filtering system.
In a ninth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method further comprises monitoring adherence to a service level agreement defined by nominal resource requirements of the flow, during transmission of the flow over the selected route that utilizes the predicted network resource requirements.

In a tenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the monitoring and generating the statistical classifier are continuously performed in an iterative manner.
In an eleventh possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method further comprises recalibrating nominal network resource reservations of the flow to the predicted network resource requirements.
In a twelfth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, monitoring data flows comprises identifying user context data of the data flows, and generating the statistical classifier comprises generating the statistical classifier based on the identified user context data.
In a thirteenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, selecting the route comprises accessing a routing dataset storing a plurality of different routing parameters per link.
In a fourteenth possible implementation form of the method according to the first aspect as such or according to the thirteenth implementation form of the first aspect, the method further comprises collecting data from the data communication network, and updating the routing dataset using the collected data, the updating performed asynchronously with respect to accessing the routing dataset for route selection.
According to a further aspect, the method according to any of the implementation forms of the first aspect or the first aspect as such is carried out by a predictive analysis unit programmed to carry out the steps of the method.
According to a further aspect, a computer program having a program code for performing the method according to any of the implementation forms of the first aspect or the first aspect as such, when the computer program runs on a computer, is provided.
According to another aspect, a flow of data is classified using a statistical classifier to predict required network resources for transmission of data flows within a data communication network. The statistical classifier is based on current and/or previous patterns of data flows through the network. A route in the network may be selected for the classified flow of data.

Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of implementation forms of the present invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
Implementation forms of the method and/or system of the present invention can involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of implementation forms of the method and/or system of the present invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
For example, hardware for performing selected tasks according to implementation forms of the present invention could be implemented as a chip or a circuit. As software, selected tasks according to implementation forms of the present invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary implementation form of the present invention, one or more tasks according to exemplary implementation forms of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
BRIEF DESCRIPTION OF THE DRAWINGS
Some embodiments of the present invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the present invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the present invention may be practiced.

FIG. 1 is a flow chart of a method of classifying data flows, in accordance with some embodiments of the present invention;
FIG. 2 is a block diagram of a system for classifying data flows, in accordance with some embodiments of the present invention;
FIG. 3 is a flowchart of a method of selecting routes for transmission of data flows based on classified data flows, using the method of FIG.1, in accordance with some embodiments of the present invention;
FIG. 4 is a block diagram of a system for selecting routes for transmission of data flows based on classified data flows, using the system of FIG. 2, in accordance with some embodiments of the present invention; and
FIG. 5 is a schematic diagram of an exemplary design of a system for selecting routes for transmission of data flows, according to the system of FIG. 4.
DESCRIPTION OF SPECIFIC EMBODIMENTS
The present invention relates to methods and systems for selecting routes for data flows in a communication network and to methods and systems for selecting network resource requirements for transmission of data flows within a data communication network.
An aspect of some embodiments of the present invention relates to systems and/or methods for classifying flows of data (e.g., packets) through a data communication network, based on multiparameter monitoring of current and/or previous network data flow patterns. Optionally, the classification of the new data flow is performed by a statistical classifier. Optionally, the statistical classifier is constructed from data collected from multiple-parameter monitoring of current and/or previous data flow patterns for the same client requesting the new data flow. Alternatively or additionally, the statistical classifier is constructed from data collected from multiple-parameter monitoring of current and/or previous data flow patterns of other clients.
As used herein, the phrase "predicted network resource requirements" means the network resources that are predicted to be needed for transmission of the dataflow through the network using a selected route. Optionally, classifying the flow based on the generated statistical classifier predicts the network resource requirements. Examples of network resource requirements include: bandwidth to accommodate the data flow request, latency, error rate, jitter, packet loss rate, and/or other transmission related requirements. The predicted network resource requirements may or may not match the actual network resources used during the data flow transmission. Optionally, the statistical certainty and/or statistical error between the predicted network resource requirements and the actual network resources used is selected and/or estimated. The prediction may be performed to be within the statistical error. Alternatively or additionally, a certainty of the actual usage of network resources by the flow is predicted. For example, the predicted bandwidth is about 30 Mbps with a certainty of about 90%.
The predicted network resource requirements may increase utilization of network infrastructure resources, and may reduce the risk of paying fines, for example, due to failure to adhere to service level agreements. Utilization of network resources may be increased by allocating and reserving only the actual required resources. The risk of paying fines due to uncalculated over-provisioning of network resources may be reduced. Optionally, predicting network resource requirements in view of the risk of missing the SLA and/or the corresponding fines may control risk of fine payouts, for example, by taking into account the confidence level of required resource estimation. The predicted network resource requirements may increase adherence to the service level agreement, which may improve user experience.
Optionally, the network resource requirements are predicted in reference to nominal network resources of the flow of data. For example, the nominal required bandwidth for a new flow might be about 100 megabit per second (Mbps), and the predicted bandwidth is about 30 Mbps.
Optionally, the predicted network resource requirement is less than the nominal network resource reservation associated with the request. Optionally, the difference between the predicted and nominal values is used to transmit other data flows.
Optionally, a statistical classifier is generated based on the monitored data flows within the data communication network. Optionally, the prediction of the network resource requirements is performed using the statistical classifier. Generating the statistical classifier and/or prediction using the statistical classifier is, for example, based on a learning classifier system, based on a recommender system (e.g., Collaborative Filtering), and/or other suitable systems. Optionally, the monitoring and the corresponding generation of data sets are performed in an iterative manner, for example, based on a predefined rate and/or as network conditions change.
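As a non-limiting sketch of such a classifier, a nearest-neighbour flavour of collaborative filtering could predict a new flow's actual bandwidth from monitored flows with a matching client profile (the feature encoding and the history data below are invented for illustration):

```python
history = [
    # (user profile features, nominal Mbps, observed actual Mbps)
    ((1, 0), 100, 28),
    ((1, 0), 100, 33),
    ((0, 1), 100, 95),
]

def predict_actual_bw(features, nominal_bw):
    """Average the observed usage of past flows with the same profile features."""
    matches = [actual for f, nom, actual in history if f == features and nom == nominal_bw]
    if not matches:
        return nominal_bw  # no evidence: fall back to the nominal reservation
    return sum(matches) / len(matches)

print(predict_actual_bw((1, 0), 100))  # 30.5
print(predict_actual_bw((0, 1), 100))  # 95.0
```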
Optionally, values associated with the monitored data flows are collected and/or analyzed using big-data analytics. As used herein, the term big-data means a data set that is too large for capturing and/or processing within an acceptable range of time, for example, too large for monitoring the data flows and generating the statistical classifier in direct response to the received request for predicting the network resource use. Examples of big-data analytics which may be used include the MapReduce methodology.
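The MapReduce-style aggregation mentioned above can be sketched as an in-process map/shuffle/reduce pass over monitored flow records; the record layout and field names here are illustrative assumptions, not a format defined by this description:

```python
from collections import defaultdict

# Hypothetical flow records: (user_id, bandwidth_used_mbps).
flow_records = [
    ("alice", 25.0), ("bob", 80.0), ("alice", 35.0),
    ("bob", 70.0), ("alice", 30.0),
]

# Map step: emit (key, value) pairs keyed by user.
mapped = [(user, bw) for user, bw in flow_records]

# Shuffle step: group values by key.
groups = defaultdict(list)
for user, bw in mapped:
    groups[user].append(bw)

# Reduce step: aggregate each group into a per-user average.
avg_bw = {user: sum(vals) / len(vals) for user, vals in groups.items()}

print(avg_bw)  # {'alice': 30.0, 'bob': 75.0}
```

In a real deployment the map and reduce steps would run distributed over a big-data cluster rather than in a single process; only the shape of the computation is shown here.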
Optionally, the nominal network resource reservation request is defined by a routing policy, for example, a Service Level Agreement (SLA). Optionally, the risk of failing to adhere to the SLA when reserving the predicted network resource requirements is estimated. Alternatively or additionally, the cost of failing to adhere to the SLA is estimated, for example, total fines.
Optionally, the statistical classifier is generated based on user context information associated with the data flows. As described herein, the phrase "user context information" means details about the dataflow associated with the requesting user and/or other users. Optionally, the received request is associated with a certain user context. Optionally, the network resources are predicted in view of the certain user context. For example, the prediction may account for the behavior of data flows from one user as compared to data flows from other users.
Optionally, the nominal network resource reservation is recalibrated according to the predicted network resource requirements. For example, predicted future requests for nominal network resources for the client are adjusted. In another example, future predicted SLAs for clients with similar user profiles are designed according to the prediction.
Before explaining at least one embodiment of the present invention in detail, it is to be understood that the present invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The present invention is capable of other embodiments or of being practiced or carried out in various ways.
Referring now to the drawings, FIG. 1 is a method of classifying data flows for route selection, in accordance with some embodiments of the present invention. Reference is also made to FIG. 2, which is a block diagram of a system 200 for classifying data flows for route selection, in accordance with some embodiments of the present invention. The method of FIG. 1 may be performed by system 200 of FIG. 2. For example, system 200 is a predictive analysis unit programmed to carry out the steps of the method. System 200 and/or the method select routes for a prospective new flow by classifying the data flow to predict usage of network resources. Optionally, the route is selected for the classified flow. Implementing the predicted usage of network resources, by reservation of the predicted resources, may reduce or prevent non-adherence to the SLA associated with the new flow. Network resource use may be optimized by the method and/or system. Optionally, the predictions are made using a statistical classifier, for example, a predictive model, data mining techniques, or other methods.
Prediction algorithms may be based on machine learning techniques, for example:
• Artificial neural network
• Hierarchical Clustering
• Collaborative filtering
• Content-based filtering
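As a minimal illustration of one such technique, the sketch below predicts a new flow's bandwidth by averaging the k most similar past flows, a nearest-neighbour flavour of the collaborative-filtering idea listed above; all feature names and sample values are assumptions for the example:

```python
# Past flows: (features, actual_bandwidth_mbps). The feature layout is
# an illustrative assumption: (nominal_bw_mbps, hour_of_day, app_type_code).
history = [
    ((100.0, 9, 1), 28.0),
    ((100.0, 10, 1), 32.0),
    ((50.0, 9, 2), 45.0),
    ((100.0, 9, 1), 30.0),
]

def distance(a, b):
    """Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_bandwidth(features, k=3):
    """Average the actual usage of the k most similar past flows."""
    nearest = sorted(history, key=lambda rec: distance(rec[0], features))[:k]
    return sum(actual for _, actual in nearest) / len(nearest)

# A new flow requesting a nominal 100 Mbps at 9am for app type 1:
print(predict_bandwidth((100.0, 9, 1)))  # 30.0, well below the nominal
```

A production classifier would use one of the listed machine learning techniques over far richer features; the sketch only shows how past flow behavior can drive the prediction.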
The predicted usage of network resources by the new flow may denote the minimum network resource reservation needed. Optionally, data of past and/or existing flows within the network are analyzed to form the statistical classifier for predicting the resources needed by new flows.
System 200 comprises a hardware processor 208 in electrical communication with one or more non-transitory memories 210 storing one or more program modules and/or databases containing instructions for execution by processor 208. Optionally, one or more memories 210 are designed to be suitable for big-data analytics, for example, direct-attached storage such as solid state drives using high capacity serial advanced technology attachment.
Optionally, at 102, a predictive analysis module 228 monitors a data communication network 202. Optionally, module 228 monitors data flows within network 202. Optionally, monitoring is performed at the network provider level, for example, by the internet service provider (ISP). Module 228 monitors, for example, data within the packets themselves (e.g., payload, type of packet, length of packet), transmission data of network flows (e.g., average transmission time per packet, number of hops, packet loss, jitter, length of time packets were transmitted), user profiles (e.g., identity of user sending the data, SLA of the user, type of application), or other data. Optionally, data monitoring is continuous. All data flows may be monitored, or a selected subset may be monitored (e.g., per use).
Optionally, network 202 is an autonomous system that presents a commonly defined routing policy to external networks, for example, the internet. Network 202 may be owned by a single entity (for example, an internet service provider, a telecommunication company, or other organizations), or by multiple entities that may connect different networks together to form the single autonomous system. Optionally, network 202 is a packet switching network.
Optionally, module 228 gathers data from network 202, for example, from network elements (e.g., bandwidth and/or latency for the flow through a router), and/or from the data packets themselves (e.g., reading header information). Alternately or additionally, module 228 gathers data from a user database 230. Optionally, data is gathered per flow within network 202. Each user may have multiple flows within the network. Data may be gathered for the overall transmission of the data flow, for example, total transmission time from end terminal to end terminal. Data may be gathered per link between two nodes, for example, transmission time between the nodes using the link.
Optionally, user database 230 contains parameters denoting user context information. Optionally, database 230 stores big-data. The user context data may be associated with current flows within network 202, with previous flows within network 202, may not be related to current and/or previous flows, and/or may be related to potential flows having associated flow requests and/or associated prediction requests.
Examples of the gathered data based on the monitored flows include: the user (e.g., profile, ID), the application (e.g., associated with the dataflow), source (e.g., IP address), destination (e.g., IP address), requested resources (e.g., BW, latency), actual usage of the resource (e.g., minimum, maximum, average), lifespan (e.g., time duration of the flow), fines for SLA breaches, and/or other variables.
For each flow, data may be collected, for example, per link, per device, and/or per interface. The collected data may be combined, for example, for all links in the transmission route, for all devices encountered in the transmission route, and for all interfaces in the transmission route.
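Combining per-link measurements into route-level values, as described above, might look like the following sketch, assuming additive latency and bottleneck-limited bandwidth (the field names and numbers are illustrative):

```python
# Per-link measurements along one transmission route: each link
# reports its latency (ms) and available bandwidth (Mbps).
route_links = [
    {"latency_ms": 5.0, "bandwidth_mbps": 200.0},
    {"latency_ms": 12.0, "bandwidth_mbps": 150.0},
    {"latency_ms": 3.0, "bandwidth_mbps": 500.0},
]

# End-to-end latency is the sum over the links; end-to-end bandwidth
# is limited by the narrowest link on the route.
total_latency = sum(link["latency_ms"] for link in route_links)
route_bandwidth = min(link["bandwidth_mbps"] for link in route_links)

print(total_latency)    # 20.0
print(route_bandwidth)  # 150.0
```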
Optionally, at 104, the gathered data is analyzed, for example, by module 228. Optionally, a statistical classifier is constructed based on the gathered data. Optionally, the analysis is performed by a suitable algorithm, module and/or system, for example, a learning algorithm, a predictive modeling algorithm, a recommender system, and/or other suitable algorithms. An example of a recommender algorithm is a Collaborative Filtering algorithm.
Optionally, the analysis is performed for one or more existing flows, for example, each flow in network 202. Alternatively or additionally, the analysis is performed for one or more previous flows, for example, each flow in network 202 during the last 4 hours, last 24 hours, last week, similar day of the week, similar date of the year, or other periods in time. Optionally, the statistical classifier and/or analyzed data are stored in a dataflow dataset 226, for example, a database, a table, a hash-table, a tree, a directed graph, a record, an array, a linked list, or other suitable data structures. Optionally, dataset 226 stores big-data. Optionally, data from other associated databases are stored in dataflow dataset 226, for example, routing table data related to the analyzed flows, and/or other data related to the analyzed flows.
At 106, dataflow dataset 226 is updated by iterations of monitoring (e.g., block 102) and/or generating the statistical classifier (e.g., block 104). Iterations may be performed continuously. Optionally, dataset 226 is maintained in an updated state, for example, according to best-efforts and/or resource availability. Optionally, the data within dataset 226 is used for the prediction process even when the update of dataset 226 lags behind actual current network conditions. The current data within dataset 226 may be updated enough to allow for accurate predictions, for example, within a margin of error.
Optionally, the updates are decoupled from the rest of the process of predicting the network resource requirements (e.g., blocks 108-122). Optionally the requests do not trigger a corresponding update of dataset 226. Alternatively or additionally, the updates do not trigger a corresponding response to pending requests. Optionally, reading from dataset 226 (e.g., to predict the network requirements) and writing to dataset 226 (e.g., updates) may be performed asynchronously and/or independently. The asynchronous reading and writing processes may be performed by separate entities and/or processes.
The updates may occur at a preset rate (e.g., user defined), and/or at preset intervals (e.g., automatically set by software). Updates may be performed continuously. Updates may be performed at dynamic rates, for example, changing according to network conditions and/or available resources.
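The decoupling of dataset reads and writes described above can be sketched with a writer thread refreshing the dataset at a preset interval while readers serve predictions from whatever state is current; the interval and values are illustrative assumptions:

```python
import threading
import time

dataset = {"predicted_bw_mbps": 30.0}
lock = threading.Lock()

def update_dataset():
    """Writer: refreshes the dataset at a preset rate, independently
    of any pending prediction requests."""
    for new_value in (31.0, 29.5, 30.5):
        with lock:
            dataset["predicted_bw_mbps"] = new_value
        time.sleep(0.01)  # preset update interval (illustrative)

def read_prediction():
    """Reader: serves a prediction from the dataset's current state,
    without triggering an update."""
    with lock:
        return dataset["predicted_bw_mbps"]

writer = threading.Thread(target=update_dataset)
writer.start()
print(read_prediction())  # served immediately from the current state
writer.join()
print(read_prediction())  # 30.5 after the final update
```

In the described system the writer role would be filled by the big-data analytics pipeline and the reader role by the prediction process; the sketch shows only that neither waits on the other.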
The decoupling of the processes of updating dataset 226 and predicting the network resource requirements may allow the use of big-data analytics. The big-data analytics may improve the accuracy of the prediction, using additional available information associated with the data flows and/or users.
Optionally, at 108, a request is received for a prediction of network resource requirements for transmission of a flow of data within network 202. Alternatively or additionally, the request for prediction is triggered by a request for a route within network 202 for the flow.
The request may originate from a requesting entity 206, for example, a server within system 200 (e.g., to route data between two nodes and/or terminals within the network) and/or a server external to system 200 (e.g., to route data entering network 202 across and/or out of network 202, for example, the terminals being located outside of network 202).
Optionally, network 202 is centrally managed by a network management system. The network management system may act as requesting entity 206 to issue prediction requests.
Optionally, requesting entity 206 issues a request for a route for a new dataflow. Alternatively or additionally, the request is for a change in route in an existing dataflow. Alternatively or additionally, the request is for re-instatement of a previous data flow, for example, an expired dataflow and/or an occasional dataflow. The request for the route may be received, for example, by a routing module 234 for selecting routes through network 202. Routing module 234 may be an off-the-shelf system such as a router, a route selection module 412 described with reference to FIG. 4, or other software and/or hardware for selecting data routes. Routing module 234 may issue a request for prediction of network utilization resources associated with the new dataflow. The prediction request may be received, for example, by a flow parameter transformation module 232. Alternatively or additionally, one or both of the requests for the route selection and prediction are received by predictive analysis module 228.
Optionally, at 110, nominal network resource requirements for the new dataflow are identified, for example, by accessing user database 230 and/or other sources of data. Optionally, a routing policy of the new dataflow defines the network resource requirements. The routing policy is, for example, a SLA between the client and the service provider, a policy internal to the service provider itself, a policy based on the profile of the client, or other policies. Optionally, there are different levels of the routing policy, for example, for the same client but for different data, for different periods of time, and/or other defined variables. Different levels of the routing policy may define different values for the nominal network resource requirements.
Optionally, flow parameter transformation module 232 identifies the nominal network resource requirements and sends the identified nominal values to predictive analysis module 228. Alternatively or additionally, predictive analysis module 228 identifies the nominal network resource requirements.
Alternatively, nominal values are not identified, for example, when there is no SLA for the client, when the data is classified as low-priority, and/or due to other factors. In such a case, the request for prediction may be based on a best-effort transmission of the data using available resources, without interfering with other data flows that have higher priorities and/or an SLA.

At 112, the required network resources are predicted for the dataflow. Optionally, the prediction is performed by predictive analysis module 228. Optionally, the dataflow is classified by the statistical classifier to predict the network resource requirements.
Optionally, the predicted value is calculated as a function of the nominal value, for example, a percentage of the nominal value. The received nominal values may be modified per the function, and returned as the predicted values.
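A minimal sketch of calculating the predicted value as a function of the nominal value follows; the fractional factor here is an assumption standing in for what the statistical classifier would produce:

```python
def predict_from_nominal(nominal, utilization_factor=0.3):
    """Return a predicted requirement as a fraction of the nominal
    reservation; in practice the factor would come from classifying
    the flow with the statistical classifier."""
    return nominal * utilization_factor

nominal_bw = 100.0  # Mbps, as in the 100 Mbps / 30 Mbps example above
predicted_bw = predict_from_nominal(nominal_bw)
freed = nominal_bw - predicted_bw  # capacity usable by other flows

print(round(predicted_bw, 2), round(freed, 2))  # 30.0 70.0
```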
Optionally, the predicted values are stored in dataflow dataset 226. Optionally, the predicted values are stored in association with the stored identified nominal values.
In one example, dataflow dataset 226 is represented as a table, for example, a flow analysis table 514 described with reference to FIG. 5. The table contains a row for Known or identified nominal values, and another row for the Predicted values. The table contains one or more columns Paraml, Param2, Param3, ParamN, for storing values associated with different network resource requirements and/or routing parameters. The table may be multidimensional, for example, with another dimension for the Flow ID of the requesting new flow, current and/or previous flows of the same client, current and/or previous flows of other clients, and/or other flows The nominal and/or predicted values may represent requirements for overall data transport, or for partial data transport, for example, different requirements for different links and/or other internal network divisions. Alternatively or additionally, dataflow dataset 226 is represented by other suitable data structures, for example, records, trees, graphs, objects, linked lists, and/or other suitable structures.
Optionally, the prediction of the resource requirements is performed in view of a selected risk of non-adherence to the SLA and/or associated fines due to the non-adherence. Optionally, the predicted resource requirements (e.g., using the statistical classifier) are adjusted according to the level of selected risk and/or associated fines. Optionally, a risk analysis algorithm is used to calculate risks using multiple parameters. Optionally, the adjusted predicted resource requirements denote the optimal solution, for example, higher network resource utilization in view of lower risks and lower total fine payout, while increasing revenue. For example, a 90% pre-selected risk of paying a fine may result in relatively higher resource requirement reservations (e.g., higher BW allocation) than a 50% pre-selected risk of paying the fine. In another example, an 80% pre-selected risk of paying a predetermined fine of $10000 may result in relatively higher resource requirement reservations than the 80% pre-selected risk of paying a predetermined fine of $1000. The risk may be selected, for example, automatically by software (e.g., according to risk analysis algorithms), manually by the network operator, and/or pre-set by the manufacturer.

Optionally, at 114, the predicted network resource requirements are provided, for example, as a generated signal, as one or more data packets, and/or using other information transfer methods.
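One way the risk adjustment described above could be sketched is to reserve at a quantile of observed usage chosen from an accepted breach probability: a lower accepted probability of breaching the SLA yields a higher reservation. The quantile rule and sample values are assumptions for illustration:

```python
def risk_adjusted_reservation(usage_samples, breach_risk):
    """Reserve enough bandwidth to cover a (1 - breach_risk) share of
    the observed usage samples: accepting less risk of an SLA breach
    means reserving more."""
    ordered = sorted(usage_samples)
    index = min(int(len(ordered) * (1 - breach_risk)), len(ordered) - 1)
    return ordered[index]

# Hypothetical past bandwidth usage samples (Mbps) for similar flows.
samples = [22.0, 25.0, 28.0, 30.0, 31.0, 33.0, 35.0, 38.0, 42.0, 55.0]

# Accepting a 50% chance of breaching reserves less than accepting
# only a 10% chance.
print(risk_adjusted_reservation(samples, 0.5))  # 33.0
print(risk_adjusted_reservation(samples, 0.1))  # 55.0
```

A full risk analysis algorithm would also weigh the fine amounts, as in the $10000 versus $1000 example above; the sketch covers only the probability side.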
Optionally, the predicted resource requirements are provided by predictive analysis module 228 to flow parameter transformation module 232, and/or to routing module 234.
Optionally, at 116, the predicted resource requirements are reserved for the requesting dataflow, for example, by flow parameter transformation module 232. Alternatively or additionally, the nominal values have already been reserved for the requesting dataflow, and are recalibrated according to the predicted network resource requirements.
At 118, a route is selected for the classified dataflow, optionally based on the predicted network resource requirements. Optionally, routing module 234 selects the route using a routing table 236, optionally based on the predicted resource requirements. Routing table 236 may be a standard routing table associated with a router for selecting routes, a multi-tier routing dataset 404 described with reference to FIG. 4, and/or other databases storing information for selecting data routes.
Additional links may be available for selecting the route based on the predicted network resource requirements, as compared to links available for selecting the route based on the nominal network resource requirements. For example, for predicted BW values significantly less than nominal BW values, many more links may be available to accommodate the lower BW (i.e., predicted) than links available to accommodate the higher BW (i.e., nominal).
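The effect on link availability can be illustrated by filtering a set of hypothetical links by the required bandwidth; the link names and free-capacity values are assumptions for the example:

```python
# Hypothetical free capacity (Mbps) per link in the network.
links = {"A-B": 40.0, "B-C": 120.0, "A-C": 60.0, "C-D": 35.0, "B-D": 90.0}

def usable_links(required_bw):
    """Links with enough free capacity for the required bandwidth."""
    return {name for name, free in links.items() if free >= required_bw}

nominal_bw = 100.0
predicted_bw = 30.0

print(usable_links(nominal_bw))    # {'B-C'}
print(usable_links(predicted_bw))  # all five links qualify
```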
Optionally, the data packets are transmitted within network 202 using the selected route.
Optionally, at 120, adherence to the routing policy is monitored during transmission of the data packets with implementation of the predicted network resource requirements. Adherence to the SLA may be monitored for the new data flow and/or for other data flows through network 202.
Optionally, adherence to the nominal values within the SLA is monitored during implementation of data routing using the predicted values.
Optionally, some instances of failure to meet the SLA are allowed and may be expected (e.g., statistical variation), for example, when overall profits are increased in view of optimized utilization of network resources. Optionally, overall revenue and/or profits are monitored when routing data using the predicted network resources. Optionally, the revenue and/or profits are compared to routing data using the nominal requested resources.
Optionally, at 122, the process of classifying the dataflow is repeated (e.g., one or more of blocks 108, 110, 112, 114, 116, 118 and/or 120). For example, the process is repeated for each new dataflow request. For example, for requests by the same client for several different data flows, and/or for requests by different clients.
Optionally, the process is adjusted. Optionally, the process is adjusted in view of the monitoring of adherence to the SLA. For example, if a certain data flow does not adhere to the SLA using the current predicted requirements, another classification may be made so that routing using the new predicted values improves adherence to the SLA.
Referring back to FIG. 2, optionally, system 200 has an interface 218 for electrical communication between processor 208 and requesting entity 206 and/or the network management system.
Optionally, system 200 has an interface 220 for electrical communication between processor 208 and network 202.
Optionally, system 200 is sold as a box. Interface 218 is connected to the network management system. Interface 220 is connected to the communication network. Alternatively or additionally, at least some parts of system 200 are sold as software, for example, loaded and run as part of the network management system.
Optionally, system 200 is in electrical communication with one or more input elements 222 for a user to enter input into processor 208, for example, a touchscreen, a keyboard, a mouse, voice recognition, and/or other elements. The user may enter, for example, the routing policies.
Optionally, system 200 is in electrical communication with one or more output elements 224 for a user to view data from processor 208, for example, a screen, a mobile device (e.g., Smartphone), a printer, a laptop, a remote computer, or other devices. Output element 224 may be used, for example, to view routing table 236, to upgrade software, to view configurations, and/or to debug the system.
Reference is now made to FIG. 3, which is a method of selecting routes for classified data flows, in accordance with some embodiments of the present invention. The method of FIG. 3 incorporates the method of classifying data flows described with reference to FIG. 1. Reference is also made to FIG. 4, which is a block diagram of a system 400 for selecting data routes for classified flows, in accordance with some embodiments of the present invention. System 400 is a combination of elements from system 200 of FIG. 2, with route selection elements. The method of FIG. 3 may be performed by system 400 of FIG. 4. System 400 and/or the method of FIG. 3 may improve data routing, for example, through improved utilization of network resources, lower risk of paying SLA fines, lower total fines paid, and selection of better routes for data flows. Big-data analytical methods may be used to improve the data routing.
Optionally, the system and/or method select and/or calculate data routes based on user context information. Optionally, the network resource requirements are predicted based on the user context information. The data routes may be selected based on the predicted requirements.
Optionally, the system and/or method select and/or calculate data routes in view of tiered routing policies, for example, a tiered SLA. Optionally, the network resource requirements are predicted in view of the tiered SLA, as applied to the requesting dataflow. Optionally, the route is selected based on the requirements and/or according to multi-tier routing dataset 404 storing multiple different routing parameters per link (e.g., between two network nodes). One or more of the routing parameters may denote tiered routing policies, for example, each routing parameter denotes a different tier of the policy.
Optionally, at 302, multi-tier routing dataset 404 is updated. Alternatively or additionally, dataflow dataset 226 is updated. Multi-tier routing dataset 404 and/or dataflow dataset 226 are updated in an asynchronous manner with respect to the rest of the route selection process (one or more of blocks 304-312).
Optionally, multi-tier routing dataset 404 is updated with data from the classification of the prospective data flow (e.g. block 112 and/or block 306). For example, routing parameters of dataset 404 represent different costs for each link between two nodes in the network. The costs may be updated based on the classified data flow results. For example, costs may be updated to reflect predicted network resource requirements, instead of nominal network resource requirements. Alternatively or additionally, the statistical classifier is constructed based on data within multi-tier routing dataset 404, for example the different costs for the different links may be used to classify the prospective new flow.
Optionally, data collected from network 202 is stored in a network database 416. The data may be, for example, key performance indicators, metrics and/or other values. Data may be collected by a route analysis module 414, other modules, other systems, and/or databases. The stored data may be processed to populate parameters within dataflow dataset 226 and/or multi-tier routing dataset 404. The data collection and/or processing may be performed using big-data analytics.
Multi-tier routing dataset 404 may correspond to routing table 236 of FIG. 2, with additional functionality. Dataset 404 contains links between nodes in network 202. Each link is associated with multiple routing parameters, for example, actual monetary cost of the link, bandwidth of the link, latency of the link, link utilization (e.g., real time), user defined parameters, or other parameters. The routing parameters may be, for example, cost associated parameters, where each parameter represents different criteria for cost. Optionally, the multi-constraint routing parameters allow for multi-constraint routing.
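A multi-tier routing dataset of this kind might be sketched as a mapping from links to several cost/metric columns; the parameter names and values below are illustrative assumptions:

```python
# One entry per link, with several cost/metric parameters per link,
# mirroring the multi-tier routing dataset described above.
multi_tier_routing = {
    ("node1", "node2"): {
        "monetary_cost": 4.0,    # cost per unit of traffic
        "bandwidth_mbps": 150.0,
        "latency_ms": 10.0,
        "utilization": 0.35,     # real-time fraction of capacity in use
    },
    ("node2", "node3"): {
        "monetary_cost": 2.5,
        "bandwidth_mbps": 80.0,
        "latency_ms": 4.0,
        "utilization": 0.60,
    },
}

# A routing policy can read just the subset of parameters it cares
# about, e.g. a latency tier for latency-sensitive flows:
latency_tier = {link: p["latency_ms"] for link, p in multi_tier_routing.items()}
print(latency_tier[("node2", "node3")])  # 4.0
```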
At 304, system 400 receives a request for routing one or more data packets through network 202. The request may be issued by requesting entity 206.
Optionally, a routing policy associated with the received request is identified, for example, a SLA. Alternatively, there is no routing policy. A best-effort approach may be used.
Optionally, the request is received by route selection module 412 for selecting a route.
Module 412 may correspond to routing module 234 of FIG. 2 having additional functionality to select routes according to subsets of multiple parameters.
At 306, the flow is classified based on the statistical classifier to predict the actual network resource requirements. The prediction may be performed, for example, as described with reference to the method of FIG. 1 and/or system 200 of FIG. 2.
At 308, a route for the classified data flow is selected, for example, by module 412. Optionally, module 412 selects the route based on the predicted network resource requirements. For example, the route is selected to satisfy the modified network requirements instead of the nominal requirements. Different paths may be selected using the modified requirements than would be selected using the nominal requirements.
Optionally, route selection module 412 accesses multi-tier routing dataset 404. Optionally, dataset 404 is accessed as defined by the identified routing policy. For example, a subset of the routing parameters within dataset 404 that correspond to the identified routing policy are accessed. The accessed parameters may be used in selecting routes, for example, calculating a least cost route. Alternatively or additionally, the routing parameters represent raw values for calculating one or more metrics, for example, using a function. The metric calculations may be performed on-the-fly according to the received routing request. For example, different routing policies may define different equations for calculating metrics using different subsets of routing parameters as variables.
Optionally, the route is selected by selecting each potential link in the route based on a subset of the multiple different routing parameters from each potential link, the subset defined by the routing policy.
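Selecting a least-cost route under a policy-defined subset of parameters can be sketched as Dijkstra's algorithm over the link graph, with the policy supplying the per-link cost function; the graph and parameter values are assumptions for the example:

```python
import heapq

# Per-link routing parameters (illustrative values).
links = {
    ("A", "B"): {"latency_ms": 10.0, "monetary_cost": 1.0},
    ("B", "C"): {"latency_ms": 5.0, "monetary_cost": 4.0},
    ("A", "C"): {"latency_ms": 20.0, "monetary_cost": 2.0},
}

def least_cost_route(src, dst, cost_fn):
    """Dijkstra over the link graph, pricing each link with a
    policy-defined cost function over its routing parameters."""
    graph = {}
    for (u, v), params in links.items():
        graph.setdefault(u, []).append((v, cost_fn(params)))
        graph.setdefault(v, []).append((u, cost_fn(params)))
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

# A latency-sensitive policy and a cost-sensitive policy select
# different routes over the same dataset.
print(least_cost_route("A", "C", lambda p: p["latency_ms"]))     # via B
print(least_cost_route("A", "C", lambda p: p["monetary_cost"]))  # direct
```

The cost function stands in for the policy-defined equations over parameter subsets mentioned above; a composite policy could combine several parameters in one function.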
Optionally, at 310, the selected route is provided, for example, as a signal, as one or more data packets, and/or using other information transfer methods. Optionally, the selected route is provided to requesting entity 206.
Optionally, at 312, the data packets are transmitted within network 202 using the selected route. Optionally, transmission of the data packets is performed while adhering to the SLA, optionally according to the pre-selected level of risk and/or according to the pre-selected fine payouts.
Reference is now made to FIG. 5, which is an exemplary design of the system of FIG. 4, in accordance with some embodiments of the present invention.
A routing system 500 for selection of routes based on classified prospective data flows is in electrical communication with a data communication network 502 under central management by a network control 504. Routing system 500 receives requests for route selection issued by control 504. System 500 selects the route, and provides the selected route back to control 504.
System 500 contains flow analysis table 514 for storing nominal and predicted values associated with dataflows in network 202. Additional details of table 514 are provided herein.
System 500 contains a multi-tier routing table 506, having multiple cost columns associated with each link. Each cost column represents different criteria for cost. For example, each column may represent cost per CoS (e.g., in systems where flows are classified to a predetermined class of service). In another example, a metric is calculated from the cost column values, at the time of path calculation, on a per flow basis (e.g., in systems without pre-determined CoS).
A path computation engine 508 accesses table 506. Access may be performed according to CoS groups and/or by a per-flow policy. Path computation engine 508 selects data routes in view of the predicted requirements, for example, using a flow parameter transformation module.
A big-data analysis engine 510 and/or a big-data database 512 collect data from network 502. Based on the collected data (stored within database 512), engine 510 calculates values for the cost columns of table 506 (e.g., single metrics per parameter and/or a cost function) and/or classifies the data flow to calculate values for the predicted parameters within flow analysis table 514 (e.g., using a predictive routing analysis module).
A user information database 516 stores collected network data used to calculate the predicted values of table 514, and/or user associated details (e.g., SLA, user profile, and/or other data).
In operation, a path request is sent by control 504 to engine 508. Engine 508 sends a prediction request to big data analysis engine 510. Engine 510 accesses flow analysis table 514 and classifies the data flow to calculate the requested predicted requirement values (e.g., modified from the associated nominal values). Engine 510 returns the predicted values to path calculation engine 508.
Engine 508 accesses table 506. The access may be performed in one of two modes. In a first class of service mode, the path request is classified into one of several classes according to predefined rules. The cost column corresponding to the class is used to select the least cost route. In a second per-flow SLA mode, the columns represent raw metrics. Engine 508 creates a cost function on-the-fly by creating a temporary cost column that combines together several metric columns according to a predefined cost function. Engine 508 calculates the least cost route in view of the predicted values, and returns the path to control 504.
Table 506 and/or table 514 are updated asynchronously from the path selection process described in the previous paragraph. Key performance indicators and/or other metrics are gathered periodically from network 502 and stored in database 512 and/or database 516. Big-data engine 510 queries database 512, and calculates values for the cost and/or metric columns (depending on the model of CoS and/or on-the-fly model) of table 506. Big-data engine 510 updates routing table 506. Big-data engine 510 queries databases 512 and/or 516, and calculates values for the predicted parameters of table 514. Big-data engine 510 updates flow analysis table 514.
It is expected that during the life of a patent maturing from this application many relevant data communication networks and/or databases will be developed and the scope of the term data communication network and/or database is intended to include all such new technologies a priori.
As used herein the term "about" refers to ± 10 %.
The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to".
The term "consisting of means "including and limited to". The term "consisting essentially of means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
As used herein, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
Throughout this application, various embodiments of this present invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the present invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
It is appreciated that certain features of the present invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the present invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the present invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the present invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.

Claims

1. A method of classifying flows of data through a data communication network for selecting routes, comprising: monitoring data flows in the data communication network (102); receiving a request (108) for a route in a data communication network for transmission of a flow of data packets; generating a statistical classifier (104) based on the monitored data flows; classifying the flow (112) based on the generated statistical classifier to predict network resource requirements for transmission of the flow through the network; selecting the route (118) for the classified flow; and generating a signal indicative of the selected route so that the flow is routed (118) in the data communication network through the selected route.
2. The method of claim 1, wherein the classifying further comprises determining a certainty of the prediction of actual usage of network resources by the flow.
3. The method of claim 1 or claim 2, further comprising receiving a request for a prediction of the network resource routing requirements for transmission of the flow of data in the data communication network, and predicting the network resource requirements (118) based on the statistical classifier.
4. The method of any of claims 1-3, wherein the predicted network resource requirements are calculated as a function of nominal network resource reservations of the flow.
5. The method of any of claims 1-4, wherein classifying further comprises classifying to predict at least one of the risk and the cost of failing to adhere to a service level agreement of the flow.
6. The method of any of claims 1-5, wherein classifying further comprises adjusting (122) the predicted network resource requirements in view of a selected risk of non-adherence to a service level agreement having associated fines due to non-adherence.
7. The method of any of claims 1-6, wherein the monitoring (102) and generating the statistical classifier (104) are performed using big-data analytics.
8. The method of any of claims 1-7, wherein the monitoring (102) and generating the statistical classifier (104) are performed asynchronously with respect to the receiving (108), classifying (112), selecting (118), and generating the signal (118).
9. The method of any of claims 1-8, wherein the statistical classifier is based on a Collaborative Filtering system.
10. The method of any of claims 1-9, further comprising monitoring adherence to a service level agreement (120) defined by nominal resource requirements of the flow, during transmission of the flow over the selected route that utilizes the predicted network resource requirements.
11. The method of any of claims 1-10, wherein the monitoring (102) and generating the statistical classifier (104) are continuously performed in an iterative manner (106).
12. The method of any of claims 1-11, further comprising recalibrating nominal network resource reservations of the flow to the predicted network resource requirements.
13. The method of any of claims 1-12, wherein monitoring data flows (102) comprises identifying user context data of the data flows, and generating the statistical classifier (104) comprises generating the statistical classifier based on the identified user context data.
14. A predictive analysis unit programmed to carry out the steps of the method according to any one of claims 1 to 13.
15. A computer program having program code for performing the method according to any one of claims 1 to 13, when the computer program runs on a computer.
PCT/EP2014/050565 2014-01-14 2014-01-14 Methods and systems for selecting resources for data routing WO2015106795A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201480036854.3A CN105379204B (en) 2014-01-14 2014-01-14 Method and system for selecting resources for data routing
PCT/EP2014/050565 WO2015106795A1 (en) 2014-01-14 2014-01-14 Methods and systems for selecting resources for data routing


Publications (1)

Publication Number Publication Date
WO2015106795A1 true WO2015106795A1 (en) 2015-07-23

Family

ID=49956206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/050565 WO2015106795A1 (en) 2014-01-14 2014-01-14 Methods and systems for selecting resources for data routing

Country Status (2)

Country Link
CN (1) CN105379204B (en)
WO (1) WO2015106795A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017045640A1 (en) * 2015-09-18 2017-03-23 中兴通讯股份有限公司 Associated stream bandwidth scheduling method and apparatus in data center
EP3399704A4 (en) * 2016-05-17 2019-01-09 Huawei Technologies Co., Ltd. Method and apparatus for determining routing policy
CN111768283A (en) * 2020-07-01 2020-10-13 厦门力含信息技术服务有限公司 Financial big data analysis method of improved collaborative filtering algorithm model
US10972364B2 (en) 2019-05-15 2021-04-06 Cisco Technology, Inc. Using tiered storage and ISTIO to satisfy SLA in model serving and updates
US11240153B1 (en) 2020-07-31 2022-02-01 Cisco Technology, Inc. Scoring policies for predictive routing suggestions

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259367B (en) * 2018-01-11 2022-02-22 重庆邮电大学 Service-aware flow strategy customization method based on software defined network
CN109743200B (en) * 2018-12-25 2022-01-25 人和未来生物科技(长沙)有限公司 Resource feature-based cloud computing platform computing task cost prediction method and system
CN110471893B (en) * 2019-08-20 2022-06-03 曾亮 Method, system and device for sharing distributed storage space among multiple users
CN111737371B (en) * 2020-08-24 2020-11-13 上海飞旗网络技术股份有限公司 Data flow detection classification method and device capable of dynamically predicting
CN114615183B (en) * 2022-03-14 2023-09-05 广东技术师范大学 Routing method, device, computer equipment and storage medium based on resource prediction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1069801A1 (en) * 1999-07-13 2001-01-17 International Business Machines Corporation Connections bandwidth right sizing based on network resources occupancy monitoring
US6459682B1 (en) * 1998-04-07 2002-10-01 International Business Machines Corporation Architecture for supporting service level agreements in an IP network
US20020145981A1 (en) * 2001-04-10 2002-10-10 Eric Klinker System and method to assure network service levels with intelligent routing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060149841A1 (en) * 2004-12-20 2006-07-06 Alcatel Application session management for flow-based statistics
US7782793B2 (en) * 2005-09-15 2010-08-24 Alcatel Lucent Statistical trace-based methods for real-time traffic classification
CN101610433A (en) * 2009-07-10 2009-12-23 北京邮电大学 The multi-constraint condition routing selection method that a kind of support policy is resolved


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KRILE S ET AL: "Congestion control for SLA creation", MOBILE FUTURE, 2004 AND THE SYMPOSIUM ON TRENDS IN COMMUNICATIONS, SYMPOTIC '04, JOINT IST WORKSHOP, BRATISLAVA, SLOVAKIA, 24-26 OCT. 2004, PISCATAWAY, NJ, USA, IEEE, 24 October 2004 (2004-10-24), pages 146-149, XP032391471, ISBN: 978-0-7803-8556-6, DOI: 10.1109/TIC.2004.1409520 *
WANG F ET AL: "An efficient bandwidth management scheme for real-time Internet applications", COMPUTER COMMUNICATIONS, ELSEVIER SCIENCE PUBLISHERS BV, AMSTERDAM, NL, vol. 25, no. 17, 1 November 2002 (2002-11-01), pages 1596 - 1605, XP004383803, ISSN: 0140-3664, DOI: 10.1016/S0140-3664(02)00059-2 *


Also Published As

Publication number Publication date
CN105379204A (en) 2016-03-02
CN105379204B (en) 2019-04-05

Similar Documents

Publication Publication Date Title
WO2015106795A1 (en) Methods and systems for selecting resources for data routing
US11316755B2 (en) Service enhancement discovery for connectivity traits and virtual network functions in network services
US9705783B2 (en) Techniques for end-to-end network bandwidth optimization using software defined networking
GB2541047A (en) Model management in a dynamic QOS environment
US10187318B2 (en) Dynamic bandwidth control systems and methods in software defined networking
EP3318026B1 (en) Model management in a dynamic qos environment
EP3318027B1 (en) Quality of service management in a network
US9197687B2 (en) Prioritized blocking of on-demand requests
EP3318009B1 (en) Model management in a dynamic qos environment
US20140101316A1 (en) Apparatus and method for provisioning
Guo et al. Optimal management of virtual infrastructures under flexible cloud service agreements
KR20170033179A (en) Method and apparatus for managing bandwidth of virtual networks on SDN
Basu et al. Drive: Dynamic resource introspection and vnf embedding for 5g using machine learning
CN105917621B (en) Method and system for data routing
EP3241111B1 (en) Provisioning of telecommunications resources
Yu et al. Robust resource provisioning in time-varying edge networks
Tolosana-Calasanz et al. Revenue-based resource management on shared clouds for heterogenous bursty data streams
EP3318011B1 (en) Modifying quality of service treatment for data flows
KR20110137650A (en) Method and apparatus for dynamic resource management for service overlay network
Loomba et al. Application Placement and Infrastructure Optimisation
CN117749631A (en) Isolation method and device for dynamic network topology resources

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14700403

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14700403

Country of ref document: EP

Kind code of ref document: A1