US5764625A - Optimal flow control window size design in high-speed networks - Google Patents

Optimal flow control window size design in high-speed networks

Info

Publication number
US5764625A
Authority
US
United States
Prior art keywords
network
server
window size
servers
Prior art date
Legal status
Expired - Fee Related
Application number
US08/554,954
Inventor
Redha Mohammed Bournas
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US08/554,954 priority Critical patent/US5764625A/en
Assigned to IBM CORPORATION reassignment IBM CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOURNAS, REDHA MOHAMMED
Application granted granted Critical
Publication of US5764625A publication Critical patent/US5764625A/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/28 - Flow control; Congestion control in relation to timing considerations
    • H04L47/283 - Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/27 - Evaluation or update of window size, e.g. using information derived from acknowledged [ACK] packets

Abstract

This invention deals with a method for transmitting data across a high-speed communications network. More specifically, it deals with a method of approximating an optimal window size for transmitting information across a high-speed communications network. This optimal window size is determined as a function of the processing delay at each node of the network, the capacity of each link, the number of servers in the network, and the round trip propagation delay.

Description

BACKGROUND OF THE INVENTION
The problem of determining the "right" amount of data to be sent at a time, also known as the window size, across communications networks has been the topic of many studies. The optimal size for the data window varies based on the number of intermediate nodes in the transmission network, the transmission medium, the processing power of each intermediate node, the type of data being sent (i.e., interactive vs. isochronous data), and many other factors. The objective of arriving at the optimal window size is motivated by the need to maximize network productivity. If the optimal window size can be derived, then the network can perform at maximum efficiency.
The problem of determining the optimal window size arises in the flow control of all communications protocols. The following analysis applies equally to sliding and pacing window flow control mechanisms. Sliding window flow control means that only a fixed amount of data may be sent prior to an acknowledgement from the receiver; as acknowledgements are received, additional data, up to the window size limit, is sent. An example of a protocol that uses a sliding window flow control mechanism is the Transmission Control Protocol (TCP). Pacing window flow control allows a fixed window of data packets to be sent at any point in time; the next window is sent when the prior one has been acknowledged. Examples of protocols that apply window pacing include Systems Network Architecture (SNA) and Advanced Peer-to-Peer Networking (APPN). The goal of this invention is to determine a window size that maximizes network efficiency based on the performance characteristics of the sender, the receiver, and the network connecting them, thereby maximizing the network power. The network power, as used here, is defined as the ratio of the average session throughput to the average packet transmission delay.
The design of a solution to the problem at hand has been studied by many authors, one example being Keng-Tai Ko and Satish K. Tripathi in "Optimal End-to-End Sliding Window Flow Control in High-Speed Networks", IEEE Annual Phoenix Conference, 1991, pp. 601-607. These authors made assumptions on the inter-arrival time distribution of exogenous traffic at the intermediate packet switching nodes, and on the packet service time distribution at all the nodes. The primary assumptions made were that the arrival process for the traffic entering the network is Poisson and that the service times for packets at each node are exponentially distributed. Based on these assumptions, the optimal window size design becomes mathematically tractable, although still difficult to analyze. However, due to the emergence of high-speed networks and multimedia applications, the above assumptions no longer hold. This is because multimedia applications generate bursty traffic (which is no longer Poisson), and high-speed networking technology is based on transmitting packets of a given equal size (called cells). These new traffic characteristics require the assumption of generally distributed arrivals at the intermediate packet switching nodes (which implies that the packet delay at each intermediate station is generally distributed); the optimal window size design problem therefore becomes mathematically intractable under the previous methodology.
BRIEF DESCRIPTION OF THE INVENTION
The invention provides a means for approximating the optimal window size for transmitting information in a high-speed communications network by deriving an upper and lower bound to the optimal window size. The bounds are a function of network performance characteristics: the number of servers in the network, the round trip propagation delay for the network, and the average delay for a packet at the slowest server in the network.
The upper bound is the one of most interest, since network administrators will want to know the maximum window size that can be used while still achieving optimal power in the network. In particular, this invention allows the upper bound to be expressed as a simple function of the number of hops in the network, the round trip propagation delay, and the maximum throughput of the communication path. It is noted that the round trip propagation delay may no longer be negligible, as it was assumed to be by others who have studied this problem, due not only to the high speed of physical network media such as fiber, but also to the implementation of very advanced and fast hardware technologies in the switching nodes. Under assumptions on the arrival and service time distributions, Ko and Tripathi, in "Optimal End-to-End Sliding Window Flow Control in High-Speed Networks", IEEE Annual Phoenix Conference, 1991, pp. 601-607, derived an expression for the optimal sliding window size that incorporates the round trip propagation delay. It is shown there that if this delay is small relative to the transmission and switching station delays, then the optimal window size reduces to the one developed by M. Schwartz in "Routing and Flow Control in Data Networks", Res. Rep. RC8353, IBM Res. Div., Yorktown Heights, N.Y., July 1980. The upper bound of the optimal window size derived herein is obtained under no assumptions on the arrival and service time distributions. In addition, this means of determining the upper bound for the optimal window size is not restricted in its application to sliding window flow control; it is equally applicable to pacing window flow control mechanisms, a generalization that has not previously been made.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an example of a communications network.
FIG. 2 is a more granular view of the flow of packets through the network.
FIG. 3 is a diagram of the logic flow for packet transmission in both sliding window flow control and pacing flow control.
FIG. 4 represents the throughput that can be achieved in the network based on the window size being used.
FIG. 5 illustrates the power of the network based on the window size used in the network.
DETAILED DESCRIPTION
As demonstrated in FIG. 1, there are many different types of devices, both IBM and non-IBM, which can be contained within a communications network. The network of FIG. 1 is simplified for clarity; as is known by those skilled in the art, networks typically contain many more devices than the five indicated in this diagram.
When a packet is transmitted from one device, say the PS/2 10, to another device, say the AS/400 50 in this example, the packet is passed from one node in the network to an adjacent node until it reaches its destination. When the packet reaches its destination, some form of acknowledgement is transmitted in the reverse manner to the original sender. In this particular example, a packet originated at PS/2 10 is transmitted across link 11 to server 20, then across link 21 to mainframe 30, then across link 31 to server 40, and finally across link 41 to AS/400 50. Once the packet reaches its destination at the AS/400, if the network uses sliding window flow control, an acknowledgement is then transmitted from the AS/400 50, across link 42 to server 40, across link 32 to mainframe 30, across link 22 to server 20, and across link 12 to the PS/2 10 which originated the packet; this acknowledgement indicates to the PS/2 10 that it may now send additional information. If the network uses a pacing window flow control mechanism, an acknowledgement is sent only when the full window has been received.
FIG. 2 is a logical representation of the path a packet takes from the sender to the receiver. The packet leaves the sender 110, and traverses through a given number (K) of servers 120 until it reaches its destination, the receiver 130. Once it reaches the receiver 130, if the network uses sliding window flow control, an acknowledgement then traverses back through the same K servers 120 to the sender 110. If the network uses a pacing window flow control mechanism, then the acknowledgement is sent only when the full window has been received.
FIG. 3 demonstrates the logic flow for both pacing flow control and sliding window flow control. First, a packet is assembled to be sent from the server 301. A check is done to determine whether sliding window flow control is used 302; if it is not, then pacing flow control must be used, as noted in 303. If pacing flow control is used, a check is made to see if the window is full 304. If it is, the server sends the data 305 and waits for the window acknowledgement 312. When the window acknowledgement is received 312, more packets are collected 306. Upon receipt of an acknowledgement, if the window is not full as determined at 313, the server collects more packets 306 and continues to check whether the window is full 304. If, when the acknowledgement was received, the window was full as in block 313, the data is again sent (returning to block 305). If the server is using sliding window flow control as determined in block 302, the server next checks to see whether the window limit has been reached 307. If it has not been reached, the server sends the data and increments its window counter 308. If the window limit had been reached in 307, a check is made to see whether an acknowledgement has been received 309. If an acknowledgement has been received, the window counter is decremented 310 and the server continues to block 308, where the data is sent and the window counter is incremented. If an acknowledgement has not been received, the server waits for an acknowledgement 311 prior to sending any more data.
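The sliding window branch of this flow (blocks 307 through 311) can be illustrated with a short code sketch. This is a minimal sketch only, assuming a single sender, in-order acknowledgements, and illustrative names; the class, queues, and usage example below are not taken from the patent.

```python
# Minimal sketch of the sliding window sender logic of FIG. 3, blocks 307-311.
from collections import deque

class SlidingWindowSender:
    def __init__(self, window_size):
        self.window_size = window_size   # flow control window N
        self.outstanding = 0             # window counter: packets sent but not yet acknowledged

    def try_send(self, packet, link, acks):
        # Block 307: has the window limit been reached?
        if self.outstanding >= self.window_size:
            # Blocks 309-311: consume a received acknowledgement, otherwise wait.
            if not acks:
                return False             # caller must wait for an acknowledgement
            acks.popleft()
            self.outstanding -= 1        # block 310: decrement the window counter
        link.append(packet)              # block 308: send the data ...
        self.outstanding += 1            # ... and increment the window counter
        return True

# Usage: send 5 packets over a window of 3; acknowledgements reopen the window.
link, acks = deque(), deque()
sender = SlidingWindowSender(window_size=3)
for p in range(5):
    while not sender.try_send(p, link, acks):
        acks.append(link.popleft())      # receiver acknowledges the oldest packet in flight
print(list(link), sender.outstanding)    # [2, 3, 4] 3
```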
FIG. 4 is a representation of the throughput T(N) of the system with respect to the window size (N) used in the system. Throughput is a measure of the amount of data that can be transmitted through the network per unit time. Line 402 represents the upper bound for the throughput based on the window size, and line 401 represents the lower bound of the throughput, also based on the window size. As can be seen from the diagram, as the window size grows indefinitely large, the upper and lower bounds for the throughput converge to the same value, which is the exact maximum average throughput of the system. From these throughput upper and lower bounds we derive the upper and lower bounds for the optimal window size.
SLIDING WINDOW FLOW CONTROL
Consider the closed queuing system of FIG. 2 in steady-state, and let si be the average service time of a packet at station i, m the slowest station, and M=2+2K the total number of logical nodes in the network. Also, let Tp be the round trip propagation delay (which is modeled as an infinite server) and Np be the average number of packets in the "wires". By applying Little's result, which is well known in the art and more specifically discussed in "Queueing Systems, Volume 1: Theory" by Leonard Kleinrock, 1975,
N=λT where
N is the average number of packets in the system,
λ is the average arrival rate,
and T is the average system or processing time,
to the average number of packets in service at each station, ρi, it can be concluded that
ρ.sub.i =λs.sub.i ≦1
for each i=1,2, . . . M, where
λ is the average throughput of the system in packets/second.
It can therefore be deduced that λ≦1/sm, which is represented by 402 of FIG. 4. In addition, applying Little's result to the average number of packets at each station (in the queue and in service), say Ni, with Di denoting the average delay at each station, results in Ni =λDi and Np =λTp. Since ##EQU1## and Di ≧si for each i, it can be concluded that ##EQU2## thereby deriving an upper bound represented by 404 of FIG. 4. The union of the throughput upper bounds 402 and 404 of FIG. 4 is expressed as: ##EQU3## The value of the intersection of 404 and 402 is ##EQU4## as represented by the vertical line 403 of FIG. 4.
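The image equations ##EQU1## through ##EQU4## are not reproduced in this text. From the surrounding definitions they can plausibly be read as follows; this is an editorial reconstruction consistent with the stated bounds, not the patent's own typography (N* denotes the window size at the intersection 403 and is notation introduced here).

```latex
% Plausible reading of ##EQU1##-##EQU4## (editorial reconstruction):
\begin{aligned}
\text{(EQU1)}\quad & N = N_p + \sum_{i=1}^{M} N_i \\
\text{(EQU2)}\quad & \lambda \le \frac{N}{\sum_{i=1}^{M} s_i + T_p} \\
\text{(EQU3)}\quad & \lambda \le \min\left\{\frac{1}{s_m},\; \frac{N}{\sum_{i=1}^{M} s_i + T_p}\right\} \\
\text{(EQU4)}\quad & N^{*} = \frac{\sum_{i=1}^{M} s_i + T_p}{s_m}
\end{aligned}
```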
The lower bound for the throughput with respect to the window size is represented by 401 in FIG. 4. The goal is to derive a lower bound that converges to the upper bound for throughput as the window size grows indefinitely large. We start with the inequality Di ≦sii si where:
Di represents the average delay of a packet at station i,
si represents the average service time of a packet at station i, and
ηi represents the average number of packets at station i as observed at an arrival instant.
The result, ηi si, is the maximal average total wait time due to the ηi packets found in the queue, and si is the average service time of a packet. Using the fact that si ≦sm for all i and summing over i for 1≦i≦M, the total average delay, D, is derived as: ##EQU5## Given that the sum of the ηi from 1 to M is less than N, where N is the total number of packets in the system, substitution then yields:
D≦(N+M)s.sub.m +T.sub.p.
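The image equation ##EQU5## is not reproduced here. From the inequality Di ≦si +ηi si, the bound si ≦sm, and the propagation term Tp, it plausibly reads as follows (editorial reconstruction):

```latex
% Plausible reading of ##EQU5## (editorial reconstruction):
D \le \sum_{i=1}^{M}\left(s_i + \eta_i s_i\right) + T_p
  \le M s_m + s_m \sum_{i=1}^{M} \eta_i + T_p
```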
Applying Little's result, we arrive at the throughput lower bound:
λ=(N/D)≧[N/((N+M)s.sub.m +T.sub.p)].
Referring to FIG. 4, it should be noted that line 401 (the lower bound) converges to 1/sm as N approaches infinity, which is exactly the limit converged upon by the upper bound.
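That both bounds approach the same limit can be verified directly (a routine limit, shown here for clarity):

```latex
\lim_{N\to\infty} \frac{N}{(N+M)\,s_m + T_p} = \frac{1}{s_m}
```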
FIG. 5 incorporates the network power into the equation. The maximization of network power is the criterion for optimality, first defined by Kleinrock in "On Flow Control in Computer Networks", Proceedings of the International Conference on Communications, Vol. 2, June 1978, pp. 27.2.1-27.2.5. Using the definition of power, P(N)=λ/D=λ²/N, and utilizing the throughput upper bound derived in FIG. 4, the derived network power upper bound shown as 502 in FIG. 5 is: ##EQU6## Similarly, using the lower bound as depicted in FIG. 4, the lower bound for the network power is P(N)≧N/((N+M)sm +Tp)².
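The image equation ##EQU6## is not reproduced here. Combining P(N)=λ²/N with the piecewise throughput upper bound of FIG. 4, it plausibly reads as follows (editorial reconstruction, using the intersection point N* introduced above):

```latex
% Plausible reading of ##EQU6## (editorial reconstruction):
P_u(N) = \frac{1}{N}\left[\min\left\{\frac{1}{s_m},\;
          \frac{N}{\sum_{i=1}^{M} s_i + T_p}\right\}\right]^{2}
       = \begin{cases}
           \dfrac{N}{\left(\sum_{i=1}^{M} s_i + T_p\right)^{2}}, & N \le N^{*},\\[2ex]
           \dfrac{1}{s_m^{2}\, N}, & N \ge N^{*}.
         \end{cases}
```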
With the upper and lower power bound curves established, we now determine the window size that maximizes the network power and is therefore the optimal window size. Using the following notation:
Pl (N)=Network power, lower bound,
Pu (N)=Network power, upper bound,
Nl =the value of N at which Pl (N) is a maximum, and
Px =Pl (Nl)=maximum network power lower bound value.
N1 and N2 are determined to be the points at which the network power upper bound Pu (N) equals Px. Since the maximum of P(N) over N is at least Px (because P(N)≧Pl (N) and Pl (N) attains the value Px at Nl ), while P(N)≦Pu (N)<Px for N<N1 and for N>N2, the optimal value of N cannot be less than N1 or greater than N2. If we define N0 to be the optimal window size, then:
N.sub.1 ≦N.sub.0 ≦N.sub.2.
By differentiating Pl (N) with respect to N, Nl is derived, leading to Nl =M+Tp /sm and Px =1/(4sm (Msm +Tp)). By solving Pu (N)=Px, the lower bound for the optimal window size is determined to be: ##EQU7## and the upper bound is determined to be:
N.sub.2 =4(M+T.sub.p /s.sub.m).
Note that the upper bound depends only on the service rate of the bottleneck server, the number of servers traversed and the propagation delay.
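The image equation ##EQU7## for the lower bound N1 is not reproduced here. Solving Pu (N)=Px on the two branches of the power upper bound, with Px =1/(4sm (Msm +Tp )), plausibly gives the following (editorial reconstruction; the second expression reproduces the upper bound N2 stated above):

```latex
% Plausible reading of ##EQU7## and the matching upper bound (editorial reconstruction):
N_1 = \frac{\left(\sum_{i=1}^{M} s_i + T_p\right)^{2}}{4\, s_m\,(M s_m + T_p)},
\qquad
N_2 = \frac{4\, s_m\,(M s_m + T_p)}{s_m^{2}} = 4\left(M + \frac{T_p}{s_m}\right)
```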
While others have studied the particular problem of sliding window flow control, there has been no study of this problem that treats both sliding window and pacing flow control mechanisms. Next it will be shown that the formulas derived above apply equally well to pacing window flow control.
PACING WINDOW FLOW CONTROL
We now consider the case of the pacing window flow control mechanism. This control procedure is used in the path control of the SNA/APPN protocols. We let the window size be K packets, to distinguish it from the window size parameter N defined earlier. The SNA/APPN flow control mechanism works as follows. A fixed window size K is allocated for a given virtual route (the logical connection between a sender and a receiver), and a pacing count PC is set to this value. This pacing count is decremented by one every time a packet enters the virtual route; i.e., PC←PC-1. Packets are not allowed into the virtual route when the pacing count is zero. The first packet in a given window generates an acknowledgement when it arrives at the destination. This virtual route pacing response (VRPRS) is sent to the source at high priority. Upon arrival at the source, it causes the current pacing count to be incremented by the number of packets in the window; i.e., PC←PC+K. This is the basic mechanism; the algorithms for changing the virtual route pacing count upon detection of congestion at intermediate nodes are not discussed in this invention.
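A short code sketch of this pacing count bookkeeping follows. It is a minimal sketch under simplifying assumptions (one virtual route, alternating send and deliver phases, illustrative names); the congestion-driven adjustment of the pacing count is omitted, as in the description above.

```python
# Minimal sketch of the SNA/APPN virtual route pacing count mechanism.
from collections import deque

K = 4                          # fixed window size allocated to the virtual route
pacing_count = K               # pacing count PC starts at the window size
to_send = deque(range(10))     # packets waiting to enter the virtual route
in_route = deque()
sent = 0

while to_send or in_route:
    # Packets may enter the virtual route only while the pacing count is positive.
    while to_send and pacing_count > 0:
        pkt = to_send.popleft()
        is_type1 = (sent % K == 0)      # type-1: first packet of its K-block
        in_route.append((pkt, is_type1))
        pacing_count -= 1               # PC <- PC - 1
        sent += 1
    # Deliver the packets in the route; a type-1 arrival generates the virtual
    # route pacing response (VRPRS), which replenishes the window at the source.
    while in_route:
        pkt, is_type1 = in_route.popleft()
        if is_type1:
            pacing_count += K           # PC <- PC + K upon the acknowledgement

print("packets delivered:", sent, "final pacing count:", pacing_count)
```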
The analysis for this flow control mechanism is more complicated than that for sliding window flow control because the average number of packets in the virtual route is not known (this number is some function of the window size K and depends on the network performance characteristics). In what follows, we determine an upper bound for the optimal window size. Our assumptions are the same as in the sliding window flow control case: a sending station is transmitting a large volume of data to a receiving station in a burst, and there are a total of M intermediate service stations in the connection (including the sender and receiver). Let Navg be the average number of packets in the virtual route. We identify two types of packets: (1) the first packet in each K-block, and (2) all other packets. We assume that upon receiving an acknowledgement, the sending station has K packets ready for transmission. Let δ be the average elapsed time of a type-1 packet from the time it leaves the sending station until it is received at the destination. We will denote by α the elapsed time of a type-1 packet from the time its acknowledgement is prepared at the receiving station until it is received at the source node. The average elapsed time of a type-2 packet (no acknowledgement) is equal to δ+β, where β is the average packet delay at the slowest intermediate station. Let L be the average number of packets in the virtual route just before receiving an acknowledgement. The number of packets in the virtual route at the time a type-1 packet starts its transmission is then equal to K+L. After an average of δ units of time from the transmission of a type-1 packet, the number in the system is equal to K. If we call λ the average throughput, then by Little's result:
L=λδ.
On the other hand, after an average of α units of time later (the average elapsed time of a type-1 packet acknowledgement), the average number of packets remaining in the system is equal to L. But then, by Little's result:
L=K-λα.
Combining the above equations, we obtain:
K=λ(δ+α).
Recalling that δ+β is the average system time of a type-2 packet, and (δ+α) is the average system time of a type-1 packet, the average number of packets in the virtual route is then:
N.sub.avg =λ[(K-1)(δ+β)+(δ+α)]/K.
We first consider the case β≦α (under this assumption, one can show that Navg ≦K). The other case, β>α, will be analyzed later. Again, the goal is to determine an upper bound for the optimal window size. At this point, let us introduce some additional notation. Let Davg be the average time of a packet in the virtual route, and D be the average time of a type-1 packet. Then by Little's result Davg =Navg /λ, and using the notation of the previous paragraph, D=δ+α. Hence by combining the previous formulas, we obtain:
D.sub.avg =[(λD-1)(D+β-α)+D]/(λD)
which can be rewritten as:
λD.sup.2 -λ(D.sub.avg +α-β)D+(α-β)=0.
Solving this quadratic equation leads to the following two non-negative possible solutions for D:
D=[λ(D.sub.avg +α-β)+√Δ]/(2λ),
or
D=[λ(D.sub.avg +α-β)-√Δ]/(2λ),
where
Δ=λ.sup.2 (D.sub.avg +α-β).sup.2 -4λ(α-β).
Since α≧β, it follows that √Δ≦λ(Davg +α-β), which implies that either of the above two solutions satisfies:
D≦D.sub.avg +α-β.
Next, recall that the throughput upper and lower bounds are functions of the average number of packets in the virtual route, say Navg. For the sliding window flow control mechanism, this number is exactly equal to the window size N. Hence the derived equations hold if we replace N by Navg. The network power bounds are also valid if we replace N by Navg. Then the optimal lower and upper bounds N1 and N2, as defined in the sliding window flow control case, apply to the optimal average number of packets in the virtual route. Let Davg.sup.(2) be the average packet delay corresponding to N2. Recall from the derivation of N2 that the corresponding throughput is 1/sm. Hence by Little's result, Davg.sup.(2) =4sm (M+Tp /sm). The average delay for type-1 packets, say D2, corresponding to Davg.sup.(2) satisfies the inequality D≦Davg +α-β, or
D.sub.2 ≦4s.sub.m (M+T.sub.p /s.sub.m)+α-β.
Recalling that Kx =D2 /sm, where Kx is the optimal window size, then by the above equation:
K.sub.x ≦4(M+T.sub.p /s.sub.m)+(α-β)/s.sub.m.
We recall that β is the average delay of a type-2 packet at the slowest intermediate station, and α is the elapsed time of a type-1 packet from the time its acknowledgement is prepared at the receiving station until it is received at the source node. By definition, then, β=sm. Also, if we assume that the propagation delay is substantially larger than the total average delay at the switching nodes, either because of the implementation of the hardware and software switching logic or because the acknowledgement packets are very small and have the highest processing priority at the switching nodes, then α=Tp /2, resulting in:
K.sub.x ≦4M-1+(9/2)(T.sub.p /s.sub.m).
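The constant 9/2 follows directly from substituting β=sm and α=Tp /2 into the preceding bound; the arithmetic is shown here for clarity:

```latex
K_x \le 4\left(M + \frac{T_p}{s_m}\right) + \frac{\alpha-\beta}{s_m}
    = 4M + \frac{4 T_p}{s_m} + \frac{T_p}{2 s_m} - 1
    = 4M - 1 + \frac{9}{2}\,\frac{T_p}{s_m}
```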
Let us now consider the case when β>α. It was determined above that in this case K<Navg. As previously discussed, the optimal value of the average number of packets in the virtual route cannot be greater than the bound N2 =4(M+Tp /sm). Since in this case the optimal window size cannot exceed the average number of packets in the virtual route, it must be that:
K.sub.x ≦4(M+T.sub.p /s.sub.m).
We combine the upper bounds in the following formula:
K.sub.o ≦4(M+T.sub.p /s.sub.m)+max{0,(T.sub.p /2s.sub.m) -1}.
As an approximation, one could use the upper bound derived above for the design of the optimal window size. The parameters M and Tp are calculated during session establishment: Tp is equal to twice the distance between the sending and receiving stations divided by the propagation speed of the signal in the "wires". The parameter sm is, however, more difficult to calculate. It can be measured by the sender as the average inter-acknowledgement time between successive packets when the sender traffic is bursty. Other possibilities include obtaining the average packet processing time at each switching station from the switching stations themselves, or using an approximate value for the ratio Tp /sm.
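As a concrete illustration of this design rule, the combined upper bound can be evaluated from the three session parameters. This is a sketch only; the function name, the rounding choice, and the example figures are illustrative and not taken from the patent.

```python
# Evaluate the combined upper bound K_o <= 4(M + Tp/sm) + max{0, Tp/(2 sm) - 1}
# and use it as the designed window size (illustrative sketch).
import math

def window_size_upper_bound(num_servers: int, round_trip_prop_delay: float,
                            slowest_server_delay: float) -> int:
    """M = number of servers; Tp and s_m in the same time unit; returns packets."""
    m, tp, sm = num_servers, round_trip_prop_delay, slowest_server_delay
    bound = 4.0 * (m + tp / sm) + max(0.0, tp / (2.0 * sm) - 1.0)
    return max(1, math.floor(bound))   # largest whole number of packets within the bound

# Example: 6 stations, Tp = 10 ms (twice the distance divided by the propagation
# speed in the medium), s_m = 0.5 ms measured from inter-acknowledgement times.
print(window_size_upper_bound(6, 10e-3, 0.5e-3))   # 4*(6 + 20) + 9 = 113
```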
The above results in a method, system, apparatus and program product for controlling traffic in a high-speed network containing one or more servers where the method, system, apparatus or program product comprise the steps or means of determining the number of servers in the network, calculating a round trip propagation delay between an originating server in the network and a receiving server in the network, determining an average delay for a packet at the slowest of the servers in the network, calculating an optimal window size using the number of servers, the round trip propagation delay and the average delay for a packet at the slowest server, and transmitting data, using the calculated optimal window size, between the originating server and the receiving server in the network.

Claims (4)

What is claimed is:
1. A method for controlling traffic in a high-speed network, containing one or more servers including an originating server and a receiving server, said method comprising the steps of:
determining the number of servers in the network;
determining a round trip propagation delay between the originating server and the receiving server;
determining an average delay for a packet at the slowest of the servers in the network;
calculating an optimal window size using the number of servers, the round trip propagation delay and the average delay for a packet at the slowest server; and
transmitting data, using the calculated optimal window size, between the servers in the network wherein said optimal window size is
4(M+T.sub.P /s.sub.m)+max{0, (T.sub.p /2s.sub.m)-1} packets
where M is the number of servers in the network, TP is the round trip propagation delay, and sm is the average delay at the slowest server.
2. A high-speed communications network, comprising:
A plurality of interconnected network devices, said network devices including an originating server, a receiving server and zero or more intermediate servers; and
means for implementing flow control between the network devices, said flow control utilizing an optimal window size; whereby the network power is maximized, approximated by an upper bound on the window size of
4(M+T.sub.P /S.sub.M)+max{0, (T.sub.P /2S.sub.M)-1},
where M is the number of servers in the network, TP is the round trip propagation delay between the originating server and the receiving server, and SM is the average delay at the slowest server.
3. The system described in claim 2, wherein
said optimal window size is also bounded by a lower bound, said lower bound being: ##EQU8## where si is the packet service time at server i=1,2, . . . M.
4. A network node for performing network flow control in a network of interconnected servers, said network flow control comprising the steps of:
determining the number of servers in the network;
calculating a round-trip propagation delay between an originating server and a receiving server;
determining the average delay for a packet at the slowest of the servers in the network;
calculating the optimal window size using the number of servers, the round-trip propagation delay and the average delay for a packet at the slowest server; and
transmitting packets of data using the optimal window size to other servers in the network, wherein:
the calculated optimal window size is
4(M+T.sub.P /S.sub.M)+max{0, (T.sub.P /2S.sub.M)-1} packets;
where M is the number of servers in the network, TP is the round trip propagation delay between the originating server and the receiving server, and SM is the average delay at the slowest server.
US08/554,954 1995-11-13 1995-11-13 Optimal flow control window size design in high-speed networks Expired - Fee Related US5764625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/554,954 US5764625A (en) 1995-11-13 1995-11-13 Optimal flow control window size design in high-speed networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08/554,954 US5764625A (en) 1995-11-13 1995-11-13 Optimal flow control window size design in high-speed networks

Publications (1)

Publication Number Publication Date
US5764625A true US5764625A (en) 1998-06-09

Family

ID=24215390

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/554,954 Expired - Fee Related US5764625A (en) 1995-11-13 1995-11-13 Optimal flow control window size design in high-speed networks

Country Status (1)

Country Link
US (1) US5764625A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4736369A (en) * 1986-06-13 1988-04-05 International Business Machines Corp. Adaptive session-level pacing
US4769815A (en) * 1987-04-10 1988-09-06 American Telephone And Telegraph Company, At&T Bell Laboratories Packet flow control method
US5014265A (en) * 1989-11-30 1991-05-07 At&T Bell Laboratories Method and apparatus for congestion control in a data network
US5130986A (en) * 1990-04-27 1992-07-14 At&T Bell Laboratories High speed transport protocol with two windows
US5063562A (en) * 1990-05-23 1991-11-05 International Business Machines Corporation Flow control for high speed networks
US5442637A (en) * 1992-10-15 1995-08-15 At&T Corp. Reducing the complexities of the transmission control protocol for a high-speed networking environment

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6415410B1 (en) * 1995-05-09 2002-07-02 Nokia Telecommunications Oy Sliding-window data flow control using an adjustable window size
US8842528B2 (en) 1997-02-24 2014-09-23 At&T Intellectual Property Ii, Lp System and method for improving transport protocol performance in communication networks having lossy links
US20100070824A1 (en) * 1997-02-24 2010-03-18 At&T Intellectual Property Ii, L.P. System and Method for Improving Transport Protocol Performance in Communication Networks Having Lossy Links
US6990069B1 (en) * 1997-02-24 2006-01-24 At&T Corp. System and method for improving transport protocol performance in communication networks having lossy links
US9225473B2 (en) 1997-02-24 2015-12-29 At&T Intellectual Property Ii, Lp System and method for improving transport protocol performance in communication networks having lossy links
US8305888B2 (en) * 1997-02-24 2012-11-06 At&T Intellectual Property Ii, L.P. System and method for improving transport protocol performance in communication networks having lossy links
US6009077A (en) * 1997-04-08 1999-12-28 University Of Massachusetts Flow admission control for a router
US6105064A (en) * 1997-05-30 2000-08-15 Novell, Inc. System for placing packets on network for transmission from sending endnode to receiving endnode at times which are determined by window size and metering interval
US6373818B1 (en) * 1997-06-13 2002-04-16 International Business Machines Corporation Method and apparatus for adapting window based data link to rate base link for high speed flow control
US6122276A (en) * 1997-06-30 2000-09-19 Cisco Technology, Inc. Communications gateway mapping internet address to logical-unit name
US6115357A (en) * 1997-07-01 2000-09-05 Packeteer, Inc. Method for pacing data flow in a packet-based network
US6128662A (en) * 1997-08-29 2000-10-03 Cisco Technology, Inc. Display-model mapping for TN3270 client
US6049833A (en) * 1997-08-29 2000-04-11 Cisco Technology, Inc. Mapping SNA session flow control to TCP flow control
US6249530B1 (en) * 1997-12-22 2001-06-19 Sun Microsystems, Inc. Network bandwidth control
US6205120B1 (en) * 1998-03-13 2001-03-20 Packeteer, Inc. Method for transparently determining and setting an optimal minimum required TCP window size
WO1999046902A1 (en) * 1998-03-13 1999-09-16 Packeteer, Inc. Method for transparently determining and setting an optimal minimum required tcp window size
US6205498B1 (en) * 1998-04-01 2001-03-20 Microsoft Corporation Method and system for message transfer session management
US6446144B1 (en) 1998-04-01 2002-09-03 Microsoft Corporation Method and system for message transfer session management
US6446206B1 (en) 1998-04-01 2002-09-03 Microsoft Corporation Method and system for access control of a message queue
US6529932B1 (en) 1998-04-01 2003-03-04 Microsoft Corporation Method and system for distributed transaction processing with asynchronous message delivery
US6678726B1 (en) 1998-04-02 2004-01-13 Microsoft Corporation Method and apparatus for automatically determining topology information for a computer within a message queuing network
US7631317B2 (en) 1998-06-30 2009-12-08 Microsoft Corporation Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US6256634B1 (en) 1998-06-30 2001-07-03 Microsoft Corporation Method and system for purging tombstones for deleted data items in a replicated database
US8079038B2 (en) 1998-06-30 2011-12-13 Microsoft Corporation Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US6202089B1 (en) 1998-06-30 2001-03-13 Microsoft Corporation Method for configuring at runtime, identifying and using a plurality of remote procedure call endpoints on a single server process
US6275912B1 (en) 1998-06-30 2001-08-14 Microsoft Corporation Method and system for storing data items to a storage device
US20080163250A1 (en) * 1998-06-30 2008-07-03 Microsoft Corporation Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US6848108B1 (en) 1998-06-30 2005-01-25 Microsoft Corporation Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US20050071314A1 (en) * 1998-06-30 2005-03-31 Microsoft Corporation Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US7788676B2 (en) 1998-06-30 2010-08-31 Microsoft Corporation Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US6633585B1 (en) * 1999-08-13 2003-10-14 International Business Machines Corporation Enhanced flow control in ATM edge switches
US8848527B2 (en) * 1999-12-07 2014-09-30 Rockstar Consortium Us Lp System, device and method for distributing link state information in a communication network
US9118548B2 (en) 1999-12-07 2015-08-25 Rpx Clearinghouse Llc System, device and method for distributing link state information in a communication network
US20120230179A1 (en) * 1999-12-07 2012-09-13 Rockstar Bidco, LP System, Device and Method for Distributing Link State Information in a Communication Network
US6757273B1 (en) * 2000-02-07 2004-06-29 Nokia Corporation Apparatus, and associated method, for communicating streaming video in a radio communication system
US6769030B1 (en) * 2000-02-07 2004-07-27 International Business Machines Corporation Method and apparatus to evaluate and measure the optimal network packet size for file transfer in high-speed networks
US6831912B1 (en) * 2000-03-09 2004-12-14 Raytheon Company Effective protocol for high-rate, long-latency, asymmetric, and bit-error prone data links
US6925502B1 (en) * 2000-06-20 2005-08-02 At&T Corp. Methods and systems for improving data transmission rates having adaptive protocols
US7613820B1 (en) 2000-06-20 2009-11-03 At&T Intellectual Property Ii, L.P. Methods and systems for improving data transmission rates having adaptive protocols
US7373417B1 (en) 2000-06-20 2008-05-13 At&T Corp Methods and systems for improving data transmission rates having adaptive protocols
US20020071388A1 (en) * 2000-11-16 2002-06-13 Einar Bergsson Selectable network protocol
US20040109029A1 (en) * 2002-12-10 2004-06-10 International Business Machines Corporation Method, system, program product and navigator for manipulating a computer display view
US7583594B2 (en) * 2003-01-31 2009-09-01 Texas Instruments Incorporated Adaptive transmit window control mechanism for packet transport in a universal port or multi-channel environment
US20040151113A1 (en) * 2003-01-31 2004-08-05 Adrian Zakrzewski Adaptive transmit window control mechanism for packet transport in a universal port or multi-channel environment
US20040202110A1 (en) * 2003-03-11 2004-10-14 Samsung Electronics Co., Ltd. Method and apparatus for managing sliding window in IP security
US7729328B2 (en) * 2007-03-14 2010-06-01 Cisco Technology, Inc. Real-time sessions for wireless mesh networks
US20080225804A1 (en) * 2007-03-14 2008-09-18 Cisco Technology, Inc. Real-Time Sessions for Wireless Mesh Networks
US20120331107A1 (en) * 2011-06-23 2012-12-27 Honeywell International Inc. Systems and methods for negotiated accelerated block option for trivial file transfer protocol (tftp)
US8769137B2 (en) * 2011-06-23 2014-07-01 Honeywell International Inc. Systems and methods for negotiated accelerated block option for trivial file transfer protocol (TFTP)
US10051562B2 (en) * 2015-07-17 2018-08-14 Ricoh Company, Ltd. Communication apparatus, power control method, and recording medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: IBM CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BOURNAS, REDHA MOHAMMED;REEL/FRAME:007778/0774

Effective date: 19951113

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20060609