
Publication number: US 6215769 B1
Publication type: Grant
Application number: US 09/167,882
Publication date: Apr. 10, 2001
Filing date: Oct. 7, 1998
Priority date: Oct. 7, 1998
Fee status: Paid
Also published as: CN1134945C, CN1344456A, DE69942071D1, EP1169826A2, EP1169826A4, EP1169826B1, WO2000021233A2, WO2000021233A3, WO2000021233A8, WO2000021233A9
Publication numbers: 09167882, 167882, US 6215769 B1, US6215769B1, US-B1-6215769
Inventors: Nasir Ghani, Sudhir Sharan Dixit
Original assignee: Nokia Telecommunications, Inc.
Export citation: BiBTeX, EndNote, RefMan
External links: USPTO, USPTO Assignment, Espacenet
Enhanced acknowledgment pacing device and method for TCP connections
US 6215769 B1
Abstract
An enhanced acknowledgment pacing device and method for TCP connections is disclosed. The invention includes a link layer entity for receiving data packets from a source and forwarding the data packets to a forward data link, the link layer entity storing the received data packets in a data packet buffer until the data packets depart the link layer entity and are forwarded to the forward data link and an acknowledgment pacing device, coupled to the link layer entity, for pacing acknowledgment packets to be sent to the source in response to receiving the data packets from the source. The acknowledgment pacing device further includes an acknowledgment control unit for monitoring congestion at the link layer entity and generating a control signal for controlling the processing of acknowledgment packets based upon whether congestion is occurring at the link layer entity, an acknowledgment packet buffer, coupled to the acknowledgment control unit, for storing acknowledgment packets received from the acknowledgment control unit and a scheduler, coupled to the acknowledgment control unit and the acknowledgment buffer, the scheduler releasing acknowledgment packets to the source based upon the control signal generated by the acknowledgment control unit.
Claims (76)
What is claimed is:
1. An acknowledgment pacing device for pacing acknowledgment packets to be sent to a source in response to receiving data packets from the source, comprising:
an acknowledgment control unit for monitoring loading of a network and generating a control signal for controlling the processing of acknowledgment packets based upon the loading of the network;
an acknowledgment packet buffer, coupled to the acknowledgment control unit, for storing acknowledgment packets received from the acknowledgment control unit; and
a scheduler, coupled to the acknowledgment control unit and the acknowledgment buffer, the scheduler releasing acknowledgment packets based upon the control signal generated by the acknowledgment control unit.
2. The acknowledgment pacing device of claim 1 wherein the scheduler chooses acknowledgment packets to release based upon a queuing strategy.
3. The acknowledgment pacing device of claim 2 wherein the queuing strategy comprises sending a head-of-line acknowledgment packet when aggregate acknowledgment packet buffering is used.
4. The acknowledgment pacing device of claim 2 wherein the queuing strategy comprises a weighted-round-robin (WRR) process for selecting an acknowledgment packet in the acknowledgment packet buffer for release when per-class or per-flow acknowledgment packet buffering is used.
5. The acknowledgment pacing device of claim 4 wherein the weighted-round-robin (WRR) process uses weights for weighting the selection of the acknowledgment packet for release inversely proportionally to a TCP maximum segment size (MSS) to lessen a bias against smaller MSS flows.
6. The acknowledgment pacing device of claim 2 wherein the queuing strategy comprises a fair-queuing (FQ) process for selecting an acknowledgment packet in the acknowledgment packet buffer for release when per-class or per-flow acknowledgment packet buffering is used.
7. The acknowledgment pacing device of claim 1 wherein the acknowledgment control unit further comprises an acknowledgment packet pacing processor, the acknowledgment packet pacing processor generating the control signal for controlling the processing of acknowledgment packets using an acknowledgment packet arrival processor and a data packet departure processor.
8. The acknowledgment pacing device of claim 7 wherein the acknowledgment packet arrival processor controls the processing of acknowledgment packets to the acknowledgment packet buffer by checking a congestion level of the network and deciding whether to hold acknowledgment packets in the acknowledgment packet buffer or to send the acknowledgment packet directly to the source without buffering the acknowledgment packets in the acknowledgment packet buffer.
9. The acknowledgment pacing device of claim 7 wherein the acknowledgment packet arrival processor decides whether to hold acknowledgment packets in the acknowledgment packet buffer or to send the acknowledgment packet directly to the source without buffering the acknowledgment packets in the acknowledgment packet buffer by determining if the network is congested, determining if the acknowledgment packet buffer is empty, storing an acknowledgment packet in the buffer if the network is congested or the acknowledgment packet buffer is not empty, and forwarding the acknowledgment packet to the source if the acknowledgment packet buffer is empty and the network is not congested.
10. The acknowledgment pacing device of claim 9 wherein the acknowledgment packet is stored in the acknowledgment packet buffer and gated out by the scheduler if the acknowledgment packet is a first acknowledgment packet to be buffered in the acknowledgment packet buffer during congestion.
11. The acknowledgment pacing device of claim 10 wherein the acknowledgment control unit increases a spacing between acknowledgment packets gated from the acknowledgment packet buffer if the acknowledgment packet is the first acknowledgment packet to be buffered in the acknowledgment packet buffer during congestion.
12. The acknowledgment pacing device of claim 11 wherein the acknowledgment control unit increases the spacing between acknowledgment packets by setting a packet counter variable to a first predetermined value.
13. The acknowledgment pacing device of claim 12 wherein the packet counter variable is decremented when a data packet departs from the network.
14. The acknowledgment pacing device of claim 13 wherein the scheduler releases a buffered acknowledgment packet when the packet counter variable is decremented to zero and the acknowledgment control unit resets the packet counter variable.
15. The acknowledgment pacing device of claim 8 wherein the congestion level of the network is determined by analyzing a queue length representing a capacity for a data packet buffer.
16. The acknowledgment pacing device of claim 15 wherein the network is indicated as being non-congested when the queue length is less than a low threshold.
17. The acknowledgment pacing device of claim 15 wherein the network is indicated as being congested when the queue length is greater than a high threshold.
18. The acknowledgment pacing device of claim 7 wherein the data packet departure processor controls the release of acknowledgment packets from the acknowledgment packet buffer by monitoring congestion levels of the network and deciding when to gate acknowledgment packets from the acknowledgment buffer to the source.
19. The acknowledgment pacing device of claim 18 wherein the data packet departure processor decides when to gate acknowledgment packets from the acknowledgment buffer to the source by checking if acknowledgment packets are in the acknowledgment packet buffer awaiting transmission and if a packet counter variable set by the acknowledgment control unit has a value of zero, and releasing a buffered acknowledgment packet in the acknowledgment packet buffer to the source when the packet counter variable has a value of zero.
20. The acknowledgment pacing device of claim 19 wherein the data packet departure processor increases the spacing between the release of acknowledgment packets from the acknowledgment packet buffer if congestion still exists in the network.
21. The acknowledgment pacing device of claim 20 wherein the spacing between the release of acknowledgment packets is increased by resetting the packet counter variable to a first predetermined value.
22. The acknowledgment pacing device of claim 21 wherein the data packet departure processor decrements the packet counter variable if the value of the packet counter variable is non-zero.
23. The acknowledgment pacing device of claim 22 wherein the data packet departure processor resets the packet counter variable to a second predetermined value to prevent bandwidth under-utilization after congestion periods if the packet counter variable is larger than the second predetermined value.
24. The acknowledgment pacing device of claim 19 wherein the data packet departure processor decreases a spacing between the release of acknowledgment packets from the acknowledgment packet buffer if congestion in the network has abated.
25. The acknowledgment pacing device of claim 24 wherein the data packet departure processor decreases a spacing between the release of acknowledgment packets by resetting the packet counter variable to a second predetermined value, the second predetermined value being less than the first predetermined value.
26. The acknowledgment pacing device of claim 25 wherein the data packet departure processor decrements the packet counter variable if the value of the packet counter variable is non-zero.
27. The acknowledgment pacing device of claim 26 wherein the data packet departure processor resets the packet counter variable to the second predetermined value to prevent bandwidth under-utilization after congestion periods if the packet counter variable is larger than the second predetermined value.
28. The acknowledgment pacing device of claim 19 wherein the congestion level of the network is determined by analyzing a queue length representing a capacity for a data packet buffer.
29. The acknowledgment pacing device of claim 28 wherein the network is indicated as being non-congested when the queue length is less than a low threshold.
30. The acknowledgment pacing device of claim 28 wherein the network is indicated as being congested when the queue length is greater than a high threshold.
31. The acknowledgment pacing device of claim 1 wherein the acknowledgment packet buffer buffers acknowledgment packets on an aggregate basis.
32. The acknowledgment pacing device of claim 1 wherein the acknowledgment packet buffer buffers acknowledgment packets by flow type, and wherein the scheduler releases the acknowledgment packets in the acknowledgment buffer taking into account the type of flows for the buffered acknowledgment packets.
33. An access node device, comprising:
a link layer entity for receiving data packets from a source and forwarding the data packets to a forward data link, the link layer entity storing the received data packets in a data packet buffer until the data packets depart the link layer entity and are forwarded to the forward data link; and
an acknowledgment pacing device, coupled to the link layer entity, for pacing acknowledgment packets to be sent to the source in response to receiving the data packets from the source, the acknowledgment pacing device further comprising:
an acknowledgment control unit for monitoring congestion at the link layer entity and generating a control signal for controlling the processing of acknowledgment packets based upon whether congestion is occurring at the link layer entity;
an acknowledgment packet buffer, coupled to the acknowledgment control unit, for storing acknowledgment packets received from the acknowledgment control unit; and
a scheduler, coupled to the acknowledgment control unit and the acknowledgment buffer, the scheduler releasing acknowledgment packets to the source based upon the control signal generated by the acknowledgment control unit.
34. The access node device of claim 33 wherein the scheduler chooses acknowledgment packets to be released based upon a queuing strategy.
35. The access node device of claim 34 wherein the queuing strategy comprises sending a head-of-line acknowledgment packet when aggregate acknowledgment packet buffering is used.
36. The access node device of claim 34 wherein the queuing strategy comprises a weighted-round-robin (WRR) process for selecting an acknowledgment packet in the acknowledgment packet buffer for release when per-class or per-flow acknowledgment packet buffering is used.
37. The access node device of claim 36 wherein the weighted-round-robin (WRR) process uses weights for weighting the selection of the acknowledgment packet for release inversely proportionally to a TCP maximum segment size (MSS) to lessen a bias against smaller MSS flows.
38. The access node device of claim 34 wherein the queuing strategy comprises a fair-queuing (FQ) process for selecting an acknowledgment packet in the acknowledgment packet buffer for release when per-class or per-flow acknowledgment packet buffering is used.
39. The access node device of claim 33 wherein the acknowledgment control unit further comprises an acknowledgment packet pacing processor, the acknowledgment packet pacing processor generating the control signal for controlling the processing of acknowledgment packets using an acknowledgment packet arrival processor and a data packet departure processor.
40. The access node device of claim 39 wherein the acknowledgment packet arrival processor controls the processing of acknowledgment packets to the acknowledgment packet buffer by checking the congestion at the link layer entity and deciding whether to hold acknowledgment packets in the acknowledgment packet buffer or to send the acknowledgment packet directly to the source without buffering the acknowledgment packets in the acknowledgment packet buffer.
41. The access node device of claim 39 wherein the acknowledgment packet arrival processor decides whether to hold acknowledgment packets in the acknowledgment packet buffer or to send the acknowledgment packet directly to the source without buffering the acknowledgment packets in the acknowledgment packet buffer by determining if the link layer entity is congested, determining if the acknowledgment packet buffer is empty, storing an acknowledgment packet in the buffer if the link layer entity is congested or the acknowledgment packet buffer is not empty, and forwarding the acknowledgment packet to the source if the acknowledgment packet buffer is empty and the link layer entity is not congested.
42. The access node device of claim 41 wherein the acknowledgment packet is stored in the acknowledgment packet buffer and gated out by the scheduler if the acknowledgment packet is a first acknowledgment packet to be buffered in the acknowledgment packet buffer during congestion.
43. The access node device of claim 42 wherein the acknowledgment control unit increases a spacing between acknowledgment packets gated from the acknowledgment packet buffer if the acknowledgment packet is the first acknowledgment packet to be buffered in the acknowledgment packet buffer during congestion.
44. The access node device of claim 39 wherein the data packet departure processor decreases a spacing between the release of acknowledgment packets from the acknowledgment packet buffer if congestion in the link layer entity has abated.
45. A method for providing acknowledgment pacing for acknowledgment packets to be sent to a source in response to receiving data packets from the source, comprising:
monitoring loading of a network;
generating a control signal for controlling the processing of acknowledgment packets based upon the loading of the network;
storing acknowledgment packets received from the acknowledgment control unit in an acknowledgment packet buffer; and
releasing acknowledgment packets based upon the control signal.
46. The method of claim 45 wherein the releasing further comprises choosing acknowledgment packets to release based upon a queuing strategy.
47. The method of claim 46 wherein the queuing strategy comprises sending a head-of-line acknowledgment packet when aggregate acknowledgment packet buffering is used.
48. The method of claim 46 wherein the queuing strategy comprises a weighted-round-robin (WRR) process for selecting an acknowledgment packet in the acknowledgment packet buffer for release when per-class or per-flow acknowledgment packet buffering is used.
49. The method of claim 48 wherein the weighted-round-robin (WRR) process uses weights for weighting the selection of the acknowledgment packet for release inversely proportionally to a TCP maximum segment size (MSS) to lessen a bias against smaller MSS flows.
50. The method of claim 46 wherein the queuing strategy comprises a fair-queuing (FQ) process for selecting an acknowledgment packet in the acknowledgment packet buffer for release when per-class or per-flow acknowledgment packet buffering is used.
51. The method of claim 45 wherein the generating the control signal for controlling the processing of acknowledgment packets comprises an acknowledgment packet arrival process and a data packet departure process.
52. The method of claim 51 wherein the acknowledgment packet arrival process controls the processing of acknowledgment packets to the acknowledgment packet buffer by checking a congestion level of the network and deciding whether to hold acknowledgment packets in the acknowledgment packet buffer or to send the acknowledgment packet directly to the source without buffering the acknowledgment packets in the acknowledgment packet buffer.
53. The method of claim 51 wherein the deciding whether to hold acknowledgment packets in the acknowledgment packet buffer or to send the acknowledgment packet directly to the source without buffering the acknowledgment packets in the acknowledgment packet buffer further comprises determining if the network is congested, determining if the acknowledgment packet buffer is empty, storing an acknowledgment packet in the buffer if the network is congested or the acknowledgment packet buffer is not empty, and forwarding the acknowledgment packet to the source if the acknowledgment packet buffer is empty and the network is not congested.
54. The method of claim 53 wherein the acknowledgment packet is stored in the acknowledgment packet buffer and gated out if the acknowledgment packet is a first acknowledgment packet to be buffered in the acknowledgment packet buffer.
55. The method of claim 54 further comprising increasing a spacing between acknowledgment packets gated from the acknowledgment packet buffer if the acknowledgment packet is the first acknowledgment packet to be buffered in the acknowledgment packet buffer.
56. The method of claim 55 wherein increasing the spacing further comprises setting a packet counter variable to a first predetermined value.
57. The method of claim 56 further comprising decrementing the packet counter variable when a data packet departs from the network.
58. The method of claim 57 further comprising releasing a buffered acknowledgment packet when the packet counter variable is decremented to zero and resetting the packet counter variable.
59. The method of claim 52 wherein the congestion level of the network is determined by analyzing a queue length representing a capacity for a data packet buffer.
60. The method of claim 59 further comprising indicating the network is not congested when the queue length is less than a low threshold.
61. The method of claim 59 further comprising indicating the network is congested when the queue length is greater than a high threshold.
62. The method of claim 51 wherein the releasing further comprises monitoring congestion levels of the network and deciding when to gate acknowledgment packets from the acknowledgment buffer to the source.
63. The method of claim 62 wherein the deciding further comprises checking if acknowledgment packets are in the acknowledgment packet buffer awaiting transmission and if a packet counter variable set by the acknowledgment control unit has a value of zero, and releasing a buffered acknowledgment packet in the acknowledgment packet buffer to the source when the packet counter variable has a value of zero.
64. The method of claim 63 wherein the data packet departure process further comprises increasing the spacing between the release of acknowledgment packets from the acknowledgment packet buffer if congestion still exists in the network.
65. The method of claim 64 wherein the increasing further comprises resetting the packet counter variable to a first predetermined value.
66. The method of claim 65 wherein the data packet departure process further comprises decrementing the packet counter variable if the value of the packet counter variable is non-zero.
67. The method of claim 66 wherein the data packet departure process further comprises resetting the packet counter variable to a second predetermined value to prevent bandwidth under-utilization after congestion periods if the packet counter variable is larger than the second predetermined value.
68. The method of claim 63 wherein the data packet departure process further comprises decreasing a spacing between the release of acknowledgment packets from the acknowledgment packet buffer if congestion in the network has abated.
69. The method of claim 68 wherein the data packet departure process further comprises decreasing a spacing between the release of acknowledgment packets by resetting the packet counter variable to a second predetermined value, the second predetermined value being less than the first predetermined value.
70. The method of claim 69 wherein the data packet departure process further comprises decrementing the packet counter variable if the value of the packet counter variable is non-zero.
71. The method of claim 70 wherein the data packet departure process further comprises resetting the packet counter variable to the second predetermined value to prevent bandwidth under-utilization after congestion periods if the packet counter variable is larger than the second predetermined value.
72. The method of claim 63 wherein the determining of the congestion level of the network further comprises analyzing a queue length representing a capacity for a data packet buffer.
73. The method of claim 72 further comprising indicating the network is not congested when the queue length is less than a low threshold.
74. The method of claim 72 further comprising indicating the network is congested when the queue length is greater than a high threshold.
75. The method of claim 45 wherein the storing further comprises buffering acknowledgment packets on an aggregate basis.
76. The method of claim 45 wherein the storing further comprises buffering acknowledgment packets by flow type, and wherein the releasing further comprises scheduling the release of acknowledgment packets in the acknowledgment buffer by taking into account the type of flows for the buffered acknowledgment packets.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates in general to networks, and more particularly to an enhanced acknowledgment pacing device and method for TCP connections.

2. Description of Related Art

Today, an organization's computer network has become its circulatory system. Organizations have combined desktop workstations, servers, and hosts into Local Area Network (LAN) communities. These Local Area Networks have been connected to other Local Area Networks and to Wide Area Networks (WANs). It has become a necessity of day-to-day operation that pairs of systems be able to communicate when they need to, without regard to where they may be located in the network.

During the early years of network computing, proprietary networking protocols were the standard. However, the development of the Open Systems Interconnection Reference Model introduced by the International Organization for Standardization (ISO) has led to an impressive degree of interworking, which generally allows end-user applications to work very well between systems in a network. Implementations are based on written standards that have been made available by volunteers from dozens of computer vendors, hardware component vendors and independent software companies.

During the last decade, LANs have been proliferating. This has created a recurring problem that network managers must solve: how to minimize congestion and optimize throughput. An early solution was to simply divide Local Area Networks into multiple smaller networks serving smaller populations. These segments were connected by bridges to form a single Local Area Network with traffic being segregated locally to each segment.

The evolution of new network types and Wide Area Networks created a need for routers. For example, the Internet is a set of networks connected by gateways, which are sometimes referred to as routers. Routers added filtering and firewalling capability to provide more control over broadcast domains, limit broadcast traffic and enhance security. A router is able to choose the best path through the network due to embedded intelligence. This added intelligence also allowed routers to build redundant paths to destinations when possible. Nevertheless, the added complexity of best path selection capability accorded by the embedded intelligence increased the port cost of routers and caused substantial latency overhead. Shared-media networks comprising distributed client/server data traffic, expanded user populations and more complex applications gave birth to new bandwidth bottlenecks. Such congestion produced unpredictable network response times, an inability to support delay-sensitive applications and higher network failure rates.

Congestion control in modern networks is increasingly becoming an important issue. The explosive growth of Internet applications such as the World Wide Web (WWW) has pushed current technology to its limit, and it is clear that faster transport and improved congestion control mechanisms are required. As a result, many equipment vendors and service providers are turning to advanced networking technology to provide adequate solutions to the complex quality of service (QoS) management issues involved. Examples include asynchronous transfer mode (ATM) networks and emerging IP network services. Nevertheless, there is still the need to support a host of existing legacy IP protocols within these newer paradigms. In particular, the ubiquitous TCP transport-layer protocol has long been the workhorse transport protocol in IP networks, widely used by web-browsers, file/email transfer services, etc.

Transmission Control Protocol (TCP) is a part of the TCP/IP protocol family that has gained the position as one of the world's most important data communication protocols with the success of the Internet. TCP provides a reliable data connection between devices using TCP/IP protocols. TCP operates on top of IP, which is used for packing the data into data packets, called datagrams, and for transmitting them across the network.

The Internet Protocol (IP) is a network layer protocol that routes data across an Internet. The Internet Protocol was designed to accommodate the use of hosts and routers built by different vendors, encompass a growing variety of network types, enable the network to grow without interrupting servers, and support higher-layer session and message-oriented services. The IP network layer allows integration of Local Area Network “islands”.

However, IP doesn't contain any flow control or retransmission mechanisms. That is why TCP is typically used on top of it. In particular, TCP uses acknowledgments for detecting lost data packets. TCP/IP networks are nowadays probably the most important of all networks, and operate on top of several (physical) networks, such as the ATM networks mentioned above. These underlying networks may offer some information about the condition of the network and its traffic, which may be used to provide feedback regarding congestion.

To manage congestion, TCP uses a sliding window mechanism coupled with reactive congestion control to adjust the sender's window size. The protocol adjusts its transmission behavior contingent upon returning acknowledgment (ACK) packets sent from the remote receiver's end.

A problem with TCP, however, is that its congestion control mechanism is relatively slow. Most TCP implementations use very coarse timers to measure timeouts, i.e., roughly 200-500 ms granularity. Further, most TCP implementations rely on ACK delays or packet drops to detect congestion. As a result, excessive source window reductions can result in large amounts of bandwidth being wasted as the TCP source is forced to restart its transmission window. Further, many studies have shown that TCP does not perform very well over ATM networks, especially for larger WAN-type propagation delays.

To combat the above shortcomings with TCP, it is necessary to minimize the chances of network congestion by somehow incorporating faster congestion indication mechanisms in the TCP feedback loop. However, to ensure compatibility with current versions and to expedite market acceptance, any such attempt must preclude changes to the actual TCP protocol or its implementation.

Along these lines, a variety of ACK pacing schemes have been proposed. These ACK pacing schemes basically modulate the spacing of TCP ACK packets to limit source emissions during periods of congestion. ACK pacing is well-suited to the boundary of high speed (sub)networks, such as ATM, gigabit IP (i.e., optical WDM), or satellite. In essence, this technique performs TCP traffic shaping at the access nodes. Such methodologies are specifically beneficial for advanced ATM data services, i.e., underlying ABR flow control or per-connection queuing, where congestion tends to build up at the periphery of the ATM network, i.e., in the access nodes. If the forward link is congested, as indicated via some congestion metric, ACK packets are appropriately delayed before being sent to the source.

Other authors have proposed modifying fields in the ACK packets themselves, i.e., receiver-window size, to improve performance. However, such schemes either require accurate round-trip delay measurements or cannot maintain tight buffer control. Furthermore, rewriting ACK packet fields will require expensive checksum recomputations.

Although ACK pacing is an effective way of controlling TCP source behaviors, many of the proposed schemes are either too complex and/or overly sensitive to network parameter settings. Since studies have shown that TCP's throughput and fairness levels can be low in many high-speed network scenarios, it is necessary to devise efficient, practical schemes to enhance its performance. Although amending the protocol's functionality itself is also an option, this may not be a feasible alternative in the short-to-medium time frame. It is along these lines that the ACK pacing methods can provide significant benefits.

It can be seen that there is a need for a more robust, comprehensive scheme for ACK pacing.

It can also be seen that there is a need for ACK pacing that provides high throughput and precise levels of bandwidth fairness.

It can also be seen that there is a need for ACK pacing that significantly reduces TCP buffering delays and is applicable to a wide range of network scenarios.

It can also be seen that there is a need for ACK pacing that provides faster congestion indication without modifying the TCP protocol.

SUMMARY OF THE INVENTION

To overcome the limitations in the prior art described above, and to overcome other limitations that will become apparent upon reading and understanding the present specification, the present invention discloses an enhanced acknowledgment pacing device and method for TCP connections.

The present invention solves the above-described problems by providing a more robust, comprehensive scheme for ACK pacing. The ACK pacing according to the present invention provides high throughput and precise levels of bandwidth fairness. Further, the ACK pacing significantly reduces TCP buffering delays and is applicable to a wide range of network scenarios. Thus, the ACK pacing provides faster congestion indication without modifying the TCP protocol.

A system in accordance with the principles of the present invention includes a link layer entity for receiving data packets from a source and forwarding the data packets to a forward data link, the link layer entity storing the received data packets in a data packet buffer until the data packets depart the link layer entity and are forwarded to the forward data link and an acknowledgment pacing device, coupled to the link layer entity, for pacing ACK packets to be sent to the source in response to receiving the data packets from the source. The acknowledgment pacing device further includes an acknowledgment control unit for monitoring congestion at the link layer entity and generating a control signal for controlling the processing of acknowledgment packets based upon whether congestion is occurring at the link layer entity, an acknowledgment packet buffer, coupled to the acknowledgment control unit, for storing acknowledgment packets received from the acknowledgment control unit and a scheduler, coupled to the acknowledgment control unit and the acknowledgment buffer, the scheduler releasing acknowledgment packets to the source based upon the control signal generated by the acknowledgment control unit.

Other embodiments of a system in accordance with the principles of the invention may include alternative or optional additional aspects. One such aspect of the present invention is that the scheduler chooses ACK packets to release based upon a queuing strategy.

Another aspect of the present invention is that the queuing strategy includes sending a head-of-line ACK packet when aggregate ACK packet buffering is used.

Another aspect of the present invention is that the queuing strategy includes a weighted-round-robin (WRR) process for selecting an ACK packet in the ACK packet buffer for release when per-class or per-flow ACK packet buffering is used.

Another aspect of the present invention is that the weighted-round-robin (WRR) process uses weights for weighting the selection of the ACK packet for release inversely proportionally to a TCP maximum segment size (MSS) to lessen a bias against smaller MSS flows.

Another aspect of the present invention is that the queuing strategy includes a fair-queuing (FQ) process for selecting an ACK packet in the ACK packet buffer for release when per-class or per-flow ACK packet buffering is used.

Another aspect of the present invention is that the acknowledgment control unit further includes an ACK packet pacing processor, the ACK packet pacing processor generating the control signal for controlling the processing of acknowledgment packets using an ACK packet arrival processor and a data packet departure processor.

Another aspect of the present invention is that the ACK packet arrival processor controls the processing of ACK packets to the ACK packet buffer by checking the congestion at the link layer entity and deciding whether to hold ACK packets in the ACK packet buffer or to send the ACK packet directly to the source without buffering the ACK packets in the ACK packet buffer.

Another aspect of the present invention is that the ACK packet arrival processor decides whether to hold ACK packets in the ACK packet buffer or to send the ACK packet directly to the source without buffering the ACK packets in the ACK packet buffer by determining if the link layer entity is congested, determining if the ACK packet buffer is empty, storing an ACK packet in the buffer if the link layer entity is congested or the ACK packet buffer is not empty, and forwarding the ACK packet to the source if the ACK packet buffer is empty and the link layer entity is not congested.

Another aspect of the present invention is that the ACK packet is stored in the ACK packet buffer and gated out by the scheduler if the ACK packet is a first ACK packet to be buffered in the ACK packet buffer during congestion.

Another aspect of the present invention is that the ACK control unit increases a spacing between ACK packets gated from the ACK packet buffer if the ACK packet is the first ACK packet to be buffered in the ACK packet buffer during congestion.

Another aspect of the present invention is that the data packet departure processor decreases a spacing between the release of ACK packets from the ACK packet buffer if congestion in the link layer entity has abated.

These and various other advantages and features of novelty which characterize the invention are pointed out with particularity in the claims annexed hereto and form a part hereof. However, for a better understanding of the invention, its advantages, and the objects obtained by its use, reference should be made to the drawings which form a further part hereof, and to accompanying descriptive matter, in which there are illustrated and described specific examples of an apparatus in accordance with the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

Referring now to the drawings in which like reference numbers represent corresponding parts throughout:

FIG. 1 illustrates the OSI model which includes seven layers;

FIG. 2 illustrates a comparison of the Internet Protocol Network Layer and the OSI seven layer model;

FIG. 3 illustrates a packet stream and a TCP sliding window;

FIG. 4 illustrates a network system wherein a receiver provides acknowledgments to the source as well as receives data from the source;

FIG. 5 illustrates the enhanced ACK pacing device according to the present invention;

FIG. 6 illustrates the pseudocode for the TCP ACK arrival and data departure methods according to the present invention;

FIG. 7 illustrates pseudocode for a congestion status method using two hysteresis queue thresholds, QL and QH; and

FIG. 8 illustrates the pseudocode for the enhanced TCP ACK arrival and ATM cell departure methods according to the present invention.

DETAILED DESCRIPTION OF THE INVENTION

In the following description of the exemplary embodiment, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration the specific embodiment in which the invention may be practiced. It is to be understood that other embodiments may be utilized as structural changes may be made without departing from the scope of the present invention.

The present invention provides an enhanced acknowledgment pacing device and method for TCP connections. A more robust, comprehensive scheme for ACK pacing is provided which allows high throughput and precise levels of bandwidth fairness. Further, the ACK pacing significantly reduces TCP buffering delays and is applicable to a wide range of network scenarios. Thus, the ACK pacing according to the present invention provides faster congestion indication without modifying the TCP protocol.

FIG. 1 illustrates the OSI model 100 which includes seven layers, including an Application Layer 110, Presentation Layer 120, Session Layer 130, Transport Layer 140, Network Layer 150, Data Link Layer 160, and Physical Layer 170. The OSI model 100 was developed by the International Organization for Standardization (ISO) and is described in ISO 7498, entitled “The OSI Reference Model”, and which is incorporated by reference herein.

Each layer of the OSI model performs a specific data communications task, a service to and for the layer that precedes it (e.g., the Network Layer provides a service for the transport layer). The process can be likened to placing a letter in a series of envelopes before it is sent through the postal system. Each succeeding envelope adds another layer of processing or overhead information necessary to process the transaction. Together, all the envelopes help make sure the letter gets to the right address and that the message received is identical to the message sent. Once the entire package is received at its destination, the envelopes are opened one by one until the letter itself emerges exactly as written.

In a data communication transaction, however, each end user is unaware of the envelopes, which perform their functions transparently. For example, an automatic bank teller transaction can be tracked through the multi-layer OSI system. One multiple layer system (Open System A) provides an application layer that is an interface to a person attempting a transaction, while the other multiple layer system (Open System B) provides an application layer that interfaces with applications software in a bank's host computer. The corresponding layers in Open Systems A and B are called peer layers and communicate through peer protocols. These peer protocols provide communication support for a user's application, performing transaction related tasks such as debiting an account, dispensing currency, or crediting an account.

Actual data flow between the two open systems (Open System A and Open System B), however, is from top 180 to bottom 182 in one open system (Open System A, the source), across the communications line, and then from bottom 182 to top 180 in the other open system (Open System B, the destination). Each time that user application data passes downward from one layer to the next layer in the same system more processing information is added. When that information is removed and processed by the peer layer in the other system, it causes various tasks (error correction, flow control, etc.) to be performed.

The ISO has specifically defined all seven layers, which are summarized below in the order in which the data actually flows as it leaves the source:

Layer 7, the Application Layer 110, provides for a user application (such as getting money from an automatic bank teller machine) to interface with the OSI application layer. That OSI application layer 110 has a corresponding peer layer in the other open system, the bank's host computer.

Layer 6, the Presentation Layer 120, makes sure the user information (a request for $50 in cash to be debited from your checking account) is in a format (i.e., syntax or sequence of ones and zeros) the destination open system can understand.

Layer 5, the Session Layer 130, provides synchronization control of data between the open systems (i.e., makes sure the bit configurations that pass through layer 5 at the source are the same as those that pass through layer 5 at the destination).

Layer 4, the Transport Layer 140, ensures that a reliable end-to-end connection has been established between the two open systems (i.e., layer 4 at the destination confirms, so to speak, the request for a connection that it has received from layer 4 at the source).

Layer 3, the Network Layer 150, provides routing and relaying of data through the network (among other things, at layer 3 on the outbound side an address gets placed on the envelope which is then read by layer 3 at the destination).

Layer 2, the Data Link Layer 160, includes flow control of data as messages pass down through this layer in one open system and up through the peer layer in the other open system.

Layer 1, the Physical Interface Layer 170, includes the ways in which data communications equipment is connected mechanically and electrically, and the means by which the data moves across those physical connections from layer 1 at the source to layer 1 at the destination.

FIG. 2 is a comparison 200 illustrating where the Internet Protocol Network Layer 202 fits in the OSI seven layer model 204. In FIG. 2, the transport layer 210 provides data connection services to applications and may contain mechanisms that guarantee that data is delivered error-free, without omissions and in sequence. The transport layer 210 in the TCP/IP model 212 sends segments by passing them to the IP layer 202, which routes them to the destination. The transport layer 210 accepts incoming segments from the IP layer 202, determines which application is the recipient, and passes the data to that application in the order in which it was sent.

Thus, the Internet Protocol 202 performs Network Layer functions and routes data between systems. Data may traverse a single link or may be relayed across several links in an Internet. Data is carried in units called datagrams, which include an IP header that contains layer 3 220 addressing information. Routers examine the destination address in the IP header in order to direct datagrams to their destinations. The IP layer 202 is called connectionless because every datagram is routed independently and the IP layer 202 does not guarantee reliable or in-sequence delivery of datagrams. The IP layer 202 routes its traffic without caring which application-to-application interaction a particular datagram belongs to.

The TCP layer 210 provides a reliable data connection between devices using TCP/IP protocols. The TCP layer 210 operates on top of the IP layer 202, which is used for packing the data into data packets, called datagrams, and for transmitting them across the underlying network via the physical layer 230.

However, the IP protocol doesn't contain any flow control or retransmission mechanisms. That is why the TCP layer 210 is typically used on top of the IP layer 202. In contrast, the TCP protocol provides acknowledgments for detecting lost data packets.

FIG. 3 illustrates a packet stream 300 and a TCP sliding window 310. One of the main features of a TCP source is that it uses a sliding window 310 that determines the bytes and, consequently, the IP packets that can be sent before an acknowledgment is received from the receiver. This makes it possible to adjust the effective transmission rate of the source.

When the TCP source increases the size of the sliding window 310, its average transmission rate increases, too. The sliding window 310 is on top of octets 12-19. Octets up to 11 have been transmitted and the sliding window 310 has moved past them. Inside the sliding window 310, there are two octet groups 320, 322. The first octet group 320 is the octets from 12 to 16, which have been transmitted 330. The second group of octets 322 in the sliding window 310 are octets 17-19, which have not yet been transmitted. The second group of octets 322 can be sent immediately 340. Finally, octets 20 and upwards 350 cannot be transmitted 360. Octet 12 has to be acknowledged and the sliding window slid forward before octet 20 may be transmitted. Thus, TCP provides retransmission of lost data packets and flow control using this TCP sliding window 310. The sliding window 310 is actually the minimum of the congestion window and the window advertisement which is sent by the receiver.

FIG. 4 illustrates a TCP network system 400 wherein a receiver 410 provides acknowledgments 420 to the source 430 as well as receives data 440 from the source 430. The receiver 410 sends acknowledgment packets 420 that also include window advertisement data 450 for informing the source 430 of the capacity of the receiver 410 to handle incoming data 440. Thus, the receiver 410 can advertise a suitable window size 450 for flow control purposes. In practice, the window advertisement 450 specifies how many additional octets of data the receiver 410 is prepared to accept. The source 430 is supposed to adjust its sliding window according to this advertisement, unless the congestion window 460 maintained by the source 430 is smaller.

The second window, the congestion window 460, is used internally at the TCP source 430 for dropping the size of the sliding window. This occurs if a timer expires indicating that a data packet has been sent, but no acknowledgment has arrived within a certain time period. This means that the data packet has been lost, which is most probably caused by network congestion. In order not to make the congestion worse, the TCP source 430 drops its transmission rate by reducing the size of the sliding window. The relation of these windows can be expressed as:

Tw=MIN(window advertisement, congestion window),

where Tw refers to the transmission window, i.e., the sliding window.
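For illustration only (this sketch is not part of the patent text), the window relation above and the octet bookkeeping of FIG. 3 can be expressed in a few lines of Python; the variable names are illustrative choices:

```python
def transmission_window(window_advertisement: int, congestion_window: int) -> int:
    """Tw = MIN(window advertisement, congestion window)."""
    return min(window_advertisement, congestion_window)

# State from the FIG. 3 example: octets up to 11 are acknowledged, the
# sliding window covers octets 12-19, and octets 12-16 are already sent.
window_start, window_size, next_to_send = 12, 8, 17

sent_unacked = list(range(window_start, next_to_send))                # octets 12-16
sendable_now = list(range(next_to_send, window_start + window_size))  # octets 17-19
first_blocked = window_start + window_size                            # octet 20

print(transmission_window(8, 12))   # -> 8: the smaller of the two windows governs
print(sent_unacked, sendable_now, first_blocked)
# -> [12, 13, 14, 15, 16] [17, 18, 19] 20
```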

In principle, the congestion window 460 and feedback information included in the advertisement window 450 provided by the underlying network can be used for the same purpose, namely to adjust the transmission rate of the TCP source 430 according to the load and congestion of the network. However, one important difference between the congestion window 460 and feedback information included in the advertisement window 450 is that the congestion window 460 works on an end-to-end basis and is typically quite slow to react to changes due to relatively long timeouts. Nor can the congestion window 460 give any detailed information: the TCP source 430 simply knows that a packet has been discarded, which may not give an exact picture of the network condition. Feedback information included in the advertisement window 450, on the other hand, may be more accurate and may react faster to changing conditions.

An underlying network can use the receiver's window advertisements 450 carried in acknowledgment packets 420 for controlling the transmission speed of a TCP source 430. This may be accomplished by adding device or network functionality, herein referred to as a Feedback Information Converter (FIC).

Thus, TCP uses a sliding window protocol where the source 430 adjusts its window size based upon returning ACK packets 420 from the receiver 410. Hence the window's 460 growth rate will be related to the rate of these returning packets 420. Therefore, it is evident that by modifying the timing of the returning ACK stream 420, the growth of the source window 460 can be controlled. It is this fundamental principle upon which ACK pacing methods are based. Specifically, these methods appropriately delay returning ACK packets 420 in congested network elements, e.g., access nodes and IP routers, to limit excessive emissions by the source 430. When properly done, ACK pacing can reduce TCP timeouts, limit queue buildups, and thereby improve overall connection goodputs.

Due to the largely asymmetric nature of TCP traffic profiles, ACK pacing is really only required at the TCP source side 430. This is a noteworthy point, since it implies that the required ACK pacing functionality need only be limited to large web-servers/file-hosts. Hence no expensive upgrades are required for a much larger, diverse user access base. It should be mentioned, however, that ACK pacing assumes good rate control inside the network. This essentially abstracts the network to a fairly constant bandwidth, causing congestion to occur primarily at the access nodes.

Advanced ATM bearer capabilities, e.g., VBR-nrt, ABR and GFR, can realistically achieve these conditions. Furthermore, it is expected that emerging rate guarantees in high-speed IP routers will also yield conditions favorable towards ACK pacing.

However, as stated above, many of the current ACK pacing methods are not particularly amenable to implementation. For example, fast-TCP (F-TCP) requires knowledge of the underlying data “clearing” rate in the forward direction. This can be either the link capacity or, for the case of the ATM available bit rate (ABR) service category, the connection's allowed cell rate (ACR), etc. The computed delays for the ACK packets 420 are based upon this rate.

Clearly, such schemes require more advanced information processing methods and can be problematic if the underlying rate varies widely. Furthermore, delayed emission of ACK packets 420 by the remote TCP client 410 can compound the sensitivity issues and significantly degrade the performance of such schemes. Also, no explicit fairness provision can be provided by these schemes since they simply buffer returning ACK packets 420 in an aggregate manner, i.e., first-in-first-out (FIFO). Another ACK pacing method, the ACK bucket scheme, requires too much per-flow state, essentially “tracking” the windowing behaviors of each TCP flow.

FIG. 5 illustrates the enhanced ACK pacing device 500 according to the present invention. The ACK pacing device 500 relies on queue length information to infer congestion levels and does not require any additional (expensive) timer mechanisms. The ACK pacing device 500 illustrated in FIG. 5 is very generic and can be tailored to fit a wide range of networks.

In FIG. 5, an ACK control unit 510 is provided. The ACK control unit 510 controls the processing of ACK packets during both overload (i.e., congestion) and underload periods along with the operation of the ACK scheduler unit 520. The ACK control unit 510 relies on traffic measurements and data transmit notifications 522 from the underlying link-layer entity 528 and the data packet buffer 530. During congestion periods, returning ACK packets 532 are stored in the ACK buffers 534 using appropriate classification granularities (aggregate, per-class, per-flow) and gated out at an appropriately chosen rate. Specifically, the emission of ACK packets 532 during congested periods is performed so as to allow the buffers 534 to empty in reasonable time. When congestion subsides, the ACK emission rate is then increased to allow for improved bandwidth utilization. Note that the ACK control unit 510 activates the ACK scheduler unit 520 to emit ACK packets 532 in the buffers 534 in all cases.
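The congestion decision that drives this control loop can be sketched in Python. This is a minimal illustration assuming, per the two-threshold method of FIG. 7, that congestion is declared when the data packet buffer 530 queue length exceeds a high threshold QH and cleared when it falls below a low threshold QL; the class and parameter names are illustrative, not the patent's pseudocode:

```python
class CongestionMonitor:
    """Two-threshold (hysteresis) congestion test in the spirit of FIG. 7."""

    def __init__(self, q_low: int, q_high: int):
        self.q_low = q_low      # QL: below this, congestion has abated
        self.q_high = q_high    # QH: above this, congestion is occurring
        self.congested = False

    def update(self, queue_length: int) -> bool:
        # Hysteresis: the state flips only outside the [QL, QH] band,
        # which avoids oscillating on small queue-length fluctuations.
        if queue_length > self.q_high:
            self.congested = True
        elif queue_length < self.q_low:
            self.congested = False
        return self.congested

monitor = CongestionMonitor(q_low=10, q_high=50)
print(monitor.update(60))   # -> True  (queue grew past QH)
print(monitor.update(30))   # -> True  (inside the band: state is held)
print(monitor.update(5))    # -> False (queue drained below QL)
```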

Since TCP is an expansive protocol, it always attempts to increase its transmission quota barring any receiver window limitations. This means that for large (bulk) file transfers, the regular TCP protocol will repeatedly increase its window size, lose packets, and then slow down. As the volume of data in the system increases, so does the number of ACK packets 532.

This point has a very subtle implication for ACK pacing schemes. Namely, the data packet 540 growth in the regular TCP protocol will be “replaced” by ACK packet 532 growth. This is referred to as the ACK buffer “drift” phenomenon. The rate of this drift will be linear (i.e., fast) for the case of ACK pacing with TCP connections in slow start phases, and will be sub-linear for ACK pacing with TCP connections in congestion avoidance phases.

There are two possible methods to address this problem. The simpler approach is to provide ACK buffers 534 with sufficient capacity for the ACK packets 532 and employ drop-from-front strategies in the rare event of ACK buffer 534 exhaustion. Typically, ACK numbers in the front will most likely pertain to lower sequence numbers than those for arriving packets 540. This buffering approach is very reasonable, since ACK packets 532 are small (40 bytes), and most file transfers are not infinite. For example, an ACK buffer 534 of 64 kB of RAM can hold approximately 1,700 ACK packets 532, which is more than adequate for 155 Mb/s WAN links.
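A minimal sketch of such a bounded, drop-from-front ACK buffer follows; the capacity arithmetic uses the 64 kB and 40-byte figures above, and the class and method names are illustrative:

```python
from collections import deque

ACK_SIZE = 40                        # bytes per buffered TCP ACK packet
CAPACITY = (64 * 1024) // ACK_SIZE   # a 64 kB buffer holds roughly 1,600 ACKs

class AckBuffer:
    """Bounded FIFO ACK buffer with a drop-from-front overflow policy."""

    def __init__(self, capacity: int = CAPACITY):
        self.queue = deque()
        self.capacity = capacity

    def enqueue(self, ack) -> None:
        if len(self.queue) >= self.capacity:
            # Drop from the front: the oldest ACK carries the lowest
            # sequence number, which newer cumulative ACKs supersede.
            self.queue.popleft()
        self.queue.append(ack)

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

buf = AckBuffer(capacity=3)
for seq in (100, 200, 300, 400):
    buf.enqueue(seq)
print(list(buf.queue))   # -> [200, 300, 400]: the oldest ACK was dropped
```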

Another approach would be to track TCP sequence numbers using two variables, i.e., per-flow accounting for the last-in and last-out values. This approach can yield smaller memory requirements, but requires ACK number re-writing (i.e., checksum recomputations). Also, if ACK packets 532 arrive out of sequence, special considerations are necessary. Furthermore, it is likely that the other fields present in ACK packets 532, such as receiver window sizes and URG/RST flags, may also contain non-redundant information which can complicate matters further.

Thus, the ACK buffering approach is more feasible from an implementation perspective. The ACK buffering approach poses minimal additional constraints and does not tamper with any fields in the TCP packet 532.

As shown in FIG. 5, in the forward direction of data flow, the link layer entity 528 can be representative of a wide range of underlying technologies. Examples include dedicated links or ATM VC's, or IP flow classes. Furthermore, the link layer entity 528 can be either dedicated to a single TCP flow, e.g., ATM VC, etc., or be shared among a group of TCP flows (traffic aggregation). Similarly, the ACK pacing in the reverse direction can be done on different levels. For example, if per-flow queuing is done in the forward direction, then per-flow ACK pacing is also necessary in the reverse direction, i.e., per-flow data/per-flow ACK.

If, however, aggregate or class-based queuing is done in the forward direction, then it may be desirable to do likewise in the reverse direction (aggregate data/aggregate ACK, per-class data/per-class ACK). Others may decide to do simple, aggregate queuing in the forward direction, yet more advanced per-flow ACK buffering in the reverse direction. This approach improves fairness amongst flows aggregated onto the same link-layer entity 528, without requiring high-speed per-flow buffering and scheduling techniques in the forward direction. Although per-flow ACK accounting is still required for incoming ACK packets 532, it is restricted to the network edge, where the processing rate requirements are also significantly reduced since ACK packets 532 pertain to larger IP packet sizes. By choosing scheduler allocations, i.e., weights, inversely proportional to a flow's TCP maximum segment size (MSS), the bias against smaller-MSS flows can be lessened (to an extent).

With the ongoing standardization efforts for a differentiated services architecture, the latter philosophy fits in quite nicely. Namely, per-flow accounting/overhead is limited to the access parts of the network, i.e., where ACK pacing is done, reducing complexity within the backbone. Since most access nodes carry far fewer connections than backbone devices, this approach is very feasible in emerging networks.

In light of the above discussion, the ACK scheduler 520 can be specified fairly generically, borrowing from a variety of packet scheduling methods to improve fairness. For example, in the simplest form of aggregate (FIFO) ACK buffering, the scheduler 520 merely has to send the head-of-line (HOL) ACK. For more advanced per-class or per-flow ACK buffering strategies, a weighted-round-robin (WRR) or fair-queuing (FQ) scheduler can be implemented to "choose" the next suitable ACK for transmission.
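
To make the WRR option concrete, the following sketch shows one simplified, credit-based WRR selection over per-flow ACK queues, with weights set inversely proportional to each flow's MSS as suggested above. The class and names are hypothetical; the patent does not prescribe a particular WRR implementation:

    from collections import deque

    class WrrAckScheduler:
        """Credit-based WRR over per-flow ACK queues; weights ~ 1/MSS."""

        def __init__(self, flow_mss):
            # Weight each flow inversely to its MSS, normalized so the
            # largest-MSS flow gets weight 1.
            max_mss = max(flow_mss.values())
            self.weights = {f: max(1, round(max_mss / mss))
                            for f, mss in flow_mss.items()}
            self.queues = {f: deque() for f in flow_mss}
            self.credits = dict(self.weights)
            self.order = list(flow_mss)
            self.idx = 0

        def enqueue(self, flow, ack):
            self.queues[flow].append(ack)

        def next_ack(self):
            # Serve the current flow while it has credit and backlog;
            # otherwise advance, replenishing credits at each wrap-around.
            for _ in range(2 * len(self.order) + 1):
                f = self.order[self.idx]
                if self.queues[f] and self.credits[f] > 0:
                    self.credits[f] -= 1
                    return self.queues[f].popleft()
                self.idx = (self.idx + 1) % len(self.order)
                if self.idx == 0:
                    self.credits = dict(self.weights)
            return None  # all queues empty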

The ACK control unit 510 includes an ACK pacing processor 512 for controlling the pacing of ACK packets to the source. The ACK pacing processor 512 includes two main components: the data packet departure processor 514 and the ACK arrival processor 516. The ACK arrival processor 516 checks congestion levels and decides whether or not to hold incoming ACK packets. The data packet departure processor 514 monitors congestion levels via the link layer entity 528 and the data packet buffer 530 and decides when to "clock" out ACK packets 532 to the source.

FIG. 6 illustrates the pseudocode 600 for the TCP ACK arrival 610 and data departure 660 methods according to the present invention. The TCP ACK arrival method 610 is executed for every incoming TCP ACK packet, and the data departure method 660 is executed for every departing data packet.

In FIG. 6, it is assumed that queue objects exist for enqueuing/dequeuing ACK packets, and that a running count of the number of buffered ACK packets is kept, e.g., num_ACK 604. In case of link-layer congestion and/or a non-empty ACK buffer 612, an incoming ACK packet is stored in the buffer 614. The buffered ACK packets are kept in the buffer and can only be sent out appropriately by the data packet departure method 660.

The ACK buffering can be done on an aggregate basis or on a more selective per-class/per-flow basis, as discussed above with reference to FIG. 5. If the ACK packet arrives at an empty buffer and there is no congestion 640, it is simply forwarded onwards to the TCP source (i.e., transparent pass-through) 642. However, if this is the first ACK to be buffered 630, then in order to "jump-start" the ACK emission process, this ACK packet must be gated out after an appropriate interval 632.

To avoid any dependencies on expensive timer mechanisms, the emission of ACK packets should be associated with the underlying data packet departure process 660 in the link-layer entity. Namely, during congestion, ACK packets are sent after every α1 data packets have been emitted, where α1 is termed an (integral) slow-down factor.

From an implementation perspective, the above functionality can be achieved elegantly by using a simple counter variable, e.g., pkt_counter 644. For the first ACK packet, the counter variable value is set to α1 and then decremented per data packet departure according to the data departure process 660. When the counter variable value reaches zero, a buffered ACK packet is released and the counter is reset.
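
A minimal sketch of this arrival-side logic, assuming Python, an aggregate FIFO, a hypothetical link object exposing congested() and send_to_source(), and an illustrative ALPHA1 constant (all names are assumptions, not from the patent), might read:

    from collections import deque
    from types import SimpleNamespace

    ALPHA1 = 4  # illustrative (integral) slow-down factor

    # Pacing state: the aggregate ACK FIFO and the emission counter.
    state = SimpleNamespace(ack_queue=deque(), pkt_counter=0, congested=False)

    def on_ack_arrival(state, link, ack):
        """Hold or pass through an incoming ACK (arrival side of FIG. 6)."""
        if link.congested() or state.ack_queue:
            state.ack_queue.append(ack)        # buffer the ACK
            if len(state.ack_queue) == 1:
                # First buffered ACK: arm pkt_counter to jump-start the
                # departure-driven emission process.
                state.pkt_counter = ALPHA1
        else:
            link.send_to_source(ack)           # transparent pass-through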

Contrary to some expectations, a given value of α1 does not imply a TCP source slow-down of equivalent magnitude. Here, the issue is complicated by the many features of the TCP protocol, such as the slow-start/congestion-avoidance phases, the "ACK-every-other" property, delayed ACK timers, etc. For example, in the idealized case of infinite sources sending full-sized segments, with the ubiquitous "ACK-every-other" feature enabled, it can be shown that a value of α1>3 is required to throttle a TCP source. Alternatively, if the TCP source's end-system behaviors are unknown, then very large values of α1 can be used to "guarantee" queue length control. In other words, such values essentially inhibit all ACK emissions until congestion subsides (i.e., on/off type control), but usually give increased queue oscillations.

Note that in order to present a generic, more flexible specification, the pseudocode in FIG. 6 does not explicitly specify the congestion detection method. Specifically, the congestion_status( ) routine 620 simply returns a boolean value indicating whether or not the link layer entity is congested. Clearly, a whole variety of congestion indication mechanisms can be used here. Some examples include queue lengths, averaged queue lengths, input rate overload measurements, and data loss rates. Preferably, however, the queue length should be used in order to limit implementation complexity.

Pseudocode for a sample method 700 using two hysteresis queue thresholds, QL 710 and QH 712, is illustrated in FIG. 7. In FIG. 7, the congestion status is checked and a binary flag is returned 702. The hysteresis queue thresholds QL 710 and QH 712 are used 720. If congestion exists and the queue length is less than QL 722, then the congestion abatement status change is stored by setting the flag to a first state 724, i.e., congested_flag=OFF. Alternatively, if congestion does not exist and the queue length is greater than QH 730, the congestion onset status change is stored by setting the flag to a second state 732, i.e., congested_flag=ON. The state of the binary flag is then returned 740.
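
A sketch of this two-threshold check, reusing the state object from the arrival sketch above (its congested flag) and with purely illustrative threshold values (the patent leaves QL and QH to be sized per deployment), might be:

    Q_LOW, Q_HIGH = 50, 200   # hysteresis thresholds in packets (illustrative)

    def congestion_status(state, queue_len):
        """Binary congestion flag with two-threshold hysteresis (cf. FIG. 7)."""
        if state.congested and queue_len < Q_LOW:
            state.congested = False     # congestion abatement
        elif not state.congested and queue_len > Q_HIGH:
            state.congested = True      # congestion onset
        return state.congested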

Results show that if these thresholds 710, 712 are appropriately sized based upon the round-trip delays between the sources and access nodes (i.e., access network delays), near loss-less performance can be achieved. Since such delays are usually many times smaller than the end-to-end delays observed in WAN networks, sizeable reductions in the buffering requirements are possible with ACK pacing schemes according to the present invention.

Referring again to FIG. 6, the data packet departure method 660 is executed whenever a packet departs the link layer. The goal is to release stored ACK packets in a timely fashion, thereby properly controlling the congestion (queue) levels at the access node's link-layer buffer, i.e., minimizing packet losses. The method first checks whether there are any buffered ACK packets awaiting transmission 662 and whether the ACK emission counter, i.e., pkt_counter, has reached zero 664. If this is the case, a buffered ACK packet is released to the source 670.

After this, if congestion still exists 680, the inter-ACK packet spacing is maintained at one ACK per α1 data packets by resetting pkt_counter to α1 682. This allows the data buffer in the data link layer to drain further. If congestion has abated 684, however, then the inter-ACK spacing is reduced to α2 data packets 686, allowing sources to send faster. The α2 parameter is termed an integral speed-up factor, and necessarily α1 > α2. If the counter is non-zero 688, then it is simply decremented 690. However, to prevent bandwidth under-utilization after congestion periods, if the counter is larger than α2, it is simply reset to α2 (i.e., especially for larger α1 values).
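
Reusing the state and link objects from the arrival sketch above, the departure-side logic might be sketched as follows (again, hypothetical names and an illustrative ALPHA2, not a definitive implementation):

    ALPHA2 = 2  # illustrative speed-up factor; necessarily ALPHA1 > ALPHA2

    def on_data_departure(state, link):
        """Release buffered ACKs, clocked by data-packet departures (cf. FIG. 6)."""
        if not state.ack_queue:
            return
        if state.pkt_counter == 0:
            link.send_to_source(state.ack_queue.popleft())
            # Re-arm: wide spacing while congested, tight spacing otherwise.
            state.pkt_counter = ALPHA1 if link.congested() else ALPHA2
        else:
            if not link.congested() and state.pkt_counter > ALPHA2:
                # Congestion has abated: clamp a large alpha1-based leftover
                # so the link is not under-utilized.
                state.pkt_counter = ALPHA2
            else:
                state.pkt_counter -= 1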

Again, due to complications arising from TCP specifics, an α2=1 value does not imply that TCP source rates will (approximately) equal the underlying link entity's rate. More specifically, for idealized conditions with the “ACK-every-other” feature, a value of α2=2 performs better.

Referring again to FIG. 5, note that the eligible ACK packets 532 are chosen based upon the queuing strategy used by the ACK scheduler 520. This overall mechanism does not require any expensive timer mechanisms to release stored ACK packets 532 as required in prior methods.

FIG. 8 illustrates the pseudocode 800 for the enhanced TCP ACK arrival 810 and ATM cell departure 860 methods according to the present invention. In the TCP ACK arrival process 810, a determination is made as to whether the link-layer entity is congested or the ACK buffer is non-empty 812. If so 814, the incoming ACK packet is stored in the appropriate queue 816 (FIFO, per-class, or per-flow): it is placed at the tail of the respective ACK queue and the ACK count is incremented 818. Next, a check is made to determine whether this is the first ACK packet buffered 820. If it is 822, the cell counter is set to α1*packet_cells, i.e., armed with the larger spacing 824. If, on the other hand, there is no congestion and the ACK buffer is empty 826, the ACK packet is simply forwarded to the TCP source 828.

In the ATM cell departure process 860, a determination is made as to whether the ACK buffer is non-empty 862, i.e., whether there are ACK packets to send. If there are ACK packets to send 864, the scheduler determines the next eligible ACK packet 870. The next eligible ACK packet is dequeued from the head of the eligible ACK queue 872 and sent to the TCP source 874. The ACK count is decremented 876 and the cell counter is reset appropriately 880. If congestion exists 882, the cell counter is set to α1*packet_cells to increase the spacing 884. Otherwise 886, the cell counter is set to α2*packet_cells to reduce the spacing 888.

If the cell counter is not zero 890, a determination is made as to whether congestion has abated 892. If congestion has abated and the cell counter value is greater than α2*packet cells 894, then the cell counter is set to equal α2*packet cells 895. Otherwise 896, the cell counter is decremented 898.
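
Carrying the same hypothetical names over, a cell-clocked variant of the departure logic might be sketched as follows, with num_ack and cell_counter fields assumed on the state object, the scheduler from the earlier WRR sketch, and non-integral α factors as permitted here (see equation (1) below for packet_cells):

    ALPHA1_C, ALPHA2_C = 3.0, 1.5   # cell-based factors; need not be integers
    PACKET_CELLS = 33               # cells per MSS-sized packet; equation (1)

    def on_cell_departure(state, link, scheduler):
        """Release buffered ACKs, clocked by ATM cell departures (cf. FIG. 8)."""
        if state.num_ack > 0 and state.cell_counter == 0:
            ack = scheduler.next_ack()          # FIFO, WRR, or FQ choice
            link.send_to_source(ack)
            state.num_ack -= 1
            spacing = ALPHA1_C if link.congested() else ALPHA2_C
            state.cell_counter = int(spacing * PACKET_CELLS)
        elif state.cell_counter > 0:
            if not link.congested() and state.cell_counter > ALPHA2_C * PACKET_CELLS:
                state.cell_counter = int(ALPHA2_C * PACKET_CELLS)
            else:
                state.cell_counter -= 1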

While the above packet handling methods are very generic, even more flexibility exists for the case of ATM networks, which use smaller packet (cell) sizes. Specifically, it is possible to perform ACK emission per fractional data-packet emission while still circumventing the use of any expensive timer mechanisms, i.e., the counter is now in terms of cells, not packets (the cell counter). Since a cell is typically much smaller than a TCP MSS-sized packet, ACK packets can now be emitted with finer-grained time granularity. Namely, the α1 and α2 factors no longer have to be integers, as is the case in packet-based schemes. Consider a constant value, packet_cells, namely the number of cells needed to carry an MSS-sized TCP/IP packet:

    packet_cells = ⌈(TCP_MSS + 40) / 48⌉ + 1.    (1)
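
As a quick check of equation (1): for a typical TCP_MSS of 1,460 bytes, packet_cells = ⌈(1460 + 40)/48⌉ + 1 = 32 + 1 = 33 cells, so with α1 = 3 a buffered ACK would be released after every 99 cell departures during congestion.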

With reference again to FIG. 5, during congestion ACK packets 532 are emitted after every α1·packet_cells cell departures and, during underload, after every α2·packet_cells cell departures. For the most part, the ACK arrival and cell departure methods are identical to their packet-based counterparts. For example, after every cell emission, the cell counter is decremented, and when it reaches zero, an ACK packet 532 in buffer 534 is released by the ACK scheduler 520. During periods when the ACK buffer 534 is empty, the counter value is reset appropriately.

In summary, the performance of the TCP protocol over ATM networks is an important area. Recently, various ACK pacing schemes have been proposed to improve TCP's interaction with the more advanced underlying ATM transport categories (i.e., ABR flow control, per-connection queuing). However, these schemes suffer from parameter sensitivity issues and may be difficult to realize in practice. Accordingly, an enhanced ACK pacing device has been disclosed that is capable of performing in a wide range of network scenarios. The scheme uses (more direct) queue-length congestion information to delay TCP ACK packets and can implement a wide range of fairness criteria. The method provides a robust means of improving end-to-end TCP throughput and bandwidth fairness. The buffering requirements in the access nodes are also significantly reduced for a wide range of subnetworks.

The foregoing description of the exemplary embodiment of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not with this detailed description, but rather by the claims appended hereto.
