WO2001067264A1 - Apparatus and method for predictable and differentiated delivery of multimedia streaming on the internet - Google Patents

Apparatus and method for predictable and differentiated delivery of multimedia streaming on the internet

Info

Publication number
WO2001067264A1
Authority
WO
WIPO (PCT)
Prior art keywords
packets
congestion
streaming
computing device
network
Application number
PCT/US2001/040264
Other languages
French (fr)
Inventor
Pratyush Moghe
Original Assignee
Streamcenter, Inc.
Application filed by Streamcenter, Inc. filed Critical Streamcenter, Inc.
Priority to AU2001251715A1
Publication of WO2001067264A1


Classifications

    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/263 Flow control; Congestion control using explicit feedback to the source, with rate modification at the source after receiving feedback
    • H04L 47/29 Flow control; Congestion control using a combination of thresholds
    • H04L 47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L 65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for unicast
    • H04L 65/80 Responding to QoS
    • Y02D 30/50 Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the invention detects the end of congestion through probes launched by the player process 20. Once the player process 20 confirms that congestion is over, the server process 16 enters a phase called "recovery scheduling".
  • Figure 5 shows recovery scheduling after congestion. To understand this phase, recall that in the previous state of congestion, the server transmits packets at a constant rate. By substituting new packets with older lost packets, the server is in effect creating a backlog of new packets in its output buffer. The idea in recovery scheduling is to exhaust the substantial backlog that may build up at the server output queue. Clearly, this backlog must not be cleared by flooding the network. Such action may cause significant losses in the intermediate switch buffers.
  • Recovery scheduling is a procedure that configures the rate of streaming the backlog based on the network parameters. Additionally, recovery scheduling can also take into account the "premium" placed on the particular player process 20. The latter consideration allows the server process 16 to stream the backlog of streaming packets at different priorities. "Higher value" users can receive backlog sooner than "lower value" users. This is the basic principle behind differentiated streaming.
  • This configuration information is downloaded to the computing devices 14 and 18 at the beginning of each streaming session.
  • This configuration information includes the following parameters:
  • Averaged and minimum service rate: the rate at which streaming packets 70 are actually transferred from the server process 16 to the player process 20; the minimum service rate is the minimum rate across the path via the Internet 22 from the server process 16 to the player process 20.
  • the parameters indicated in the list above may be obtained through historical measurements of the relevant values and through measurements made on streaming packets 70 at the computing device 14 of the server process 16. The parameters are in turn used to design the sizes of the buffers 24, 26, 28 and 29 of the server process 16 and the player process 20. One preferred embodiment of achieving this is described immediately below. That embodiment ignores the upper-bound limits on the buffer sizes related to the available random-access memory 34 (Figure 2) in the computing devices 14 and 18.
  • During sustained congestion, the player buffer 29 on the computing device 18 used by the streaming player process 20 plays out the streaming packets 70 at the maximum rate of e_r bits per second, the packet loss is approximately P_loss, and the maximum transmission rate from the server is e_r. The player buffer 29 therefore fills at a rate of e_r*(1 - P_loss) bits per second, and the net drain on the player buffer 29 proceeds at a rate of e_r*P_loss bits per second. For playback to remain uninterrupted over a sustained congestion period of worst-case duration x_c (the configured expected duration of sustained congestion), the size b of the player buffer 29 should satisfy the constraint b >= e_r * P_loss * x_c (Equation 1). A numeric sketch of this sizing rule follows this list.
  • Figure 6 shows the approximate amount of backlog 80 as a function of time 82 during sustained congestion, including the backlog accumulated after two retransmission intervals 84.
  • the backup buffer 26 (Figure 1) holds temporary copies of the server packets that have been transmitted to the player process 20 (Figure 1). The copies must be held until the server is sure that the packets have been successfully received in the player buffer 29 (Figure 1). To estimate the size of this backup buffer 26 (Figure 1), an operational argument is again used: the transmission rate in packets per second is defined as R_p, so the approximate rate of loss of streaming packets 70 (Figure 3) is P_loss*R_p. Assuming the loss affects new as well as retransmitted streaming packets 70 (Figure 3) equally, the approximate rate of loss for older (retransmitted) packets is the same, and the size constraint of the backup buffer 26 (Figure 1) follows from this loss rate and the interval over which transmitted copies must be retained.
  • Figures 7 and 8 show the detailed operational sequences at the streaming player process 20 (Figure 3) and the streaming server process 16 (Figure 3), respectively, during normal streaming of the streaming packets 70 (Figure 3).
  • the operational sequences are described with the help of state diagrams.
  • a state diagram shows various states and transitions between states. Each state captures a specific operational state, and the transitions between states capture the events that cause the operational states to change.
  • Figure 7 shows the state diagram of events at the streaming player process 20 (Figure 1).
  • Figure 8 shows the corresponding state diagram at the streaming server process 16.
  • the streaming player process 20 is in a normal state 100.
  • the player process 20 may keep track of the packet loss of a received stream at a periodic interval that may be user determined. If the amount of loss exceeds a threshold, e.g., Threshold1, across a time-window of some number of seconds which may also be user determined, e.g., TW1, the player process 20 moves into the Likely Congestion state 102.
  • the streaming player process 20 (Figure 1) launches congestion probes to determine whether the congestion symptoms are verified. These probes form a part of a test called the confirm congestion test. If the test indicates that congestion is confirmed, the process moves to a Congestion Confirmed state 104. Immediately, the streaming player process 20 contacts its server counterpart and informs it about the new state change along with some other relevant information.
  • the player process 20 may actually receive a Remote Congestion Confirmed report in state 106 from its server process 16 (Figure 1) indicating that some other player process 20 on a related network path is in the Congestion Confirmed state. If this happens, the player immediately moves to the Congestion Confirmed state 104 itself. If the player process 20 has already started conducting congestion probes before this notification, the player process 20 may halt them at the instant it receives the notification. Alternatively, the player process 20 may receive a Local Congestion Confirmed report in state 102 reporting that some other local player process 20 on the same streaming player module has verified congestion. If the Remote or Local Congestion Confirmed reports have verified congestion within a past time-window not exceeding the time defined as Tcc, the player process 20 moves to a confirmed congestion state 104.
  • Once the streaming player process 20 (Figure 1) enters the Congestion Confirmed state 104, it periodically monitors the loss of the received stream of streaming packets 70 (Figure 3). If the packet loss is less than a second user-defined threshold, e.g., Threshold2, across a second user-defined time-window, e.g., TW2 seconds, the player process 20 enters the Likely End-of-Congestion (EOC) state 106. At this point, the player process 20 launches an EOC probe test that either confirms or invalidates the EOC decision. If the EOC probe test confirms the end of congestion, the streaming player process 20 enters the EOC state 108 and sends an EOC report 110 to its server process 16 (Figure 1) counterpart. With this transition, the process re-enters the normal state 100. Note that during transitions from the EOC state 108 back to the Normal state 100, the streaming player process 20 also needs to continually process the arriving packets in step 120 (Figure 7) and check for and request retransmission of missing packets in step 122 (Figure 7).
  • Figure 10 shows the SCP state diagram of the server process 16.
  • the server process 16 enters the normal state 130. If the server process 16 receives a report from its streaming player process 20 (Figure 1) indicating congestion, the server moves to the Confirmed Congestion state 132. At this point, the server process 16 shares the Confirmed Congestion status information with other server processes 16 running on the same computing device 14 (Figure 1) in the Propagate Confirmed Congestion state 134, and potentially also with other computing devices 14 (Figure 1) through interaction with the service manager 12 (Figure 1) via the network 22 (Figure 1). The idea behind this is to have selected server processes report the congestion to their player processes so that these player processes can avoid performing the Confirmed Congestion probing test.
  • each server process that receives the Confirmed Congestion status in the Receive Confirmed Congestion status state 140 from other server processes 16 first determines whether the Confirmed Congestion status is relevant to its target player process 20 (Figure 1). If this is the case, the server processes notify their player process 20 (Figure 1). Notification is done through a special signaling scheme involving Real-time Transport Protocol (RTP) messages, as discussed in Schulzrinne, Casner, Frederick, Jacobson, "RTP: A Transport Protocol for Real-Time Applications", RFC 1889, Internet Engineering Task Force.
  • the streaming server process 16 will then make transitions from the Confirmed Congestion State 132.
  • the server process 16 transmits streaming packets 70 ( Figure 3) at a constant bit-rate.
  • the server process 16 moves to the Recovery Scheduling state 136 as soon as it receives an End-of-Congestion (EOC) report from the player process 20 (Figure 1) targeted for the reception of its content.
  • the server process 16 changes its mode of transmission, using recovery scheduling to drain its backlogged output buffer 24 ( Figure 1 ) in an intelligent way.
  • the server process 16 moves back to the normal state 130.
  • the server process 16 may receive and filter information from other server processes 16 by entering the Receive Confirmed Congestion State 140. It may also notify the player processes 20 ( Figure 1) about the confirmed congestion status through the in-band RTP signaling, by entering the Confirmed Congestion Notification state 138 from where the process 16 will return to the Normal state 130.
  • the server process 16 also needs to continually process the departing packets in step 150 (Figure 8), check and process the requests for retransmission of missing packets in step 152 (Figure 8), and back up packets to the backup buffer 26 (Figure 1) for playback. These activities are indicated in Figure 8 and are not shown in Figure 10 for purposes of clarity.
  • In Equation 4, all the terms are measured at the service bottleneck hop between the computing device 14 executing the server process 16 and the computing device 18 executing the player process 20.
  • B is the bottleneck link capacity at the bottleneck hop.
  • λ_other is the aggregate packet rate carried at the hop from traffic not utilizing the inventive method. This rate can be estimated by obtaining the aggregate packet rate of the streams that do use the inventive method and using the fact that λ_other is approximately the utilized portion of the bottleneck capacity, ρ*B, minus that aggregate rate, where ρ is the utilization of the bottleneck hop.
  • the parameter n_i is defined as the number of priority "i" streams carried at the bottleneck hop.
  • the hop is a section of a network between two network-computing devices such as routers. This parameter may be obtained from the service manager 12 ( Figure 1).
  • the logic behind this equation is as follows. The first term indicates that the new transmission rate should be set equal to the minimum service rate along the path, in order to recover from the backlog. The second term takes into account the result of increasing the transmission rate on the utilization at the bottleneck hop.
  • each computing device 14 executing server process 16 must carefully weigh the effects of increasing the transmission rate.
  • the best rate is the maximum rate that does not congest the bottleneck bandwidth during recovery. Hence the min operator.
  • the priorities within Equation 4 provide for differentiated streaming. An illustrative sketch of this rate selection follows this list.
  • congestion estimation tests as part of the SCP algorithm.
  • congestion can be estimated in a variety of ways, and the invention is not restricted to any one particular method of congestion estimation.
  • a particular embodiment of the invention for estimating congestion based on a mix of passive and active probes will now be described.
  • This method comprises a set of rules configured around the results of the probes.
  • a specific congestion test, e.g., an end-of-congestion test or a confirmed congestion test mentioned in earlier sections, combines these rules in a particular way.
  • the "Service Rate probe” computes the approximate service rate of transferring packets between the server and the player, as discussed in Van Jacobson, "pathchar - A tool to infer characteristics of Internet paths", MSRI, April 21, 1997; and R. L. Carter and M.E. Crovella, “Measuring Bottleneck Link Speed in Packet- switched Networks", TR- 96-006, Boston University Computer Science Department, March 15, 1996.
  • the minimum service rate is the least service rate on the path, while the average service rate is the "expected" service rate on the path.
  • the steps of this exemplary method are as follows:
  • Periodically, the player initiates a cycle. In each cycle, the player sends n-1 queries, each spaced by a fixed interval. The player forms timestamp differences between query responses from consecutive hops. The results are sent back to the server.
  • the server uses results from several cycles along with the link capacities to calculate the service rate.
  • b) Denote the maximum queuing delay between node m and node m-1 by Δmax(m,m-1), where a node is a computing device, e.g., a computer or a router connected to the network. c) This queuing delay is estimated from the difference between the measured timestamp difference and the minimum timestamp difference Tmin(m,m-1) observed over the cycles. d) Denote the average queuing delay between node m and node m-1 by Δavg(m,m-1). A sketch of this per-hop delay computation follows this list.
  • a robust but responsive congestion detection mechanism is an essential part of the inventive method. It is important to respond quickly to congestion; this can reduce the congestion period and hence make it easier for the recovery scheduling to do its job in a shorter period of time. Ultimately this can allow the method to work with a reduced buffer size and hence a reduced connect lag. It is equally important to robustly distinguish a sustained congestion condition. False detection can lead to reduced throughput and performance. Recall that the inventive method treats sustained congestion by reducing the throughput of the newer packets, so that the throughput of older retransmitted packets is maximized.
  • the present invention uses the following congestion detection rule:
  • 1. Packet losses exceeding a threshold for a consecutive time-window of duration T_s. T_s is set to 10 seconds by default. 2. If condition 1 is triggered, three short service rate probes are sent, spaced 2 seconds apart, from the player along the path, and the responses are averaged; from these, the player process 20 (Figure 1) computes the expected utilization of the bottleneck link capacity B. 3. If condition 1 is satisfied and the utilization from condition 2 is more than a specified utilization threshold, e.g., 95%, the player process 20 declares sustained congestion (Confirmed Congestion) and asks the server to enter the constant bit-rate emulation mode. A sketch combining these rules with the player states follows this list.
  • An alternative method may consist of a simple rule where, if packet losses exceed a threshold for N_1 consecutive sampling time-windows, the player process 20 (Figure 1) declares sustained congestion and asks the server to enter the constant bit-rate emulation mode. By default, N_1 is set to 1, and the sampling time-window is set equal to 1 second.
  • the end of congestion rule is analogous to the congestion detection rule: 1. No packet losses for a consecutive time-window of duration T_e. T_e is set equal to 10 seconds by default. 2. If condition 1 is triggered, three short service rate probes are sent, spaced 2 seconds apart, from the player along the path, and the responses are averaged to form the average service rate; from this, the player computes the expected utilization of the bottleneck link capacity B. 3. If condition 1 is satisfied and the utilization from condition 2 is less than a specified threshold, e.g., 90%, the player declares sustained congestion to be over and asks the server to transition to the state of recovery scheduling.
  • An alternative method may consist of a simple rule where, if in a state of sustained congestion no packet losses are manifest for N_2 consecutive sampling time-windows, the player declares sustained congestion to be over and asks the server to transition to the state of recovery scheduling.
  • N_2 may be set equal to five, and the sampling time-window is set equal to 1 second.
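To make the player buffer sizing concrete, the following Python sketch evaluates the reconstructed form of Equation 1 (player buffer size b >= e_r * P_loss * x_c); the function name and the example numbers are illustrative, not taken from the patent.

```python
def player_buffer_bits(encoding_rate_bps: float,
                       worst_case_loss: float,
                       congestion_duration_s: float) -> float:
    """Player buffer size in bits needed to ride out sustained congestion.

    During congestion the player buffer drains at roughly
    encoding_rate * loss, so it must hold that much data for the expected
    congestion duration (reconstructed Equation 1: b >= e_r * P_loss * x_c).
    """
    return encoding_rate_bps * worst_case_loss * congestion_duration_s


# Example: a 300 kbit/s stream with 20% worst-case loss and 60 s of
# sustained congestion needs about 3.6 Mbit (roughly 440 kB) of player buffer.
if __name__ == "__main__":
    bits = player_buffer_bits(300_000, 0.20, 60)
    print(f"required player buffer: {bits / 8 / 1024:.0f} kB")
```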
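Equation 4 itself is not reproduced in this extract, but its stated logic (recover at the minimum service rate along the path, capped so that the bottleneck hop is not pushed back into congestion, with spare capacity shared among the priority streams) can be read as the following sketch. The formula below is an illustrative interpretation of that description, not the patent's exact equation, and all names are hypothetical.

```python
def recovery_rate(min_service_rate: float,
                  bottleneck_capacity: float,
                  other_traffic_rate: float,
                  priority_weight: float,
                  total_priority_weight: float) -> float:
    """Illustrative recovery-scheduling rate; not the patent's Equation 4.

    The spare capacity at the bottleneck hop (capacity minus traffic that
    does not use the method) is shared among the inventive-method streams in
    proportion to their priority weights, and the result is capped by the
    minimum service rate along the path, hence the min operator.
    """
    spare = max(bottleneck_capacity - other_traffic_rate, 0.0)
    fair_share = spare * priority_weight / total_priority_weight
    return min(min_service_rate, fair_share)
```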
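The per-hop queuing-delay estimates used by the service rate probe can be derived from the timestamp differences collected over several probe cycles. The sketch below assumes that layout of the probe results; the function name and data structure are hypothetical.

```python
def queuing_delay_stats(cycles: list[list[float]]) -> list[dict]:
    """Per-hop queuing delay estimates from probe cycles.

    cycles[k][m] is the timestamp difference between the responses of node
    m+1 and node m observed in cycle k. The minimum difference over all
    cycles is taken as the no-queuing baseline; the excess over that
    baseline estimates the queuing delay on the hop.
    """
    n_hops = len(cycles[0])
    stats = []
    for m in range(n_hops):
        samples = [cycle[m] for cycle in cycles]
        baseline = min(samples)
        excess = [s - baseline for s in samples]
        stats.append({
            "max_queuing_delay": max(excess),
            "avg_queuing_delay": sum(excess) / len(excess),
        })
    return stats
```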
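Finally, the congestion detection and end-of-congestion rules combine naturally with the player states described earlier into a small state machine. This is a hedged sketch: the state names follow the text, the 95% and 90% utilization thresholds and the loss time-windows are the defaults quoted above, and the input flags are assumed to come from the player's loss monitoring and probe machinery.

```python
from enum import Enum, auto
from typing import Optional


class PlayerState(Enum):
    NORMAL = auto()
    LIKELY_CONGESTION = auto()
    CONGESTION_CONFIRMED = auto()
    LIKELY_EOC = auto()


def next_state(state: PlayerState,
               loss_above_threshold: bool,
               no_loss_window: bool,
               probed_utilization: Optional[float]) -> PlayerState:
    """One evaluation step of the player-side congestion processing (illustrative).

    loss_above_threshold: losses exceeded Threshold1 over time-window TW1.
    no_loss_window: losses stayed below Threshold2 over time-window TW2.
    probed_utilization: bottleneck utilization from the service rate probes,
    or None if no probe result is available yet.
    """
    if state is PlayerState.NORMAL and loss_above_threshold:
        return PlayerState.LIKELY_CONGESTION        # launch confirm-congestion probes
    if state is PlayerState.LIKELY_CONGESTION and probed_utilization is not None:
        if probed_utilization > 0.95:               # confirm congestion test
            return PlayerState.CONGESTION_CONFIRMED # report to server: CBR emulation
        return PlayerState.NORMAL
    if state is PlayerState.CONGESTION_CONFIRMED and no_loss_window:
        return PlayerState.LIKELY_EOC               # launch end-of-congestion probes
    if state is PlayerState.LIKELY_EOC and probed_utilization is not None:
        if probed_utilization < 0.90:               # end-of-congestion test
            return PlayerState.NORMAL               # report to server: recovery scheduling
        return PlayerState.CONGESTION_CONFIRMED
    return state
```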

Abstract

The inventive method detects network congestion which interferes with the streaming of multimedia audio and video packets, reacts to it, and guarantees delivery of the packets. The streaming of the multimedia audio and video packets is performed when a server process executing on a sending computing device sends the packets to a player process executing on a receiving computing device via a network, such as the Internet. The detection of the network congestion is performed by communicating configuration information including network parameters from a service manager via the network to the at least one sending computing device and to the at least one receiving computing device. The network congestion (80) is then confirmed by transmitting and retracting network probes. At least one buffer management technique is executed to limit degradation in the streaming of the packets. The end of the network congestion is detected by transmitting and retracting said network probes, and the rate of streaming of the packets is configured accordingly (82, 84).

Description

APPARATUS AND METHOD FOR PREDICTABLE AND DIFFERENTIATED DELIVERY OF MULTIMEDIA STREAMING ON THE INTERNET
TECHNICAL FIELD This invention relates to the field of multimedia audio and video streaming on the
Internet, and more particularly, to an apparatus and method for predictable and differentiated delivery of multimedia streaming on the Internet.
BACKGROUND OF THE INVENTION Multimedia streaming involves broadcast of audio and video information from content providers called streaming servers or senders via the Internet to clients or receivers on user computing devices called streaming players. Real-time delivery of the multimedia streaming on the Internet is inherently uncontrollable and consequently unpredictable. When an Internet delivery path is congested, packets of multimedia streaming are typically dropped, thus causing degraded audio and video playback to be received by the streaming players. Typically, manifested forms of congestion can be classified into two types: instantaneous and sustained (longer-term). Instantaneous congestion is caused by random variations in the arrival patterns of packets of multimedia streaming at different Internet routers and switches, and due to the varying sizes of these packets.
These variations cause varying amounts of delay as the packets of multimedia streaming travel from the streaming server to the streaming receiver. If the delay accumulates beyond a certain point, the packets of multimedia streaming can get lost. This type of congestion typically causes sporadic delays and losses of over hundreds of milliseconds to a few seconds, and is hence referred to as "instantaneous congestion." In contrast, sustained congestion is longer-term and results during periods of excessive demand by the streaming players, e.g., demand for multimedia streaming from popular websites over the network paths on the Internet. During these periods, packet traffic surges can overwhelm the packet buffers in network routers, which are network traffic routing and forwarding computing devices, and packets of multimedia streaming can get lost in bursts. This is similar to a rush-hour traffic scenario on highways at peak hours.
Unlike instantaneous congestion, sustained congestion can last from tens of seconds up to a few minutes.
It is well known that instantaneous congestion is an easy problem to solve. The client or the streaming player can buffer incoming packets of multimedia streaming for a short duration before playback. This reduces the effect of short-term delay between packets of multimedia streaming. Additionally, by requesting that a lost packet be immediately retransmitted, the streaming player can counter the loss of packets of multimedia streaming due to instantaneous congestion. Several existing streaming systems use these techniques, i.e., short-term buffering along with retransmits of lost packets, to isolate the streaming player from the effects of instantaneous congestion.
Examples of these systems include the Robust-User Datagram Protocol (UDP) feature in the Real system from Real Networks and the NetShow 3.0 MediaPlayer from Microsoft. The problem of sustained congestion is harder to solve. This difficulty relates to the fact that the Internet delivery lacks a uniform real-time control. Several standardization and research efforts have proposed ways to introduce such control; however, none have succeeded in creating a global Internet standard. The referred to standardization and research efforts include the following:
1. L. Kleinrock, "On Flow Control in Computer Networks", International Conference on Communications (ICC78), pages 27.2.1 -27.2.5;
2. D.E. Comer, R. S. Yavatkar, "A Rate-based Congestion Avoidance and Control Scheme for Packet Switched Networks", The 10th International Conference on Distributed Computing Systems, pages 390-397, 1990;
3. K.K. Ramakrishnan, R. Jain, "A Binary Feedback Scheme for Congestion Avoidance in Computer Networks", ACM Transactions on Computer Systems, Vol. 8, No. 2,
May 1990, Pages 158-181; and
4. D. Mitra, "Asymptotically Optimal Design of Congestion Control for High Speed Data Networks", IEEE Transactions on Communications, Vol. 40, No. 2, February 1992.
For example, the RSVP effort, described in Braden, R., Zhang, L., Berson, S., Herzog, S., and Jamin, S., "Resource Reservation Protocol (RSVP) - Version 1 Functional Specification", RFC 2205, Proposed Standard, September 1997, seeks to add "quality of service" control within each network router and switch within the Internet. Another effort that seeks to create fast paths on the Internet is the "differentiated service" effort within the Internet Engineering Task Force (IETF) described in the following publications:
1) Nichols, K., Jacobson, V. and Zhang, L., "A Two-bit Differentiated Services Architecture for the Internet", Internet Draft, December 1997;
2) Ferguson, P., "Simple Differential Services: IP TOS and Precedence, Delay Indication, and Drop Preference", Internet Draft, November 1997; and 3) Clark, D. and Wroclawski, J., "An Approach to Service Allocation in the Internet", Internet Draft, July 1997.
The problem with these approaches is that they require structural changes throughout the Internet. This requirement assumes an explicit cooperation between various managers of the equipment comprising the Internet, which is an impractical assumption. Consequently, these efforts have stalled or are expected to have limited success. An alternative approach is to offer "adaptive applications". This approach proposes that application content be adapted based on the level of congestion. Methods based on this approach degrade the "experience" of an application in a controlled manner, but do not attempt to guarantee predictability or isolation from congestion effects. This approach is discussed in I. Busse, B. Deffner, H. Schulzrinne, "Dynamic QoS Control of Multimedia Applications based on RTP", May 30, 1995, and Real Networks, "White Paper on SureStream Technology", available at http://www.real.com/devzone/library/whitepapers/surestrm.html.
What is needed is a "delivery framework" that will isolate streaming players from the effect of sustained congestion by providing a "predictable" delivery. Furthermore, there is also a need for a "differentiated streaming" framework that will provide for multiple levels of delivery isolation to the streaming players, without requiring structural changes in the Internet, as in K. Nichols et al., P. Ferguson and D. Clark, discussed above. SUMMARY OF THE INVENTION The object of the present invention is to achieve a particular level of delivery isolation for particular streaming players. This level of delivery isolation may be based on factors like an importance "value" associated with particular streaming players, or the price the users of the streaming players are charged, etc. As an example, the streaming server may place specific streaming players in the high "value" category based on the demographic information on their owners, while other streaming players may be placed in the low "value" category.
Another object of the present invention is to ensure that the high value users of streaming players always receive consistent, predictable delivery of streaming content, even if there is an excessive user demand. If there is a situation of sustained congestion, wherein not all users can be satisfied, the content provider might want to ensure that the high value users are promised predictable delivery.
Accordingly, the present invention provides a method and an apparatus for predictable and differentiated streaming on the Internet. The system consists of software and random-access memory-based buffers installed at the streaming server and at the streaming players along with a service manager at a separate location. The method prescribes the manner in which the system is used to achieve predictable and differentiated delivery of streaming multimedia packets. The invention is based on streaming multimedia packets between the software on the streaming server and the software on the streaming player. The buffer sizes in the computing devices of the streaming player and of the streaming server are initialized based on the configuration information provided by the service manager software at the start of the execution of the inventive method. This configuration information includes: a) the expected duration of sustained congestion periods, b) the worst-case packet loss rate during the sustained congestion period, c) the encoded bit-rate of the streaming source information, etc.
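As an illustration only, the configuration record downloaded from the service manager could be represented as a small structure such as the following Python sketch; the field names are hypothetical and simply mirror the parameters listed above.

```python
from dataclasses import dataclass


@dataclass
class StreamingConfig:
    """Session configuration downloaded from the service manager.

    Field names are illustrative; they mirror the parameters listed above.
    """
    congestion_duration_s: float   # expected duration of sustained congestion periods
    worst_case_loss: float         # worst-case packet loss rate during congestion
    encoding_rate_bps: float       # encoded bit-rate of the streaming source
    avg_service_rate_bps: float    # averaged service rate along the delivery path
    min_service_rate_bps: float    # minimum service rate along the delivery path
```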
During the delivery of the streaming multimedia packets, the software on the streaming server relays the streaming packets to the software on the streaming player, which stores the packets in a playout buffer before playing them out. The playout buffer takes care of delay variation that may arise due to instantaneous congestion on the network path. In case of short-term losses caused by instantaneous congestion, the software on the streaming player immediately requests the server-end software to retransmit the lost information.
Predictable Delivery Method
To deal with sustained congestion ("congestion") the invention incorporates the following method. The software on the streaming player uses estimation techniques based on probes to detect and confirm the onset of sustained congestion. Once the congestion status is verified, the streaming player software informs the streaming server software about this event. The streaming server software then modifies the manner in which it streams packets to the streaming player software. In particular, the streaming server software ensures that the transmission rate of packets never exceeds the encoded bit-rate of the streaming source. Furthermore, it gives preemptive priority to the retransmitted packets over new packets. This creates a backlog of new packets at one of the streaming server buffers.
Meanwhile, the streaming player software continuously monitors the status of received packets and detects the end of congestion through estimation techniques based on probes. Once the end of congestion is confirmed, the streaming player software informs the streaming server software about this event. The streaming server software then begins to stream its backlog of new packets to the streaming player software. The transmission rate of this stream is increased based on the estimation information from the streaming player software along with feedback obtained from other streaming players in the network. This rate is carefully selected to ensure that the network is not flooded back into congestion. The buffer sizing at the streaming player and at the streaming server is based on the configuration information and ensures that the user experiences an uninterrupted playback during sustained congestion. This assumes that the configuration information is the "worst-case" scenario. If the actual service parameters turn out to be worse than the configuration information, the streaming player software contacts the service manager after the service. The configuration information at the service manager is now updated in a persistent manner with the actual worst-case parameters. In one preferred embodiment, this update happens as follows. At the next service invocation, the buffers are sized based on the updated configuration information. The method thus automatically learns the worst-case behavior from the actual behavior of the network each time. Eventually, the method can guarantee a predictable delivery based on a non-interrupted playout in most cases. Various other embodiments are also possible, wherein the buffer size is fixed, but the encoding bit-rate of the stream is modified to choose the largest encoding bit-rate that leads to an uninterrupted playback.
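The learning step described above amounts to widening the stored worst-case parameters whenever a session observes conditions worse than configured. A minimal sketch, assuming the StreamingConfig structure from the previous sketch and hypothetical field names:

```python
def update_worst_case(config: StreamingConfig,
                      observed_loss: float,
                      observed_congestion_s: float) -> StreamingConfig:
    """Widen the stored worst-case parameters after a session.

    The next session's buffers are then sized from these updated values,
    so the system gradually learns the network's actual worst-case behavior.
    """
    config.worst_case_loss = max(config.worst_case_loss, observed_loss)
    config.congestion_duration_s = max(config.congestion_duration_s,
                                       observed_congestion_s)
    return config
```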
Differentiated Streaming Method The method for differentiated streaming involves the same operations as the predictable streaming method, except for the mechanism for streaming the backlog of packets from the streaming server software. For differentiated streaming, the streaming server software transmits the stream based on the "value" associated with the particular user, in addition to the information used for predictable streaming.
Accounting Delivery Service
Additionally, the methodology of the present invention is utilized to offer the inventive method as an "accountable delivery service" to a large number of content providers, i.e., streaming servers, and users, i.e., streaming players. Such a service may deliver performance contract-based services to users of the streaming players on the Internet.
The service manager plays a central role in this service, by keeping and exchanging the state of several streaming players, delivery paths, and content providers/streaming servers in its database. The method of the present invention tracks the "actual" delivered viewing/listening experience of users of streaming players by computing the delivery statistics including but not restricted to metrics that quantify "pauses or interruptions", "content throughput", and "content loss". In addition to the features of predictability and differentiation, the delivery service can also answer the question of "service feasibility". That is, at the beginning of a session, the service manager can estimate, based on the historical and instantaneous information, whether a particular user can receive an "expected quality experience." While the present invention does not describe the details of this service, it should be obvious to those skilled in the art to build such a service around the framework of the present invention.
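The delivery statistics mentioned above (pauses or interruptions, content throughput, content loss) could be computed from a per-session playback log along the following lines; the inputs and metric definitions are assumptions made for illustration, not the patent's accounting method.

```python
def delivery_stats(bytes_played: int,
                   bytes_expected: int,
                   pause_durations_s: list[float],
                   session_duration_s: float) -> dict:
    """Summarize the delivered viewing/listening experience of one session."""
    return {
        "pauses": len(pause_durations_s),
        "paused_time_s": sum(pause_durations_s),
        "content_throughput_bps": 8 * bytes_played / session_duration_s,
        "content_loss": (1 - bytes_played / bytes_expected) if bytes_expected else 0.0,
    }
```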
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other objects and advantages of the invention will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which the reference characters refer to like parts throughout and in which: Figure 1 is a schematic diagram of the devices and programs comprising the invention.
Figure 2 is a schematic diagram of the components of the devices utilized by the invention.
Figure 3 is a schematic diagram of the operation of the invention during normal streaming.
Figure 4 is a schematic diagram of the operation of the invention during congestion.
Figure 5 is a schematic diagram of the recovery scheduling operation of the invention after congestion. Figure 6 is a diagram of the modeling of the backlog operation of the invention during congestion.
Figure 7 is a state transition diagram for streaming player module (normal streaming).
Figure 8 is a state transition diagram for streaming server module (normal streaming).
Figure 9 is a diagram of a congestion processing protocol at the streaming player module (during and after congestion).
Figure 10 is a diagram of a congestion processing protocol at the streaming server module (during and after congestion). DETAILED DESCRIPTION OF THE INVENTION Architecture
The invention, shown in Figure 1, comprises at least one computing device 10 for executing a service manager process 12, at least one computing device 14 for executing a streaming server process 16, and at least one computing device 18 for executing a streaming player process 20. The service manager process 12 is shown as residing on the computing device 10 only for the clarity of presentation. It may be obvious to those skilled in the art that the service manager process 12 may reside on any of the computing devices 14 and 18. Furthermore, server processes 16 may be present on the computing devices 18 alongside player processes 20, and vice versa the player processes 20 may be present on the computing devices 14 alongside server processes 16. Each of the computing devices 10, 14 and 18 is connected to a network 22 and establishes data paths amongst the others. In the preferred embodiment, the network is the Internet.
The streaming server process 16 transmits a single multimedia stream to each streaming player process 20 and the streaming player process 20 receives that single multimedia stream. The streaming server process consists of three buffers: a) Output buffer 24, used to transmit new packets from the server process 16 to the player process 20; b) Backup buffer 26 , for maintaining temporary copies of the packets transmitted to the player process 20; and c) Retransmit buffer 28, for maintaining retransmit requests received from the player process 20. The streaming player process 20 may comprise a player buffer 29. Before a streaming session begins, the Service Manager process 12 downloads configuration information via the network 22 to both the computing device 14 for the use of the streaming server process 16 and to the computing device 18 for the use of the streaming player process 20. At the end of each streaming session, the configuration information on the computing device 10 is updated for the future use of the Service Manager process 12 via the network 22 by both the streaming server process 16 on the computing device 14 and the streaming player process 20 on the computing device 18. The computing devices 10, 14, and 18 may take the configuration of any computer ranging from mainframes and personal computers (PCs) to digital telephones and hand held devices, e.g., palm pilots™. In one illustrative embodiment of this invention shown in Figure 2, such computing devices may comprise a bus 30, which is connected directly to each of the following:
1. a central processing unit (CPU) 32;
2. a memory 34;
3. a system clock 36;
4. a peripheral interface 38 ; 5. a video interface 40;
6. an input/output (I/O) interface 42;
7. a communications interface 44; and
8. a multimedia interface 46.
The common bus 30 is further connected 1. by the video interface 40 to a display 50;
2. by the I/O interface 42 to a storage device 52, which may illustratively take the form of memory gates, disks, diskettes, compact disks (CD), digital video disks (DVD), etc.;
3. by the multimedia interface 46 to any multimedia component 56;
4. by peripheral interface 38 to the peripherals 58, such as the keyboard, the mouse, navigational buttons, e.g., on a digital phone, a touch screen, and/or writing screen on full size and hand held devices, e.g., a palm pilot™;
5. by the communications interface 44, e.g., a plurality of modems, to a network connection 60, e.g., an Internet Service Provider (ISP) and to other services, which is in turn connected to the network 22, whereby a data path is provided between the network 22 and the computing devices 10, 14, and 18 (Figure 1) and, in particular, the common bus 30 of these computing devices; and
6. furthermore, by the communications interface 44 to the wired and/or the wireless telephone system 54.

Normal Conditions
Turning now to Figure 3, the normal streaming operation is shown when there are either zero losses, i.e., no losses, or sporadic streaming losses, i.e., loss occurring due to instantaneous bursts of streaming packets. The overall streaming sequence works as follows. Packets streamed from the streaming server process 16 are queued up in the streaming server output buffer 24 for transmission. The server process 16 is responsible for transmitting streaming packets 70 to the computing device 18 executing the player process 20. Periodically, this process 16 checks the output buffer 24 and the retransmit buffer 28 to determine if there are streaming packets 70 awaiting transmission or any requests from the player process. Any remaining streaming packets in the output buffer 24 are then transmitted. If retransmission requests are found, any streaming packets 70 requested for retransmission and found in the backup buffer 26 are transmitted. When streaming packets 70 are transmitted, a temporary backup copy of the transmitted streaming packets 70 is kept in the backup buffer. The streaming player process 20 receives the streaming packets 70 and stores them at the tail-end of the player buffer 29. Periodically, after a user-defined interval, the player process 20 examines the received streaming packets 70 to see if any streaming packets 70 are missing. If packets are missing, the player process 20 sends a request 72 to the server process 16 to retransmit the missing packets. After assuring that all the streaming packets 70 are received, the received streaming packets 70 are forwarded from the head-end of the player buffer to the streaming player process 20.
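By way of a non-limiting illustration, the buffer interplay just described can be summarized in a short sketch. The class names, method names and callbacks below (ServerProcess, PlayerProcess, send, request_retransmit, play) are assumptions introduced only for illustration and are not a definition of the invention's interfaces; the sketch merely mirrors the output/backup/retransmit/player-buffer flow under normal conditions.

```python
from collections import deque

class ServerProcess:
    """Sketch of the server side of normal streaming (names are illustrative)."""
    def __init__(self):
        self.output_buffer = deque()      # new packets awaiting transmission (buffer 24)
        self.backup_buffer = {}           # temporary copies of transmitted packets (buffer 26)
        self.retransmit_buffer = deque()  # retransmit requests received from the player (buffer 28)

    def service_interval(self, send):
        # Serve any pending retransmit requests from the backup copies.
        while self.retransmit_buffer:
            seq = self.retransmit_buffer.popleft()
            if seq in self.backup_buffer:
                send(seq, self.backup_buffer[seq])
        # Transmit any new packets waiting in the output buffer, keeping a
        # temporary backup copy of each transmitted packet.
        while self.output_buffer:
            seq, payload = self.output_buffer.popleft()
            self.backup_buffer[seq] = payload
            send(seq, payload)

class PlayerProcess:
    """Sketch of the player side of normal streaming (names are illustrative)."""
    def __init__(self):
        self.player_buffer = {}   # received packets keyed by sequence number (buffer 29)
        self.next_expected = 0    # head-end of the player buffer

    def receive(self, seq, payload):
        self.player_buffer[seq] = payload              # store at the tail-end

    def check_and_request(self, highest_seq_seen, request_retransmit):
        # Periodically scan for gaps and request retransmission of missing packets.
        for seq in range(self.next_expected, highest_seq_seen + 1):
            if seq not in self.player_buffer:
                request_retransmit(seq)

    def forward_ready(self, play):
        # Forward contiguous packets from the head-end of the buffer for playback.
        while self.next_expected in self.player_buffer:
            play(self.player_buffer.pop(self.next_expected))
            self.next_expected += 1
```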
Congested Conditions
The invention detects congestion early on, and then may take steps so that the impact of congestion on the stream is minimized. A correctly configured framework can keep sustained congestion completely transparent to a stream, thus providing an effect equivalent to a "reservation." Two principles are at the basis of this framework: detection of and reaction to congestion, and recovery from congestion.
1. Detecting and reacting to congestion

The invention detects congestion through probes launched by the player process 20. The probes are discussed in detail later. Once congestion is confirmed, the server process 16 reacts by using a technique called "constant bit-rate emulation." This name derives from the fact that the server process 16 tries to maintain a constant upper bound on the rate at which the streaming packets 70 are streamed to the player process 20. Typically, the upper bound is set equal to the encoding bit rate of the stream. In effect, the server process 16 emulates a constant bit-rate by substituting new server packets with packets that have been requested 72 for retransmission. Recall that in congestion, the server has a large number of retransmit requests 72 issued from the player process 20. Figure 4 shows the flow of streaming packets 70 during this congestion situation. Two important reasons motivate the constant bit-rate emulation method: a) The method demonstrates a "network-friendly" yet efficient behavior. The method is network-friendly, since it conserves bandwidth during congestion by lowering the upper bound of its streaming transmission rate. Additionally, the method is efficient for the following reasons. First, the upper bound on the transmission rate is set equal to the maximum encoding bit-rate of the source stream; this is not a restrictive setting. Unlike other schemes that voluntarily reduce the source rate during congestion, the present invention only reduces the transmission rate by the amount of incurred loss. This decision has an important implication in practice. Most practical congestion-processing algorithms will incur errors and delays in estimating and reacting to the start of congestion or the end of congestion. Consequently, there will be temporary periods in which these algorithms "think" that the network is congested, when in practice the network has cleared out of congestion. In situations like this, estimation errors do not degrade the streaming throughput. In fact, the invention delivers a throughput better than or equal to the throughput of any other scheme, if there is no loss of streaming packets 70. b) Assuming most connections sharing the network with streaming are using the Transmission Control Protocol (TCP), the constant bit-rate of the invention in congestion allows the TCP traffic on the same path to accurately sense the available bandwidth (most Internet or Web traffic uses TCP). An alternative design with an adapting bit-rate would confuse most other TCP connections and cause each of them to keep oscillating its TCP window. If all TCP connections maintain a steady window size, the performance of the network is much smoother in congestion. This behavior can drain out the congestion faster and reduce the estimation errors.
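The constant bit-rate emulation idea can likewise be rendered as a short, non-limiting sketch. The per-interval packet budget and the function signature below are illustrative assumptions; the point of the sketch is only that retransmissions are substituted for new packets within a fixed budget derived from the encoding bit-rate, so the upper bound on the transmission rate does not rise during congestion.

```python
def cbr_emulation_interval(encoding_rate_bps, interval_s, packet_size_bits,
                           retransmit_queue, output_queue, send):
    """Transmit for one interval with the rate bounded by the encoding bit-rate.

    Retransmissions of lost packets are served first; new packets only fill the
    remaining budget, so the aggregate rate never exceeds the upper bound even
    while losses are being repaired.  (Sketch only: the queue objects and the
    send() callback are illustrative assumptions.)
    """
    budget = int(encoding_rate_bps * interval_s // packet_size_bits)  # packets this interval
    sent = 0
    # Substitute new packets with packets that were requested for retransmission.
    while retransmit_queue and sent < budget:
        send(retransmit_queue.pop(0))
        sent += 1
    # Remaining budget goes to new packets; anything left over backlogs in the
    # output queue and is drained later by recovery scheduling.
    while output_queue and sent < budget:
        send(output_queue.pop(0))
        sent += 1
    return sent
```

Holding the bound fixed at the encoding bit-rate, rather than adapting it downward, is what lets competing TCP connections see a steady background load during congestion.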
2. Detecting End-of-Congestion and Recovery from congestion
The invention detects the end of congestion through probes launched by the player process 20. Once the player process 20 confirms that congestion is over, the server process 16 enters a phase called "recovery scheduling". Figure 5 shows recovery scheduling after congestion. To understand this phase, recall that in the previous state of congestion, the server transmits packets at a constant rate. By substituting new packets with older lost packets, the server is in effect creating a backlog of new packets in its output buffer. The idea in recovery scheduling is to exhaust the substantial backlog that may build up at the server output queue. Clearly, this backlog must not be cleared by flooding the network. Such action may cause significant losses in the intermediate switch buffers. In addition, if thousands of streams flood their backlogged packets 70 into the network at the same time, certain network points may get congested again. Recovery scheduling is a procedure that configures the rate of streaming the backlog based on the network parameters. Additionally, recovery scheduling can also take into account the "premium" placed on the particular player process 20. The latter consideration allows the server process 16 to stream the backlog of streaming packets at different priorities. "Higher value" users can receive backlog sooner than "lower value" users. This is the basic principle behind differentiated streaming.
Buffer Design

As described previously, the configuration information is downloaded to the computing devices 14 and 18 at the beginning of each streaming session. This configuration information includes the following parameters:

(1) Worst-case duration of congestion, c*x, where x is a sampling or retransmission interval, and c is a number of sampling intervals in a congestion period.

(2) Worst-case loss of streaming packets 70 during congestion, defined by P_loss.

(3) Encoding bit-rate of the stream of streaming packets 70, defined by e_r.

(4) Averaged and minimum service rate, SR_avg and SR_min. This is the rate at which streaming packets 70 are actually transferred from the server process 16 to the player process 20; SR_min is the minimum rate across the path via the Internet 22 from the server process 16 to the player process 20.

(5) Path bottleneck link bandwidth B_min and the hop link capacities, where B_min is the minimum physical bandwidth across all the hops, i.e., the routing devices between the server process 16 and the player process 20.

The parameters indicated in the list above may be obtained through historical measurements of the relevant values and through measurements made on streaming packets 70 at the computing device 14 of the server process 16. The parameters are in turn used to design the sizes of the buffers 24, 26, 28 and 29 of the server process 16 and the player process 20. One preferred embodiment of achieving this is described immediately below. That embodiment ignores the upper-bound limits on the buffer sizes related to the available random-access memory 34 (Figure 2) in the computing devices 14 and 18.
Player Buffer-size (b_c)

The buffer on the computing device 18 used by the streaming player process 20 plays out the streaming packets 70 at the maximum rate of e_r bits per second. During sustained congestion, the packet loss is approximately P_loss, and the maximum transmission rate from the server, as explained above, is e_r. It follows that the player buffer 29 fills at the rate of e_r*(1 - P_loss) bits per second. Thus, the net drain on the player buffer 29 proceeds at the rate of e_r*P_loss bits per second. To prevent the player buffer 29 from draining out completely within the congestion period c*x, the buffer-size of the player buffer 29 should satisfy the constraint defined as:

b_c >= c*x*e_r*P_loss (Equation 1)
Streaming Server Output Buffer-size (b_o)

Recall that the streaming server output buffer holds the backlog of new packets created during the congestion period. Consequently, the size of this buffer is dictated by the worst-case amount of backlog during the congestion period. To calculate the worst-case backlog of the streaming packets 70, a simple model may be built up. Figure 6 shows the approximate amount of backlog as a function of time 82 during sustained congestion. As indicated, the backlog 80 after two retransmission intervals 84 is 2*x*e_r*P_loss. By extension, after c retransmission intervals 84, the backlog 80 will be c*x*e_r*P_loss. This establishes the constraint on the size of the output buffer 24 (Figure 1) in the memory 34 (Figure 2) of the computing device 14 (Figure 1) executing the streaming server process 16 (Figure 1):

b_o >= c*x*e_r*P_loss (Equation 2)
Server Backup Buffer-size (b_backup)

The backup buffer 26 (Figure 1) holds temporary copies of the server packets that have been transmitted to the player process 20 (Figure 1). The copies must be held until the server is sure that the packets have been successfully received in the player buffer 29 (Figure 1). To estimate the size of this backup buffer 26 (Figure 1), again an operational model is built. First, the transmission rate in packets per second is defined as R_p. At the start of the first retransmit 84, the approximate rate of loss of the streaming packets 70 (Figure 3) is P_loss*R_p. Assuming the loss affects new as well as retransmitted streaming packets 70 (Figure 3) equally, the approximate rate of loss for older unsuccessful streaming packets 70 (Figure 3) at the second retransmit 84 is P_loss^2*R_p. Suppose now that the number of retransmits that will reduce the loss rate for older packets to a negligibly small number, e.g., 0.1 packets for every retransmission interval, is defined by m. This implies that:

P_loss^m * R_p <= 0.1/x, or that m >= log(0.1/(R_p*x)) / log(P_loss)

Alternatively, if s is the packet size, then R_p = e_r/s and the above relation may be expressed as:

m >= log(0.1*s/(e_r*x)) / log(P_loss)

The minimum required buffer size may be defined as equal to m*x. Consequently, the size constraint of the backup buffer 26 (Figure 1) is:

b_backup >= m*x >= x * log(0.1*s/(e_r*x)) / log(P_loss) (Equation 3)
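Taken together, Equations 1 through 3 allow the three buffer sizes to be derived directly from the downloaded configuration parameters. The following helper is an illustrative sketch of that arithmetic only; the function name, the rounding of m, and the choice of units follow the text and are otherwise assumptions.

```python
import math

def design_buffer_sizes(c, x, e_r, p_loss, s):
    """Sketch of the buffer-size constraints of Equations 1-3.

    c      : number of sampling intervals in a congestion period
    x      : sampling/retransmission interval, in seconds
    e_r    : encoding bit-rate of the stream, in bits per second
    p_loss : worst-case packet loss fraction during congestion
    s      : packet size, in bits
    """
    # Equation 1: the player buffer must absorb a net drain of e_r*p_loss bits/s for c*x seconds.
    b_player = c * x * e_r * p_loss
    # Equation 2: the server output buffer must hold the worst-case backlog of new packets.
    b_output = c * x * e_r * p_loss
    # Equation 3: m retransmission intervals reduce the residual loss rate to about
    # 0.1 packets per interval; the backup buffer must cover m such intervals.
    m = math.log(0.1 * s / (e_r * x)) / math.log(p_loss)
    b_backup = math.ceil(m) * x   # expressed, as in the text, as m*x
    return b_player, b_output, b_backup
```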
The detailed operational method underlying the invention is captured through a protocol called "Streaming Congestion Protocol" (SCP). In the sections below, the SCP sequences during different phases of streaming are described. Figures 7 and 8 show the detailed operational sequences at the streaming player process 20 (Figure 3) and the streaming server process 16 (Figure 3) during normal streaming of the streaming packets 70 (Figure 3). The operational sequences are described with the help of state diagrams. A state diagram shows various states and transitions between states. Each state captures a specific operational state, and the transitions between states capture the events that cause the operational states to change. Figure 7 shows the state diagram of events at the streaming player process 20 (Figure 1). Figure 8 shows the corresponding state diagram at the streaming server process 16.
Processing during and after congestion
Please refer to Figure 9. By default, the streaming player process 20 is in a normal state 100. The player process 20 may keep track of the packet loss of a received stream at a periodic interval that may be user determined. If the amount of loss exceeds a threshold, e.g., Threshold1, across a time-window of some number of seconds which may also be user determined, e.g., TW1, the player process 20 moves into the Likely Congestion state 102. At this point, the streaming player process 20 (Figure 1) launches congestion probes to determine whether the congestion symptoms are verified. These probes form a part of a test called the confirm congestion test. If the test indicates that congestion is confirmed, the process moves to a Congestion Confirmed state 104. Immediately, the streaming player process 20 contacts its server counterpart and informs it about the new state change along with some other relevant information.
Note that the transition from the Likely Congestion state 102 to the Congestion Confirmed state 104 can happen in two other ways. First, the player process 20 may actually receive a Remote Congestion Confirmed report in state 106 from its server process 16 (Figure 1) indicating that some other player process 20 on a related network path is in the Congestion Confirmed state. If this happens, the player immediately moves to the Congestion Confirmed state 104 itself. If the player process 20 has already started conducting congestion probes before this notification, the player process 20 may halt them at the instant it receives the notification. Alternatively, the player process 20 may receive a Local Congestion Confirmed report in state 102 reporting that some other local player process 20 on the same streaming player module has verified congestion. If the Remote or Local Congestion Confirmed reports have verified congestion within a past time-window not exceeding the time defined as Tec, the player process 20 moves to the Congestion Confirmed state 104.
Once the streaming player process 20 (Figure 1) enters the Congestion Confirmed state 104, it periodically monitors the loss of the received stream of streaming packets 70 (Figure 3). If the packet loss is less than a second user-defined threshold, e.g., Threshold2, across a second user-defined time-window, e.g., TW2 seconds, the player process 20 enters the likely end-of-congestion (EOC) state 106. At this point, the player process 20 launches an EOC probe test that either confirms or invalidates the EOC decision. If the EOC probe test confirms the end of congestion, the streaming player process 20 enters the EOC state 108 and sends an EOC report 110 to its server process 16 (Figure 1) counterpart. With this transition, the process re-enters the normal state 100. Note that during transitions from the EOC state 110 back to the Normal state 100, the streaming player process 20 also needs to continually process the arriving packets in step 120 (Figure 7), check and request retransmission of missing packets in step 122 (Figure 7), and forward packets to the streaming player for playback in step 124 (Figure 7). These activities are indicated in Figure 7 and are not shown in Figure 9 for purposes of clarity.
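One non-limiting way to render the player-side transitions of Figure 9 in code is sketched below. The state names follow the description above, while the dictionary keys, probe callbacks and the notify_server() hook are illustrative assumptions rather than defined interfaces.

```python
from enum import Enum, auto

class PlayerState(Enum):
    NORMAL = auto()
    LIKELY_CONGESTION = auto()
    CONGESTION_CONFIRMED = auto()
    LIKELY_EOC = auto()

def player_scp_step(state, loss_rate, thresholds, probes, notify_server):
    """One sampling step of the player-side SCP transitions (sketch).

    thresholds: {'threshold1': ..., 'threshold2': ...}  user-defined loss thresholds
    probes:     {'cc_report_recent': fn, 'confirm_congestion': fn, 'confirm_eoc': fn}
    """
    if state is PlayerState.NORMAL:
        if loss_rate > thresholds['threshold1']:
            state = PlayerState.LIKELY_CONGESTION
    elif state is PlayerState.LIKELY_CONGESTION:
        # A recent Remote/Local Congestion Confirmed report, or a positive
        # confirm-congestion probe test, moves the player to Congestion Confirmed.
        if probes['cc_report_recent']() or probes['confirm_congestion']():
            state = PlayerState.CONGESTION_CONFIRMED
            notify_server('congestion_confirmed')
    elif state is PlayerState.CONGESTION_CONFIRMED:
        if loss_rate < thresholds['threshold2']:
            state = PlayerState.LIKELY_EOC
    elif state is PlayerState.LIKELY_EOC:
        if probes['confirm_eoc']():
            notify_server('end_of_congestion')
            state = PlayerState.NORMAL
        else:
            state = PlayerState.CONGESTION_CONFIRMED
    return state
```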
Figure 10 shows the SCP state-diagram of the server process 16. After initialization, the server process 16 enters the normal state 130. If the server process 16 receives a report from its streaming player process 20 (Figure 1) indicating congestion, the server moves to the Confirmed Congestion state 132. At this point, the server process 16 shares the Confirmed Congestion status information with other server processes 16 running on the same computing device 14 (Figure 1) in the Propagate Confirmed Congestion state 134, and potentially also with server processes on other computing devices 14 (Figure 1) through interaction with the service manager 12 (Figure 1) via the network 22 (Figure 1). The idea behind this is to have selected server processes report the congestion to their player processes so that these player processes can avoid doing the Confirmed Congestion probing test. This cuts down on the congestion reaction time and significantly cuts down the probing traffic. In general, each server process that receives the Confirmed Congestion status in the Receive Confirmed Congestion status state 140 from other server processes 16 first determines whether the Confirmed Congestion status is relevant to its target player process 20 (Figure 1). If this is the case, the server processes notify their player process 20 (Figure 1). Notification is done through a special signaling scheme involving Real-time Transport Protocol (RTP) messages, as discussed in Schulzrinne, Casner, Frederick, Jacobson, "RTP: A Transport Protocol for Real-Time Applications", RFC 1889, Internet Engineering Task Force.
The streaming server process 16 will then make transitions from the Confirmed Congestion state 132. Starting with the Confirmed Congestion state 132, the server process 16 transmits streaming packets 70 (Figure 3) at a constant bit-rate. The server process 16 moves to the Recovery Scheduling state 136 as soon as it receives an End-of-Congestion (EOC) report from the player process 20 (Figure 1) targeted for the reception of its content. Thereafter the server process 16 changes its mode of transmission, using recovery scheduling to drain its backlogged output buffer 24 (Figure 1) in an intelligent way. As soon as the backlog is over, the server process 16 moves back to the normal state 130. While in the Normal state 130 the server process 16 may receive and filter information from other server processes 16 by entering the Receive Confirmed Congestion state 140. It may also notify the player processes 20 (Figure 1) about the confirmed congestion status through the in-band RTP signaling, by entering the Confirmed Congestion Notification state 138, from where the process 16 will return to the Normal state 130.
The server process 16 also needs to continually process the departing packets in step 150 (Figure 8), check and process the requests for retransmission of missing packets in step 152 (Figure 8), and copy transmitted packets to the backup buffer 26 (Figure 1) for possible retransmission. These activities are indicated in Figure 8 and are not shown in Figure 10 for purposes of clarity.
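A corresponding, non-limiting sketch of the server-side transitions of Figure 10 follows; the event strings and the two callbacks are illustrative assumptions rather than defined interfaces.

```python
from enum import Enum, auto

class ServerState(Enum):
    NORMAL = auto()
    CONGESTION_CONFIRMED = auto()
    RECOVERY_SCHEDULING = auto()

def server_scp_step(state, event, backlog_empty, propagate_cc, notify_players):
    """One event-driven step of the server-side SCP transitions (sketch)."""
    if state is ServerState.NORMAL and event == 'player_congestion_confirmed':
        # Share the Confirmed Congestion status with co-located server processes
        # (and, via the service manager, with other machines), then notify
        # relevant players in-band so they can skip their own probing tests.
        propagate_cc()
        notify_players()
        state = ServerState.CONGESTION_CONFIRMED     # transmit at a constant bit-rate
    elif state is ServerState.CONGESTION_CONFIRMED and event == 'player_eoc_report':
        state = ServerState.RECOVERY_SCHEDULING      # drain the backlogged output buffer
    elif state is ServerState.RECOVERY_SCHEDULING and backlog_empty:
        state = ServerState.NORMAL
    return state
```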
Recovery scheduling algorithm
Suppose that the streams of streaming packets 70 (Figure 3) from the server process 16 (Figure 1) are classified into K priority classes, class 1 being the highest priority and class "K" being the lowest priority. Each priority class "j" is associated with a weighting function a_j, where 0 < a_K <= a_{K-1} <= ... <= a_2 <= a_1 = 1. Considering the server process 16 (Figure 1) with priority equal to "j", the rule for the transmission rate e_j of this process is:

e_j = min( a_j * SR_min , a_j * (B - λ_other) / Σ_{i=1..K} a_i * n_i ) (Equation 4)

In Equation 4, all the terms are measured at the service bottleneck hop between the computing device 14 executing the server process 16 and the computing device 18 executing the player process 20. B is the bottleneck link capacity at the bottleneck hop. λ_other is the aggregate packet rate carried at the hop from traffic not utilizing the inventive method. This rate can be estimated by obtaining the aggregate packet rate of the streaming packets 70 (Figure 3) of the inventive method, λ_SME, from the service manager, and using the fact that λ_other = ρ*B - λ_SME, where ρ = 1 - SR_avg/B is the utilization of the bottleneck hop. The parameter n_i is defined as the number of priority "i" streams carried at the bottleneck hop. The hop is a section of a network between two network-computing devices such as routers. This parameter may be obtained from the service manager 12 (Figure 1). The logic behind this equation is as follows. The first term indicates that the new transmission rate should be set equal to the minimum service rate along the path, in order to recover from the backlog. The second term takes into account the effect of increasing the transmission rate on the utilization at the bottleneck hop. In other words, since the maximum utilization can be 100%, each computing device 14 executing a server process 16 must carefully weigh the effects of increasing the transmission rate. The best rate is the maximum rate that does not congest the bottleneck bandwidth during recovery; hence the min operator. Finally, the priorities within Equation 4 provide for differentiated streaming.
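For illustration, the rate rule of Equation 4 may be computed as in the following sketch; the 1-indexed weight and stream-count lists are an assumed convention, not part of the invention's definitions.

```python
def recovery_rate(j, a, sr_min, b, lam_other, n):
    """Sketch of the recovery-scheduling transmission rate of Equation 4.

    j         : priority class of this server process (1 = highest, K = lowest)
    a         : weights indexed 1..K with 0 < a[K] <= ... <= a[1] = 1 (a[0] unused)
    sr_min    : minimum service rate along the path
    b         : bottleneck link capacity at the bottleneck hop
    lam_other : aggregate rate at the bottleneck hop from non-participating traffic
    n         : n[i] = number of priority-i streams at the bottleneck hop (n[0] unused)
    """
    k = len(a) - 1
    weighted_streams = sum(a[i] * n[i] for i in range(1, k + 1))
    share = (b - lam_other) / weighted_streams
    # The min operator keeps the recovery rate from congesting the bottleneck.
    return min(a[j] * sr_min, a[j] * share)
```

Lower-priority classes receive a smaller weight a_j and therefore drain their backlog more slowly, which is the mechanism behind differentiated streaming.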
Congestion Estimators
Earlier on, we described the congestion estimation tests as part of the SCP algorithm. In principle, congestion can be estimated in a variety of ways, and the invention is not restricted to any one particular method of congestion estimation. A particular embodiment of the invention for estimating congestion based on a mix of passive and active probes will now be described. This method comprises a set of rules configured around the results of the probes. A specific congestion test, e.g., the end-of-congestion test or the confirmed congestion test mentioned in earlier sections, combines these rules in a particular way.
The following are some of a variety of methods that may be used to compute the bottleneck bandwidth on a path between the streaming player 20 (Figure 1) and the streaming server 16 (Figure 1). These methods are shown here for exemplary purposes only and not to indicate that only these methods must be used with the present invention.

Capacity Probe

This method is described in detail in Van Jacobson, "pathchar - A tool to infer characteristics of Internet paths", MSRI, April 21, 1997, and R. L. Carter and M. E. Crovella, "Measuring Bottleneck Link Speed in Packet-switched Networks", TR-96-006, Boston University Computer Science Department, March 15, 1996. This method is initiated by the player process 20 (Figure 1) at various stages of a streaming session. The steps of this exemplary method are as follows:

1. Do a traceroute and get hop identities marked by IP addresses.

2. a) For every hop from one onwards, send a series of ICMP timestamp requests (type 13) from the player to the hop. Packet sizes of the queries are changed in steps of 1024, 512, 256, 128, and 64, and the queries are sent out with a spacing of 3 seconds between them.

b) For every hop i between the player and the server, the player must record the ICMP timestamp response (type 14) and compute the time difference between the hop i transmit time and the player receive instant.

c) Denote this time difference by τ_i.

d) Take four measurements of the timestamp differences by varying the packet size through the five values. Discard the minimum and the maximum, and take the average of the remaining two. Let s' and s'' denote the packet sizes of two selected queries to hop i, and let T_{i,s'} and T_{i,s''} denote the measurements for the respective packet sizes.

e) Calculate the slope ΔT/Δs = (T_{i,s'} - T_{i,s''}) / (s' - s''). It can be proved that the link capacity B_i can be computed recursively from these per-hop slopes (the closed-form expression is given in the cited references).

The path bottleneck bandwidth is B_min = min_i B_i. (Equation 5)
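A simplified, non-limiting sketch of the capacity-probe post-processing is given below. It implements only the slope idea described above and then takes the minimum over links as in Equation 5; the per-link step shown here (differencing consecutive slopes) is an approximation assumed for illustration and is not the exact recursive expression of the cited references.

```python
def estimate_bottleneck_bandwidth(delay_by_hop_and_size):
    """Simplified sketch of the capacity-probe post-processing.

    delay_by_hop_and_size: {hop: {packet_size_bits: averaged_timestamp_difference_s}}
    The slope of delay versus probe size up to hop h reflects the serialization
    cost of all links up to h, so differencing consecutive slopes isolates one
    link; Equation 5 then takes the minimum over the links.
    """
    link_capacities = []
    prev_slope = 0.0
    for hop in sorted(delay_by_hop_and_size):
        sizes = sorted(delay_by_hop_and_size[hop])
        s_lo, s_hi = sizes[0], sizes[-1]
        slope = (delay_by_hop_and_size[hop][s_hi] -
                 delay_by_hop_and_size[hop][s_lo]) / (s_hi - s_lo)
        link_slope = slope - prev_slope        # seconds per bit attributable to this link
        if link_slope > 0:
            link_capacities.append(1.0 / link_slope)   # bits per second
        prev_slope = slope
    return min(link_capacities) if link_capacities else None   # B_min (Equation 5)
```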
Service Rate Probe

The "Service Rate Probe" computes the approximate service rate of transferring packets between the server and the player, as discussed in Van Jacobson, "pathchar - A tool to infer characteristics of Internet paths", MSRI, April 21, 1997; and R. L. Carter and M. E. Crovella, "Measuring Bottleneck Link Speed in Packet-switched Networks", TR-96-006, Boston University Computer Science Department, March 15, 1996. The minimum service rate is the least service rate on the path, while the average service rate is the "expected" service rate on the path. The steps of this exemplary method are as follows:

1. Obtain the hop list through traceroute. Suppose there are n-1 hops between the player and the server.

2. Periodically, the player initiates a cycle. In each cycle, the player sends n-1 queries, each separated by a short fixed spacing. The player forms timestamp differences between the query responses from consecutive hops. The results are sent back to the server.

3. The server uses results from several cycles, along with the link capacities, to calculate the service rate.

a) First, the server calculates T_min_{m,m-1} = min(τ_m - τ_{m-1}), where the min operator works over all collected cycle results. We approximate T_min_{m,m-1} to be the base-line delay between node m and node m-1.

b) Denote the maximum queuing delay between node m and node m-1 by Q_max_{m,m-1}, where a node is a computing device, e.g., a computer or a router connected to the network.

c) Then Q_max_{m,m-1} = max(τ_m - τ_{m-1}) - T_min_{m,m-1}.

d) Denote the average queuing delay between node m and node m-1 by Q_avg_{m,m-1}, obtained in the same way from the average of the collected cycle results.

Finally, the minimum service rate SR_min and the average service rate SR_avg follow from these per-hop base-line and queuing delays together with the link capacities. (Equation 6)
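The per-hop quantities of steps 3(a) through 3(d) can be computed as in the following sketch. The final combination into SR_min and SR_avg shown here (packet size divided by the per-hop base-line-plus-queuing delay, minimized over hops) is an assumption made for illustration and is not the exact form of Equation 6.

```python
def service_rates_from_cycles(cycle_taus, packet_size_bits):
    """Sketch of the service-rate probe post-processing (steps 3a-3d).

    cycle_taus: list of cycles, each a list of timestamp differences tau[m]
                for the nodes m = 0..n-1 along the path.
    """
    n = len(cycle_taus[0])
    per_hop_min_rates, per_hop_avg_rates = [], []
    for m in range(1, n):
        diffs = [cycle[m] - cycle[m - 1] for cycle in cycle_taus]
        t_base = min(diffs)                        # base-line delay T_min_{m,m-1}
        q_max = max(diffs) - t_base                # maximum queuing delay Q_max_{m,m-1}
        q_avg = sum(diffs) / len(diffs) - t_base   # average queuing delay Q_avg_{m,m-1}
        per_hop_min_rates.append(packet_size_bits / (t_base + q_max))
        per_hop_avg_rates.append(packet_size_bits / (t_base + q_avg))
    # The path is limited by its slowest hop in both cases (assumed combination).
    return min(per_hop_min_rates), min(per_hop_avg_rates)
```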
Congestion Detection Rule (CC Test)
A robust but responsive congestion detection mechanism is an essential part of the inventive method. It is important to respond quickly to congestion; this can reduce the congestion period and hence make it easier for the recovery scheduling to do its job in a shorter period of time. Ultimately this can allow the method to work with a reduced buffer size and hence a reduced connect lag. It is equally important to robustly distinguish a sustained congestion condition. False detection can lead to reduced throughput and performance. Recall that the inventive method treats sustained congestion by reducing the throughput of the newer packets, so that the throughput of older retransmitted packets is maximized.
The present invention uses the following congestion detection rule:
1. Packet losses exceeding a threshold for a consecutive time- window of duration Ts. Ts is set to 10 seconds by default.
2. If condition 1 is triggered, three short service rate probes are sent, spaced 2 seconds apart, from the player to the B_min hop. The responses are averaged to form SR_avg.

The player process 20 (Figure 1) computes the expected utilization ρ = 1 - SR_avg/B.

If condition 1 is satisfied and the utilization from condition 2 is more than a specified utilization threshold, e.g., 95%, the player process 20 (Figure 1) declares sustained congestion (Confirmed Congestion), and asks the server to enter the constant bit-rate emulation mode. An alternative method may consist of a simple rule where, if packet losses exceed a threshold for N1 consecutive sampling time-windows, the player process 20 (Figure 1) declares sustained congestion and asks the server to enter the constant bit-rate emulation mode. By default, N1 is set to 1, and the time-window is set equal to 1 second.
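A compact, non-limiting rendering of this detection rule follows; the function signature is an illustrative assumption, and the 2-second spacing of the probes is left to the probe callback.

```python
def confirmed_congestion_test(loss_samples, loss_threshold, probe_service_rate,
                              bottleneck_bw, util_threshold=0.95):
    """Sketch of the congestion-detection (CC) rule.

    loss_samples       : per-sample loss fractions covering the last Ts seconds
    loss_threshold     : loss threshold of condition 1
    probe_service_rate : callable issuing one short service-rate probe toward the
                         bottleneck hop and returning the measured rate
    bottleneck_bw      : B, the bottleneck link bandwidth
    """
    # Condition 1: losses above the threshold for the whole consecutive window.
    if not all(loss > loss_threshold for loss in loss_samples):
        return False
    # Condition 2: average three short probes to form SR_avg and compute the
    # expected utilization rho = 1 - SR_avg / B.
    sr_avg = sum(probe_service_rate() for _ in range(3)) / 3.0
    rho = 1.0 - sr_avg / bottleneck_bw
    return rho > util_threshold
```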
End of Congestion Rule (EOC Test)
The end of congestion rule is analogous to the congestion detection rule:

1. No packet losses for a consecutive time-window of duration Te. Te is set equal to 10 seconds by default.

2. If condition 1 is triggered, three short service rate probes are sent, spaced 2 seconds apart, from the player to the B_min hop. The responses are averaged to form SR_avg.

The player computes the expected utilization ρ = 1 - SR_avg/B.

If condition 1 is satisfied and the utilization from condition 2 is less than a specified threshold, e.g., 90%, the player declares sustained congestion to be over, and asks the server to transition to the state of recovery scheduling. An alternative method may consist of a simple rule where, if in a state of sustained congestion, no packet losses are manifest for N2 consecutive sampling time-windows, the player declares sustained congestion to be over and asks the server to transition to the state of recovery scheduling. By default, N2 may be set equal to five, and the sampling time-window is set equal to 1 second.

Thus, an apparatus and method for predictable and differentiated delivery of multimedia streaming on the Internet is provided. One skilled in the art will appreciate that the present invention can be carried out in other ways and practiced by other than the described embodiments without departing from the spirit and essential characteristics depicted herein. The present embodiments therefore should be considered in all respects as illustrative, and the present invention is limited only by the claims that follow.

Claims

Having thus described our invention, what we claim as new, and desire to secure by Letters Patent is:
1. A method of streaming of multimedia audio and video packets from at least one sending computing device to an at least one receiving computing device via a network, said method comprising the steps of:
a) at a start of said streaming communicating configuration information from a service manager via the network to both an at least one sending computing device for use by an at least one server process and an at least one receiving computing device for use by an at least one player process, said at least one sending computing device comprising an at least one retransmit buffer, an at least one output buffer, an at least one backup buffer and said at least one server process, said at least one receiving computing device comprising an at least one player buffer and an at least one player process; b) queuing the same packets in the at least one output buffer and in the at least one backup buffer; c) determining if there are packets in the at least one output buffer and transmitting the packets from the at least one output buffer to the at least one receiving computing device via the network; d) receiving the packets on the at least one receiving device and storing them in the at least one player buffer; e) determining if there are omissions in the packets stored in the at least one player buffer and forwarding retransmission requests to the at least one sending computing device and storing them in the at least one retransmit buffer; and f) determining if there are said retransmit requests in the at least one retransmit buffer and transmitting the packets from the at least one backup buffer to the at least one receiving computing device via the network.
2. The method of claim 1 wherein the network is the Internet.
3. The method of claim 2 wherein a size of the at least one player buffer is ≥ c*x*e_r*P_loss, where c is a number of sampling intervals in a congestion period, x is a sampling or retransmission interval, c*x is a worst-case duration of congestion, e_r is an encoding bit-rate of a stream of the packets and P_loss is a worst case of loss of the packets.
4. The method of claim 3 wherein a size of the at least one output buffer is ≥ c*x*e_r*P_loss.

5. The method of claim 4 wherein a size of the at least one backup buffer is ≥ m*x ≥ x * log(0.1*s/(e_r*x)) / log(P_loss), where m is a number of retransmits that will reduce the loss rate for older packets to a negligibly small number, s is a packet size, and m*x is a minimum required size of the at least one player buffer.
6. A method of detecting and reacting to a network congestion which interferes with streaming of multimedia audio and video packets from at least one sending computing device to an at least one receiving computing device via a network, said method comprising the steps of: a) communicating configuration information including network parameters from a service manager via the network to the at least one sending computing device and to the at least one receiving computing device, the at least one sending computing device comprising at least one server process, the at least one receiving computing device comprising at least one player process; b) confirming the network congestion by transmitting and retracting network probes; c) executing at least one congestion control technique to limit degradation in the streaming of the packets; d) detecting an end of the network congestion by transmitting and retracting said network probes; e) configuring a rate of streaming of the packets according to said network parameters.
7. The method of claim 6, wherein said at least one congestion control technique is the constant bit-rate emulation.
8. The method of claim 7, wherein said at least one congestion control technique only reduces a transmission rate of the packets by the amount of incurred loss of the packets.
9. The method of claim 7, further comprising the steps of: detecting an end of congestion; computing available bandwidth in the network; and transmitting packets according to said bandwidth.
10. The method of claim 9, wherein some of the at least one player process are differentiated and protected from effects of said network congestion to a greater degree than others by receiving a stream backlog sooner.
11. The method of claim 10, wherein some of the at least one player process are differentiated and protected from effects of said network congestion to a greater degree than others by receiving a stream backlog sooner according to a rule for the transmission rate of the packets e_j = min( a_j * SR_min , a_j * (B - λ_other) ), where e_j indicates said transmission rate of the packets and a_j is a weighting function for class j in {1...K}.
12. The method of claim 11, wherein said detecting of said end of the network congestion is performed according to the expected utilization ρ = 1 - SR_avg/B, where SR_avg is the average rate at which the packets are transferred from the at least one server process to the at least one player process and B is a bottleneck link capacity at a bottleneck hop.
13. The method of claim 12, wherein said network probes compute an approximate service rate of transferring the packets from said at least one server process to said at least one player process according to an equation in the base-line delay T_{m,m-1} and the maximum queuing delay Q_max_{m,m-1} between a node m and a node m-1, where SR_avg and SR_min are the averaged and minimum service rates and Q_max_{m,m-1} is the maximum queuing delay between a node m and a node m-1.
14. A method of guaranteed delivery of streaming multimedia audio and video packets from at least one sending computing device to an at least one receiving computing device via a network, said method comprising the steps of: a) continuously processing arriving packets; b) keeping track of loss of the packets at the at least one receiving computing device, said at least one receiving computing device comprising one or more player processes; c) determining if said loss exceeds a first threshold during a first interval; d) verifying congestion symptoms; e) communicating said verified congestion symptoms to the at least one sending computing device, said at least one sending computing device comprising one or more server processes; f) monitoring said loss and determining if said loss exceeds a second threshold during a second interval to determine an end of congestion; and g) verifying congestion symptoms.
15. The method of claim 14, wherein said step of keeping track is performed at user-defined intervals.
16. The method of claim 15, wherein said first threshold, said second threshold, said first interval, and said second interval are user defined.
17. The method of claim 16, wherein said congestion symptoms are verified by receiving a remote congestion confirmed report.
18. The method of claim 17, wherein said congestion symptoms are verified by performing a congestion probe.
PCT/US2001/040264 2000-03-08 2001-03-08 Apparatus and method for predictable and differentiated delivery of multimedia streaming on the internet WO2001067264A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001251715A AU2001251715A1 (en) 2000-03-08 2001-03-08 Apparatus and method for predictable and differentiated delivery of multimedia streaming on the internet

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US52043400A 2000-03-08 2000-03-08
US09/520,434 2000-03-08

Publications (1)

Publication Number Publication Date
WO2001067264A1 true WO2001067264A1 (en) 2001-09-13

Family

ID=24072583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/040264 WO2001067264A1 (en) 2000-03-08 2001-03-08 Apparatus and method for predictable and differentiated delivery of multimedia streaming on the internet

Country Status (2)

Country Link
AU (1) AU2001251715A1 (en)
WO (1) WO2001067264A1 (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5339392A (en) * 1989-07-27 1994-08-16 Risberg Jeffrey S Apparatus and method for creation of a user definable video displayed document showing changes in real time data
US6014706A (en) * 1997-01-30 2000-01-11 Microsoft Corporation Methods and apparatus for implementing control functions in a streamed video display system
US6031818A (en) * 1997-03-19 2000-02-29 Lucent Technologies Inc. Error correction system for packet switching networks
US6018515A (en) * 1997-08-19 2000-01-25 Ericsson Messaging Systems Inc. Message buffering for prioritized message transmission and congestion management

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004002107A1 (en) * 2002-06-20 2003-12-31 Essential Viewing Limited Method, network, server and client for distributing data via a data communications network
US10225304B2 (en) 2004-04-30 2019-03-05 Dish Technologies Llc Apparatus, system, and method for adaptive-rate shifting of streaming content
US10469554B2 (en) 2004-04-30 2019-11-05 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US8612624B2 (en) 2004-04-30 2013-12-17 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US9407564B2 (en) 2004-04-30 2016-08-02 Echostar Technologies L.L.C. Apparatus, system, and method for adaptive-rate shifting of streaming content
US8868772B2 (en) 2004-04-30 2014-10-21 Echostar Technologies L.L.C. Apparatus, system, and method for adaptive-rate shifting of streaming content
US8402156B2 (en) 2004-04-30 2013-03-19 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US9071668B2 (en) 2004-04-30 2015-06-30 Echostar Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US9571551B2 (en) 2004-04-30 2017-02-14 Echostar Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US11677798B2 (en) 2004-04-30 2023-06-13 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10951680B2 (en) 2004-04-30 2021-03-16 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US11470138B2 (en) 2004-04-30 2022-10-11 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10469555B2 (en) 2004-04-30 2019-11-05 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US8880721B2 (en) 2005-04-28 2014-11-04 Echostar Technologies L.L.C. System and method for minimizing network bandwidth retrieved from an external network
US8370514B2 (en) 2005-04-28 2013-02-05 DISH Digital L.L.C. System and method of minimizing network bandwidth retrieved from an external network
US9344496B2 (en) 2005-04-28 2016-05-17 Echostar Technologies L.L.C. System and method for minimizing network bandwidth retrieved from an external network
US10165034B2 (en) 2007-08-06 2018-12-25 DISH Technologies L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10116722B2 (en) 2007-08-06 2018-10-30 Dish Technologies Llc Apparatus, system, and method for multi-bitrate content streaming
US8683066B2 (en) 2007-08-06 2014-03-25 DISH Digital L.L.C. Apparatus, system, and method for multi-bitrate content streaming
US10075744B2 (en) 2010-02-11 2018-09-11 DISH Technologies L.L.C. Systems and methods to provide trick play during streaming playback
US9510029B2 (en) 2010-02-11 2016-11-29 Echostar Advanced Technologies L.L.C. Systems and methods to provide trick play during streaming playback

Also Published As

Publication number Publication date
AU2001251715A1 (en) 2001-09-17


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: COMMUNICATION UNDER RULE 69 EPC (EPO FORM 1205A OF 03.01.2003)

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP