WO2006069528A1 - Procede d'ordonnancement de paquets dans un service de transmission par paquets - Google Patents

Procede d'ordonnancement de paquets dans un service de transmission par paquets Download PDF

Info

Publication number
WO2006069528A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
priority
queue
traffic
data packet
Prior art date
Application number
PCT/CN2005/002312
Other languages
English (en)
Chinese (zh)
Inventor
Wumao Chen
Xueyi Zhao
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Publication of WO2006069528A1 publication Critical patent/WO2006069528A1/fr

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/62Queue scheduling characterised by scheduling criteria
    • H04L47/621Individual queue per connection or flow, e.g. per VC
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/11Identifying congestion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2425Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/30Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/32Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/52Queue scheduling by attributing bandwidth to queues
    • H04L47/525Queue scheduling by attributing bandwidth to queues by redistribution of residual bandwidth
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/52Queue scheduling by attributing bandwidth to queues
    • H04L47/527Quantum based scheduling, e.g. credit or deficit based scheduling or token bank

Definitions

  • The present invention relates to a scheduling method, and more particularly to a scheduling method applicable to the UBR+ service (enhanced UBR, where UBR is the abbreviation of Unspecified Bit Rate) in a packet service system.
  • UBR Unspecified Bit Rate
  • QoS quality of service
  • Different services require the network to provide different QoS parameters, and different QoS parameters need to be guaranteed by corresponding scheduling methods.
  • The so-called scheduling, also known as flow control, controls the transmission order of the connected services to achieve the following objectives: 1) ensure that each connection enjoys its reserved bandwidth; 2) when the system has remaining bandwidth, allocate the remaining bandwidth according to the QoS requirements of each connection; 3) guarantee the delay requirements of each connection.
  • The service goals of the existing UBR+ service QoS requirements are as follows: 1) guarantee the bandwidth of connections with a minimum bandwidth requirement, and serve the remaining connections on a best-effort basis; 2) share the remaining system bandwidth among all connections, including those with minimum bandwidth requirements; 3) handle delay on a best-effort basis.
  • Round-robin and fair scheduling are the most commonly used scheduling algorithms, including Round Robin (RR), Weighted Round Robin (WRR), and Deficit Round Robin (DRR). Although these algorithms can well guarantee the fairness of bandwidth allocation among users, or can ensure that each user shares the egress bandwidth proportionally, they have the following deficiency: it is difficult to guarantee the minimum bandwidth of one or more high-priority users while the remaining bandwidth is shared by all users. To ensure that all users share the remaining bandwidth while guaranteeing high-priority user bandwidth, the scheduling weight of each user must be dynamically modified according to the number of user connections. For example, suppose the egress bandwidth is N.
  • The minimum bandwidth guaranteed to a high-priority user is M, and this user is to share the remaining bandwidth (N-M) with L ordinary users (each assumed to have a weight of 1); the user's weight then needs to be set to M*(L+1)/(N-M).
  • Since high-priority users and ordinary users may connect and disconnect at random, it is cumbersome to calculate and set the weight of each user. More importantly, because the number of user connections can change at any time, it becomes difficult or even impossible to set the weight value of each user accurately.
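The weight bookkeeping criticised above can be sketched as follows. This is an illustration, not the patent's method: the function names are hypothetical, and the weight follows the M*(L+1)/(N-M) formula quoted in the example, recomputed each time the number of ordinary users L changes.

```python
def high_priority_weight(n_egress, m_min, l_ordinary):
    """Weight per the formula M*(L+1)/(N-M) quoted above (hypothetical helper)."""
    return m_min * (l_ordinary + 1) / (n_egress - m_min)

def wrr_share(weight, l_ordinary, n_egress):
    """Bandwidth a user with this weight receives under WRR against
    L ordinary users of weight 1 (long-run proportional share)."""
    return weight / (weight + l_ordinary) * n_egress

N, M = 100.0, 20.0       # egress bandwidth and guaranteed minimum, e.g. Mbit/s
for L in (4, 9, 49):     # the weight must be recomputed whenever L changes
    w = high_priority_weight(N, M, L)
    share = wrr_share(w, L, N)
    assert share >= M    # the minimum bandwidth stays guaranteed
    print(L, round(w, 3), round(share, 2))
```

The loop makes the patent's complaint concrete: every arrival or departure of an ordinary user forces a new weight, which is impractical when connection counts fluctuate.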
  • Another common scheduling method implements bandwidth guarantee and sharing for UBR+ services by combining traffic policing with PQ (Priority Queuing) scheduling.
  • Traffic policing (Committed Access Rate, abbreviated as CAR) sets an allowed rate: traffic within this rate range can pass, and traffic that exceeds this rate is discarded. Moreover, to ensure that each high-priority user can obtain its minimum required bandwidth, the sum of the minimum bandwidths required by all high-priority users cannot exceed the total egress bandwidth when traffic policing is used.
  • In this scheduling method, PQ scheduling is used to ensure that all high-priority user traffic that passes traffic policing always takes precedence over ordinary user traffic. The working principle is shown in FIG. 1.
  • The data packet 1 of a high-priority user passes through traffic policing 2, which allows traffic within the rate range to pass and enter the high-priority traffic queue 3; the data packet 4 of an ordinary user enters the normal traffic queue 5. PQ scheduling 6 then ensures that high-priority user traffic takes precedence over ordinary user traffic.
  • In this way, the minimum bandwidth of high-priority user traffic is guaranteed, but only ordinary users share the remaining bandwidth: traffic that exceeds the minimum bandwidth of a high-priority user is directly discarded and does not share the remaining bandwidth with ordinary user traffic. Therefore, the above solution does not meet the bandwidth requirements of the UBR+ service.
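The CAR behaviour criticised above can be sketched with a token bucket, the classic way to implement a committed-rate policer. This is a minimal illustration; the class name and parameters are assumptions, not from the patent.

```python
import time

class TokenBucketPolicer:
    """Minimal CAR-style policer: packets within `rate` bytes/s pass,
    excess packets are dropped outright - the behaviour the patent
    identifies as wasting bandwidth that UBR+ should share."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, size):
        # refill tokens for the time elapsed since the last packet
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True   # conforming: forwarded to the high-priority queue
        return False      # excess: discarded, even if the egress is idle

policer = TokenBucketPolicer(rate=1_000_000, burst=1500)  # 1 MB/s, one MTU burst
```

The `return False` branch is exactly where the invention diverges: instead of dropping, it re-classifies the excess packet as ordinary traffic.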
  • The technical problem to be solved by the present invention is to provide a simple scheduling method that can guarantee the minimum bandwidth of one or more high-priority users while also allowing the remaining bandwidth to be shared by all users, including the high-priority users, thereby fully meeting the bandwidth requirements of the UBR+ service.
  • a packet scheduling method in a packet service is provided, which includes the following steps:
  • S1 User classification, dividing all users into high priority users and ordinary users;
  • S2 Determine the minimum bandwidth of each high-priority user, thereby setting the traffic metric rate;
  • S3 Perform traffic metering on the data packets of the high-priority users, treating packets that do not exceed the traffic metric rate as high-priority data packets, and treating packets that exceed the traffic metric rate, together with the data packets of ordinary users, as ordinary data packets;
  • S4 Place each high-priority data packet into the sending queue of the egress; for ordinary data packets, check whether the egress is congested: if the egress is congested, discard the excess ordinary data packets; otherwise, place the ordinary data packets into the sending queue of the egress and send them out.
  • the sending queue includes a high-priority traffic queue and a normal traffic queue.
  • the high-priority data packet is sent to the high-priority traffic queue, and all the ordinary data packets are sent to the normal traffic queue for transmission.
  • Absolute priority (PQ) scheduling is also adopted in step S4, so that the high-priority traffic queue always takes precedence over the normal traffic queue.
  • Alternatively, the sending queue is implemented as a FIFO queue provided with a high threshold and a low threshold; a high-priority data packet is admitted to the queue according to the high threshold and discarded when it cannot enter, and an ordinary data packet is admitted to the queue according to the low threshold and discarded when it cannot enter.
  • The difference between the high threshold and the low threshold is not less than the sum of the burst sizes of all the high-priority users.
  • Alternatively, the high-priority traffic queue and the ordinary traffic queue are each set with their own thresholds: high-priority data packets exceeding the high-priority queue threshold are re-marked and processed as low-priority data packets, and low-priority packets exceeding the low-priority queue threshold are discarded.
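The claimed steps S1-S4 can be sketched as follows. This is a simplified model under stated assumptions: class and method names are hypothetical, and egress congestion is modelled simply as queue occupancy reaching a limit.

```python
from collections import deque

class UbrPlusScheduler:
    """Sketch of steps S1-S4: metering re-classifies excess high-priority
    traffic as ordinary instead of dropping it (hypothetical API)."""
    def __init__(self, queue_limit):
        self.queue = deque()
        self.queue_limit = queue_limit   # stand-in for "egress congested"

    def classify(self, user_is_high, conforms):
        # S3: only conforming packets of high-priority users stay high priority;
        # excess high-priority traffic joins the ordinary class, it is NOT dropped
        return "high" if (user_is_high and conforms) else "ordinary"

    def enqueue(self, packet, priority):
        # S4: high-priority packets always enter the sending queue; ordinary
        # packets are discarded only when the egress is congested
        if priority == "high" or len(self.queue) < self.queue_limit:
            self.queue.append(packet)
            return True
        return False
```

The key contrast with the CAR + PQ prior art is in `classify`: non-conforming high-priority packets are demoted rather than discarded, so they can still compete for the remaining bandwidth.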
  • The packet scheduling method in the packet service of the present invention meters the traffic of high-priority users' data packets, classifying packets that do not exceed the traffic metric rate as high-priority data packets and packets that exceed it as ordinary data packets; the data packets of ordinary users are likewise classified as ordinary data packets. High-priority data packets are sent into the queue directly, while the enqueuing of ordinary data packets is controlled when the egress is congested.
  • This guarantees the minimum bandwidth of high-priority user traffic while all users share the remaining bandwidth, which can fully meet the bandwidth requirements of UBR+ services.
  • FIG. 1 is a schematic diagram of a working principle of scheduling using traffic policing and PQ.
  • FIG. 2 is a flow chart of a packet scheduling method in the packet service of the present invention.
  • FIG. 3 is a schematic diagram showing the operation of a packet scheduling method in the packet service of the present invention.
  • FIG. 4 is a schematic diagram of a first embodiment of a packet scheduling method in a packet service of the present invention.
  • FIG. 5 is a schematic diagram of a second embodiment of a packet scheduling method in a packet service according to the present invention.
  • FIG. 6 is a schematic diagram of a third embodiment of a packet scheduling method in a packet service of the present invention.
  • As shown in FIG. 2, the packet scheduling method in the packet service of the present invention proceeds in steps. Step S1 performs user classification, dividing all users into two categories: high-priority users and ordinary users. Step S2 then determines the minimum bandwidth of each high-priority user, thereby setting the traffic metric rate.
  • In step S3, the data packets sent by users are classified: the traffic of each high-priority user is metered against the traffic metric rate set in step S2, and a leaky bucket algorithm or a token bucket algorithm may be used to check whether a data packet 10 of a high-priority user constitutes excess traffic. Packets that do not exceed the traffic metric rate are classified as high-priority data packets 101, packets that exceed the rate are classified as ordinary data packets 102, and the data packets 20 of ordinary users are classified as ordinary data packets 202.
  • Finally, in step S4, the data packets are processed at the egress: each high-priority data packet 101 is placed in the sending queue of the egress and sent out; for the ordinary data packets 102 and 202, the congestion status of the egress is checked. If the egress is congested, the excess ordinary data packets 102, 202 are discarded; if the egress is unblocked, the ordinary data packets 102, 202 are placed into the sending queue of the egress and sent out. This guarantees the minimum bandwidth of high-priority user traffic while enabling all users to fully share the remaining bandwidth.
  • In the packet scheduling method in the packet service of the present invention, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined, thereby setting the traffic metric rate; traffic metering and packet classification are then performed.
  • The traffic metering only measures the rate without discarding: the data packets 10 of a high-priority user are classified, packets within the rate requirement are classified as high-priority data packets 101 and enter the high-priority traffic queue, while packets originally exceeding the rate requirement are classified as ordinary data packets 102 and, under the same enqueue control as the ordinary data packets 20 of other ordinary users, enter the normal traffic queue.
  • A PQ scheduling module is used for scheduling at the egress to ensure that all high-priority packets always take precedence over ordinary packets.
  • In another embodiment, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined, thereby setting the traffic metric rate; traffic metering and packet classification are then performed, classifying the data packets 10 of a high-priority user within the rate range as high-priority data packets 101.
  • The data packets 10 exceeding the rate range are classified, together with the data packets 20 of ordinary users, as ordinary data packets 102, 202.
  • A FIFO (First In First Out) queue is then used at the egress for packet processing.
  • The FIFO queue is provided with a high threshold and a low threshold: a high-priority data packet 101 is admitted to the queue according to the high threshold and discarded when it cannot enter, and an ordinary data packet is admitted to the queue according to the low threshold and discarded when it cannot enter.
  • The embodiment of FIG. 4 uses two queues, one of higher priority and one of lower priority, while the embodiment of FIG. 5 uses only one FIFO queue, but simulates two logical queues (physically only one queue) by setting two thresholds in this FIFO queue.
  • The scheduling of high- and low-priority packets is achieved through these two logical queues. For example, suppose a FIFO queue holds up to 1000 packets, the low-priority threshold is 500, and the high-priority threshold is 1000. When the number of packets stored in this FIFO queue is less than 500, both high-priority and ordinary packets may enter the queue; when it is between 500 and 1000, only high-priority packets may enter.
  • In this way, high-priority data packets take precedence over ordinary data packets, and the minimum bandwidth of high-priority users is guaranteed;
  • at the same time, the data packets exceeding the rate range and the data packets of ordinary users together form the ordinary data packets, so that high-priority users and ordinary users can share the remaining bandwidth.
  • The difference between the high threshold and the low threshold is not less than the sum of the burst sizes of all high-priority users, so that the burst data of all high-priority users can enter the queue, thereby guaranteeing the bandwidth requirements of high-priority users.
  • The reason the difference between the high and low thresholds must be no less than the sum of the bursts of all high-priority users is to ensure that high-priority packets from multiple simultaneously bursting users are not left uncached (and thus discarded) because the queue capacity is too small.
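The dual-threshold admission rule of the FIG. 5 embodiment can be sketched as follows, using the 1000/500 numbers from the example above. The function name is an assumption; the logic is the admit-against-your-threshold rule just described.

```python
from collections import deque

HIGH_THRESHOLD = 1000   # queue depth; numbers from the example above
LOW_THRESHOLD = 500

fifo = deque()          # one physical queue, two logical queues via thresholds

def try_enqueue(packet, is_high_priority):
    """Admit against the threshold matching the packet's priority:
    below 500 packets everyone enters; between 500 and 1000 only
    high-priority packets enter; at 1000 even those are discarded."""
    threshold = HIGH_THRESHOLD if is_high_priority else LOW_THRESHOLD
    if len(fifo) < threshold:
        fifo.append(packet)
        return True
    return False        # cannot enter the logical queue: discard
```

Because both classes drain from the same FIFO, ordering is preserved, while the 500-slot headroom between the thresholds is reserved for high-priority bursts, which is why the threshold gap must cover the sum of those bursts.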
  • In a further embodiment, all users are first divided into high-priority users and ordinary users; the minimum bandwidth of each high-priority user is then determined, thereby setting the traffic metric rate; the data packets 10 of a high-priority user within the rate range are then classified as high-priority data packets 101, and the packets 10 exceeding the rate range are classified as ordinary data packets 102.
  • The high-priority traffic queue and the ordinary traffic queue are each set with their own thresholds. High-priority data packets exceeding the high-priority queue threshold are not discarded but are re-marked for processing as low-priority data packets; from that point on, these former high-priority packets are treated exactly like low-priority packets, with no distinction made. Low-priority data packets exceeding the low-priority queue threshold are discarded.
  • The low-priority packets at this point may therefore comprise two parts: original low-priority packets, and low-priority packets converted from high-priority packets that could not enter the high-priority queue.
  • A PQ scheduling module is used for scheduling at the egress to ensure that all high-priority packets always take precedence over ordinary packets. This guarantees the minimum bandwidth of high-priority user traffic while all users share the remaining bandwidth, which can fully meet the bandwidth requirements of UBR+ services.
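The re-marking variant of FIG. 6 can be sketched with two queues and PQ dequeueing. The class and method names are hypothetical; the behaviour follows the description above: a full high-priority queue demotes the packet instead of dropping it, and only the low-priority queue ever discards.

```python
from collections import deque

class RemarkingQueues:
    """Sketch of the FIG. 6 variant: overflow from the high-priority
    queue is re-marked low priority rather than dropped."""
    def __init__(self, high_limit, low_limit):
        self.high, self.low = deque(), deque()
        self.high_limit, self.low_limit = high_limit, low_limit

    def enqueue(self, packet, is_high):
        if is_high and len(self.high) < self.high_limit:
            self.high.append(packet)
            return "high"
        # re-marked high-priority packets join ordinary traffic here
        if len(self.low) < self.low_limit:
            self.low.append(packet)
            return "low"
        return "dropped"   # only the low-priority queue discards

    def dequeue(self):
        # PQ scheduling: the high-priority queue is always served first
        if self.high:
            return self.high.popleft()
        return self.low.popleft() if self.low else None
```

Demoted packets thus still compete for the remaining bandwidth alongside ordinary traffic, which is what lets all users, including high-priority ones, share the egress surplus.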

Abstract

The invention concerns a packet scheduling method for a packet transmission service. The method comprises dividing users into high-priority users and ordinary users; determining the minimum bandwidth for the high-priority users and setting a traffic metering rate; metering the transmission rate of the packets of the high-priority users, packets within the set rate being high-priority packets and packets exceeding it being ordinary packets, the packets of ordinary users also being treated as ordinary packets; placing the high-priority packets in the sending queue; and, for the ordinary packets, checking the congestion state of the egress, discarding the excess ordinary packets if the egress is congested and transmitting the ordinary packets if it is not. The method guarantees a minimum bandwidth for the packet flow of the high-priority users while all users share the remaining bandwidth, thereby providing the bandwidth required for a UBR+ service.
PCT/CN2005/002312 2004-12-29 2005-12-26 Procede d'ordonnancement de paquets dans un service de transmission par paquets WO2006069528A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNB2004100919192A CN100370787C (zh) 2004-12-29 2004-12-29 一种分组业务中的数据包调度方法
CN200410091919.2 2004-12-29

Publications (1)

Publication Number Publication Date
WO2006069528A1 true WO2006069528A1 (fr) 2006-07-06

Family

ID=36614495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2005/002312 WO2006069528A1 (fr) 2004-12-29 2005-12-26 Procede d'ordonnancement de paquets dans un service de transmission par paquets

Country Status (2)

Country Link
CN (1) CN100370787C (fr)
WO (1) WO2006069528A1 (fr)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101202701B (zh) * 2006-12-12 2012-09-05 中兴通讯股份有限公司 分组网络中为汇聚的可用比特率业务分配带宽的方法
CN101035008B (zh) * 2007-04-17 2010-04-14 华为技术有限公司 一种业务调度方法及其网络汇聚设备
US7911956B2 (en) * 2007-07-27 2011-03-22 Silicon Image, Inc. Packet level prioritization in interconnection networks
CN101159903B (zh) * 2007-10-23 2011-01-05 华为技术有限公司 传输承载拥塞的防止方法、处理方法及装置
CN101296185B (zh) * 2008-06-05 2011-12-14 杭州华三通信技术有限公司 一种均衡组的流量控制方法及装置
CN101360052B (zh) * 2008-09-28 2011-02-09 成都市华为赛门铁克科技有限公司 一种流量调度的方法和装置
CN101616096B (zh) * 2009-07-31 2013-01-16 中兴通讯股份有限公司 队列调度方法及装置
CN101692648B (zh) * 2009-08-14 2012-05-23 中兴通讯股份有限公司 一种队列调度方法及系统
CN101827033B (zh) * 2010-04-30 2013-06-19 北京搜狗科技发展有限公司 一种网络流量控制方法、装置及局域网系统
CN108369531B (zh) * 2016-07-12 2023-06-02 华为云计算技术有限公司 控制io带宽和处理io访问请求的方法、装置及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002045362A2 (fr) * 2000-11-30 2002-06-06 Qualcomm Incorporated Procede et appareil de programmation de la transmission de paquets de donnees dans un systeme de telecommunications sans fil
WO2002085054A2 (fr) * 2001-04-12 2002-10-24 Qualcomm Incorporated Procede et appareil d'ordonnancement de transmissions de donnees par paquets dans un systeme de communication sans fil
US6801501B1 (en) * 1999-09-14 2004-10-05 Nokia Corporation Method and apparatus for performing measurement-based admission control using peak rate envelopes
EP1478140A1 (fr) * 2003-04-24 2004-11-17 France Telecom Procédé et dispositif d'ordonnancement de paquets sur un lien de réseau en fonction d'une priorité basée sur l'analyse du débit d'arrivée des flots

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647424B1 (en) * 1998-05-20 2003-11-11 Nortel Networks Limited Method and apparatus for discarding data packets
JP4484317B2 (ja) * 2000-05-17 2010-06-16 株式会社日立製作所 シェーピング装置
US20040125815A1 (en) * 2002-06-24 2004-07-01 Mikio Shimazu Packet transmission apparatus and method thereof, traffic conditioner, priority control mechanism and packet shaper
FI112421B (fi) * 2002-10-29 2003-11-28 Tellabs Oy Menetelmä ja laitteisto siirtoyhteyskapasiteetin vuorottamiseksi pakettikytkentäisten tietoliikennevoiden kesken

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6801501B1 (en) * 1999-09-14 2004-10-05 Nokia Corporation Method and apparatus for performing measurement-based admission control using peak rate envelopes
WO2002045362A2 (fr) * 2000-11-30 2002-06-06 Qualcomm Incorporated Procede et appareil de programmation de la transmission de paquets de donnees dans un systeme de telecommunications sans fil
WO2002085054A2 (fr) * 2001-04-12 2002-10-24 Qualcomm Incorporated Procede et appareil d'ordonnancement de transmissions de donnees par paquets dans un systeme de communication sans fil
EP1478140A1 (fr) * 2003-04-24 2004-11-17 France Telecom Procédé et dispositif d'ordonnancement de paquets sur un lien de réseau en fonction d'une priorité basée sur l'analyse du débit d'arrivée des flots

Also Published As

Publication number Publication date
CN100370787C (zh) 2008-02-20
CN1798090A (zh) 2006-07-05

Similar Documents

Publication Publication Date Title
WO2006069528A1 (fr) Procede d'ordonnancement de paquets dans un service de transmission par paquets
US8169906B2 (en) Controlling ATM traffic using bandwidth allocation technology
US6687254B1 (en) Flexible threshold based buffering system for use in digital communication devices
US8130648B2 (en) Hierarchical queue shaping
US6256315B1 (en) Network to network priority frame dequeuing
JP3088464B2 (ja) Atmネットワークのバンド幅管理とアクセス制御
JP4287157B2 (ja) データトラフィックの転送管理方法及びネットワークスイッチ
Parris et al. Lightweight active router-queue management for multimedia networking
EP1086555A1 (fr) Procede de commande d'admission et noeud de commutation pour reseaux a commutation par paquets a services integres
US8248932B2 (en) Method and apparatus for fairly sharing excess bandwidth and packet dropping amongst subscribers of a data network
US6967923B1 (en) Bandwidth allocation for ATM available bit rate service
US20080304503A1 (en) Traffic manager and method for performing active queue management of discard-eligible traffic
US9197570B2 (en) Congestion control in packet switches
JP2004266389A (ja) パケット転送制御方法及びパケット転送制御回路
JP2001519973A (ja) 共用バッファへの優先度付きアクセス
US7522624B2 (en) Scalable and QoS aware flow control
US20060251091A1 (en) Communication control unit and communication control method
WO2022135202A1 (fr) Procédé, appareil et système de planification de flux de service
Yang et al. Scheduling with dynamic bandwidth allocation for DiffServ classes
Cisco Policing and Shaping Overview
Cisco Configuring IP QoS
Cisco Configuring Quality of Service
Cisco Configuring Quality of Service
Cisco Configuring IP QOS
Cisco Configuring IP QOS

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05822750

Country of ref document: EP

Kind code of ref document: A1