US9270544B2 - Method and system to identify a network device associated with poor QoS

Method and system to identify a network device associated with poor QoS

Info

Publication number
US9270544B2
Authority
US
United States
Prior art keywords
quality
network
performance data
edge router
wan edge
Prior art date
Legal status
Active
Application number
US14/153,137
Other versions
US20140129708A1
Inventor
Dinesh Goyal
Sachin Purushottam Saswade
Current Assignee
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date
Filing date
Publication date
Application filed by Cisco Technology Inc
Priority to US14/153,137
Publication of US20140129708A1
Application granted
Publication of US9270544B2
Status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 - Arrangements for monitoring or testing data switching networks
    • H04L 43/04 - Processing captured monitoring data, e.g. for logfile generation
    • H04L 43/50 - Testing arrangements
    • H04L 12/2697


Abstract

A method and apparatus to analyze real-time data transmissions across a network are described. The method may comprise transmitting a sample data stream between source and destination endpoints across a test data path which includes network devices. The method may then compare a measured quality of the received sample data stream with pre-defined quality criteria associated with the network. If the measured quality fails to meet the pre-defined quality criteria, the network devices in the test data path may be identified, device performance data may be obtained, and a network report may be generated based on the device performance data. The device performance data may comprise processor utilization, memory utilization, bandwidth oversubscription, buffer overrun, and/or a number of non-error packets that are discarded at the network device.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. patent application Ser. No. 11/466,390, filed on Aug. 22, 2006, which is incorporated herein by reference in its entirety for all purposes.
FIELD
This application relates generally to computer network communications, and particularly to a method of and system for identifying a particular network device which contributes to poor quality of service of real-time data transmission across a network.
BACKGROUND
Popularity of IP (Internet Protocol) telephony (e.g. VoIP, video calls, etc.) is increasing, and deployments of IP telephony are correspondingly increasing in terms of number of subscribers and size of networks. The increasing number of subscribers using IP telephony for their day-to-day communication places increased load on network infrastructure, which leads to poorer voice quality owing to inadequate capacity or faulty infrastructure.
IP telephony places strict requirements on IP packet loss, packet delay, and delay variation (or jitter). In multi-site complex customer networks, there may be many WAN edge routers that interconnect many branches of an enterprise or of many small businesses managed by a service provider.
Probable causes of poor voice quality at a WAN edge router are codec conversions, mismatched link speeds, and bandwidth oversubscription owing to the number of users, number of links, and/or link speed. Each of these causes results in buffer overruns, leading to packet discards, which in turn degrade the quality of voice or service.
BRIEF DESCRIPTION OF DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
FIG. 1 is a schematic representation of a system comprising a WAN in accordance with an example embodiment.
FIG. 2 a is a schematic representation of a computer system in accordance with an example embodiment.
FIG. 2 b shows a network which includes the computer system of FIG. 2 a.
FIG. 3 a shows a table of MOS scores with their associated qualities.
FIG. 3 b shows a table of expected MOS values based on impairment factors.
FIGS. 4 a and 4 b show flow diagrams of methods, in accordance with example embodiments, to identify a network device contributing to a poor QoS in a real-time data network.
FIG. 5 shows a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
DETAILED DESCRIPTION
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of an embodiment of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details.
FIG. 1 shows a system 100 to identify a particular network device which contributes to poor quality of service of real-time data transmission across a network, in accordance with an example embodiment. The system 100 includes a network, shown by way of example as an IP WAN 102, which includes a plurality of network devices, e.g. routers. Some routers (not shown) form the backbone of the WAN 102, while other routers 104 to 108 are arranged at the edge of the WAN 102 and are therefore WAN edge routers. Each WAN edge router 104 to 108 may connect a plurality of miscellaneous network devices and/or computer systems to the WAN 102. In this example embodiment, however, the WAN edge routers 104 to 108 are connected to the respective switches 110 to 114. The routers 104 to 108 and switches 110 to 114 are examples of level 3 (L3) network devices.
A plurality of IP telephones 124 to 132 are connected via the switches 110 to 114 and routers 104 to 108 to the WAN 102. The IP telephones 124 to 132 may be fixed or mobile telephones, e.g. VoIP telephones. In addition, the system 100 may include a voice application server 120, such as a voicemail system, IVR (Interactive Voice Response) system, or the like, and also includes a computer system in the form of a call manager 122, in accordance with an example embodiment. It should however be noted that the example embodiments are not limited to voice only transmission but also extend to any real-time (or time critical communications) such as video.
It is to be understood that the example IP telephones 124 to 132 communicate with one another and/or with other telephones by digitising voice or other sound (or even video with video telephones) and by sending the voice data in a stream of IP packets across the WAN 102 or other network. It is important for networks carrying voice streams to provide a high quality of service (QoS) so that the voice is clear or at least audible when received by a receiver telephone. Thus, packet loss or delay is undesirable as it lowers the QoS. This is not necessarily a problem with conventional data packet transmission, e.g., with non-voice or non real-time data, as dropped packets can be retransmitted and delayed packets reassembled in due course.
FIG. 2 a shows an example embodiment of a computer system 200 (e.g., a computer server) for implementing the methodology described herein. In an example embodiment, the computer system 200 may be configured as the Call Manager 122 of FIG. 1. Thus, the computer system 200 may act as a call processing agent. However, it should be noted that the computer system 200 may also form part of (e.g., embedded within) a telephony endpoint such as an IP telephone.
The computer system 200 includes a processor 202 and a network interface device 206 (e.g. a network card) for communication with a network. The processor 202 comprises a plurality of conceptual modules, which correspond to functional tasks performed by the processor 202. To this end, the computer system 200 may include a machine-readable medium, e.g. the processor 202, main memory, and/or a hard disk drive, which carries a set of instructions to direct the operation of the processor 202, for example being in the form of a computer program. More specifically, the processor 202 is shown by way of example to include: a monitoring module 210 to monitor network devices connected to the system 200; a generating module 212 to generate a sample real-time data stream; a comparing module 214 to compare quality of the sample real-time data stream with pre-defined quality criteria; a detecting module 216 to detect network devices in a network of which the system 200 forms part; and a determining module 218 to determine whether or not any detected network devices are contributing to a poor QoS. It is to be understood that the processor 202 may be one or more microprocessors, controllers, or any other suitable computing device, resource, hardware, software, or embedded logic. Furthermore, the functional modules 210 to 218 may be distributed among several processors, or alternatively may be consolidated within a single processor, as shown in FIG. 2 a.
It is important to note that the computer system 200 need not include all the modules 210 to 218. Accordingly, some of the conceptual modules 210 to 218 may also be distributed across different network devices. For example, in an example embodiment, the monitoring module 210, the detecting module 216, the determining module 218, and a reporting module may be provided in a network management system. Further, in an example embodiment, the generating module 212 and the detecting module 216 may be provided in a call agent. It should also be noted that multiple modules (e.g., duplicate modules) may also be provided in different devices across the network.
The monitoring module 210 monitors L3 network devices in a network to which the computer system 200 is connected. The monitoring module 210 is configured to poll or interrogate the network devices intermittently, e.g. at pre-defined monitoring intervals, to determine performance data or statistics for at least one but preferably for all interfaces on the network devices. The monitoring module 210 is particularly configured to monitor performance statistics for network routers. The performance statistics which are monitored include processor utilisation and memory utilisation of each monitored network device, for example expressing the memory utilisation of each device as a percentage of maximum memory utilisation. The monitoring module 210 may further monitor non-error IP packets which are dropped or discarded, e.g. also in the form of a percentage. The monitoring module 210 may thus, for instance, record that 10% of non-error data packets are being dropped by a particular network device (e.g., due to buffer overruns).
These monitored performance statistics provide an indication of whether or not a particular network device, such as one of the routers 104 to 108, is coping satisfactorily with traffic on each of its interfaces, e.g. an ATM (Asynchronous Transfer Mode) interface, a T1 interface, etc. The traffic statistics may be temporarily stored for later use, e.g. on a memory module connected to the computer system 200.
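To make the polling step concrete, below is a minimal Python sketch of one monitoring cycle over the routers' interfaces. The `poll_interface_stats` helper and the statistics fields are assumptions: the patent does not name a collection protocol (SNMP would be a natural choice), only the kinds of data gathered.

```python
from dataclasses import dataclass

@dataclass
class InterfaceStats:
    device: str         # e.g. a WAN edge router such as router 104
    interface: str      # e.g. an ATM or T1 interface name
    cpu_pct: float      # processor utilisation, as a percentage
    mem_pct: float      # memory utilisation, as a percentage of maximum
    discard_pct: float  # non-error IP packets dropped, as a percentage

def poll_interface_stats(device: str, interface: str) -> InterfaceStats:
    """Hypothetical collector; in practice an SNMP or CLI query per interface."""
    raise NotImplementedError("replace with a query against the real device")

def monitoring_pass(devices: dict[str, list[str]]) -> list[InterfaceStats]:
    """One polling pass over every interface of every monitored device.
    The caller repeats this at the pre-defined monitoring interval and
    stores the samples for later correlation with probe results."""
    return [poll_interface_stats(dev, ifname)
            for dev, interfaces in devices.items()
            for ifname in interfaces]
```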
The generating module 212 may be operable to generate and send a sample real-time data stream (e.g., a known voice clip) to a remote network device or other computer system. The sample real-time data stream may be of a known quality, so that quality degradation can be measured. It is to be appreciated that the generating module 212 may be remote from the other modules, e.g. hosted by a router or switch. The sample stream is transmitted between two endpoints, namely a source endpoint and a destination endpoint (which may be randomly selected). The generating module 212 may serve as the source endpoint, while the destination endpoint may be a remote computer system, e.g. a router or switch. One or more network devices (e.g. WAN edge routers 104 to 108) are in the path of the sample stream, so that the quality of the data stream after transmission is influenced by the network device(s). In other embodiments, the generating module 212 can be located on a system separate from the computer system 200, the computer system 200 optionally serving as a destination endpoint.
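As an illustration of the generating module's role, the toy sender below paces a known clip onto the network as timestamped UDP datagrams; 160 bytes every 20 ms mimics G.711 pacing. A real deployment would use proper RTP framing, which this sketch deliberately does not implement.

```python
import socket
import struct
import time

def send_sample_stream(clip: bytes, dst: tuple[str, int],
                       pkt_bytes: int = 160, interval_s: float = 0.02) -> None:
    """Stream a known clip to a destination endpoint in real time.
    A 2-byte sequence number and a 4-byte millisecond timestamp stand
    in for a real RTP header."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        seq = 0
        for off in range(0, len(clip), pkt_bytes):
            header = struct.pack("!HI", seq, int(time.time() * 1000) & 0xFFFFFFFF)
            sock.sendto(header + clip[off:off + pkt_bytes], dst)
            seq = (seq + 1) & 0xFFFF
            time.sleep(interval_s)  # pace packets like a live voice stream
    finally:
        sock.close()
```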
The comparing module 214 may compare quality of the sample real-time data stream after transmission with pre-defined quality criteria which, for example, include impairment factors such as Codec type, network topology, etc. The quality of the sample stream after transmission may be measured by a measuring module (refer further to FIG. 2 b) which need not form part of the computer system 200. If the quality of the sample real-time data stream after transmission is lower than an expected quality based on the impairment factors, the detecting module 216 may detect which network devices (particularly L3 network devices, e.g. routers and switches) in the transmission path of the sample stream are potentially contributing to a low QoS.
The determining module 218 then determines whether or not any of the detected network devices in the sample stream path are over-loaded or are performing poorly based on the performance statistics gathered by the monitoring module 210.
Referring now to FIG. 2 b, a system 250 is shown in accordance with an example embodiment. In this example embodiment, the call manager 122 provides the computer system 200 illustrated generally in FIG. 2 a. In addition to modules 210 to 218, the call manager 122 also includes a reporting module 220 to generate a report or an alert regarding network devices which may be contributing to a low QoS. In an example embodiment, the system 250 includes a switch 112 which comprises a processor 260 having a measuring module 262 thereon. The measuring module 262 is configured for receiving a sample stream from the generating module 212, and for measuring the quality of the sample stream after transmission. The measuring module 262 may be configured to transmit the measured sample stream quality back to the call manager 122. To perform the measuring, the measuring module 262 may use any appropriate method or algorithm, for example the MOS estimation algorithm. In this regard, the call manager 122 may include a memory module 252, e.g. a hard disk drive, having stored thereon an MOS lookup table 254 which comprises a plurality of pre-defined expected MOS values for associated impairment factors—Codec types, in this example.
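In code, lookup table 254 reduces to a codec-to-expected-MOS mapping. Only the G.711 figure (4.10) appears in this description; the other entries below are illustrative planning values, not values taken from table 280.

```python
EXPECTED_MOS = {
    "G.711": 4.10,    # given in the description's example
    "G.729": 3.92,    # illustrative, not from the patent
    "G.723.1": 3.65,  # illustrative, not from the patent
}

def quality_violated(measured_mos: float, codec: str) -> bool:
    """True when the measured MOS falls below what the codec alone should
    achieve, i.e. the network degraded the stream more than expected."""
    return measured_mos < EXPECTED_MOS[codec]
```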
FIG. 3 a shows a table 270 of MOS values with their associated qualities, and is illustrated merely to give an indication of the range of MOS values, while FIG. 3 b shows a table 280 of expected MOS values based on impairment factors, which in this example embodiment include a codec type. However, it is to be understood that the codec type is often matched to a particular network topology, and that selection of a particular codec from table 280 may thus automatically take into account a network topology as an impairment factor.
An example embodiment is further described in use with reference to FIGS. 4 a and 4 b. FIG. 4 a shows a high-level flow diagram of a method 300, in accordance with an example embodiment, for identifying a network device adversely affecting the communication of real-time data (e.g., communicated using the Real-time Transport Protocol (RTP)). Network devices in a network are monitored, at block 302, using, for example, the monitoring module 210. The monitoring of the network devices may include intermittently polling or interrogating the network devices to gather performance statistics (e.g., data about hardware components of the network device such as buffer overruns) for each respective network device. The monitoring of the network devices may be done repetitively at predefined monitoring intervals. Also at predefined intervals (or at any time a network test is run), a sample real-time data stream (e.g., a known voice clip) is generated and transmitted, at block 304, by the generating module 212 (the generating module 212 therefore acting as a source endpoint), to one or more destination endpoints. At the destination endpoint, the quality of the sample real-time data stream is measured, at block 306, for example, using the associated measuring module 262. The quality of the sample stream after transmission through the network is compared with the quality of the sample stream before transmission (which may be perfect or undistorted), to measure the extent to which the quality of the sample stream has been degraded or reduced. The comparing module 214 compares, at block 308, the quality of the sample stream against predefined quality criteria. If the quality of the sample stream after transmission violates, at block 310, the quality criteria, e.g. if the quality is lower than predefined minimum criteria, the detecting module 216 detects or establishes the particular network devices which are in the path of the sample stream (e.g., using RTP traceroute functionality). Based on the monitored traffic statistics gathered at block 302, the determining module 218 determines, at block 314, whether or not any of the detected network devices in the network path traversed by the sample stream (the sample stream path) are contributing to poor quality of service of the sample stream.
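Putting the blocks of FIG. 4 a together, the control flow reduces to the sketch below. Each callable stands in for one module of computer system 200; the names and signatures are assumptions, since the patent defines the modules functionally rather than as a concrete API.

```python
def run_quality_test(monitor_stats, send_probe, measure_mos,
                     expected_mos, trace_path, is_overloaded):
    """Sketch of method 300: return the devices likely degrading QoS."""
    stats = monitor_stats()          # block 302: per-device performance data
    send_probe()                     # block 304: transmit the sample stream
    measured = measure_mos()         # block 306: destination measures quality
    if measured >= expected_mos:     # blocks 308/310: compare with criteria
        return []                    # quality acceptable; nothing to report
    suspects = []
    for device in trace_path():      # detect devices in the sample stream path
        if is_overloaded(stats.get(device)):  # block 314: correlate statistics
            suspects.append(device)
    return suspects
```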
Referring to FIG. 4B, a flow diagram of a method 350, in accordance with an example embodiment, for identifying a network device adversely affecting the communication of real-time data is shown. The call manager 122 (e.g., utilizing the monitoring module 210) may monitor, at block 352, routers connected to the WAN 102. Depending on the configuration of the monitoring module 210, all routers connected to the WAN 102 may be monitored, or instead only the WAN edge routers 104 to 108 may be monitored. The WAN edge routers 104 to 108 may cause bottlenecks which may lead to a low quality of service. However, in certain circumstances other routers may also contribute to the low quality of service. The performance statistics (e.g., router operating data) which are thus gathered by the monitoring module 210 may be temporarily stored by the call manager 122, for example on the memory module 252.
Although the call manager 122 may be used for measuring the quality of any real-time data stream, the example embodiment may find particular application in measuring the quality of sound or voice streams, for example, voice streams used for IP telephony. Thus, at block 354, the generating module 212 may generate a sample voice stream of known quality (e.g. having a MOS of 5), and may transmit the sample voice stream to a destination endpoint, for example switch 112. It will be noted that in the given example, because the call manager 122 is the source endpoint and the switch 112 is the destination endpoint, WAN edge routers 104 and 106 both lie in the path of the sample voice stream. Thus, the quality of the sample stream as received by switch 112 will be affected by the performance of WAN edge routers 104 to 106. In addition, the generating module 212 may transmit the sample voice stream to other destination endpoints, for example switch 114, to gauge the performance of WAN edge router 108. Thus, there may be a plurality of destination endpoints in the system 250 so that the sample voice streams pass through as many WAN edge routers as possible.
In another example embodiment, one of the WAN edge routers 104 to 106 may be the destination endpoint. Instead, or in addition, the call manager 122 may be a destination endpoint, and a router or switch may be a source endpoint. In such a case, the call manager 122 may include the measuring module 262, and the router or switch used as the source endpoint may include the generating module 212. Thus, there may be a plurality of source endpoints, and a single destination endpoint.
The measuring module 262 of switch 112 may measure, at block 356, the quality of the received sample voice stream in accordance with the MOS estimation algorithm, and transmit data indicative of the measured voice quality back to the call manager 122. The comparing module 214 may thereafter compare, at block 358, the measured value of the sample voice stream against an expected quality value. For example, the comparing module 214 may determine what quality value is to be expected based on the network topology and/or the codec used for transmitting the sample voice stream. The comparing module 214, using impairment factors, may thus determine an expected quality of the sample voice signal after transmission. For example, if the sample voice stream was transmitted using the G.711 codec, the expected MOS is 4.10 (refer to table 280 of FIG. 3 b). The MOS lookup table may therefore be similar or identical to table 280. Therefore, if the quality of the sample voice signal as measured by the measuring module 262 is less than 4.10, at block 360, the quality of the sample voice stream is lower than expected for the particular network configuration and codec, and the call manager 122 may then be configured to detect, at block 362, all routers in the path of the sample voice stream, for example by using the RTP traceroute command. In the example system 250, the WAN edge routers 104 to 106 are in the path of the sample voice stream. There may of course be other routers (not shown) in the path as well, these other routers forming part of the WAN 102 backbone. The detecting module 216 may in such a case be configured to identify, at block 364, which of the detected routers are WAN edge routers. As previously mentioned, it may be the WAN edge routers which cause a bottleneck and therefore contribute to a poor quality of service.
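The detection steps at blocks 362 and 364 amount to tracing the stream's path and keeping only the WAN edge routers, roughly as sketched here. `rtp_traceroute` is a hypothetical stand-in for the RTP traceroute functionality the description mentions, and the set of known WAN edge routers is assumed to come from the network inventory.

```python
def detect_wan_edge_in_path(src: str, dst: str,
                            wan_edge_routers: set[str],
                            rtp_traceroute) -> list[str]:
    """Return the WAN edge routers on the sample stream's path, in hop order."""
    hops = rtp_traceroute(src, dst)  # ordered router addresses along the path
    return [hop for hop in hops if hop in wan_edge_routers]
```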
Using the traffic statistics gathered at block 352 by the monitoring module 210, the determining module 218 may then determine, at block 366, which of the detected WAN edge routers 104 to 106 in the sample stream path, if any, are contributing to the poor quality of service, and more specifically, which of these routers' interfaces are contributing to a poor quality of service. For example, if the traffic statistics show that an ATM interface of WAN edge router 104 had (and/or has) a very high memory or CPU usage (for example 80% to 100%) or was (and/or is) discarding an unusually high number of non-error packets (e.g. one in 10 non-error packets were (and/or are) being discarded), it is likely or at least possible that the ATM interface of WAN edge router 104 is contributing to a poor quality of service.
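The determination at block 366 can be as simple as a per-interface threshold test using the example figures from the paragraph above; the exact cut-offs are illustrative, and a deployment would tune them to its own network.

```python
def interface_suspect(cpu_pct: float, mem_pct: float, discard_pct: float) -> bool:
    """Flag an interface using the description's example thresholds:
    80-100% CPU or memory usage, or about one in ten non-error packets
    being discarded."""
    return cpu_pct >= 80.0 or mem_pct >= 80.0 or discard_pct >= 10.0

# The ATM interface of WAN edge router 104 from the example above:
assert interface_suspect(cpu_pct=85.0, mem_pct=40.0, discard_pct=10.0)
```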
It is to be understood that the order of some of the steps/operations described above may be changed and the same result may still be achieved. For example, the step of monitoring the routers, at block 352, may be performed later in the process, for example before or after the quality of the sample voice stream is measured at block 356, or before or after the WAN edge routers 104 to 108 are identified, at block 364.
The reporting module 220 may generate a report (e.g. in the form of a dashboard), at block 368, which summarizes the performance of each interface of each of the identified potentially faulty WAN edge routers 104 to 106 insofar as it relates to transmission quality of real-time data such as voice streams. The network administrator, after seeing the report, may be in a better position to correct the problem, for example by adjusting or bypassing the WAN edge router 104 to 108 which is causing the low quality of service.
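A dashboard-style report of the kind block 368 describes could be rendered as one line per suspect interface, as in this sketch. The column set is an assumption; the patent only requires that the report summarize per-interface performance of the identified WAN edge routers.

```python
def build_report(rows: list[tuple[str, str, float, float, float]]) -> str:
    """rows: (router, interface, cpu %, memory %, discard %) per suspect."""
    lines = [f"{'router':<12}{'interface':<12}{'cpu%':>6}{'mem%':>6}{'disc%':>7}"]
    for router, ifname, cpu, mem, disc in rows:
        lines.append(f"{router:<12}{ifname:<12}{cpu:>6.0f}{mem:>6.0f}{disc:>7.1f}")
    return "\n".join(lines)

print(build_report([("router-104", "ATM0/0", 85.0, 40.0, 10.0)]))
```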
FIG. 5 shows a diagrammatic representation of a machine in the example form of a computer system 400 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 400 includes a processor 402 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 404 and a static memory 406, which communicate with each other via a bus 408. The computer system 400 may further include a video display unit 410 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 400 also includes an alphanumeric input device 412 (e.g., a keyboard), a user interface (UI) navigation device 414 (e.g., a mouse), a disk drive unit 416, a signal generation device 418 (e.g., a speaker) and a network interface device 420.
The disk drive unit 416 includes a machine-readable medium 422 on which is stored one or more sets of instructions and data structures (e.g., software 424) embodying or utilized by any one or more of the methodologies or functions described herein. The software 424 may also reside, completely or at least partially, within the main memory 404 and/or within the processor 402 during execution thereof by the computer system 400, the main memory 404 and the processor 402 also constituting machine-readable media.
The software 424 may further be transmitted or received over a network 426 via the network interface device 420 utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
While the machine-readable medium 422 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals.
Although an embodiment of the present invention has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The call manager 122 and/or switch 112, or any other computer system or network device in accordance with an example embodiment may be in the form of computer system 400.
The example methods, devices and systems described herein may be used for troubleshooting voice quality issues in a network environment. A network administrator may, based on the generated report, identify which network devices are contributing to a poor quality of service. The network administrator may therefore not need to check the performance of every network device in the network, but rather is provided with a shortlist of network devices which are potentially degrading voice quality.

Claims (14)

What is claimed is:
1. A method comprising:
transmitting a sample data stream at a known first quality in a network between a source endpoint and a destination endpoint across a test data path that includes at least two network devices, at least one of the two network devices being a WAN edge router that lies between the source endpoint and the destination endpoint in the test data path;
comparing a measured second quality of the received sample data stream with the known first quality of the transmitted sample data stream;
determining that the measured second quality is less than the known first quality; and
in response to the determination that the measured second quality fails to meet the known first quality, performing operations including:
identifying at least one network device in the test data path;
obtaining device performance data of the at least one network device, wherein the device performance data of the WAN edge router is obtained from an interface of the WAN edge router;
using the device performance data of the WAN edge router to determine if the WAN edge router is contributing to the failure of the measured second quality to meet the known first quality; and
generating a network report based on the device performance data, the network report relating the at least one device in the test data path to a failure of the measured second quality to meet the known first quality.
2. The method of claim 1, wherein the device performance data includes at least one of processor utilization, memory utilization, bandwidth oversubscription, buffer overrun, or a number of non-error packets that are discarded due to buffer overrun.
3. The method of claim 1, further comprising:
monitoring a plurality of network devices to obtain corresponding performance data prior to transmitting the sample data stream;
identifying the at least one network device in the test data path from the monitored network devices; and
obtaining the device performance data of the at least one network device in the test data path from the performance data obtained prior to transmitting the sample data stream.
4. The method of claim 1, wherein the operations performed in response to the determination that the measured second quality fails to meet the known first quality include:
interrogating a WAN edge router in the test data path to obtain device performance data for the WAN edge router.
5. The method of claim 1, wherein the sample data stream includes a voice data stream.
6. A non-transitory machine-readable storage medium storing instructions that, when executed by a machine, cause the machine to perform operations comprising:
transmitting a sample data stream at a known first quality in a network between a source endpoint and a destination endpoint across a test data path that includes at least two network devices, at least one of the two network devices being a WAN edge router that lies between the source endpoint and the destination endpoint in the test data path;
comparing a measured second quality of the received sample data stream with the known first quality of the transmitted sample data stream;
determining that the measured second quality fails to meet the known first quality; and
in response to the determination that the measured second quality fails to meet the known first quality, performing operations including:
identifying at least one network device in the test data path;
obtaining device performance data of the at least one network device, wherein the device performance data of the WAN edge router is obtained from an interface of the WAN edge router;
using the device performance data of the WAN edge router to determine if the WAN edge router is contributing to the failure of the measured second quality to meet the known first quality; and
generating a network report based on the device performance data, the network report relating the at least one device in the test data path to a failure of the measured second quality to meet the known first quality.
7. The non-transitory machine-readable storage medium of claim 6, wherein the device performance data includes at least one of processor utilization, memory utilization, bandwidth oversubscription, buffer overrun, or a number of non-error packets that are discarded due to buffer overrun.
8. The non-transitory machine-readable storage medium of claim 6, wherein the instructions further cause the machine to perform operations comprising:
monitoring a plurality of network devices to obtain corresponding performance data prior to transmitting the sample data stream;
identifying the at least one network device in the test data path from the monitored network devices; and
obtaining the device performance data of the at least one network device in the test data path from the performance data obtained prior to transmitting the sample data stream.
9. The non-transitory machine-readable storage medium of claim 6, wherein the operations performed in response to the determination that the measured second quality fails to meet the known first quality include:
interrogating a WAN edge router in the test data path to obtain device performance data for the WAN edge router.
10. The non-transitory machine-readable storage medium of claim 6, wherein the sample data stream includes a voice data stream.
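The "interrogating a WAN edge router" operation of claims 4 and 9, together with the requirement that device performance data be obtained "from an interface of the WAN edge router", maps naturally onto SNMP polling of per-interface counters. Below is a minimal sketch under that assumption: snmp_get() is a hypothetical stand-in for whatever SNMP client is available, and the OID constants are the standard IF-MIB columns most relevant to the metrics listed in claims 2, 7, and 12.

    # Standard IF-MIB column OIDs, indexed per interface.
    IF_SPEED        = "1.3.6.1.2.1.2.2.1.5"   # ifSpeed: nominal link bandwidth
    IF_IN_DISCARDS  = "1.3.6.1.2.1.2.2.1.13"  # ifInDiscards: non-error inbound drops
    IF_OUT_DISCARDS = "1.3.6.1.2.1.2.2.1.19"  # ifOutDiscards: non-error outbound drops,
                                              # where buffer overruns typically surface

    def snmp_get(host: str, community: str, oid: str) -> int:
        """Hypothetical stand-in; wire this to a real SNMP library."""
        raise NotImplementedError

    def interrogate_wan_edge_router(host: str, community: str, if_index: int) -> dict:
        """Collect per-interface device performance data from a WAN edge router."""
        def col(base_oid: str) -> int:
            return snmp_get(host, community, f"{base_oid}.{if_index}")
        return {
            "if_speed_bps": col(IF_SPEED),
            "in_discards": col(IF_IN_DISCARDS),
            "out_discards": col(IF_OUT_DISCARDS),
        }

Processor and memory utilization (also recited in claims 2, 7, and 12) would be gathered the same way from vendor-specific MIBs; only the OIDs change.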
11. An apparatus comprising:
a network interface unit configured to enable communications over a network; and
a processor configured to:
transmit a sample data stream at a known first quality in the network between a source endpoint and a destination endpoint across a test data path that includes at least two network devices, at least one of the two network devices being a WAN edge router that lies between the source endpoint and the destination endpoint in the test data path;
compare a measured second quality of the received sample data stream with the known first quality of the transmitted sample data stream and determine that the measured second quality fails to meet the known first quality;
identify at least one network device in the test data path;
obtain device performance data of the at least one network device, wherein the device performance data of the WAN edge router is obtained from an interface of the WAN edge router;
use the device performance data of the WAN edge router to determine if the WAN edge router is contributing to the failure of the measured second quality to meet the known first quality; and
generate a network report based on the device performance data, the network report relating the at least one network device in the test data path to a failure of the measured second quality to meet the known first quality.
12. The apparatus of claim 11, wherein the device performance data includes at least one of processor utilization, memory utilization, bandwidth oversubscription, buffer overrun, or a number of non-error packets that are discarded due to buffer overrun.
13. The apparatus of claim 11, wherein the processor is further configured to:
monitor a plurality of network devices to obtain corresponding performance data prior to transmitting the sample data stream;
identify the at least one network device in the test data path from the monitored network devices; and
obtain the device performance data of the at least one network device in the test data path from the performance data obtained prior to transmitting the sample data stream.
14. The apparatus of claim 11, wherein the processor is further configured to interrogate a WAN edge router in the test data path to obtain device performance data for the WAN edge router.
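Finally, the report-generation limitation common to claims 1, 6, and 11 amounts to joining the collected device performance data against the quality-failure determination. One plausible shape for that join is sketched below; the threshold and dictionary keys are illustrative assumptions that continue the two hypothetical helpers above.

    DISCARD_THRESHOLD = 0  # illustrative: treat any non-error discard as suspect

    def build_network_report(path_devices: list, quality_failed: bool) -> dict:
        """Relate each device on the test data path to the quality failure.

        path_devices: list of (device_name, perf) pairs, where perf is the
        dict returned by interrogate_wan_edge_router() above.
        """
        report = {"quality_failure": quality_failed, "suspect_devices": []}
        if not quality_failed:
            return report
        for name, perf in path_devices:
            reasons = []
            if perf.get("out_discards", 0) > DISCARD_THRESHOLD:
                reasons.append("egress packet discards (possible buffer overrun)")
            if perf.get("in_discards", 0) > DISCARD_THRESHOLD:
                reasons.append("ingress packet discards")
            if reasons:
                report["suspect_devices"].append({"device": name, "reasons": reasons})
        return report

The report thus singles out, per the claims, which device or devices on the test data path plausibly contributed to the failure of the measured second quality to meet the known first quality.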
US14/153,137 2006-08-22 2014-01-13 Method and system to identify a network device associated with poor QoS Active US9270544B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/153,137 US9270544B2 (en) 2006-08-22 2014-01-13 Method and system to identify a network device associated with poor QoS

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/466,390 US8630190B2 (en) 2006-08-22 2006-08-22 Method and system to identify a network device associated with poor QoS
US14/153,137 US9270544B2 (en) 2006-08-22 2014-01-13 Method and system to identify a network device associated with poor QoS

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/466,390 Continuation US8630190B2 (en) 2006-08-22 2006-08-22 Method and system to identify a network device associated with poor QoS

Publications (2)

Publication Number Publication Date
US20140129708A1 US20140129708A1 (en) 2014-05-08
US9270544B2 US9270544B2 (en) 2016-02-23

Family

ID=39113306

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/466,390 Active 2028-03-11 US8630190B2 (en) 2006-08-22 2006-08-22 Method and system to identify a network device associated with poor QoS
US14/153,137 Active US9270544B2 (en) 2006-08-22 2014-01-13 Method and system to identify a network device associated with poor QoS

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/466,390 Active 2028-03-11 US8630190B2 (en) 2006-08-22 2006-08-22 Method and system to identify a network device associated with poor QoS

Country Status (1)

Country Link
US (2) US8630190B2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7796532B2 (en) * 2006-05-31 2010-09-14 Cisco Technology, Inc. Media segment monitoring
US8630190B2 (en) 2006-08-22 2014-01-14 Cisco Technology, Inc. Method and system to identify a network device associated with poor QoS
US9185226B2 (en) * 2009-08-13 2015-11-10 Verizon Patent And Licensing Inc. Voicemail server monitoring/reporting via aggregated data
WO2012106925A1 (en) * 2011-07-25 2012-08-16 Huawei Technologies Co., Ltd. Method, apparatus and system for locating faults in ip network
KR101576758B1 (en) * 2011-09-30 2015-12-10 텔레호낙티에볼라게트 엘엠 에릭슨(피유비엘) A method, apparatus and communication network for root cause analysis
US9419866B2 (en) * 2012-11-01 2016-08-16 Huawei Technologies Co., Ltd. Method, node, and monitoring center detecting network fault
US8948020B2 (en) 2012-12-11 2015-02-03 International Business Machines Corporation Detecting and isolating dropped or out-of-order packets in communication networks
US9544220B2 (en) * 2013-02-05 2017-01-10 Cisco Technology, Inc. Binary search-based approach in routing-metric agnostic topologies for node selection to enable effective learning machine mechanisms
MX2018003863A (en) * 2015-09-28 2018-06-20 Evenroute Llc Automatic qos optimization in network equipment.
US10419580B2 (en) 2015-09-28 2019-09-17 Evenroute, Llc Automatic QoS optimization in network equipment
US10567246B2 (en) 2015-12-15 2020-02-18 At&T Intellectual Property I, L.P. Processing performance data of a content delivery network

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6269398B1 (en) 1993-08-20 2001-07-31 Nortel Networks Limited Method and system for monitoring remote routers in networks for available protocols and providing a graphical representation of information received from the routers
US6970924B1 (en) * 1999-02-23 2005-11-29 Visual Networks, Inc. Methods and apparatus for monitoring end-user experience in a distributed network
US6868068B1 (en) * 2000-06-30 2005-03-15 Cisco Technology, Inc. Method and apparatus for estimating delay and jitter between network routers
US20030091033A1 (en) 2001-11-09 2003-05-15 Alcatel Method for optimizing the use of network resources for the transmission of data signals, such as voice, over an IP-packet supporting network
US20040073641A1 (en) 2002-09-30 2004-04-15 Muneyb Minhazuddin Instantaneous user initiation voice quality feedback
US7428300B1 (en) 2002-12-09 2008-09-23 Verizon Laboratories Inc. Diagnosing fault patterns in telecommunication networks
US20060184670A1 (en) 2003-08-29 2006-08-17 Beeson Jesse D System and method for analyzing the performance of multiple transportation streams of streaming media in packet-based networks
US20050198266A1 (en) 2004-02-05 2005-09-08 Cole Robert G. Method for determining VoIP gateway performance and slas based upon path measurements
US20050281204A1 (en) 2004-06-18 2005-12-22 Karol Mark J Rapid fault detection and recovery for internet protocol telephony
US20060250967A1 (en) 2005-04-25 2006-11-09 Walter Miller Data connection quality analysis apparatus and methods
US20070101020A1 (en) 2005-10-28 2007-05-03 Tzu-Ming Lin Packet transmitting method of wireless network
US20070097966A1 (en) 2005-11-03 2007-05-03 Texas Instruments Incorporated Device and method for indicating an initial router of a path in a packet switching network
US20070168195A1 (en) * 2006-01-19 2007-07-19 Wilkin George P Method and system for measurement of voice quality using coded signals
US20080049634A1 (en) 2006-08-22 2008-02-28 Dinesh Goyal METHOD AND SYSTEM TO IDENTIFY A NETWORK DEVICE ASSOCIATED WITH POOR QoS
US8630190B2 (en) 2006-08-22 2014-01-14 Cisco Technology, Inc. Method and system to identify a network device associated with poor QoS

Non-Patent Citations (17)

* Cited by examiner, † Cited by third party
Title
"U.S. Appl. No. 11/466,390, Advisory Action mailed Sep. 18, 2009", 4 pgs.
"U.S. Appl. No. 11/466,390, Applicant's Summary of Examiner Interview filed Jun. 14, 2010", 1 pg.
"U.S. Appl. No. 11/466,390, Final Office Action mailed Jul. 23, 2010", 17 pgs.
"U.S. Appl. No. 11/466,390, Final Office Action mailed Jun. 24, 2009", 26 pgs.
"U.S. Appl. No. 11/466,390, Final Office Action mailed Mar. 15, 2012", 27 pgs.
"U.S. Appl. No. 11/466,390, Non Final Office Action mailed Feb. 17, 2011", 21 pgs.
"U.S. Appl. No. 11/466,390, Non Final Office Action mailed Sep. 16, 2011", 26 pgs.
"U.S. Appl. No. 11/466,390, Non-Final Office Action mailed Jan. 4, 2010", 19 pgs.
"U.S. Appl. No. 11/466,390, Non-Final Office Action mailed Nov. 25, 2008", 23 pgs.
"U.S. Appl. No. 11/466,390, Notice of Allowance mailed Sep. 5, 2013", 6 pgs.
"U.S. Appl. No. 11/466,390, Response filed Aug. 24, 2009 to Final Office Action mailed Jun. 24, 2009", 14 pgs.
"U.S. Appl. No. 11/466,390, Response filed Feb. 25, 2009 to Non Final Office Action mailed Nov. 25, 2008", 14 pgs.
"U.S. Appl. No. 11/466,390, Response filed Jan. 1, 2012 to Sep. 16, 2011", 21 pgs.
"U.S. Appl. No. 11/466,390, Response filed Jul. 16, 2012 to Final Office Action mailed Mar. 15, 2012", 17 pgs.
"U.S. Appl. No. 11/466,390, Response filed Jun. 15, 2011 to Non Final Office Action mailed Feb. 17, 2011", 18 pgs.
"U.S. Appl. No. 11/466,390, Response filed May 3, 2010 to Non Final Office Action mailed Jan. 4, 2010", 10 pgs.
"U.S. Appl. No. 11/466,390, Response filed Nov. 30, 2010 to Final Office Action mailed Jul. 23, 2010", 15 pgs.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130297567A1 (en) * 2012-05-04 2013-11-07 International Business Machines Corporation Data stream quality management for analytic environments
US9460131B2 (en) * 2012-05-04 2016-10-04 International Business Machines Corporation Data stream quality management for analytic environments
US10671580B2 (en) 2012-05-04 2020-06-02 International Business Machines Corporation Data stream quality management for analytic environments
US10803032B2 (en) 2012-05-04 2020-10-13 International Business Machines Corporation Data stream quality management for analytic environments

Also Published As

Publication number Publication date
US8630190B2 (en) 2014-01-14
US20080049634A1 (en) 2008-02-28
US20140129708A1 (en) 2014-05-08

Similar Documents

Publication Publication Date Title
US9270544B2 (en) Method and system to identify a network device associated with poor QoS
US20190190808A1 (en) Bidirectional data traffic control
US7606149B2 (en) Method and system for alert throttling in media quality monitoring
US20070286351A1 (en) Method and System for Adaptive Media Quality Monitoring
EP2530870B1 (en) Systems and methods for measuring quality of experience for media streaming
US9602376B2 (en) Detection of periodic impairments in media streams
EP1793528B1 (en) Method of monitoring the quality of a realtime communication
EP2741439B1 (en) Network failure detecting method and monitoring center
US9197516B2 (en) In-service throughput testing in distributed router/switch architectures
US8248953B2 (en) Detecting and isolating domain specific faults
US20140149572A1 (en) Monitoring and diagnostics in computer networks
Birke et al. Experiences of VoIP traffic monitoring in a commercial ISP
GB2494406A (en) System to detect protocol discrimination by network provider in the event of communication problems
EP4120654A1 (en) Adaptable software defined wide area network application-specific probing
US8619586B2 (en) System and method for providing troubleshooting in a network environment
Dinh-Xuan et al. Study on the accuracy of QoE monitoring for HTTP adaptive video streaming using VNF
Shan et al. A case study of IP network monitoring using wireless mobile devices
US10162733B2 (en) Debugging failure of a service validation test
US20230403434A1 (en) Streaming service rating determination
US20230403209A1 (en) Conferencing service rating determination
US9419866B2 (en) Method, node, and monitoring center detecting network fault
Shaikh Non-Intrusive Network-Based Estimation of Web Quality of Experience Indicators
Kampichler et al. Measuring voice readiness of local area networks
Deri et al. Enterprise Voice-over-IP Traffic Monitoring

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8