US8078691B2 - Web page load time prediction and simulation - Google Patents

Web page load time prediction and simulation

Info

Publication number
US8078691B2
US8078691B2 (application US12/547,704)
Authority
US
United States
Prior art keywords
web
webpage
time
plt
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/547,704
Other versions
US20110054878A1 (en)
Inventor
Ming Zhang
Yi-Min Wang
Albert Greenberg
Zhichun LI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US12/547,704
Assigned to MICROSOFT CORPORATION. Assignors: LI, ZHICHUN; GREENBERG, ALBERT; WANG, YI-MIN; ZHANG, MING
Publication of US20110054878A1
Priority to US13/267,254 (published as US8335838B2)
Application granted
Publication of US8078691B2
Priority to US13/707,042 (published as US9456019B2)
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignors: MICROSOFT CORPORATION
Legal status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F 11/3419 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment by assessing time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3447 Performance evaluation by modeling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3457 Performance evaluation by simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/875 Monitoring of systems including the internet

Definitions

  • Cloud computing refers to the access of computing resources and data via a network infrastructure, such as the Internet.
  • Online service providers may offer a wide range of services that may be hosted in the cloud; and such services may include search, maps, email, and outsourced enterprise applications. Further, online service providers may strive to achieve high levels of end-to-end cloud service performance to sustain and grow their user base. The performance of cloud services has direct impact on user satisfaction. For example, poor end-to-end response times, e.g., long page load times (PLTs), may result in low service usage, which in turn may undermine service income.
  • performance boosting changes may include improvements to the client-side rendering capability of web browsers, improved backend server-side processing capabilities, and the reduction in the domain name system (DNS) resolution time and/or network round-trip time (RTT).
  • cloud computing service implementations are often complex, spanning components on end-system clients, back-end servers, as well as network paths. Thus, performance boosting changes may vary greatly in feasibility, cost and profit generation.
  • Described herein are performance prediction techniques for automatically predicting the impact of various optimizations on the performance of cloud services, and systems for implementing such techniques.
  • the cloud services may include, but are not limited to, web searches, social networking, web-based email, online retailing, online maps, personal health information portals, hosted enterprise applications, and the like.
  • the cloud services are often rich in features and functionally complex. For instance, a sample “driving directions” webpage provided by Yahoo! Maps was examined and found to comprise approximately 100 web objects and several hundred KB of JavaScript code.
  • the web objects may be retrieved from multiple data centers (DCs) and content distribution networks (CDNs). These dispersed web objects may meet only at a client device, where they may be assembled by a browser to form a complete webpage. Moreover, the web objects may have a plethora of dependencies, which means that many web objects cannot be downloaded until some other web objects are available.
  • optimizations of the cloud services may include, but are not limited to, modifications of content distribution networks, improvements to client-side rendering capability of web browsers, improved backend server-side processing capabilities, and reductions in DNS resolution time and/or network round-trip time (RTT).
  • techniques described herein, and the implementing systems may enable performance predictions via one or more inferences of dependencies between web objects, and the simulated download of web objects in a web page via a web browser.
  • the simulation may further include the simulated modification of parameters that affect cloud service performance.
  • parameters may include, but are not limited to, round-trip time (RTT), network processing time (e.g., DNS lookup time, TCP handshake time, or data transfer time), client execution time, and server processing time.
  • the techniques, and the implementing systems, described herein may enable the assessment of cloud service performance under a wide range of hypothetical scenarios.
  • the techniques and systems described herein may enable the prediction of parameter settings that provide optimal cloud service performance, i.e., shortest page load time (PLT).
  • web page load time prediction and simulation includes determining an original page load time (PLT) of a webpage and timing information of each web object of the web page in a scenario.
  • each web object is also annotated with client delay information based on a parental dependency graph (PDG) of the webpage.
  • the time information of each web object is further adjusted to reflect a second scenario that includes one or more modified parameters.
  • the page loading of the web page is then simulated based on the adjusted timing information of each web object and the PDG of the web page to estimate a new PLT of the web page.
  • FIG. 1 is a block diagram that illustrates an example architecture that implements automated performance prediction for cloud services, in accordance with various embodiments.
  • FIG. 2 is a block diagram that illustrates selected components for automatically predicting the performance of cloud services, in accordance with various embodiments.
  • FIG. 3 is a block diagram that illustrates the extraction of stream parents and their dependency offsets for web objects, in accordance with various embodiments.
  • FIG. 4 shows block diagrams that illustrate four scenarios of object timing relationship for a single HTTP object, in accordance with various embodiments.
  • FIG. 5 is a flow diagram that illustrates an example process to extract a parental dependency graph (PDG) for a webpage, in accordance with various embodiments.
  • FIG. 6 is a flow diagram that illustrates an example process to derive and compare a new page load time (PLT) of a webpage with an original PLT of the web page, in accordance with various embodiments.
  • FIG. 7 is a block diagram that illustrates a representative computing device that may implement automated performance prediction for cloud services.
  • Cloud service implementations are often complex, spanning components on end system clients, backend servers, as well as a variety of network paths.
  • the performance of the cloud services may be improved, i.e., page load time (PLT) minimized, via the modification of parameters that affect cloud service performance.
  • Such parameters may include, but are not limited to, round-trip time (RTT), network processing time (e.g., DNS lookup time, TCP handshake time, or data transfer time), client execution time, and server processing time.
  • the difficulty of assessing the impact of such parameter modifications may be attributed to the complexity of the cloud service implementations, the variability of the interdependencies between web objects that access the cloud services, as well as the different options for optimizing access to cloud services. In fact, manual efforts to assess the impact of parameter modifications on the performance of cloud services can, in practice, be overly burdensome and error prone.
  • the automated performance prediction for cloud services may enable the simulated modification of parameters that affect cloud service performance.
  • the techniques, and the implementing systems, described herein may enable the prediction of parameter settings that provide optimal cloud service performance, i.e., shortest PLT. Therefore, the use of automated performance prediction for cloud services may improve PLTs of cloud services without the error prone and/or time consuming manual trial-and-error parameter modifications and performance assessments.
  • the data generated by the automated performance predictions may be used by online service providers to implement parameter changes that improve the performance of cloud services without the addition of data centers, servers, switches, and/or other hardware infrastructure.
  • user satisfaction with the performance of the cloud services may be maximized at a minimal cost.
  • FIG. 1 is a block diagram that illustrates an example architecture 100 that implements automated performance prediction for cloud services.
  • this architecture 100 implements automated performance prediction to help improve the end-to-end response times of the cloud services.
  • the example architecture 100 may be implemented on a “computing cloud”, which may include a plurality of data centers, such as the data centers 102 ( 1 )- 102 ( n ).
  • the data center 102 ( 1 ) may include one or more servers 104 ( 1 )
  • the data center 102 ( 2 ) may include one or more servers 104 ( 2 )
  • the data center 102 ( 3 ) may include one or more servers 104 ( 3 )
  • the data center 102 ( n ) may include one or more servers 104 ( n ).
  • the data centers 102 ( 1 )- 102 ( n ) may provide computing resources and data storage/retrieval capabilities.
  • computing resources may refer to any hardware and/or software that are used to process input data to generate output data, such as by the execution of algorithms, programs, and/or applications.
  • Each of the respective data centers may be further connected to other data centers via the network infrastructure 106 , such as the Internet.
  • the data centers 102 ( 1 )- 102 ( n ) may be further connected to one or more clients, such as a client 108 , via the network infrastructure 106 .
  • the data centers 102 ( 1 )- 102 ( n ) may use their computing resources and data storage/retrieval capabilities to provide cloud services.
  • These cloud services may include, but are not limited to, web searches, social networking, web-based email, online retailing, online maps, and the like.
  • Cloud services may be delivered to users in the form of web pages that can be rendered by browsers.
  • Sophisticated web pages may contain numerous static and dynamic web objects arranged hierarchically.
  • a browser may first download a main HTML object that defines the structure of the webpage. The browser may then download a Cascading Style Sheets (CSS) object that describes the presentation of the webpage.
  • the main HTML object may be embedded with many JavaScript objects that are executed locally to interact with a user.
  • an HTML or a JavaScript object of the webpage may request additional objects, such as images and additional JavaScript objects. This process may continue recursively until all relevant objects are downloaded.
  • a cloud service analyzer 110 may be implemented on a client device 108 .
  • the client device 108 may be a computing device that is capable of receiving, processing, and outputting data.
  • the client device 108 may be a desktop computer, a portable computer, a game console, a smart phone, or the like.
  • the cloud service analyzer 110 may predict the impact of various optimizations on user-perceived performance of cloud services.
  • the cloud service analyzer 110 may derive a parental dependency graph (PDG) 112 for a particular webpage.
  • PDG 112 may represent the various web object dependencies that may be present in a webpage.
  • the dependencies between web objects may be caused by a number of reasons.
  • the common ones may include, but are not limited to: (1) the embedded web objects in an HTML page depend on the HTML page; (2) since at least some web objects are dynamically requested during JavaScript execution, these web objects may depend on the corresponding JavaScript; (3) the download of an external CSS or JavaScript object may block the download of other types of web objects in the same HTML page; and (4) web object downloads may depend on certain events in a JavaScript or the browser.
  • the cloud service analyzer 110 may further obtain a total page load time (PLT) 114 of the webpage 116 by performing a page load in a baseline scenario.
  • the cloud service analyzer 110 may also obtain parameters, i.e., timing information 118 , related to the download of each web object in the webpage 116 .
  • the timing information 118 may include client delay, network delay, and server delay that occur during the page load of the webpage.
  • the client delay may be due to various browser activities such as page rendering and JavaScript execution.
  • the network delay may be due to DNS lookup time, transmission control protocol (TCP) handshake time, and data transfer time. Both TCP handshake time and data transfer time may be influenced by network path conditions such as RTT and packet loss.
  • the server delay may be incurred by various server processing tasks such as retrieving static content and generating dynamic content.
  • the cloud service analyzer 110 may add additional client delay 120 for each web object to the timing information 118 .
  • the cloud service analyzer 110 may infer the additional client delay by combining the timing information 118 with the PDG 112 of the webpage 116 .
  • the cloud service analyzer 110 may receive a new scenario that includes modified parameters, i.e., modified timing information 122 .
  • the cloud service analyzer 110 may receive modifications to RTT, modifications to network processing time, modifications to client execution time, modifications to server processing time, and/or the like.
  • the cloud service analyzer 110 may simulate the page loading of all web objects of the webpage 116 based on the modified timing information 122 and the PDG 112 to estimate the second PLT 124 of the webpage.
  • the cloud service analyzer 110 may compare the second PLT 124 of the webpage to the first PLT 114 of the webpage to determine whether the load time has been improved via the parameters modifications (e.g., whether a shorter load time is achieved).
  • the cloud service analyzer 110 may test a plurality of new scenarios to derive improved parameter settings that minimize PLT.
  • FIG. 2 is a block diagram that illustrates selected components for performing automated performance prediction for cloud services.
  • the selected components may be implemented on a client device 108 ( FIG. 1 ).
  • the client device 108 may include one or more processors 202 and memory 204 .
  • the memory 204 may include volatile and/or nonvolatile memory, removable and/or non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data.
  • Such memory may include, but is not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and is accessible by a computer system.
  • the components may be in the form of routines, programs, objects, and data structures that cause the performance of particular tasks or implement particular abstract data types.
  • the memory 204 may store components.
  • the components, or modules, may include routines, program instructions, objects, and/or data structures that perform particular tasks or implement particular abstract data types.
  • the selected components may include a measurement engine 206, a dependency extractor 222, a performance predictor 228, a data storage module 238, a comparison engine 240, and a user interface module 242.
  • the various components may be part of the cloud server analyzer 110 ( FIG. 1 ).
  • the measurement engine 206 may collect packet traces 216 that enable the cloud service analyzer 110 to determine the page load times (PLTs) of web pages, as well as determine DNS, TCP handshake, and HTTP object information.
  • the measurement engine 206 may include a set of measurement agents 208 located at different network nodes and a central controller 210 .
  • the controller 210 may use an application updater 212 to upgrade the script snippets that simulate user interaction with particular websites and workload (e.g., the input to cloud services).
  • the controller 210 may store packet traces 216 collected from the different agents 208 by a data collector 214 .
  • the measurement engine 206 may run an automated web robot 218 that drives a full-featured web browser.
  • the web robot 218 may be used to simulate user inputs and to request web pages. Further, the web robot 218 may call a trace logger 220 to log the packet traces 216 multiple times so that the measurement engine 206 may recover the distributional properties of a particular page loading process.
  • the web robot 218 may also take control of a web browser via a browser plug-in.
  • the web robot 218 may include a script library that enables the web robot to control browser related functions. These functions may include cache control, browser parameter adjustment, as well as user input simulation.
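  • the patent does not tie the web robot 218 to any particular tooling; purely as an illustration, a present-day robot might drive a real browser through Selenium WebDriver, as in the following sketch (the function name, the options, and the use of the Navigation Timing API are assumptions, not part of the patent):

      from selenium import webdriver

      def load_page_and_measure(url: str, fresh_profile: bool = True) -> float:
          """Drive a full-featured browser to load `url` and return an approximate load time in seconds."""
          options = webdriver.ChromeOptions()
          if fresh_profile:
              # Avoid serving objects from the browser cache between measurement runs.
              options.add_argument("--incognito")
          driver = webdriver.Chrome(options=options)
          try:
              driver.get(url)  # blocks until the browser fires the load event
              # Navigation Timing gives the load time as observed inside the browser.
              elapsed_ms = driver.execute_script(
                  "return performance.timing.loadEventEnd - performance.timing.navigationStart;")
              return elapsed_ms / 1000.0
          finally:
              driver.quit()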
  • the dependency extractor 222 may include an algorithm to infer dependencies between web objects by perturbing the download times of individual web objects.
  • the algorithm may leverage the fact that the delay of an individual web object will be propagated to all other dependent web objects. Accordingly, this time perturbation may be systematically applied to discover web object dependencies.
  • Modern web pages may contain many types of web objects such as HTML, JavaScript, CSS, and images. These embedded web objects may be downloaded recursively instead of all at once.
  • the main HTML may contain a JavaScript whose execution will lead to additional downloads of HTML and image objects.
  • a web object may be classified as being dependent on another if the former cannot be downloaded until the latter is available.
  • the dependencies between web objects can be caused by a number of reasons.
  • the common reasons may include, but are not limited to: (1) the embedded web objects in an HTML page may depend on the HTML page; (2) since web objects are dynamically requested during JavaScript execution, these web objects may depend on the corresponding JavaScript; (3) the download of an external CSS or JavaScript object may block the download of other types of objects in the same HTML page; and (4) web object downloads may depend on certain events in a JavaScript or the browser. For instance, a JavaScript may download image B only after image A is loaded. In other words, given an image A, the dependent web objects of image A usually cannot be requested before image A is completely downloaded.
  • a browser may render an HTML page in a streamlined fashion, in which the HTML page may be partially displayed before the downloading finishes. For instance, if an HTML page has an embedded image with tag <img>, the image may be downloaded and displayed in parallel with the downloading of the HTML page. In fact, the image download may start once the tag <img> (identified by a byte offset in the HTML page) has been parsed.
  • web objects that can be processed in a streamlined fashion may be defined as stream objects, and all HTML objects may be stream objects.
  • the notation offset A (X) may be used to denote the dependency offset of a dependent object X, i.e., the byte offset in the stream object A (for example, the byte offset of the <img> tag) that must be loaded before X can be requested.
  • browsers that exhibit streamlined processing behavior may include the Internet Explorer® browser produced by the Microsoft® Corporation of Redmond, Wash.
  • a different kind of notation may be used to denote dependencies between non-stream objects.
  • the notation descendant (X) may be used to denote the set of objects that depend on the non-stream object X.
  • the notation ancestor (X) may be used to denote the set of web objects that the non-stream object X depends on.
  • the object whose loading immediately precedes the loading of the non-stream object X may be denoted as the last parent of the non-stream object X.
  • if the object Y is a stream parent of a non-stream object X, the available-time of the object Y (with respect to X) may be the time at which the dependency offset Y (X) has been loaded.
  • otherwise, the available-time of the object Y may be the time at which the object Y is completely loaded.
  • the available-time of the object Y may be used to estimate the start time of the non-stream object X's download and predict the page load time (PLT) of a webpage.
  • in any given page load, a non-stream object X has only one last parent.
  • the last parent of the non-stream object X may change across different page loads due to variations in the available-time of its ancestors.
  • parent (X) may be used to denote the object in ancestor (X) that may be the last parent of the non-stream object X.
  • a parental dependency graph (PDG) 112 may be used to encapsulate the parental relationships between web objects in the webpage.
  • each link Y → X means Y is a parent of X.
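  • the patent does not prescribe a concrete data structure for the PDG 112; as a minimal sketch (all names and offsets hypothetical), the graph can be held as a mapping from each object to its parents, with stream parents additionally carrying their dependency offsets:

      from __future__ import annotations
      from dataclasses import dataclass, field

      @dataclass
      class PDG:
          # parents[X] maps each parent Y of X to offset_Y(X) in bytes when Y is a
          # stream parent, or to None when Y is a non-stream parent (X waits for all of Y).
          parents: dict[str, dict[str, int | None]] = field(default_factory=dict)

          def add_edge(self, parent: str, child: str, offset: int | None = None) -> None:
              self.parents.setdefault(child, {})[parent] = offset

          def roots(self) -> list[str]:
              """Objects with no parents, e.g., the main HTML object of the webpage."""
              children = set(self.parents)
              all_objects = children | {p for ps in self.parents.values() for p in ps}
              return [obj for obj in all_objects if not self.parents.get(obj)]

      # Example mirroring the FIG. 3 walkthrough below: H embeds JavaScript J early and
      # image I late, and I additionally depends on J (offset values are arbitrary).
      pdg = PDG()
      pdg.add_edge("H", "J", offset=120)
      pdg.add_edge("H", "I", offset=45000)
      pdg.add_edge("J", "I")       # J is a non-stream parent of I
      print(pdg.roots())           # ['H']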
  • the dependency extractor 222 may extract the dependencies of a webpage by perturbing the download of individual web objects.
  • the dependency extractor 222 may leverage the fact that the delay of an individual web object may be propagated to all other dependent web objects. Accordingly, the dependency extractor 222 may extract the stream parent of a web object and the corresponding dependency offset. For example, suppose a web object X has a stream parent Y; to discover this parental relationship, the available-time of offset Y (X) should be earlier than the available-times of all of the other parents of the web object X in a particular page load. Thus, the dependency extractor 222 may control not only the download of each non-stream parent of the web object X as a whole, but also the partial download of each stream parent of the web object X.
  • the dependency extractor 222 may discover ancestors and descendants for the web objects.
  • the dependency extractor 222 may use a web proxy 224 to delay web object downloads of a webpage and extract the list of web objects in the webpage by obtaining their corresponding Universal Resource Identifiers (URIs).
  • the dependency extractor 222 may discover the descendants of each web object using an iterative process.
  • the dependency extractor 222 may reload the page and delay the download of a web object, such as web object X, for δ seconds in each round of the iterative process.
  • the web object X may be a web object that has not been processed, and δ may be greater than the normal loading time of the web object X.
  • the descendants of the web object X may be web objects whose download is delayed together with the web object X.
  • the iterative process may be repeated until the descendants of all the web objects are discovered.
  • the dependency extractor 222 may use a browser robot 226 to reload pages.
  • the operation of the dependency extractor may be based on two assumptions: (1) the dependencies of a webpage do not change during the discovery process; and (2) introduced delays will not change the dependencies in the webpage.
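  • a rough sketch of this descendant-discovery loop is given below; the helper reload_with_delay, the baseline start times, and the 0.8·δ threshold are all assumptions used only for illustration:

      def discover_descendants(objects, baseline_start, reload_with_delay, delta=5.0):
          """For each web object X, reload the page with X delayed by `delta` seconds;
          objects whose download start is pushed back together with X are recorded
          as descendants of X."""
          descendants = {}
          for x in objects:
              perturbed_start = reload_with_delay(x, delta)   # {object: start time in this load}
              descendants[x] = {
                  y for y in objects
                  if y != x and perturbed_start[y] - baseline_start[y] >= 0.8 * delta
              }
          return descendants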
  • the dependency extractor 222 may also extract non-stream parents of each web object during the process of extracting dependencies of the web objects.
  • non-stream web object X may be the parent of descendant web object Z if and only if there does not exist a web object Y that is also the descendant of web object X and the ancestor of descendant web object Z.
  • if such a web object Y exists, the available-time of the web object Y may be later than that of the non-stream web object X, because the web object Y cannot be downloaded until the non-stream web object X is available; this implies the web object X cannot be the parent of the web object Z.
  • the dependency extractor 222 may use an algorithm that takes a set of web objects and the set of descendants of each web object as input and computes the parent set of each web object.
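  • the patent's own pseudo code is not reproduced in this extract; the following minimal sketch simply applies the parental rule stated above to the descendant sets (a plain nested loop, not the patent's listing):

      def compute_parents(descendants):
          """descendants: dict mapping each web object to the set of its descendants.
          Returns a dict mapping each object to its set of (non-stream) parents."""
          parents = {}
          for x, desc_x in descendants.items():
              for z in desc_x:
                  # X is a parent of Z unless some other descendant Y of X is an ancestor of Z.
                  blocked = any(z in descendants.get(y, set()) for y in desc_x if y != z)
                  if not blocked:
                      parents.setdefault(z, set()).add(x)
          return parents

      # With the FIG. 3 objects: descendants of H are {J, I}, descendants of J are {I}.
      print(compute_parents({"H": {"J", "I"}, "J": {"I"}}))
      # {'J': {'H'}, 'I': {'J'}}  -> H is not a parent of I because J sits between them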
  • an HTML stream object H 302 may contain a JavaScript J 304 and an image object I 306 .
  • JavaScript J 304 and image object I 306 may be embedded in the beginning and the end of stream object H 302 respectively (offset H (J) < offset H (I)).
  • image object I 306 cannot be downloaded until JavaScript J 304 is executed. This causes the image object I 306 to depend on both the stream object H 302 and the JavaScript J 304 , while the JavaScript J 304 only depends on the stream object H 302 .
  • the stream object H 302 cannot be the parent of the image object I 306 since the JavaScript object J 304 is the descendant of stream object H 302 and the ancestor of the image object I 306 . Nonetheless, when the download of the stream object H 302 is slow, the JavaScript J 304 may be downloaded and executed before the offset H (I) becomes available. In such a case, the stream object H 302 may become the last parent of the image object I 306 .
  • the dependency extractor 222 may determine whether the stream object H 302 is the parent of the image object I 306 . In various embodiments, the dependency extractor 222 may first load an entire web page and control the download of the stream object H 302 at an extremely low rate ε. If the stream object H 302 is the parent of the image object I 306 , all of the other ancestors of the image object I 306 may be available by the time offset H (I) is available. The dependency extractor 222 may then estimate offset H (I) with offset H (I)′, whereby the latter is the offset of stream object H 302 that has been downloaded when the request for the image object I 306 starts to be sent out.
  • offset H (I)′ may be directly inferred from network traces and is usually larger than offset H (I). This is because it may take some extra time to request image object I after offset H (I) is available. However, since stream object H 302 is downloaded at an extremely low rate, these two offsets should be very close.
  • the dependency extractor 222 may perform an additional parental test to determine whether stream object H 302 is the parent of image object I 306 . During the testing process, the dependency extractor 222 may reload the entire webpage. However, for this reload, the dependency extractor 222 may control the download of stream object H 302 at the same low rate ε as well as delay the download of all the known non-stream parents of image object I 306 by δ.
  • stream object H 302 may be designated as the last parent of image object I 306 .
  • the parental relationship between the stream objects in the webpage, as well as their dependency offsets may be further encapsulated by the PDG 112 ( FIG. 1 ).
  • the choice of the parameters δ and ε may reflect a tradeoff between measurement accuracy and efficiency.
  • a small ε may enable the dependency extractor 222 to estimate offset H (I) more accurately but may also lead to a relatively long processing time, as the parameter ε may directly affect the accuracy of the parental tests. For example, if ε is too small, the results may be susceptible to noise, increasing the chance of missing the true stream parents. However, if ε is too large, the dependency extractor 222 may mistakenly infer a parent relationship because offset H (I)″ − offset H (I)′ (where offset H (I)″ is the counterpart of offset H (I)′ observed during the parental test) is bounded by size H − offset H (I), where size H is the size of stream object H 302 .
  • additional embodiments may use other values for δ and ε provided they supply adequate tradeoffs between measurement accuracy and efficiency.
  • the performance predictor 228 may predict the performance of cloud services under hypothetical scenarios. Given the page load time (PLT) of a baseline scenario, the performance predictor 228 may predict the new PLT when there are changes to a plurality of parameters.
  • the changes to the parameters may include changes in the client delay, the server delay, the network delay (e.g., DNS lookup time, TCP handshake time, or data transfer time), and/or the RTT.
  • the performance predictor 228 may help service providers identify performance bottlenecks and devise effective optimization strategies. In other words, the performance predictor 228 may simulate page loading processes of a browser under a wide range of hypothetical scenarios.
  • the performance predictor 228 may first infer the timing information of each web object from the network trace of a webpage load in a baseline scenario.
  • the performance predictor 228 may first obtain the timing information from the measurement engine 206 .
  • the performance predictor 228 may further annotate each web object with additional information related to client delay.
  • the performance predictor 228 may adjust the web object timing information to reflect changes from the baseline scenario to the new one.
  • the performance predictor 228 may simulate the page load process with new web object timing information to estimate the new page load PLT. Except for the factors adjusted, the page load process of the new scenario may inherit properties from the baseline scenario, including non-deterministic factors, such as whether a domain name is cached by the browser.
  • the performance predictor 228 may infer web objects and their timing information from network traces of a page load collected at the client, such as the client 108 ( FIG. 1 ).
  • the performance predictor 228 may include a trace analyzer 230 that recovers DNS, TCP and HTTP object information.
  • the performance predictor 228 may use TCP reassembly and HTTP protocol parsing to recover the HTTP object information.
  • the trace analyzer 230 may use a DNS parser to recover the domain name related to the DNS requests and responses.
  • the performance predictor 228 may leverage a network intrusion detection tool component to recover the timing and semantic information from the raw packet trace.
  • the trace analyzer 230 may estimate the RTT of each connection by the time between SYN and SYN/ACK packets. Based on the TCP self-clocking behavior, the trace analyzer 230 may estimate the number of round trips of the HTTP transfer taken by a web object.
  • the packets in one TCP sending window are relatively close to each other (e.g., less than one RTT), but the packets in different sending windows are about one RTT apart (approximately 1 ± 0.25 RTT). This approach may help the trace analyzer 230 to identify the additional delay added by the web servers, which is usually quite different from one RTT.
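  • as an illustration of these two heuristics, the sketch below estimates the connection RTT from the SYN/SYN-ACK timestamps and counts sending windows by looking for inter-packet gaps close to one RTT (the 0.75·RTT cut-off is an assumption, chosen to match the 1 ± 0.25 RTT observation above):

      def estimate_rtt(syn_time: float, synack_time: float) -> float:
          """RTT of a connection, approximated by the SYN to SYN/ACK interval."""
          return synack_time - syn_time

      def count_round_trips(packet_times: list, rtt: float) -> int:
          """Number of TCP sending windows (round trips) in one HTTP transfer."""
          if not packet_times:
              return 0
          rounds = 1
          for prev, cur in zip(packet_times, packet_times[1:]):
              if cur - prev > 0.75 * rtt:   # gap of roughly one RTT -> new sending window
                  rounds += 1
          return rounds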
  • the performance predictor 228 may be client-deployable since it does not require any application-level instrumentations.
  • the performance predictor 228 may identify object timing information 232 for each web object.
  • the timing information 232 may include information for three types of activity: (1) DNS lookup time: the time used for looking up a domain name; (2) TCP connection time: the time used for establishing a TCP connection; and (3) HTTP time: the time used to load a web object.
  • HTTP time may be further decomposed into three parts: (i) request transfer time: the time to transfer the first byte to the last byte of the HTTP request; (ii) response time: the time from when the last byte of the HTTP request is sent to when the first byte of the HTTP reply is received, which may include one RTT plus server delay; and (iii) reply transfer time: the time to transfer the first byte to the last byte of an HTTP reply.
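  • a hypothetical per-object record following this decomposition might look like the sketch below (field names are illustrative, not taken from the patent):

      from dataclasses import dataclass

      @dataclass
      class ObjectTiming:
          dns_time: float                # time to look up the domain name (0 if cached)
          tcp_connect_time: float        # time to establish a TCP connection (0 if reused)
          request_transfer_time: float   # first byte to last byte of the HTTP request
          response_time: float           # last request byte sent to first reply byte received (~1 RTT + server delay)
          reply_transfer_time: float     # first byte to last byte of the HTTP reply

          @property
          def http_time(self) -> float:
              return self.request_transfer_time + self.response_time + self.reply_transfer_time

          @property
          def total(self) -> float:
              return self.dns_time + self.tcp_connect_time + self.http_time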
  • the performance predictor 228 may infer the RTT for each TCP connection.
  • the RTT of a TCP connection may be quite stable since the entire page load process usually lasts for only a few seconds.
  • the performance predictor 228 may also infer the number of round-trips involved in transferring an HTTP request or reply. Such information may enable the performance predictor 228 to predict transfer times when RTT changes.
  • the performance predictor 228 may further annotate each object with additional timing information related to client delay.
  • the timing information 232 may be supplemented with the additional timing information related to client delay. For example, when the last parent of a web object X becomes available, the browser may not issue a request for the web object X immediately. This is because the browser may need time to perform some additional processing, e.g., parsing HTML pages or executing JavaScripts.
  • the performance predictor 228 may use client delay to denote the time from when the last parent of the web object X is available to when the browser starts to request the web object X.
  • the performance predictor 228 may infer client delay of each web object by combining the obtained object timing information of each web object with the PDG (e.g., PDG 112 ) of the webpage.
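  • in code form, and reusing the PDG sketch above, this inference amounts to a subtraction (request_start and available_time are hypothetical lookups derived from the packet trace):

      def client_delay(x, request_start, available_time, pdg):
          """Client delay of object x: time between its last parent becoming available
          and the browser issuing the HTTP request for x."""
          # available_time(Y, X) follows the definition above: when offset_Y(X) has been
          # loaded if Y is a stream parent of X, or when Y is completely loaded otherwise.
          last_parent_ready = max(available_time(parent, x) for parent in pdg.parents[x])
          return request_start[x] - last_parent_ready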
  • when a web object is loaded, the first activity may be DNS, TCP connection, or HTTP, depending on the current state and behavior of the browser.
  • browsers may limit the maximum number of TCP connections to a host server. For example, some browsers (e.g., Internet Explorer® 7) may limit the number of TCP connections to six by default. Such limitations may cause the request for a web object to wait for available connections even when the request is ready to be sent. Therefore, the client delay that the performance predictor 228 observes in a trace may be longer than the actual browser processing time. To overcome this problem, when collecting the packet traces in the baseline scenario for the purpose of prediction, the performance predictor 228 may set the connection limit per host of the browser to a large number, for instance, 30. This may help to reduce the effects of connection waiting time.
  • the performance predictor 228 may adjust the timing information 232 for each web object according to a new scenario.
  • the timing information 232 may be adjusted with new input parameters 122 ( FIG. 1 ) that modify one or more of the server delay, the client delay, the network delay, the RTT time, etc.
  • the new input parameters 122 ( FIG. 1 ) may be inputted by a user or a testing application. The modifications may increase or decrease the value of each delay or time.
  • the performance predictor 228 may add the input server delay change, server Δ, to the response time of each web object to reflect the server delay change in the new scenario.
  • the performance predictor 228 may use similar methods to adjust DNS activity and client delay for each web object with the new scenario inputs 236 .
  • the performance predictor 228 may also adjust for RTT changes. Assuming the HTTP request and response transfers involve m and n round-trips for the web object X, the performance predictor 228 may add (m+n+1)×rtt Δ to the HTTP activity of the web object X, and rtt Δ to the TCP connection activity if a new TCP connection is required for loading the web object X. In such embodiments, the performance predictor 228 may operate under the assumption that an RTT change has little impact on the number of round-trips involved in loading a web object.
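  • the sketch below applies these adjustments to the ObjectTiming record from the earlier sketch; the delta arguments stand in for the new input parameters 122 , and folding the (m+n+1)×rtt Δ term into the response time is a simplification of "the HTTP activity":

      def adjust_timing(t, m, n, new_connection,
                        server_delta=0.0, dns_delta=0.0, rtt_delta=0.0):
          """Shift one object's timing from the baseline scenario to a new scenario.
          m, n: inferred round trips for the HTTP request and reply."""
          t.response_time += server_delta                    # server processing change
          t.dns_time = max(0.0, t.dns_time + dns_delta)      # DNS lookup change
          t.response_time += (m + n + 1) * rtt_delta         # RTT change over the HTTP activity
          if new_connection:                                 # plus one RTT change for the TCP handshake
              t.tcp_connect_time = max(0.0, t.tcp_connect_time + rtt_delta)
          return t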
  • the performance predictor 228 may further predict page load time (PLT) based on the web object timing information 232 by simulating a page load process. Since web object downloads are not independent of each other, the download of a web object may be blocked because its dependent objects are unavailable or because there are no TCP connections ready for use. Accordingly, the performance predictor 228 may account for these limitations when simulating the page load process by taking into account the constraints of a browser and the corresponding PDG (e.g., PDG 112 ) of the webpage.
  • under the HyperText Transfer Protocol (HTTP/1.1), a browser may use persistent TCP connections that can be reused for multiple HTTP requests and replies. The browser may attempt to keep the number of parallel connections small.
  • the browser may open a new connection only when it needs to send a request and all existing connections are occupied by other requests or replies.
  • the browser may be further configured with an internal parameter to limit the maximum number of parallel connections with a particular host. Such limit may be commonly applied to a host instead of to an IP address. However, if multiple hosts map to the same IP address, the number of parallel connections with that IP address may exceed the limit.
  • the performance predictor 228 may set the number of possible parallel connections according to the connection limitations of a particular browser. However, in other embodiments, the number of possible parallel connections may be modified (e.g., increased or decreased).
  • the new input parameters 122 ( FIG. 1 ) of a new scenario may further include the number of possible parallel connections.
  • the web browsers may also share some common features. For example, loading a web object in a web browser may trigger multiple activities including looking up a DNS name, establishing a new TCP connection, waiting for an existing TCP connection, and/or issuing an HTTP request. Five possible combinations of these activities are illustrated below in Table I. A “-” in Table I means that the corresponding condition does not matter for the purpose of predicting page load time (PLT). The activities involved in each case are further illustrated in FIG. 4 .
  • FIG. 4 shows block diagrams that illustrate scenarios 402 - 408 of web object timing relationship for a single HTTP object, in accordance with various embodiments.
  • a browser may load a web object from a domain with which it already has established TCP connections. However, because all the existing TCP connections are occupied and the number of parallel connections has reached a maximum limit, the browser may have to wait for the availability of an existing connection to issue the new HTTP request.
  • Case V is illustrated as scenario 408 of FIG. 4 .
  • the performance predictor 228 may estimate a new PLT of a webpage by using prediction engine 234 to simulate the corresponding page load process.
  • the prediction engine 234 may use an algorithm that takes the adjusted timing information of each web object in the web page, as well as the parental dependency graph (PDG) of the web page as inputs, and simulate the page loading process for the webpage to determine a new page load time (PLT).
  • the algorithm may be expressed in pseudo code (“PredictPLT”) as follows:
  • PredictPLT:
        Insert root objects into CandidateQ
        While (CandidateQ not empty)
          1) Get earliest candidate C from CandidateQ
          2) Load C according to conditions in Table 1
          3) Find new candidates whose parents are available
          4) Adjust timings of new candidates
          5) Insert new candidates into CandidateQ
        Endwhile
  • the PLT may be estimated by the algorithm as the time when all the web objects are loaded.
  • the algorithm may keep track of four time variables: (1) T p : when the web object X's last parent is available; (2) T r : when the HTTP request for the web object X is ready to be sent; (3) T f : when the first byte of the HTTP request is sent; and (4) T l : when the last byte of the HTTP reply is received.
  • FIG. 4 illustrates the position of these time variables in four different scenarios.
  • the algorithm may maintain a priority queue CandidateQ that contains the web objects that can be requested. The objects in CandidateQ may be sorted based on T r .
  • the algorithm may insert the root objects of the corresponding PDG of the webpage into the queue CandidateQ with their T r set to 0.
  • a webpage may have one main HTML object that serves as the root of the PDG.
  • multiple root objects may exist due to asynchronous JavaScript and Extensible Markup Language (XML) downloading, such as via AJAX.
  • in each iteration, the algorithm may perform the following tasks: (1) obtain the object C with the smallest T r from CandidateQ; (2) load the object C according to the conditions listed in Table 1, updating T f and T l based on whether the loading involves looking up DNS names, opening new TCP connections, waiting for existing connections, and/or issuing HTTP requests ( T f and T l may be used to determine when a TCP connection is occupied by the object C); (3) after the object C is loaded, find all the new candidates whose last parent is the object C; (4) adjust the T p and T r of each new candidate X: if the object C is a non-stream object, T p of the object X may be set to T l of the object C, while if the object C is a stream object, T p of the object X may be set to the available-time of offset C (X); in either case, T r of the object X may be set to T p plus the client delay of the object X; and (5) insert the new candidates into CandidateQ.
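  • a heavily simplified sketch of this simulation loop is shown below; it reuses the PDG and ObjectTiming sketches above, treats every parent as non-stream, and ignores per-host connection limits, so it illustrates the CandidateQ structure only, not the full conditions of Table 1:

      import heapq

      def predict_plt(pdg, timing, client_delay):
          """timing: {object: ObjectTiming}; client_delay: {object: seconds}."""
          finish = {}                                        # T_l for each loaded object
          ready = [(0.0, root) for root in pdg.roots()]      # (T_r, object) pairs
          heapq.heapify(ready)
          loaded = set()
          while ready:
              t_r, obj = heapq.heappop(ready)                # earliest candidate C
              if obj in loaded:
                  continue
              loaded.add(obj)
              t = timing[obj]
              finish[obj] = t_r + t.dns_time + t.tcp_connect_time + t.http_time
              # Release children whose parents are now all available.
              for child, parents in pdg.parents.items():
                  if child not in loaded and set(parents) <= loaded:
                      t_p = max(finish[p] for p in parents)            # last parent available
                      t_r_child = t_p + client_delay.get(child, 0.0)   # plus client delay
                      heapq.heappush(ready, (t_r_child, child))
          return max(finish.values()) if finish else 0.0     # PLT: when all objects are loaded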
  • the performance predictor 228 may simulate a page load process for the webpage based on the adjusted web object timing information 232 , in which the adjusted web object timing information 232 includes the new modified input parameters 122 ( FIG. 1 ).
  • the simulation of the page load process may generate a new predicted PLT for the webpage.
  • the performance predictor 228 may output the predicted PLT as the result 236 .
  • the data storage module 238 may be configured to store data in a portion of memory 204 (e.g., a database).
  • the data storage module 238 may store web pages for analysis, algorithms used by the dependency extractor 222 and the prediction engine 234 , the extracted PDGs, and timing information of the web objects (e.g., the client delay, the network delay, and the server delay).
  • the data storage module 238 may further store parameter settings for different page loading scenarios, and/or results such as the PLTs of web pages under different scenarios.
  • the data storage module 238 may also store any additional data derived during the automated performance prediction for cloud services, such as, but not limited to, trace information produced by the trace analyzer 230 .
  • the comparison engine 240 may compare the new predicted PLT (e.g., result 236 ) of each webpage with the original PLT of each webpage, as measured by the measurement engine 206 . Additionally, in some embodiments, the results 236 may also include one or more of the server delay, the client delay, the network delay, the RTT, etc., as obtained during the simulated page loading of the webpage by the performance predictor 228 . Accordingly, the comparison engine 240 may also compare such values to one or more corresponding values obtained by the measurement engine 206 during the original page load of the webpage. Further, the comparison engine 240 may also compare a plurality of predicted PLTs of each webpage, as derived from different sets of input parameters, so that one or more improved sets of input parameters may be identified.
  • the user interface module 242 may interact with a user via a user interface (not shown).
  • the user interface may include a data output device such as a display, and one or more data input devices.
  • the data input devices may include, but are not limited to, combinations of one or more of keypads, keyboards, mouse devices, touch screens, microphones, speech recognition packages, and any other suitable devices or other electronic/software selection methods.
  • the user interface module 242 may enable a user to input or modify parameter settings for different page loading scenarios, as well as select web pages and page loading processes for analysis. Additionally, the user interface module 242 may further cause the display to output representation of the PDGs of selected web pages, current parameter settings, timing information of the web objects, PLTs of web pages under different scenarios, and/or other pertinent data.
  • FIGS. 5-6 describe various example processes for automated cloud service performance prediction.
  • the order in which the operations are described in each example process is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement each process.
  • the blocks in the FIGS. 5-6 may be operations that can be implemented in hardware, software, and a combination thereof.
  • the blocks represent computer-executable instructions that, when executed by one or more processors, cause one or more processors to perform the recited operations.
  • computer-executable instructions include routines, programs, objects, components, data structures, and the like that cause the particular functions to be performed or particular abstract data types to be implemented.
  • FIG. 5 is a flow diagram that illustrates an example process 500 to extract a parental dependency graph (PDG) for a webpage, in accordance with various embodiments.
  • the dependency extractor 222 of the cloud service analyzer 110 may extract a list of web objects in a webpage.
  • the dependency extractor 222 may use a web proxy 224 to delay web object downloads of a webpage and extract the list of web objects in the webpage by obtaining their corresponding Universal Resource Identifiers (URIs).
  • the dependency extractor 222 may load the webpage and delay the download of a web object so that one or more descendants and/or one or more ancestors thereof may be discovered.
  • the dependency extractor 222 may use a browser robot 226 to load the webpage.
  • the dependency extractor 222 may use an iterative algorithm to discover all the descendants and the ancestors of the web object.
  • the algorithm may extract one or more stream parent objects and/or one or more non-stream parent objects, one or more descendant objects, as well as the dependency relationships between the web object and the other web objects.
  • the dependency extractor 222 may encapsulate the one or more dependency relationships of the web object that is currently being processed in a PDG, such as the PDG 112 ( FIG. 1 ).
  • the dependency extractor 222 may determine whether all the web objects of the webpage have been processed. In various embodiments, the dependency extractor 222 may make this determination based on the list of the web objects in the webpage. If the dependency extractor 222 determines that not all of the web objects in the webpage have been processed (“no” at decision block 510 ), the process 500 may loop back to block 504 , where another web object in the list of web objects may be processed for descendant and ancestor web objects. However, if the dependency extractor 222 determines all web objects have been processed (“yes” at decision block 510 ), the process may terminate at block 512 .
  • FIG. 6 is a flow diagram that illustrates an example process 600 to derive and compare a new page load time (PLT) of a webpage with an original PLT of the web page, in accordance with various embodiments.
  • the performance predictor 228 of the cloud service analyzer 110 may obtain an original PLT (e.g., PLT 114 ) of a webpage.
  • the performance predictor 228 may obtain the PLT of the webpage from the measurement engine 206 .
  • the performance predictor 228 may extract timing information (e.g., timing information 232 ) of each web object using a network trace.
  • the network trace may be implemented on a page load of the webpage under a first scenario.
  • the performance predictor 228 may use trace logger 220 of the measurement engine 206 to perform the network tracing.
  • the first scenario may include a set of default parameters that represent client delay, the network delay (e.g., DNS lookup time, TCP handshake time, or data transfer time), the server delay, and/or the RTT experienced during the page load.
  • the performance predictor 228 may annotate each web object with additional client delay information based on the parental dependency graph (PDG) of the webpage.
  • the additional client delay information may represent the time a browser needs to perform some additional processing, e.g., parsing HTML pages or executing JavaScripts, as well as the time expended due to browser limitations on the maximum number of simultaneous TCP connections.
  • the performance predictor 228 may adjust the web object timing information of each web object to reflect a second scenario that includes one or more modified parameters (e.g., modified parameters 122 ).
  • the modified parameters may have been modified based on input from a user or a testing application, for example via the cloud service analyzer 110 ( FIG. 1 ).
  • the modifications may increase or decrease the value of each delay or time.
  • the performance predictor 228 may simulate page loading of the webpage to estimate a new PLT (e.g., PLT 124 ) of the webpage.
  • the performance predictor 228 may use the prediction engine 234 to simulate the page loading of each web object of the webpage based on the adjusted timing information that includes the modified parameters (e.g., modified parameters 122 ) and the PDG (e.g., PDG 112 ).
  • the comparison engine 240 of the cloud service analyzer 110 may compare the new PLT of the webpage to the original PLT of the webpage to determine whether the load time has been improved via the parameter modifications (e.g., whether a shorter or improved PLT is achieved, or whether the parameter modifications actually increased the PLT). In further embodiments, the comparison engine 240 may also compare a plurality of new PLTs that are derived based on different modified timing information so that improved parameters may be determined.
  • FIG. 7 illustrates a representative computing device 700 that may implement automated performance prediction for cloud services.
  • the computing device 700 may be a server, such as one of the servers 104 ( 1 )- 104 ( n ), as described in FIG. 1 .
  • the computing device 700 may also act as the client device 108 described in the discussion accompanying FIG. 1 .
  • the computing device 700 shown in FIG. 7 is only one example of a computing device and is not intended to suggest any limitation as to the scope of use or functionality of the computer and network architectures.
  • computing device 700 typically includes at least one processing unit 702 and system memory 704 .
  • system memory 704 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination thereof.
  • System memory 704 may include an operating system 706 , one or more program modules 708 , and may include program data 710 .
  • the operating system 706 includes a component-based framework 712 that supports components (including properties and events), objects, inheritance, polymorphism, reflection, and provides an object-oriented component-based application programming interface (API), such as, but by no means limited to, that of the .NET™ Framework manufactured by the Microsoft® Corporation, Redmond, Wash.
  • the computing device 700 is of a very basic configuration demarcated by a dashed line 714 . Again, a terminal may have fewer components but may interact with a computing device that may have such a basic configuration.
  • Computing device 700 may have additional features or functionality.
  • computing device 700 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
  • additional storage is illustrated in FIG. 7 by removable storage 716 and non-removable storage 718 .
  • Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • System memory 704 , removable storage 716 and non-removable storage 718 are all examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by Computing device 700 . Any such computer storage media may be part of device 700 .
  • Computing device 700 may also have input device(s) 720 such as keyboard, mouse, pen, voice input device, touch input device, etc.
  • Output device(s) 722 such as a display, speakers, printer, etc. may also be included.
  • Computing device 700 may also contain communication connections 724 that allow the device to communicate with other computing devices 726 , such as over a network. These networks may include wired networks as well as wireless networks. Communication connections 724 are some examples of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, etc.
  • computing device 700 is only one example of a suitable device and is not intended to suggest any limitation as to the scope of use or functionality of the various embodiments described.
  • Other well-known computing devices, systems, environments and/or configurations that may be suitable for use with the embodiments include, but are not limited to personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-base systems, set top boxes, game consoles, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and/or the like.
  • the implementation of automated performance prediction for cloud services on a client device may enable the assessment of cloud service performance variation in response to a wide range of hypothetical scenarios.
  • the techniques described herein may enable the prediction of parameter settings that provide optimal cloud service performance, i.e., shortest page load time (PLT) without error prone and/or time consuming manual trial-and-error parameter modifications and performance assessments.
  • PHT shortest page load time
  • the implementation of automated performance prediction for cloud services on a client device may take into account factors that are not visible to cloud service providers. These factors may include page rendering time, object dependencies, multiple data sources across data centers and data providers.

Abstract

Web page load time prediction and simulation includes determining an original page load time (PLT) of a webpage and timing information of each web object of the web page in a scenario. Each object is also annotated with client delay information based on a parental dependency graph (PDG) of the web page. The timing information of each web object is further adjusted to reflect a second scenario that includes one or more modified parameters. The page loading of the web page is then simulated based on the adjusted timing information of each web object and the PDG of the web page to estimate a new PLT of the web page.

Description

BACKGROUND
“Cloud computing” refers to the access of computing resources and data via a network infrastructure, such as the Internet. Online service providers may offer a wide range of services that may be hosted in the cloud; and such services may include search, maps, email, and outsourced enterprise applications. Further, online service providers may strive to achieve high levels of end-to-end cloud service performance to sustain and grow their user base. The performance of cloud services has direct impact on user satisfaction. For example, poor end-to-end response times, e.g., long page load times (PLTs), may result in low service usage, which in turn may undermine service income.
In order to achieve high levels of end-to-end performance, online service providers may implement performance boosting changes. For example, performance boosting changes may include improvements to the client-side rendering capability of web browsers, improved backend server-side processing capabilities, and the reduction in the domain name system (DNS) resolution time and/or network round-trip time (RTT). However, cloud computing service implementations are often complex, spanning components on end-system clients, back-end servers, as well as network paths. Thus, performance boosting changes may vary greatly in feasibility, cost and profit generation.
SUMMARY
Described herein are performance prediction techniques for automatically predicting the impact of various optimizations on the performance of cloud services, and systems for implementing such techniques.
The cloud services may include, but are not limited to, web searches, social networking, web-based email, online retailing, online maps, personal health information portals, hosted enterprise applications, and the like. The cloud services are often rich in features and functionally complex. For instance, a sample "driving directions" webpage provided by Yahoo! Maps was examined and found to comprise approximately 100 web objects and several hundred KB of JavaScript code. The web objects may be retrieved from multiple data centers (DCs) and content distribution networks (CDNs). These dispersed web objects may meet only at a client device, where they may be assembled by a browser to form a complete webpage. Moreover, the web objects may have a plethora of dependencies, which means that many web objects cannot be downloaded until some other web objects are available. For instance, an image download may have to wait for a JavaScript® to execute in an instance where the image download is requested by that JavaScript. Accordingly, optimizations of the cloud services may include, but are not limited to, modifications of content distribution networks, improvements to client-side rendering capability of web browsers, improved backend server-side processing capabilities, and reductions in DNS resolution time and/or network round-trip time (RTT).
Thus, because of the complexity of the cloud services, variability of the interdependencies between web objects, as well as the different combinations of potential optimizations, it is often difficult to understand and predict the impact of various optimizations on the performance of cloud services. However, techniques described herein, and the implementing systems, may enable performance predictions via one or more inferences of dependencies between web objects, and the simulated download of web objects in a web page via a web browser. The simulation may further include the simulated modification of parameters that affect cloud service performance. Such parameters may include, but are not limited to, round-trip time (RTT), network processing time (e.g., DNS lookup time, TCP handshake time, or data transfer time), client execution time, and server processing time. In this way, the techniques, and the implementing systems, described herein may enable the assessment of cloud service performance under a wide range of hypothetical scenarios. Thus, the techniques and systems described herein may enable the prediction of parameter settings that provide optimal cloud service performance, i.e., shortest page load time (PLT).
In at least one embodiment, web page load time prediction and simulation includes determining an original page load time (PLT) of a webpage and timing information of each web object of the web page in a scenario. Each object is also annotated with client delay information based on a parental dependency graph (PDG) of the web page. The timing information of each web object is further adjusted to reflect a second scenario that includes one or more modified parameters. The page loading of the web page is then simulated based on the adjusted timing information of each web object and the PDG of the web page to estimate a new PLT of the web page. Other embodiments will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
This Summary is provided to introduce a selection of concepts in a simplified form that is further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference number in different figures indicates similar or identical items.
FIG. 1 is a block diagram that illustrates an example architecture that implements automated performance prediction for cloud services, in accordance with various embodiments.
FIG. 2 is a block diagram that illustrates selected components for automatically predicting the performance of cloud services, in accordance with various embodiments.
FIG. 3 is a block diagram that illustrates the extractions of stream parents and their dependency offsets for web objects, in accordance with various embodiments.
FIG. 4 shows block diagrams that illustrate four scenarios of object timing relationship for a single HTTP object, in accordance with various embodiments.
FIG. 5 is a flow diagram that illustrates an example process to extract a parental dependency graph (PDG) for a webpage, in accordance with various embodiments.
FIG. 6 is a flow diagram that illustrates an example process to derive and compare a new page load time (PLT) of a webpage with an original PLT of the web page, in accordance with various embodiments.
FIG. 7 is a block diagram that illustrates a representative computing device that may implement automated performance prediction for cloud services.
DETAILED DESCRIPTION
This disclosure is directed to the implementation of automated performance prediction for cloud services. Cloud service implementations are often complex, spanning components on end-system clients, backend servers, as well as a variety of network paths. In various instances, the performance of the cloud services may be improved, i.e., page load time (PLT) minimized, via the modification of parameters that affect cloud service performance. Such parameters may include, but are not limited to, round-trip time (RTT), network processing time (e.g., DNS lookup time, TCP handshake time, or data transfer time), client execution time, and server processing time. However, it is often difficult to understand and predict the true effect of parameter modifications on the performance of cloud services. This difficulty may be attributed to the complexity of the cloud service implementations, variability of the interdependencies between web objects that access the cloud services, as well as the different options for optimizing access to cloud services. In fact, manual efforts to assess the impact of parameter modifications on the performance of cloud services can, in practice, be overly burdensome and error prone.
In various embodiments, the automated performance prediction for cloud services may enable the simulated modification of parameters that affect cloud service performance. Thus, the techniques, and the implementing systems, described herein may enable the prediction of parameter settings that provide optimal cloud service performance, i.e., shortest PLT. Therefore, the use of automated performance prediction for cloud services may improve PLTs of cloud services without the error prone and/or time consuming manual trial-and-error parameter modifications and performance assessments.
Further, the data generated by the automated performance predictions may be used by online service providers to implement parameter changes that improve the performance of cloud services without the addition of data centers, servers, switches, and/or other hardware infrastructure. Thus, user satisfaction with the performance of the cloud services may be maximized at a minimal cost. Various examples for implementing automated performance prediction for cloud services in accordance with the embodiments are described below with reference to FIGS. 1-7.
Example Architecture
FIG. 1 is a block diagram that illustrates an example architecture 100 that implements automated performance prediction for cloud services. In particular, this architecture 100 implements automated performance prediction to help improve the end-to-end response times of the cloud services.
The example architecture 100 may be implemented on a “computing cloud”, which may include a plurality of data centers, such as the data centers 102(1)-102(n). As shown in FIG. 1, the data center 102(1) may include one or more servers 104(1), the data center 102(2) may include one or more servers 104(2), the data center 102(3) may include one or more servers 104(3), and the data center 102(n) may include one or more servers 104(n).
The data centers 102(1)-102(n) may provide computing resources and data storage/retrieval capabilities. As used herein, computing resources may refer to any hardware and/or software that are used to process input data to generate output data, such as by the execution of algorithms, programs, and/or applications. Each of the respective data centers may be further connected to other data centers via the network infrastructure 106, such as the Internet. Moreover, the data centers 102(1)-102(n) may be further connected to one or more clients, such as a client 108, via the network infrastructure 106.
The data centers 102(1)-102(n) may use their computing resources and data storage/retrieval capabilities to provide cloud services. These cloud services may include, but are not limited to, web searches, social networking, web-based email, online retailing, online maps, and the like. Cloud services may be delivered to users in the form of web pages that can be rendered by browsers. Sophisticated web pages may contain numerous static and dynamic web objects arranged hierarchically. To load a typical webpage, a browser may first download a main HTML object that defines the structure of the webpage. The browser may then download a Cascading Style Sheets (CSS) object that describes the presentation of the webpage. The main HTML object may be embedded with many JavaScript objects that are executed locally to interact with a user. As the webpage is being rendered, an HTML or a JavaScript object of the webpage may request additional objects, such as images and additional JavaScript objects. This process may continue recursively until all relevant objects are downloaded.
A cloud service analyzer 110 may be implemented on a client device 108. The client device 108 may be a computing device that is capable of receiving, processing, and outputting data. For example, but not as a limitation, the client device 108 may be a desktop computer, a portable computer, a game console, a smart phone, or the like. The cloud service analyzer 110 may predict the impact of various optimizations on user-perceived performance of cloud services.
In operation, the cloud service analyzer 110 may derive a parental dependency graph (PDG) 112 for a particular webpage. As further described below, the PDG 112 may represent the various web object dependencies that may be present in a webpage. The dependencies between web objects may be caused by a number of reasons. The common ones may include, but are not limited to: (1) the embedded web objects in an HTML page depend on the HTML page; (2) since at least some web objects are dynamically requested during JavaScript execution, these web objects may depend on the corresponding JavaScript; (3) the download of an external CSS or JavaScript object may block the download of other types of web objects in the same HTML page; and (4) web object downloads may depend on certain events in a JavaScript or the browser.
The cloud service analyzer 110 may further obtain a total page load time (PLT) 114 of the webpage 116 by performing a page load in a baseline scenario. During the page load, the cloud service analyzer 110 may also obtain parameters, i.e., timing information 118, related to the download of each web object in the webpage 116. The timing information 118 may include client delay, network delay, and server delay that occur during the page load of the webpage. For example, the client delay may be due to various browser activities such as page rendering and JavaScript execution. The network delay may be due to DNS lookup time, transmission control protocol (TCP) handshake time, and data transfer time. Both TCP handshake time and data transfer time may be influenced by network path conditions such as RTT and packet loss. Moreover, the server delay may be incurred by various server processing tasks such as retrieving static content and generating dynamic content.
Having obtained the timing information 118 for each web object in the webpage 116 for the baseline scenario, the cloud service analyzer 110 may add additional client delay 120 for each web object to the timing information 118. In at least one embodiment, the cloud service analyzer 110 may infer the additional client delay by combining the timing information 118 with the PDG 112 of the webpage 116.
Subsequently, the cloud service analyzer 110 may receive a new scenario that includes modified parameters, i.e., modified timing information 122. For example, but not as a limitation, the cloud service analyzer 110 may receive modifications to RTT, modifications to network processing time, modifications to client execution time, modifications to server processing time, and/or the like. The cloud service analyzer 110 may simulate the page loading of all web objects of the webpage 116 based on the modified timing information 122 and the PDG 112 to estimate the second PLT 124 of the webpage. Subsequently, the cloud service analyzer 110 may compare the second PLT 124 of the webpage to the first PLT 114 of the webpage to determine whether the load time has been improved via the parameter modifications (e.g., whether a shorter load time is achieved). Thus, by repeating the modifications of the parameters and the comparisons, the cloud service analyzer 110 may test a plurality of new scenarios to derive improved parameter settings that minimize PLT.
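By way of illustration only, this repeated modify-simulate-compare workflow might be sketched in Python as follows; the function and parameter names (simulate_plt, scenarios) are hypothetical and are not part of the described system.

# Hypothetical sketch of the repeated modify-simulate-compare loop described above.
# simulate_plt and the scenario parameter dictionaries are assumptions for illustration.
def find_improved_scenarios(baseline_plt, scenarios, simulate_plt):
    results = {}
    for name, modified_params in scenarios.items():
        results[name] = simulate_plt(modified_params)   # second PLT under the new scenario
    # Keep only scenarios whose predicted PLT beats the measured baseline PLT.
    improved = {name: plt for name, plt in results.items() if plt < baseline_plt}
    return sorted(improved.items(), key=lambda item: item[1])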
Example Components
FIG. 2 is a block diagram that illustrates selected components for performing automated performance prediction for cloud services. The selected components may be implemented on a client device 108 (FIG. 1). The client device 108 may include one or more processors 202 and memory 204.
The memory 204 may include volatile and/or nonvolatile memory, removable and/or non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data. Such memory may include, but is not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and is accessible by a computer system. Further, the components may be in the form of routines, programs, objects, and data structures that cause the performance of particular tasks or implement particular abstract data types.
The memory 204 may store components. The components, or modules, may include routines, program instructions, objects, and/or data structures that perform particular tasks or implement particular abstract data types. The selected components may include a measurement engine 206, a dependency extractor 222, a performance predictor 228, a data storage module 238, a comparison engine 240, and a user interface module 242. The various components may be part of the cloud service analyzer 110 (FIG. 1).
Measurement Engine
The measurement engine 206 may collect packet traces 216 that enable the cloud service analyzer 110 to determine the page load times (PLTs) of web pages, as well as determine DNS, TCP handshake, and HTTP object information. The measurement engine 206 may include a set of measurement agents 208 located at different network nodes and a central controller 210. The controller 210 may use an application updater 212 to update the script snippets that simulate user interaction with particular websites and workload (e.g., the input to cloud services). Moreover, the controller 210 may store packet traces 216 collected from the different agents 208 by a data collector 214.
The measurement engine 206 may run an automated web robot 218 that drives a full-featured web browser. The web robot 218 may be used to simulate user inputs and to request web pages. Further, the web robot 218 may call a trace logger 220 to log the packet traces 216 multiple times so that the measurement engine 206 may recover the distributional properties of a particular page loading process. In various embodiments, the web robot 218 may also take control of a web browser via a browser plug-in. The web robot 218 may include a script library that enables the web robot to control browser related functions. These functions may include cache control, browser parameter adjustment, as well as user input simulation.
Dependency Extractor
The dependency extractor 222 may include an algorithm to infer dependencies between web objects by perturbing the download times of individual web objects. The algorithm may leverage the fact that the delay of an individual web object will be propagated to all other dependent web objects. Accordingly, this time perturbation may be systematically applied to discover web object dependencies.
Modern web pages may contain many types of web objects such as HTML, JavaScript, CSS, and images. These embedded web objects may be downloaded recursively instead of all at once. For instance, the main HTML may contain a JavaScript whose execution will lead to additional downloads of HTML and image objects. Thus, a web object may be classified as being dependent on another if the former cannot be downloaded until the latter is available. The dependencies between web objects can be caused by a number of reasons. The common reasons may include, but are not limited to: (1) the embedded web objects in an HTML page may depend on the HTML page; (2) since web objects are dynamically requested during JavaScript execution, these web objects may depend on the corresponding JavaScript; (3) the download of an external CSS or JavaScript object may block the download of other types of objects in the same HTML page; and (4) web object downloads may depend on certain events in a JavaScript or the browser. For instance, a JavaScript may download image B only after image A is loaded. In other words, given an image A, the dependent web objects of image A usually cannot be requested before image A is completely downloaded.
However, there are exceptions to the common dependencies between objects, and therefore the following may provide context for the functionality of the dependency extractor 222. In some instances, a browser may render an HTML page in a streamlined fashion, in which the HTML page may be partially displayed before the downloading finishes. For instance, if an HTML page has an embedded image with tag <img>, the image may be downloaded and displayed in parallel with the downloading of the HTML page. In fact, the image download may start once the tag <img> (identified by a byte offset in the HTML page) has been parsed. Thus, web objects that can be processed in a streamlined fashion may be defined as stream objects, and all HTML objects may be stream objects. For instance, given a stream object A, the notation dependent offsetA (img) may be used to denote the byte offset of <img> in the stream object A. In one example, browsers that exhibit streamlined processing behavior may include the Internet Explorer® browser produced by the Microsoft® Corporation of Redmond, Wash.
Thus, in order to distinguish stream objects from non-stream objects, a different kind of notation may be used to denote dependencies between non-stream objects. In various embodiments, given a non-stream object X, the notation descendant (X) may be used to denote the set of objects that depend on the non-stream object X. Conversely, the notation ancestor (X) may be used to denote the set of web objects that the non-stream object X depends on. Thus, by definition, the non-stream object X cannot be requested until all the objects in ancestor (X) are available.
Among the objects in ancestor (X), the object whose loading immediately precedes the loading of the non-stream object X may be denoted as the last parent of the non-stream object X. Thus, if such a preceding web object is designated as object Y, the available-time of the object Y may be expressed as when the dependency offsetY(X) has been loaded. In embodiments where object Y is a non-stream object, the available-time of the object Y may be when the object Y is completely loaded. As further described below, the available-time of the object Y may be used to estimate the start time of the non-stream object X's download and predict the page load time (PLT) of a webpage. Further, while the non-stream object X has only one last parent, the last parent of the non-stream object X may change across different page loads due to variations in the available-time of its ancestors. Accordingly, parent (X) may be used to denote the object in ancestor (X) that may be the last parent of the non-stream object X.
Therefore, as further described below, given a webpage, a parental dependency graph (PDG) 112 (FIG. 1) may be used to encapsulate the parental relationships between web objects in the webpage. For example, a PDG=(V,E) may be a directed acyclic graph (DAG) that includes a set of nodes and directed links, where each node is a web object. Moreover, each link Y→X means Y is a parent of X. Thus, the work performed by the dependency extractor 222 may be explained based on the above described notations and relationships.
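As a purely illustrative sketch (the class and attribute names below are assumptions and are not part of the disclosure), a PDG of this form might be held in a small adjacency structure:

# Minimal sketch of a parental dependency graph (PDG) as a directed acyclic graph.
# A link parent -> child means the child depends on the parent; names are illustrative.
class PDG:
    def __init__(self):
        self.parents = {}   # web object -> set of parent objects
        self.offsets = {}   # (stream parent, child) -> dependency offset in bytes

    def add_object(self, obj):
        self.parents.setdefault(obj, set())

    def add_link(self, parent, child, offset=None):
        self.add_object(parent)
        self.parents.setdefault(child, set()).add(parent)
        if offset is not None:          # only stream parents carry a byte offset
            self.offsets[(parent, child)] = offset

    def roots(self):
        # Root objects (typically the main HTML object) have no parents.
        return [obj for obj, parent_set in self.parents.items() if not parent_set]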
The dependency extractor 222 may extract the dependencies of a webpage by perturbing the download of individual web objects. The dependency extractor 222 may leverage the fact that the delay of an individual web object may be propagated to all other dependent web objects. Accordingly, the dependency extractor 222 may extract the stream parent of a web object and the corresponding dependent offset. For example, suppose a web object X has a stream parent Y; to discover this parental relationship, the available time of offsetY (X) should be earlier than the available times of all of the other parents of the web object X in a particular page load. Thus, the dependency extractor 222 may control the download of not only each non-stream parent of the web object X as a whole, but also each partial download of each stream parent of the web object X.
In order to extract dependencies of the web objects of a web page, the dependency extractor 222 may discover ancestors and descendants for the web objects. In various embodiments, the dependency extractor 222 may use a web proxy 224 to delay web object downloads of a webpage and extract the list of web objects in the webpage by obtaining their corresponding Universal Resource Identifiers (URIs). Further, the dependency extractor 222 may discover the descendants of each web object using an iterative process. In various embodiments, the dependency extractor 222 may reload the page and delay the download of a web object, such as web object X, for τ seconds in each round of the iterative process. In each round, the web object X may be a web object that has not been processed, and τ may be greater than the normal loading time of the web object X. The descendants of the web object X may be web objects whose download is delayed together with the web object X. The iterative process may be repeated until the descendants of all the web objects are discovered. During the iterative process, the dependency extractor 222 may use a browser robot 226 to reload pages. In such embodiments, the operation of the dependency extractor may be based on two assumptions: (1) the dependencies of a webpage do not change during the discovery process; and (2) introduced delays will not change the dependencies in the webpage.
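A minimal sketch of this perturbation loop is shown below, assuming a hypothetical reload_with_delay helper that stands in for the web proxy 224 and browser robot 226; the 0.5 × τ detection threshold is an illustrative heuristic, not a value prescribed by the disclosure.

# Sketch of the delay-based descendant discovery; reload_with_delay(obj, tau) is a
# hypothetical helper that reloads the page, delays `obj` by tau seconds, and returns
# the observed completion time of every web object in the page.
def discover_descendants(objects, baseline_times, reload_with_delay, tau):
    descendants = {}
    for x in objects:                      # one perturbation round per web object
        delayed_times = reload_with_delay(x, tau)
        # Objects whose downloads are pushed back together with x depend on x.
        descendants[x] = {
            z for z in objects
            if z != x and delayed_times[z] - baseline_times[z] >= 0.5 * tau
        }
    return descendants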
The dependency extractor 222 may also extract non-stream parents of each web object during the process of extracting dependencies of the web objects. In various embodiments, given a non-stream web object X and its descendant web object Z, the non-stream web object X may be the parent of the descendant web object Z if and only if there does not exist a web object Y that is also the descendant of the web object X and the ancestor of the descendant web object Z. On the other hand, if such a web object Y exists, the available-time of the web object Y may be later than that of the non-stream web object X. This is because the web object Y cannot be downloaded until the non-stream web object X is available, which implies the web object X cannot be the parent of the web object Z. However, if the web object Y does not exist, there may be scenarios in which the non-stream web object X is delayed until all of the other ancestors of the web object Z are available. This may be possible because none of the other ancestors of the web object Z depend on the web object X, which may imply that the non-stream web object X may indeed be the parent of the descendant web object Z. Based on these observations, the dependency extractor 222 may use the following algorithm (shown in pseudo code) to take a set of web objects and the set of descendants of each web object as input and compute the parent set of each web object:
ExtractNonStreamParent(Object, Descendant)
For X in Object
  For Z in Descendant(X)
    IsParent = True
    For Y in Descendant(X)
      If (Z in Descendant(Y))
        IsParent = False
        Break
      EndIf
    EndFor
    If (IsParent) add X to Parent(Z)
  EndFor
EndFor
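The same logic might be written as runnable Python, under the assumption that descendant is a dictionary mapping each web object to its set of descendants (as produced by the discovery step above); this is a sketch for illustration only.

# Python sketch of ExtractNonStreamParent; `descendant` maps each object to the set
# of objects that depend on it.
def extract_non_stream_parents(objects, descendant):
    parent = {obj: set() for obj in objects}
    for x in objects:
        for z in descendant.get(x, set()):
            # x is a parent of z unless another descendant y of x is also an ancestor of z.
            if all(z not in descendant.get(y, set()) for y in descendant.get(x, set())):
                parent.setdefault(z, set()).add(x)
    return parent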
Furthermore, the dependency extractor 222 may also extract stream parents and their dependency offsets. The extractions of stream parents and their dependency offsets are illustrated in FIG. 3. As shown in FIG. 3, an HTML stream object H 302 may contain a JavaScript J 304 and an image object I 306. The JavaScript J 304 and the image object I 306 may be embedded in the beginning and the end of the stream object H 302, respectively (offsetH (J)<offsetH (I)). However, because the URL of the image object I 306 is defined in the JavaScript J 304, the image object I 306 cannot be downloaded until the JavaScript J 304 is executed. This causes the image object I 306 to depend on both the stream object H 302 and the JavaScript J 304, while the JavaScript J 304 only depends on the stream object H 302.
In a normal scenario, the stream object H 302 cannot be the parent of the image object I 306 since the JavaScript object J 304 is the descendant of stream object H 302 and the ancestor of the image object I 306. Nonetheless, when the download of the stream object H 302 is slow, the JavaScript J 304 may be downloaded and executed before the offsetH(I) becomes available. In such a case, the stream object H 302 may become the last parent of the image object I 306.
Thus, given the stream object H 302 and its descendant image object I 306, the dependency extractor 222 may determine whether the stream object H 302 is the parent of the image object I 306. In various embodiments, the dependency extractor 222 may first load an entire web page and control the download of the stream object H 302 at an extremely low rate λ. If the stream object H 302 is the parent of the image object I 306, all of the other ancestors of the image object I 306 may be available by the time offsetH (I) is available. The dependency extractor 222 may then estimate offsetH(I) with offsetH (I)′, whereby the latter is the offset of stream object H 302 that has been downloaded when the request of the image object I starts to be sent out. In some embodiments, offsetH (I)′ may be directly inferred from network traces and is usually larger than offsetH(I). This is because it may take some extra time to request image object I after offsetH(I) is available. However, since stream object H 302 is downloaded at an extremely low rate, these two offsets should be very close.
Having inferred offsetH (I)′, the dependency extractor 222 may perform an additional parental test to determine whether the stream object H 302 is the parent of the image object I 306. During the testing process, the dependency extractor 222 may reload the entire webpage. However, for this reload, the dependency extractor 222 may control the download of the stream object H 302 at the same low rate λ as well as delay the download of all the known non-stream parents of the image object I 306 by τ. Further, assuming offsetH (I)″ is the offset of the stream object H 302 that was downloaded when the request of the image object I 306 is sent out during the reload, if offsetH (I)″−offsetH (I)′<<τ×λ is true, then the delay of the known parents of the image object I 306 has little effect on when the image object I 306 is requested. Accordingly, the stream object H 302 may be designated as the last parent of the image object I 306. In this way, the parental relationship between the stream objects in the webpage, as well as their dependency offsets, may be further encapsulated by the PDG 112 (FIG. 1).
The choice of λ may reflect a tradeoff between measurement accuracy and efficiency. A small λ may enable the dependency extractor 222 to estimate offsetH (I) more accurately, but may also lead to a relatively long processing time. The parameter τ may directly affect the accuracy of the parental tests. For example, if τ is too small, the results may be susceptible to noise, increasing the chance of missing the true stream parents. However, if τ is too large, the dependency extractor 222 may mistakenly infer a parent relationship because offsetH (I)″−offsetH (I)′ is bound by sizeH−offsetH (I), where sizeH is the page size of the stream object H 302. In at least one embodiment, the dependency extractor 222 may use λ=sizeH/200 bytes/sec and τ=2 seconds, which means the stream object H 302 takes 200 seconds to transfer. However, additional embodiments may use other values for λ and τ, provided they supply adequate tradeoffs between measurement accuracy and efficiency.
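To make the test concrete, a hedged sketch is shown below; the slack factor, the helper name, and the example page size are assumptions chosen for illustration, not values prescribed by the disclosure.

# Sketch of the stream-parent test: H is accepted as the last parent of I when delaying
# all of I's known non-stream parents by tau barely moves the point in H at which I is
# requested, i.e., offset'' - offset' << tau * lam. The slack factor is illustrative.
def is_stream_parent(offset_prime, offset_double_prime, tau, lam, slack=0.1):
    return (offset_double_prime - offset_prime) < slack * tau * lam

# Example parameter choice from the text above (values are illustrative):
size_h = 100_000                 # hypothetical size of stream object H in bytes
lam = size_h / 200.0             # bytes/sec, so H takes about 200 seconds to transfer
tau = 2.0                        # seconds of delay applied to I's known non-stream parents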
Performance Predictor
The performance predictor 228 may predict the performance of cloud services under hypothetical scenarios. Given the page load time (PLT) of a baseline scenario, the performance predictor 228 may predict the new PLT when there are changes to a plurality of parameters. In various embodiments, the changes to the parameters may include changes in the client delay, the server delay, the network delay (e.g., DNS lookup time, TCP handshake time, or data transfer time), and/or the RTT. Thus, the performance predictor 228 may help service providers identify performance bottlenecks and devise effective optimization strategies. In other words, the performance predictor 228 may simulate page loading processes of a browser under a wide range of hypothetical scenarios.
During the operation of the performance predictor 228, the performance predictor 228 may first infer the timing information of each web object from the network trace of a webpage load in a baseline scenario. In various embodiments, the performance predictor 228 may first obtain the timing information from the measurement engine 206. Second, based on the parental dependency graph (PDG) of the webpage, the performance predictor 228 may further annotate each web object with additional information related to client delay. Third, the performance predictor 228 may adjust the web object timing information to reflect changes from the baseline scenario to the new one. Finally, the performance predictor 228 may simulate the page load process with the new web object timing information to estimate the new PLT. Except for the factors adjusted, the page load process of the new scenario may inherit properties from the baseline scenario, including non-deterministic factors, such as whether a domain name is cached by the browser.
Performance Predictor—Obtain Timing Information
The performance predictor 228 may infer web objects and their timing information from network traces of a page load collected at the client, such as the client 108 (FIG. 1). In various embodiments, the performance predictor 228 may include a trace analyzer 230 that recovers DNS, TCP and HTTP object information. For HTTP protocol, the performance predictor 228 may use TCP reassembly and HTTP protocol parsing to recover the HTTP object information. For DNS protocol, the trace analyzer 230 may use a DNS parser to recover the domain name related to the DNS requests and responses. In at least one embodiment, the performance predictor 228 may leverage a network intrusion detection tool component to recover the timing and semantic information from the raw packet trace.
The trace analyzer 230 may estimate the RTT of each connection by the time between SYN and SYN/ACK packets. Based on the TCP self-clocking behavior, the trace analyzer 230 may estimate the number of round trips of the HTTP transfer taken by a web object. The packets in one TCP sending window are relatively close to each other (e.g., less than one RTT), but the packets in different sending windows are about one RTT apart (approximately 1±0.25 RTT). This approach may help the trace analyzer 230 to identify the additional delay added by the web servers, which is usually quite different from one RTT.
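For illustration, these two inferences might be sketched as follows; the 0.75 RTT gap threshold reflects the approximately 1±0.25 RTT spacing noted above, and both the threshold and the function names are assumptions rather than part of the described implementation.

# Sketch of trace-based RTT and round-trip inference; thresholds and names are illustrative.
def estimate_rtt(syn_timestamp, syn_ack_timestamp):
    # The RTT of a connection is approximated by the SYN -> SYN/ACK gap.
    return syn_ack_timestamp - syn_timestamp

def count_round_trips(packet_timestamps, rtt):
    # Packets in one TCP sending window arrive close together (well under one RTT);
    # a gap of roughly one RTT marks the boundary of a new sending window.
    # packet_timestamps is a sorted list of packet times for one HTTP transfer.
    rounds = 1 if packet_timestamps else 0
    for prev, cur in zip(packet_timestamps, packet_timestamps[1:]):
        if cur - prev > 0.75 * rtt:
            rounds += 1
    return rounds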
Thus, the performance predictor 228 may be client-deployable since it does not require any application-level instrumentations. By using the trace analyzer 230, the performance predictor 228 may identify object timing information 232 for each web object. The timing information 232 may include information for three types of activity: (1) DNS lookup time: the time used for looking up a domain name; (2) TCP connection time: the time used for establishing a TCP connection; and (3) HTTP time: the time used to load a web object.
Moreover, HTTP time may be further decomposed into three parts: (i) request transfer time: the time to transfer the first byte to the last byte of the HTTP request; (ii) response time: the time from when the last byte of the HTTP request is sent to when the first byte of the HTTP reply is received, which may include one RTT plus server delay; and (iii) reply transfer time: the time to transfer the first byte to the last byte of an HTTP reply.
In addition, the performance predictor 228 may infer the RTT for each TCP connection. The RTT of a TCP connection may be quite stable since the entire page load process usually lasts for only a few seconds. The performance predictor 228 may also infer the number of round-trips involved in transferring an HTTP request or reply. Such information may enable the performance predictor 228 to predict transfer times when RTT changes.
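Collected together, the per-object timing information described above might be held in a small record such as the following sketch; the field names are assumptions for illustration only.

# Hypothetical per-object timing record covering the activities described above.
from dataclasses import dataclass

@dataclass
class ObjectTiming:
    dns_time: float            # DNS lookup time (0 if the domain name was cached)
    tcp_connect_time: float    # TCP handshake time (0 if a connection was reused)
    request_time: float        # first byte to last byte of the HTTP request
    response_time: float       # last request byte sent -> first reply byte (one RTT + server delay)
    reply_time: float          # first byte to last byte of the HTTP reply
    rtt: float                 # RTT of the TCP connection carrying the object
    request_round_trips: int   # m: round trips used by the request transfer
    reply_round_trips: int     # n: round trips used by the reply transfer

    @property
    def http_time(self) -> float:
        return self.request_time + self.response_time + self.reply_time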
Performance Predictor—Additional Client Delay Information
As stated above, the performance predictor 228 may further annotate each object with additional timing information related to client delay. In various embodiments, the timing information 232 may be supplemented with the additional timing information related to client delay. For example, when the last parent of a web object X becomes available, the browser may not issue a request for the web object X immediately. This is because the browser may need time to perform some additional processing, e.g., parsing HTML pages or executing JavaScripts. For the web object X, the performance predictor 228 may use client delay to denote the time from when its last parent is available to when the browser starts to request the web object X.
When the browser loads a sophisticated webpage or the client machine is slow, client delay may have significant impact on PLT. Accordingly, the performance predictor 228 may infer client delay of each web object by combining the obtained object timing information of each web object with the PDG (e.g., PDG 112) of the webpage. It will be appreciated that when a browser starts to request a web object, the first activity may be DNS, TCP connection, or HTTP, depending on the current state and behavior of the browser.
Many browsers may limit the maximum number of TCP connections to a host server. For example, some browsers (e.g., Internet Explorer®7) may limit the number of TCP connections to six by default. Such limitations may cause the request for a web object to wait for available connections even when the request is ready to be sent. Therefore, the client delay that the performance predictor 228 observes in a trace may be longer than the actual browser processing time. To overcome this problem, when collecting the packet traces in the baseline scenario for the purpose of prediction, the performance predictor 228 may set the connection limit per host of the browser to a large number, for instance, 30. This may help to reduce the effects of connection waiting time.
Performance Predictor—Adjust Object Timing Information
Further, having obtained the web object timing information 232 under the baseline scenario, the performance predictor 228 may adjust the timing information 232 for each web object according to a new scenario. In other words, the timing information 232 may be adjusted with new input parameters 122 (FIG. 1) that modify one or more of the server delay, the client delay, the network delay, the RTT, etc. The new input parameters 122 (FIG. 1) may be inputted by a user or a testing application. The modifications may increase or decrease the value of each delay or time.
In at least one embodiment, assuming serverδ is the server delay difference between the new and the baseline scenario, the performance predictor 228 may add the input serverδ to the response time of each web object to reflect the server delay change in the new scenario. The performance predictor 228 may use similar methods to adjust DNS activity and client delay for each web object with the new scenario inputs 236.
The performance predictor 228 may also adjust for RTT changes. Assuming the HTTP request and response transfers involve m and n round-trips for the web object X, the performance predictor 228 may add (m+n+1)×rttδ to the HTTP activity of the web object X, and rttδ to the TCP connection activity if a new TCP connection is required for loading the web object X. In such embodiments, the performance predictor 228 may operate under the assumption that RTT change has little impact on the number of round-trips involved in loading a web object.
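A minimal sketch of these adjustments is shown below, assuming the per-object timing is carried in a plain dictionary; all field names and the new_tcp_connection flag are assumptions for illustration.

# Sketch of adjusting baseline timing to a new scenario; field names are assumptions.
def adjust_timing(timing, server_delta=0.0, rtt_delta=0.0, dns_delta=0.0, client_delta=0.0):
    adjusted = dict(timing)
    # The server delay difference is added to the response time of the object.
    adjusted["response_time"] = timing["response_time"] + server_delta
    # DNS and client-delay differences are applied the same way, floored at zero.
    adjusted["dns_time"] = max(0.0, timing["dns_time"] + dns_delta)
    adjusted["client_delay"] = max(0.0, timing["client_delay"] + client_delta)
    # An RTT change adds (m + n + 1) x rtt_delta to the HTTP activity of the object ...
    m, n = timing["request_round_trips"], timing["reply_round_trips"]
    adjusted["http_time"] = timing["http_time"] + (m + n + 1) * rtt_delta
    # ... and one rtt_delta to the TCP handshake if a new connection must be opened.
    if timing.get("new_tcp_connection", False):
        adjusted["tcp_connect_time"] = timing["tcp_connect_time"] + rtt_delta
    return adjusted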
Performance Predictor—Simulating Page Load
The performance predictor 228 may further predict page load time (PLT) based on the web object timing information 232 by simulating a page load process. Since web object downloads are not independent of each other, the download of a web object may be blocked because the web objects that it depends on are unavailable or because there are no TCP connections ready for use. Accordingly, the performance predictor 228 may account for these limitations when simulating the page load process by taking into account the constraints of a browser and the corresponding PDG (e.g., PDG 112) of the webpage.
For example, web browsers such as Internet Explorer® and the Firefox® browser, produced by the Mozilla Corporation of Mountain View, Calif., may share some common constraints and features. Presently, Internet Explorer® and Firefox® both use HTTP/1.1 with HTTP pipelining disabled by default. This may be due to the fact that HTTP pipelining may perform badly in the presence of dynamic content, e.g., one slow request may block other requests. Thus, without pipelining, HTTP request-reply pairs do not overlap with each other within the same TCP connection. However, in HTTP/1.1, a browser may use persistent TCP connections that can be reused for multiple HTTP requests and replies. The browser may attempt to keep the number of parallel connections small. Accordingly, the browser may open a new connection only when it needs to send a request and all existing connections are occupied by other requests or replies. The browser may be further configured with an internal parameter to limit the maximum number of parallel connections with a particular host. Such a limit may commonly be applied to a host instead of to an IP address. However, if multiple hosts map to the same IP address, the number of parallel connections with that IP address may exceed the limit. Thus, during the page load simulation, the performance predictor 228 may set the number of possible parallel connections according to the connection limitations of a particular browser. However, in other embodiments, the number of possible parallel connections may be modified (e.g., increased or decreased). Thus, the new input parameters 122 (FIG. 1) of a new scenario may further include the number of possible parallel connections.
Moreover, the web browsers, such as Internet Explorer® and Firefox®, may also share some common features. For example, loading a web object in a web browser may trigger multiple activities including looking up a DNS name, establishing a new TCP connection, waiting for an existing TCP connection, and/or issuing an HTTP request. Five possible combinations of these activities are illustrated below in Table I. A "-" in Table I means that the corresponding condition does not matter for the purpose of predicting page load time (PLT). The activities involved in each case are further illustrated in FIG. 4.
TABLE I
Connection Packing Possibilities for a Given Domain Name
                                        Case
                                        I     II    III   IV    V
First Web Object of a Domain            Yes   Yes   No    No    No
Cached DNS Name                         No    Yes   -     -     -
Available TCP Connections               -     -     Yes   No    No
Maximum Number of Parallel
Connections                             -     -     -     No    Yes
Corresponding Scenario in FIG. 4        402   404   406   404   408
FIG. 4 shows block diagrams that illustrate scenarios 402-408 of web object timing relationship for a single HTTP object, in accordance with various embodiments. For instance, in Case V, a browser may load a web object from a domain with which it already has established TCP connections. However, because all the existing TCP connections are occupied and the number of parallel connections has reached a maximum limit, the browser may have to wait for the availability of an existing connection to issue the new HTTP request. Case V is illustrated as scenario 408 of FIG. 4.
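For illustration, the case selection of Table I might be expressed as a small decision function; the parameter names are assumptions and the function is only a sketch of the lookup, not part of the described embodiments.

# Sketch of the Table I case selection for loading one web object; names are illustrative.
def classify_case(first_object_of_domain, dns_cached, connection_available, at_connection_limit):
    if first_object_of_domain:
        return "I" if not dns_cached else "II"     # DNS lookup + TCP + HTTP, or TCP + HTTP only
    if connection_available:
        return "III"                               # reuse an idle persistent connection
    if at_connection_limit:
        return "V"                                 # wait for an existing connection to free up
    return "IV"                                    # open a new TCP connection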
Based on the common limitations and features of web browsers, the performance predictor 228 may estimate a new PLT of a webpage by using the prediction engine 234 to simulate the corresponding page load process. In various embodiments, the prediction engine 234 may use an algorithm that takes the adjusted timing information of each web object in the web page, as well as the parental dependency graph (PDG) of the web page, as inputs, and simulates the page loading process for the webpage to determine a new page load time (PLT). The algorithm may be expressed in pseudo code ("PredictPLT") as follows:
PredictPLT(ObjectTimingInfo, PDG)
Insert root objects into CandidateQ
While (CandidateQ not empty)
  1) Get earliest candidate C from CandidateQ
  2) Load C according to conditions in Table I
  3) Find new candidates whose parents are available
  4) Adjust timings of new candidates
  5) Insert new candidates into CandidateQ
EndWhile
In other words, the PLT may be estimated by the algorithm as the time when all the web objects are loaded. For each web object X, the algorithm may keep track of four time variables: (1) Tp: when the web object X's last parent is available; (2) Tr: when the HTTP request for the web object X is ready to be sent; (3) Tf: when the first byte of the HTTP request is sent; and (4) Tl: when the last byte of the HTTP reply is received. As stated above, FIG. 4 illustrates the position of these time variables in four different scenarios. In addition, the algorithm may maintain a priority queue CandidateQ that contains the web objects that can be requested. The objects in CandidateQ may be sorted based on Tr.
Initially, the algorithm may insert the root objects of the corresponding PDG of the webpage into the queue CandidateQ with their Tr set to 0. A webpage may have one main HTML object that serves as the root of the PDG. However, in some scenarios, multiple root objects may exist due to asynchronous JavaScript and Extensible Markup Language (XML) downloading, such as via AJAX. In each iteration, the algorithm may perform the following tasks: (1) obtain the object C with the smallest Tr from CandidateQ; (2) load the object C according to the conditions listed in Table I, updating Tf and Tl based on whether the loading involves looking up DNS names, opening new TCP connections, waiting for existing connections, and/or issuing HTTP requests (Tf and Tl may be used to determine when a TCP connection is occupied by the object C); (3) after the object C is loaded, find all the new candidates whose last parent is the object C; (4) adjust the Tp and Tr of each new candidate X, where, if the object C is a non-stream object, Tp of the object X may be set to Tl of the object C, and if the object C is a stream object, Tp of the object X may be set to the available-time of offsetC (X), with Tr of the object X set to Tp plus the client delay of the object X; and (5) insert the new candidates into CandidateQ.
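A heavily simplified, runnable sketch of this queue-driven simulation is given below. It collapses DNS, TCP, and HTTP activity into a single per-object load time, ignores the connection-limit cases of Table I and stream offsets, and uses hypothetical field names, so it only illustrates the structure of the algorithm rather than the full prediction engine 234.

import heapq

# Simplified sketch of PredictPLT: objects become candidates when their last parent
# is available; the PLT is the time at which the last object finishes loading.
# `objects` is assumed to be a list of URI strings; `parents`, `load_time`, and
# `client_delay` are dictionaries keyed by those URIs.
def predict_plt(objects, parents, load_time, client_delay):
    children = {obj: [] for obj in objects}
    unresolved = {}
    for obj in objects:
        unresolved[obj] = len(parents.get(obj, ()))
        for p in parents.get(obj, ()):
            children[p].append(obj)

    # Root objects have no parents; their request-ready time Tr is 0.
    candidate_q = [(0.0, obj) for obj in objects if unresolved[obj] == 0]
    heapq.heapify(candidate_q)
    finish = {}
    while candidate_q:
        t_r, c = heapq.heappop(candidate_q)        # earliest candidate C
        finish[c] = t_r + load_time[c]             # Tl: last byte of C's reply received
        for child in children[c]:
            unresolved[child] -= 1
            if unresolved[child] == 0:
                # Tp: when the child's last parent is available; Tr adds the client delay.
                t_p = max(finish[p] for p in parents[child])
                heapq.heappush(candidate_q, (t_p + client_delay.get(child, 0.0), child))
    return max(finish.values(), default=0.0)       # PLT: when all objects are loaded

In a fuller sketch, the per-object load time would itself be derived from the adjusted DNS, TCP, and HTTP timing and the Table I connection logic before the queue is processed.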
With the use of such an algorithm and the prediction engine 234, the performance predictor 228 may simulate a page load process for the webpage based on the adjusted web object timing information 232, in which the adjusted web object timing information 232 includes the new modified input parameters 122 (FIG. 1). The simulation of the page load process may generate a new predicted PLT for the webpage. In various embodiments, the performance predictor 228 may output the predicted PLT as the result 236.
The data storage module 238 may be configured to store data in a portion of memory 204 (e.g., a database). In various embodiments, the data storage module 238 may store web pages for analysis, algorithms used by the dependency extractor 222 and the prediction engine 234, the extracted PDGs, and timing information of the web objects (e.g., the client delay, the network delay, and the server delay). The data storage module 238 may further store parameter settings for different page loading scenarios, and/or results such as the PLTs of web pages under different scenarios. The data storage module 238 may also store any additional data derived during the automated performance prediction for cloud services, such as, but not limited to, trace information produced by the trace analyzer 230.
The comparison engine 240 may compare the new predicted PLT (e.g., result 236) of each webpage with the original PLT of each webpage, as measured by the measurement engine 206. Additionally, in some embodiments, the results 236 may also include one or more of the server delay, the client delay, the network delay, the RTT, etc., as obtained during the simulated page loading of the webpage by the performance predictor 228. Accordingly, the comparison engine 240 may also compare such values to one or more corresponding values obtained by the measurement engine 206 during the original page load of the webpage. Further, the comparison engine 240 may also compare a plurality of predicted PLTs of each webpage, as derived from different sets of input parameters, so that one or more improved sets of input parameters may be identified.
The user interface module 242 may interact with a user via a user interface (not shown). The user interface may include a data output device such as a display, and one or more data input devices. The data input devices may include, but are not limited to, combinations of one or more of keypads, keyboards, mouse devices, touch screens, microphones, speech recognition packages, and any other suitable devices or other electronic/software selection methods. The user interface module 242 may enable a user to input or modify parameter settings for different page loading scenarios, as well as select web pages and page loading processes for analysis. Additionally, the user interface module 242 may further cause the display to output representation of the PDGs of selected web pages, current parameter settings, timing information of the web objects, PLTs of web pages under different scenarios, and/or other pertinent data.
Example Processes
FIGS. 5-6 describe various example processes for automated cloud service performance prediction. The order in which the operations are described in each example process is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement each process. Moreover, the blocks in FIGS. 5-6 may be operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that cause particular functions to be performed or particular abstract data types to be implemented.
FIG. 5 is a flow diagram that illustrates an example process 500 to extract a parental dependency graph (PDG) for a webpage, in accordance with various embodiments.
At block 502, the dependency extractor 222 of the cloud service analyzer 110 may extract a list of web objects in a webpage. In various embodiments, the dependency extractor 222 may use a web proxy 224 to delay web object downloads of a webpage and extract the list of web objects in the webpage by obtaining their corresponding Universal Resource Identifiers (URIs).
At block 504, the dependency extractor 222 may load the webpage and delay the download of a web object so that one or more descendants and/or one or more ancestors thereof may be discovered. In various embodiments, the dependency extractor 222 may use a browser robot 226 to load the webpage.
At block 506, the dependency extractor 222 may use an iterative algorithm to discover all the descendants and the ancestors of the web object. In various embodiments, the algorithm may extract one or more stream parent objects and/or one or more non-stream parent objects, one or more descendant objects, as well as the dependency relationships between the web object and the other web objects.
At block 508, the dependency extractor 222 may encapsulate the one or more dependency relationships of the web object that is currently being processed in a PDG, such as the PDG 112 (FIG. 1).
At decision block 510, the dependency extractor 222 may determine whether all the web objects of the webpage have been processed. In various embodiments, the dependency extractor 222 may make this determination based on the list of the web objects in the webpage. If the dependency extractor 222 determines that not all of the web objects in the webpage have been processed ("no" at decision block 510), the process 500 may loop back to block 504, where another web object in the list of web objects may be processed for descendant and ancestor web objects. However, if the dependency extractor 222 determines that all web objects have been processed ("yes" at decision block 510), the process may terminate at block 512.
FIG. 6 is a flow diagram that illustrates an example process 600 to derive and compare a new page load time (PLT) of a webpage with an original PLT of the web page, in accordance with various embodiments.
At block 602, the performance predictor 228 of the cloud service analyzer 110 may obtain an original PLT (e.g., PLT 114) of a webpage. In various embodiments, the performance predictor 228 may obtain the PLT of the webpage from the measurement engine 206.
At block 604, the performance predictor 228 may extract timing information (e.g., timing information 232) of each web object using a network trace. The network trace may be implemented on a page load of the webpage under a first scenario. In various embodiments, the performance predictor 228 may use trace logger 220 of the measurement engine 206 to perform the network tracing. The first scenario may include a set of default parameters that represent client delay, the network delay (e.g., DNS lookup time, TCP handshake time, or data transfer time), the server delay, and/or the RTT experienced during the page load.
At block 606, the performance predictor 228 may annotate each web object with additional client delay information based on the parental dependency graph (PDG) of the webpage. In various embodiments, the additional client delay information may represent the time a browser needs to perform some additional processing, e.g., parsing HTML pages or executing JavaScripts, as well as the time expended due to browser limitations on the maximum number of simultaneous TCP connections.
At block 608, the performance predictor 228 may adjust the web object timing information of each web object to reflect a second scenario that includes one or more modified parameters (e.g., modified parameters 122). In various embodiments, the modified parameters may have been modified based on input from a user or a testing application. For example, but not as a limitation, the cloud service analyzer 110 (FIG. 1) may receive one or more modifications to RTT, modifications to network processing delay (e.g., modifications to DNS lookup time, TCP handshake time, or data transfer time), modifications to client processing delay, modifications to server processing delay, and/or the like. The modifications may increase or decrease the value of each delay or time.
At block 610, the performance predictor 228 may simulate page loading of the webpage to estimate a new PLT (e.g., PLT 124) of the webpage. In various embodiments, the performance predictor 228 may use the prediction engine 234 to simulate the page loading of each web object of the webpage based on the adjusted timing information that includes the modified parameters (e.g., modified parameters 122) and the PDG (e.g., PDG 112).
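One simple way to picture the simulation at block 610 is the following sketch, which reuses the hypothetical PDG and ObjectTiming structures from the earlier examples. It relies on simplifying assumptions that the patent does not spell out: the PDG is acyclic, every object referenced by the PDG has timing information, each object may start as soon as all of its parents have finished, and connection limits are already folded into each object's client delay:

def simulate_plt(pdg, timings):
    """Estimate the PLT of the page from the PDG and per-object timings (block 610)."""
    finish = {}   # memoized finish time of each web object, in seconds

    def duration(t):
        # Total time to resolve, fetch, and process one object in this scenario.
        return (t.dns_lookup + t.tcp_handshake + t.server_delay
                + t.data_transfer + t.rtt + t.client_delay)

    def finish_time(url):
        # An object starts only after all of its parents in the PDG have finished.
        if url not in finish:
            start = max((finish_time(p) for p in pdg.parents[url]), default=0.0)
            finish[url] = start + duration(timings[url])
        return finish[url]

    # The estimated new PLT is the latest finish time over all web objects.
    return max(finish_time(url) for url in timings)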
At block 612, the comparison engine 240 of the cloud service analyzer 206 may compare the new PLT of the webpage to the original PLT of the webpage to determine whether the load time has been improved via the parameter modifications (e.g., whether a shorter or improved PLT is achieved, or whether the parameter modifications actually increased the PLT). In further embodiments, the comparison engine 240 may also compare a plurality of new PLTs that are derived based on different modified timing information so that improved parameters may be determined.
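Continuing the same hypothetical sketches, the comparison at block 612, including the comparison of several candidate parameter sets, might look like the following; the ranking criterion (shortest predicted PLT first) is an illustrative choice rather than a requirement of the patent:

def compare_scenarios(original_plt, pdg, measured_timings, candidate_overrides):
    """Rank candidate parameter modifications by the PLT they are predicted to yield.

    `candidate_overrides` is a list of (label, overrides) pairs in the format
    accepted by apply_modified_parameters above."""
    results = []
    for label, overrides in candidate_overrides:
        adjusted = {url: apply_modified_parameters(t, overrides)
                    for url, t in measured_timings.items()}
        new_plt = simulate_plt(pdg, adjusted)
        results.append((label, new_plt, new_plt < original_plt))  # improved?
    # Shortest predicted PLT first, so the most promising modification leads.
    return sorted(results, key=lambda r: r[1])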
Example Computing Device
FIG. 7 illustrates a representative computing device 700 that may implement automated performance prediction for cloud services. For example, the computing device 700 may be a server, such as one of the servers 102(1)-102(n), as described in FIG. 1. Moreover, the computing device 700 may also act as the client device 108 described in the discussion accompanying FIG. 1. However, it will be readily appreciated that the techniques and mechanisms may be implemented in other computing devices, systems, and environments. The computing device 700 shown in FIG. 7 is only one example of a computing device and is not intended to suggest any limitation as to the scope of use or functionality of the computer and network architectures.
In at least one configuration, computing device 700 typically includes at least one processing unit 702 and system memory 704. Depending on the exact configuration and type of computing device, system memory 704 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination thereof. System memory 704 may include an operating system 706, one or more program modules 708, and may include program data 710. The operating system 706 includes a component-based framework 712 that supports components (including properties and events), objects, inheritance, polymorphism, reflection, and provides an object-oriented component-based application programming interface (API), such as, but by no means limited to, that of the .NET™ Framework manufactured by the Microsoft® Corporation, Redmond, Wash. The computing device 700 is of a very basic configuration demarcated by a dashed line 714. Again, a terminal may have fewer components but may interact with a computing device that may have such a basic configuration.
Computing device 700 may have additional features or functionality. For example, computing device 700 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 7 by removable storage 716 and non-removable storage 718. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. System memory 704, removable storage 716 and non-removable storage 718 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 700. Any such computer storage media may be part of device 700. Computing device 700 may also have input device(s) 720 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 722 such as a display, speakers, printer, etc. may also be included.
Computing device 700 may also contain communication connections 724 that allow the device to communicate with other computing devices 726, such as over a network. These networks may include wired networks as well as wireless networks. Communication connections 724 are some examples of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, etc.
It is appreciated that the illustrated computing device 700 is only one example of a suitable device and is not intended to suggest any limitation as to the scope of use or functionality of the various embodiments described. Other well-known computing devices, systems, environments and/or configurations that may be suitable for use with the embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, game consoles, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and/or the like.
The implementation of automated performance prediction for cloud services on a client device may enable the assessment of cloud service performance variation in response to a wide range of hypothetical scenarios. Thus, the techniques described herein may enable the prediction of parameter settings that provide optimal cloud service performance, i.e., the shortest page load time (PLT), without error-prone and/or time-consuming manual trial-and-error parameter modifications and performance assessments. Moreover, the implementation of automated performance prediction for cloud services on a client device may take into account factors that are not visible to cloud service providers. These factors may include page rendering time, object dependencies, and multiple data sources across data centers and data providers.
Conclusion
In closing, although the various embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed subject matter.

Claims (9)

1. A computer readable medium storing computer-executable instructions that, when executed, cause one or more processors to perform acts comprising:
determining an original page load time (PLT) of a webpage and timing information of each web object of the webpage in a first scenario;
annotating each object with client delay information based on a parental dependency graph (PDG) of the webpage;
adjusting the timing information of each web object to reflect a second scenario that includes one or more modified parameters; and
simulating a page loading of the webpage based on the adjusted timing information of each web object and the PDG of the webpage to estimate a new PLT of the webpage.
2. The computer readable medium of claim 1, further storing an instruction that, when executed, causes the one or more processors to perform an act comprising comparing the original PLT of the webpage to the new PLT of the webpage to determine whether the one or more modified parameters improved the new PLT of the webpage.
3. The computer readable medium of claim 1, wherein the determining includes determining an original PLT of the webpage and the timing information of each web object in the webpage using a network trace of a page loading of the webpage.
4. The computer readable medium of claim 1, wherein the timing information includes at least one of a client delay associated with at least one of page rendering or JavaScript execution, a server delay associated with at least one of retrieving static content or generating dynamic content, a network delay, or a round trip time (RTT).
5. The computer readable medium of claim 4, wherein the network delay includes at least one of domain name system (DNS) lookup time, transmission control protocol (TCP) handshake time, or data transfer time.
6. The computer readable medium of claim 1, wherein the modified parameters include at least one of a modification to a client delay associated with at least one of page rendering or JavaScript execution, a modification to a server delay associated with at least one of retrieving static content or generating dynamic content, a modification to a network delay, or a modification to a round trip time (RTT).
7. The computer readable medium of claim 6, wherein the modification to the network delay includes at least one of a DNS lookup time modification, a TCP handshake time modification, or a data transfer time modification.
8. The computer readable medium of claim 1, wherein the modified parameters may include a modification to a number of possible parallel connections used to load the webpage.
9. The computer readable medium of claim 1, further storing an instruction that, when executed, causes the one or more processors to perform an act comprising deriving the PDG, wherein the deriving comprises:
extracting a list of web objects in the webpage;
loading the webpage and delaying the download of each of the web objects to discover at least one of one or more descendant web objects or one or more ancestor web objects of each web object; and
encapsulating one or more dependency relationships for each web object in the PDG, each dependency relationship corresponding to the dependency relationship between each web object and one descendant web object or one ancestor web object.
US12/547,704 2009-08-26 2009-08-26 Web page load time prediction and simulation Active 2029-11-10 US8078691B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/547,704 US8078691B2 (en) 2009-08-26 2009-08-26 Web page load time prediction and simulation
US13/267,254 US8335838B2 (en) 2009-08-26 2011-10-06 Web page load time prediction and simulation
US13/707,042 US9456019B2 (en) 2009-08-26 2012-12-06 Web page load time prediction and simulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/547,704 US8078691B2 (en) 2009-08-26 2009-08-26 Web page load time prediction and simulation

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/267,254 Division US8335838B2 (en) 2009-08-26 2011-10-06 Web page load time prediction and simulation

Publications (2)

Publication Number Publication Date
US20110054878A1 US20110054878A1 (en) 2011-03-03
US8078691B2 true US8078691B2 (en) 2011-12-13

Family

ID=43626146

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/547,704 Active 2029-11-10 US8078691B2 (en) 2009-08-26 2009-08-26 Web page load time prediction and simulation
US13/267,254 Active US8335838B2 (en) 2009-08-26 2011-10-06 Web page load time prediction and simulation
US13/707,042 Active 2031-11-24 US9456019B2 (en) 2009-08-26 2012-12-06 Web page load time prediction and simulation

Family Applications After (2)

Application Number Title Priority Date Filing Date
US13/267,254 Active US8335838B2 (en) 2009-08-26 2011-10-06 Web page load time prediction and simulation
US13/707,042 Active 2031-11-24 US9456019B2 (en) 2009-08-26 2012-12-06 Web page load time prediction and simulation

Country Status (1)

Country Link
US (3) US8078691B2 (en)

Families Citing this family (72)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9473419B2 (en) 2008-12-22 2016-10-18 Ctera Networks, Ltd. Multi-tenant cloud storage system
US8886714B2 (en) 2011-08-08 2014-11-11 Ctera Networks Ltd. Remote access service for cloud-enabled network devices
US8078691B2 (en) 2009-08-26 2011-12-13 Microsoft Corporation Web page load time prediction and simulation
US9307003B1 (en) 2010-04-18 2016-04-05 Viasat, Inc. Web hierarchy modeling
WO2012023050A2 (en) 2010-08-20 2012-02-23 Overtis Group Limited Secure cloud computing system and method
JP2012053853A (en) * 2010-09-03 2012-03-15 Ricoh Co Ltd Information processor, information processing system, service provision device determination method and program
GB2488790A (en) * 2011-03-07 2012-09-12 Celebrus Technologies Ltd A method of controlling web page behaviour on a web enabled device
US9912718B1 (en) 2011-04-11 2018-03-06 Viasat, Inc. Progressive prefetching
US9456050B1 (en) 2011-04-11 2016-09-27 Viasat, Inc. Browser optimization through user history analysis
US9106607B1 (en) 2011-04-11 2015-08-11 Viasat, Inc. Browser based feedback for optimized web browsing
US9160785B2 (en) * 2011-05-26 2015-10-13 Candi Controls, Inc. Discovering device drivers within a domain of a premises
US20120072544A1 (en) * 2011-06-06 2012-03-22 Precision Networking, Inc. Estimating application performance in a networked environment
CN102982044A (en) * 2011-09-07 2013-03-20 腾讯科技(深圳)有限公司 Method and device for webpage browsing
US8832249B2 (en) * 2011-11-30 2014-09-09 At&T Intellectual Property I, L.P. Methods and apparatus to adjust resource allocation in a distributive computing network
DE102011122491A1 (en) * 2011-12-29 2013-07-04 Deutsche Telekom Ag Method for outputting interaction elements
KR101558909B1 (en) * 2012-01-19 2015-10-08 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Iterative simulation of requirement metrics for assumption and schema-free configuration management
US9037636B2 (en) * 2012-01-19 2015-05-19 Microsoft Technology Licensing, Llc Managing script file dependencies and load times
JP5890539B2 (en) * 2012-02-22 2016-03-22 ノキア テクノロジーズ オーユー Access to predictive services
JP5963540B2 (en) * 2012-05-30 2016-08-03 キヤノン株式会社 Information processing apparatus, program, and control method
US8804523B2 (en) 2012-06-21 2014-08-12 Microsoft Corporation Ensuring predictable and quantifiable networking performance
US8706871B2 (en) 2012-07-20 2014-04-22 Blue Kai, Inc. Tag latency monitoring and control system for enhanced web page performance
US10666533B2 (en) 2012-07-20 2020-05-26 Oracle International Corporation Tag latency monitoring and control system for enhanced web page performance
EP2904738B1 (en) * 2012-09-28 2016-07-13 Telefonaktiebolaget LM Ericsson (publ) Measuring web page rendering time
CN103237043B (en) * 2012-12-25 2016-09-14 上海绿岸网络科技股份有限公司 Based on the distributed data collecting system of multi-data source
US8694055B1 (en) * 2013-03-05 2014-04-08 Metropcs Wireless, Inc. System and method for selection of wireless service provider via a mobile terminal
US10375192B1 (en) 2013-03-15 2019-08-06 Viasat, Inc. Faster web browsing using HTTP over an aggregated TCP transport
US20140331117A1 (en) * 2013-05-06 2014-11-06 Qualcomm Innovation Center, Inc. Application-based dependency graph
CA2856620A1 (en) 2013-07-10 2015-01-10 Comcast Cable Communications, Llc Adaptive content delivery
US9785452B2 (en) * 2013-10-09 2017-10-10 Cisco Technology, Inc. Framework for dependency management and automatic file load in a network environment
JP2017501512A (en) 2013-11-01 2017-01-12 カパオ・テクノロジーズKapow Technologies Judging web page processing status
CN105794175B (en) 2013-12-03 2019-07-16 瑞典爱立信有限公司 Convey the performance metric of the system of WEB content
KR102170520B1 (en) 2014-03-06 2020-10-27 삼성전자주식회사 Apparatas and method for improving a loading time in an electronic device
US9613158B1 (en) 2014-05-13 2017-04-04 Viasat, Inc. Cache hinting systems
US10855797B2 (en) 2014-06-03 2020-12-01 Viasat, Inc. Server-machine-driven hint generation for improved web page loading using client-machine-driven feedback
US10318615B1 (en) * 2014-06-18 2019-06-11 Amazon Technologies, Inc. Modeling and measuring browser performance using reference pages
US9723057B2 (en) 2014-09-25 2017-08-01 Oracle International Corporation Reducing web page load latency by scheduling sets of successive outgoing HTTP calls
US10051029B1 (en) * 2014-10-27 2018-08-14 Google Llc Publisher specified load time thresholds for online content items
CN105701113B (en) * 2014-11-27 2019-04-09 国际商业机器公司 Method and apparatus for optimizing webpage preloading
US9990264B2 (en) * 2014-12-12 2018-06-05 Schneider Electric Software, Llc Graphic performance measurement system
US10198348B2 (en) 2015-08-13 2019-02-05 Spirent Communications, Inc. Method to configure monitoring thresholds using output of load or resource loadings
US10621075B2 (en) * 2014-12-30 2020-04-14 Spirent Communications, Inc. Performance testing of a network segment between test appliances
JP6467999B2 (en) * 2015-03-06 2019-02-13 富士ゼロックス株式会社 Information processing system and program
US10341432B2 (en) 2015-04-27 2019-07-02 Honeywell International Inc. System for optimizing web page loading
US10033656B2 (en) * 2015-05-21 2018-07-24 Sap Portals Israel Ltd Critical rendering path optimization
US10432490B2 (en) * 2015-07-31 2019-10-01 Cisco Technology, Inc. Monitoring single content page application transitions
AU2016317736B2 (en) 2015-08-28 2020-07-23 Viasat, Inc. Systems and methods for prefetching dynamic URLs
US10387676B2 (en) 2015-09-14 2019-08-20 Viasat, Inc. Machine-driven crowd-disambiguation of data resources
WO2017053835A1 (en) 2015-09-23 2017-03-30 Viasat, Inc. Acceleration of online certificate status checking with an internet hinting service
CA3002517C (en) 2015-10-20 2022-06-21 Viasat, Inc. Hint model updating using automated browsing clusters
CN108780446B (en) 2015-10-28 2022-08-19 维尔塞特公司 Time-dependent machine-generated cues
EP3860093B1 (en) 2015-12-04 2023-08-16 ViaSat Inc. Accelerating connections to a host server
US20180077227A1 (en) * 2016-08-24 2018-03-15 Oleg Yeshaya RYABOY High Volume Traffic Handling for Ordering High Demand Products
US10880396B2 (en) 2016-12-02 2020-12-29 Viasat, Inc. Pre-fetching random-value resource locators
US10592404B2 (en) * 2017-03-01 2020-03-17 International Business Machines Corporation Performance test of software products with reduced duration
US10680908B2 (en) 2017-05-23 2020-06-09 International Business Machines Corporation User interface with expected response times of commands
US10769352B2 (en) * 2017-11-09 2020-09-08 Adobe Inc. Providing dynamic web content without flicker
US10565083B2 (en) 2017-12-08 2020-02-18 Cisco Technology, Inc. Simulating hosted application performance
CN108123940B (en) * 2017-12-18 2020-07-24 中国科学院深圳先进技术研究院 Socket-based asynchronous communication method, storage medium and processor
CN108170427A (en) * 2017-12-19 2018-06-15 中山大学 A kind of webpage extracting components and multiplexing method based on test
US20200162359A1 (en) * 2018-11-16 2020-05-21 Citrix Systems, Inc. Systems and methods for checking compatibility of saas apps for different browsers
US10666528B1 (en) 2018-11-28 2020-05-26 Sap Portals Israel Ltd. Decoupling platform as a service providers using a service management platform
US10958546B2 (en) * 2018-12-31 2021-03-23 Hughes Network Systems, Llc System and method for estimation of quality of experience (QoE) for web browsing using passive measurements
GB2584821A (en) * 2019-05-16 2020-12-23 Samknows Ltd Web-browsing test system
CN110209975B (en) * 2019-05-30 2021-11-02 百度在线网络技术(北京)有限公司 Method, apparatus, device and storage medium for providing object
CN111061430B (en) * 2019-11-27 2021-02-19 东南大学 Data placement method for heterogeneous I/O fine-grained perception in multi-cloud environment
US11429577B2 (en) 2020-01-27 2022-08-30 Salesforce.Com, Inc. Performance simulation and cost-benefit analysis for performance factors for web products in database systems
US11494286B2 (en) 2020-01-27 2022-11-08 Salesforce.Com, Inc. Dynamic adjustment of web product-based performance factors in database systems
US11573880B2 (en) 2020-01-27 2023-02-07 Salesforce.Com, Inc. Performance simulation for selected platforms for web products in database systems
US11281563B2 (en) * 2020-01-27 2022-03-22 Salesforce.Com, Inc. Actionable insights for performance of web products in database systems
CN113760631A (en) * 2020-07-06 2021-12-07 北京沃东天骏信息技术有限公司 Page loading duration determination method, device, equipment and storage medium
KR102390381B1 (en) * 2020-11-25 2022-04-25 고려대학교 산학협력단 Apparatuse and method for predicting webpage load time based on simulation data, and a recording medium program for performing the same
CN114710314B (en) * 2022-02-21 2023-06-06 深圳腾银信息咨询有限责任公司 Access method, device, system and medium for configured software service platform

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2798508A1 (en) * 2000-10-18 2002-04-25 Johnson & Johnson Consumer Companies, Inc. Intelligent performance-based product recommendation system
EP1618479A1 (en) * 2003-04-30 2006-01-25 Telecom Italia S.p.A. Method, system and computer program product for evaluating download performance of web pages
US7703079B1 (en) * 2005-05-03 2010-04-20 Oracle America, Inc. System performance prediction
US20090094137A1 (en) * 2005-12-22 2009-04-09 Toppenberg Larry W Web Page Optimization Systems
US7558771B2 (en) * 2006-06-07 2009-07-07 Gm Global Technology Operations, Inc. System and method for selection of prediction tools
US7853244B1 (en) * 2007-03-21 2010-12-14 At&T Mobility Ii Llc Exclusive wireless service proposals
US7937478B2 (en) * 2007-08-29 2011-05-03 International Business Machines Corporation Apparatus, system, and method for cooperation between a browser and a server to package small objects in one or more archives
US8078691B2 (en) 2009-08-26 2011-12-13 Microsoft Corporation Web page load time prediction and simulation

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4380174A (en) * 1980-08-15 1983-04-19 Tanenbaum Joseph M Apparatus for shear testing welds
US4395615A (en) * 1980-08-15 1983-07-26 Tanenbaum Joseph M Apparatus for welding metal trusses
US4408115A (en) * 1980-08-20 1983-10-04 Tanenbaum Joseph M Electrical resistance welding method and apparatus for same; and improved apparatus for testing welds
US4421969A (en) * 1980-08-20 1983-12-20 Tanenbaum Joseph M Method for fabricating open web steel joists
US6473794B1 (en) * 1999-05-27 2002-10-29 Accenture Llp System for establishing plan to test components of web based framework by displaying pictorial representation and conveying indicia coded components of existing network framework
US6519571B1 (en) * 1999-05-27 2003-02-11 Accenture Llp Dynamic customer profile management
US6536037B1 (en) * 1999-05-27 2003-03-18 Accenture Llp Identification of redundancies and omissions among components of a web based architecture
US7165041B1 (en) * 1999-05-27 2007-01-16 Accenture, Llp Web-based architecture sales tool
US7149698B2 (en) * 1999-05-27 2006-12-12 Accenture, Llp Business alliance identification in a web architecture Framework
US6615166B1 (en) * 1999-05-27 2003-09-02 Accenture Llp Prioritizing components of a network framework required for implementation of technology
US20050262104A1 (en) 1999-06-23 2005-11-24 Savvis Communications Corporation Method and system for internet performance monitoring and analysis
US6671818B1 (en) * 1999-11-22 2003-12-30 Accenture Llp Problem isolation through translating and filtering events into a standard object format in a network based supply chain
US7124101B1 (en) * 1999-11-22 2006-10-17 Accenture Llp Asset tracking in a network-based supply chain environment
US6606744B1 (en) * 1999-11-22 2003-08-12 Accenture, Llp Providing collaborative installation management in a network-based supply chain environment
US6775644B2 (en) * 2000-05-03 2004-08-10 Eureka Software Solutions, Inc. System load testing coordination over a network
US6601020B1 (en) * 2000-05-03 2003-07-29 Eureka Software Solutions, Inc. System load testing coordination over a network
US7388839B2 (en) 2003-10-22 2008-06-17 International Business Machines Corporation Methods, apparatus and computer programs for managing performance and resource utilization within cluster-based systems
US7475067B2 (en) * 2004-07-09 2009-01-06 Aol Llc Web page performance scoring
US20070136788A1 (en) 2004-12-16 2007-06-14 Monahan Brian Q Modelling network to assess security properties
US7904518B2 (en) * 2005-02-15 2011-03-08 Gytheion Networks Llc Apparatus and method for analyzing and filtering email and for providing web related services
US20070016573A1 (en) 2005-07-15 2007-01-18 International Business Machines Corporation Selection of web services by service providers
US20070083649A1 (en) 2005-10-12 2007-04-12 Brian Zuzga Performance monitoring of network applications
US20070294399A1 (en) 2006-06-20 2007-12-20 Clifford Grossner Network service performance monitoring apparatus and methods
US7853535B2 (en) * 2006-12-27 2010-12-14 Colella Brian A System for secure internet access for children
US7921205B2 (en) * 2007-04-13 2011-04-05 Compuware Corporation Website load testing using a plurality of remotely operating agents distributed over a wide area

Non-Patent Citations (25)

* Cited by examiner, † Cited by third party
Title
"iMacros for Firefox", retrieved on Oct 13, 2009 at http://www.iopus.com/imacros/home/fx/welcome.htm, 3 pgs.
"Keynote Internet Testing Environment", retrieved on Oct 13, 2009 at http://kite.keynote.com, 1 pg.
"Selenium Web Application Testing System", retrieved on Oct. 13, 2009 at http://seleniumhq.org, 2 pgs.
"Watir, Web Applicatioon Testing in Ruby", retrieved on Oct. 13, 2009 at http://watir.com, 4 pgs.
Bahl et al, "Towards Highly Reliable Enterprise Network Services via Inference of Multi-Level Dependencies", ACM SIGCOMM, Aug 2007, 12 pgs.
Brown et al, "An Active Approach to Characterizing Dynamic Dependencies for Problem Determination in a Distributed Environment", IFIP/IEEE Intl Symposium on Integrated Network Management, May 2001, 14 pgs.
Cardwell et al, "Modeling TCP Latency", IEEE INFOCOM, Mar. 2000, 10 pgs.
Chen et al, "Automating Network Application Dependency Discovery: Experiences, Limitations and New Solutions", OSDI, 8th USENIX Symposium on Operating Systems Design and Implementation, Dec 2008, 14 pgs.
Chen et al, "Link Gradients: Predicting the Impact of Network Latency on Multitier Applications", IEEE INFOCOM, Mar. 2009, 9 pgs.
Hu et al, "Improving TCP Startup Performance Using Active Measurements: Algorithm and Evaluation", Proc 11th IEEE Intl Conf on Network Protocols, Nov. 2003, 12 pgs.
Kandula et al, "What's Going on? Learning Communication Rules in Edge Networks", ACM SIGCOMM, Aug 2008, pp. 87-98.
Kiciman et al, "AjaxScope: A Platform for Remotely Monitoring the Client-Side Behavior of Web 2.0 Applications", ACM SOSP, Oct. 2007, 14 pgs.
Kohavi et al, "Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO", ACM KDD, Aug 2007, 9 pgs.
Litoiu, "Migrating to Web Services-Latency and Scalability", Proceedings of the Fourth International Workshop on Web Site Evolution, WSE 2002, IEEE, 2002, 8 pages.
Nahum et al, "The Effects of Wide-Area Conditions on WWW Server Performance", ACM Slgmetrics, Jun. 2001, pp. 257-267.
Olshefski et al, "Inferring Client Response Time at the Web Server", ACM Sigmetrics, Jun. 2002, 12 pgs.
Park et al., "CoDNS: Improving DNS Performance and Reliability via Cooperative Lookups", Proc 6th Conf on Symposium on Operating Systems Design and Implementation, USENIX OSDI, Dec. 2004, vol. 6, 16 pgs.
Paxson, "Bro: A System for Detecting Network Intruders in Real-Time", Computer Networks, 31 (23-24), Dec. 1999, 2435-2463, 22 pgs.
Rajamony et al, "Measuring Client-Perceived Response Times on the WWW", USENIX Symposium on Internet Technologies and Systems, Mar. 2001, 12 pgs.
Shah et al, "Keyboards and Covert Channels", USENIX Security, Jul.-Aug 2006, pp. 59-75.
Tariq et al, "Answering What If Deployment and Configuration Questions with Wise", ACM SIGCOMM, Aug 2008, pp. 99-110.
Tian, et al., "A Concept for QoS Integration in Web Services", Proceedings of the Fourth International Conference on Web Information Systems Engineering Workshops, WISEW 2003, IEEE, 2004, 7 pages.
Veal et al, "New Methods for Passive Estimation of TCP Round-Trip Times", PAM Passive and Active Network Measurement, 6th International Workshop, Mar. 2005, 14 pgs.
Wikipedia, "Cloud Computing", retrieved on Jun. 25, 2009 at http://en.wikipedia.org/wiki/Cloud computing, Wikipedia, Jun. 24, 2009, pp. 1-14.
Wischik, "Short Messages", Journal Philosophical Transactions of the Royal Society, 366 (1872), University of Leicester MSc seminar, Jan. 2008, 11 pgs.

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9471379B2 (en) 2009-10-09 2016-10-18 International Business Machines Corporation Generating timing sequence for activating resources linked through time dependency relationships
US8793690B2 (en) * 2009-10-09 2014-07-29 International Business Machines Corporation Generating timing sequence for activating resources linked through time dependency relationships
US20110088034A1 (en) * 2009-10-09 2011-04-14 International Business Machines Corporation Method and system for managing resources
US10606645B2 (en) 2009-10-09 2020-03-31 International Business Machines Corporation Generating timing sequence for activating resources linked through time dependency relationships
US20110172963A1 (en) * 2010-01-13 2011-07-14 Nec Laboratories America, Inc. Methods and Apparatus for Predicting the Performance of a Multi-Tier Computer Software System
US9069871B2 (en) * 2010-09-27 2015-06-30 International Business Machines Corporation System, method, and program for generating web page
US20120079361A1 (en) * 2010-09-27 2012-03-29 International Business Machines Corporation System, method, and program for generating web page
US20120084133A1 (en) * 2010-09-30 2012-04-05 Scott Ross Methods and apparatus to distinguish between parent and child webpage accesses and/or browser tabs in focus
US8499065B2 (en) * 2010-09-30 2013-07-30 The Nielsen Company (Us), Llc Methods and apparatus to distinguish between parent and child webpage accesses and/or browser tabs in focus
US9332056B2 (en) 2010-09-30 2016-05-03 The Nielsen Company (Us), Llc Methods and apparatus to distinguish between parent and child webpage accesses and/or browser tabs in focus
US10305768B2 (en) 2012-05-22 2019-05-28 Microsoft Technology Licensing, Llc Page phase time
EP2852900A4 (en) * 2012-05-22 2016-07-06 Microsoft Technology Licensing Llc Page phase time
WO2013177259A3 (en) * 2012-05-22 2014-08-28 Microsoft Corporation Page phase time
US9826359B2 (en) 2015-05-01 2017-11-21 The Nielsen Company (Us), Llc Methods and apparatus to associate geographic locations with user devices
US10057718B2 (en) 2015-05-01 2018-08-21 The Nielsen Company (Us), Llc Methods and apparatus to associate geographic locations with user devices
US10681497B2 (en) 2015-05-01 2020-06-09 The Nielsen Company (Us), Llc Methods and apparatus to associate geographic locations with user devices
US10412547B2 (en) 2015-05-01 2019-09-10 The Nielsen Company (Us), Llc Methods and apparatus to associate geographic locations with user devices
US11197125B2 (en) 2015-05-01 2021-12-07 The Nielsen Company (Us), Llc Methods and apparatus to associate geographic locations with user devices
US11188941B2 (en) 2016-06-21 2021-11-30 The Nielsen Company (Us), Llc Methods and apparatus to collect and process browsing history
US10459937B2 (en) 2016-06-30 2019-10-29 Micro Focus Llc Effect of operations on application requests

Also Published As

Publication number Publication date
US9456019B2 (en) 2016-09-27
US20110054878A1 (en) 2011-03-03
US20120030338A1 (en) 2012-02-02
US20130097313A1 (en) 2013-04-18
US8335838B2 (en) 2012-12-18

Similar Documents

Publication Publication Date Title
US8078691B2 (en) Web page load time prediction and simulation
US7877681B2 (en) Automatic context management for web applications with client side code execution
Li et al. WebProphet: Automating Performance Prediction for Web Services.
EP1490775B1 (en) Java application response time analyzer
US7334220B2 (en) Data driven test automation of web sites and web services
US9785452B2 (en) Framework for dependency management and automatic file load in a network environment
US10965766B2 (en) Synchronized console data and user interface playback
US10452463B2 (en) Predictive analytics on database wait events
US7996822B2 (en) User/process runtime system trace
CN102597993A (en) Managing application state information by means of a uniform resource identifier (uri)
US11675682B2 (en) Agent profiler to monitor activities and performance of software agents
US20060230133A1 (en) On demand problem determination based on remote autonomic modification of web application server operating characteristics
Mardani et al. Fawkes: Faster Mobile Page Loads via {App-Inspired} Static Templating
US10644971B2 (en) Graph search in structured query language style query
Panum et al. Kraaler: A user-perspective web crawler
Avram et al. Latency amplification: Characterizing the impact of web page content on load times
Krishnamurthy Synthetic workload generation for stress testing session-based systems
Book et al. Cost and response time simulation for web-based applications on mobile channels
Gustavsson et al. Efficient data communication between a webclient and a cloud environment
Asrese Web Quality of Experience Measurement: Metrics, Methods and Tools
US11516234B1 (en) In-process correlation through class field injection
Chiew Web page performance analysis
Wu et al. Improving Web Browsing Experience with Personalized Edge Computing
Palomäki Web Application Performance Testing
Smirnova et al. Load Testing of Vaadin Flow applications

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, MING;WANG, YI-MIN;GREENBERG, ALBERT;AND OTHERS;SIGNING DATES FROM 20090824 TO 20090825;REEL/FRAME:023146/0731

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034564/0001

Effective date: 20141014

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12