US8332820B2 - Automated load model - Google Patents


Info

Publication number: US8332820B2
Authority: US (United States)
Prior art keywords: dimension, script, target, multiple dimensions, total error
Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: US12/261,519
Other versions: US20100115339A1
Inventor: David M. Hummel, Jr.
Current assignee: Accenture Global Services Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Accenture Global Services Ltd
Application filed by Accenture Global Services Ltd
Priority to US12/261,519
Assigned to Accenture Global Services GmbH (assignor: Hummel, David M., Jr.)
Priority to CA2681887A (CA2681887C)
Priority to EP09252500.5A (EP2189907B1)
Priority to CN200910174909.8A (CN101727372B)
Priority to BRPI0904262-8A (BRPI0904262B1)
Publication of US20100115339A1
Assigned to Accenture Global Services Limited (assignor: Accenture Global Services GmbH)
Publication of US8332820B2
Application granted
Legal status: Active
Expiration: adjusted

Classifications

    • G06F 11/00: Error detection; error correction; monitoring (G: Physics; G06: Computing, calculating or counting; G06F: Electric digital data processing)
    • G06F 11/3414: Workload generation, e.g. scripts, playback
    • G06F 11/263: Generation of test inputs, e.g. test vectors, patterns or sequences
    • G06F 11/3672: Software testing; test management
    • G06F 11/3688: Test management for test execution, e.g. scheduling of test suites
    • G06F 11/3433: Recording or statistical evaluation of computer activity for performance assessment, for load management
    • G06F 11/3452: Performance evaluation by statistical analysis

Definitions

  • The present embodiment relates to computer performance testing. More specifically, the present embodiment relates to a method and system for optimizing the testing of computer systems.
  • As computer systems have evolved, so has the complexity of the applications that operate on them. Oftentimes teams of developers create these complex applications and systems, and testing them becomes more and more difficult as the complexity increases. In some cases, applications and systems undergo alpha and beta testing, which typically involves allowing select groups of individuals to exercise various functions supported by the systems in an attempt to identify any deficiencies the systems may have.
  • A system characterized by a large number of users may also undergo performance testing to assess the readiness or real-world performance of the system. This involves determining whether the real-world operation of the running system meets set expectations. To make this determination, the system under test should: 1) match or correlate directly to a production system being certified, 2) be monitored continuously throughout testing to determine whether the results meet or fail appropriate targets, and 3) operate on a realistic workload that reflects real-world usage.
  • However, meeting this third requirement can be challenging.
  • The method may receive a script footprint that includes dimension-effect values corresponding to the number of times a computer system dimension is affected by the script.
  • A script corresponds to a code listing, executed by a processor, that enables testing functionality associated with the computer system.
  • A script footprint is a measure of the way in which a script affects the computer system, as represented by the number of times per time period the script exercises operations of the computer system.
  • A dimension corresponds to an operation performed by the computer system that a script may or may not exercise.
  • For example, an authentication server may perform operations such as identity authentication, alternate identity authentication, password updating, group authentication, and/or operations for setting privileges with the system.
  • Target information may also be received.
  • The target information includes target-dimension values corresponding to a desired number of times per time period each dimension should be affected.
  • The method and system may determine the number of times to execute the scripts within the time period, so as to minimize the difference between the actual number of times dimensions are affected and the target-dimension values during the time period.
  • The method and system may also execute the script on the computer system the determined number of times within the time period.
  • The differences between the actual number of times dimensions are affected and the target-dimension values are minimized via a linear-least-squares algorithm.
  • A weighting factor may be applied to the dimension-effect values and the target-dimension values, so as to define the relative importance of the metrics and/or to even out the effects of differently scaled metrics.
  • The weighting factor may correspond to the reciprocal of the highest of the target-dimension values, the reciprocal of the average of all of the target-dimension values, and/or the reciprocal of the sum of all of the target-dimension values.
  • The target-dimension values may be scaled by a growth factor to provide more current target-dimension values in cases where the target-dimension values are based on historical data.
  • FIG. 1 is an embodiment of a test system for testing the robustness of computer systems in accordance with the present invention.
  • FIG. 2 is an embodiment of a tree structure of a test plan for testing a computer system in accordance with the present invention.
  • FIG. 3 is a portion of an exemplary table generated by an embodiment of a load model application that enables characterizing various test scripts utilized to test a computer system in accordance with the present invention.
  • FIG. 4 is a portion of an exemplary table generated by the load model application used in FIG. 3 that enables specifying target information for groups in accordance with the present invention.
  • FIG. 5 is a portion of an exemplary result table generated by the load model application of FIG. 3 in accordance with the present invention.
  • FIG. 6 is a flow diagram that describes the operations of the load model application of FIG. 3.
  • FIG. 7 schematically illustrates an embodiment of a computer system using the load model application of FIG. 3 in accordance with the present invention.
  • FIG. 1 is a test system 100 for testing the robustness of computer systems.
  • The test system 100 includes a processor 115, test scripts 105, and a load model 120.
  • The processor 115 may correspond to any conventional computer or other data processing device capable of executing applications for verifying the functionality of computer systems.
  • For example, the processor 115 may correspond to an Intel®, AMD®, or PowerPC® based processor running, for example, a Microsoft Windows®, Linux, or other Unix®-based operating system.
  • The processor 115 may be adapted to execute applications such as a load model application and/or a load testing application, such as HP LoadRunner®.
  • The load testing application executes test scripts for testing the computer systems according to the load model.
  • The processor 115 may also be adapted to communicate with the computer systems via an interface, such as a network interface.
  • The test scripts 105 correspond to code listings, executed by the processor 115, that enable testing functionality associated with the various computer systems.
  • The code listings may correspond to a scripting language, such as Java® and/or Microsoft Visual Basic®.
  • The code listings may also correspond to programming languages, such as C or C++.
  • The test scripts 105 may include code listings designed to simulate human interactions associated with the various computer systems to be tested by the test system 100.
  • For example, a first test script may include code that enables simulating human interactions associated with reading and writing e-mail messages.
  • In this case, the test script may include code that generates an e-mail message, selects a recipient for the e-mail message, and communicates the e-mail message to an e-mail server 125.
  • A second test script may include code that enables simulating human interactions associated with browsing web pages.
  • For example, the script may include code that requests a web page from a web server 130, specifies fields on the web page, and communicates the fields back to the web server 130.
  • A third test script may include code that enables simulating human interactions associated with database interactions.
  • For example, the script may include code that stores data to and retrieves data from a database server 135.
  • Other systems and scripts for testing the systems may exist as well.
  • In operation, the processor 115 executes the test scripts 105 according to a load model 120.
  • The load model 120 is generated by a load model application and specifies how many times per time period each of the test scripts 105 is executed. For example, a first script may be executed 159.24 times per minute and a second script 26.29 times per minute. How often to execute a given script is determined by the load model application.
  • The load model application determines the execution rates of the scripts based on a variety of factors, including various metrics that comprise dimensions.
  • A metric corresponds to a collection of dimensions associated with a particular aspect of a system under test.
  • A dimension corresponds to an operation performed within that particular metric or aspect of the system.
  • For example, authentication metrics may be associated with an authentication server that performs operations such as identity authentication, alternate identity authentication, password updating, group authentication, and/or operations for setting privileges with the system. One plausible representation of these relationships is sketched below.
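  • A minimal sketch of this data model in Python (illustrative only; the patent does not prescribe any particular representation, and the script and dimension names are taken from the FIG. 3 examples):

      # Metrics group related dimensions of the system under test.
      metrics = {
          "Authentication metric": [
              "ID Authentication", "Alt Authentication", "Password Update",
              "Group Authentication", "Privileges",
          ],
      }

      # A script footprint maps each affected dimension to the number of
      # times one execution of the script exercises that operation.
      footprints = {
          "Account Disable": {"ID Authentication": 1.0, "Password Update": 1.0,
                              "Group Authentication": 1.0, "Privileges": 2.0},
          "Admin Login": {"ID Authentication": 1.0, "Group Authentication": 1.0},
      }
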
  • FIGS. 2-4 describe various user interfaces and operations associated with the load model application.
  • FIG. 2 is a tree structure 220 of a test plan for testing a computer system by the test system 100.
  • The test plan may be created via the load model application described above.
  • The tree structure 220 includes various group nodes 225 and several script nodes 210.
  • The tree structure 220 may be utilized to define the relationships between the various scripts utilized to test the computer system.
  • The internal functionality of the load model application is guided by the grouping of the scripts. Oftentimes there are important relationships between the scripts that need to be taken into consideration. These relationships may be inherent to the way the software or computer systems operate. For example, in many web applications, after a user successfully logs in, the user is presented with a welcome screen. If there are two scripts, one for logging in and one for navigating the welcome screen, it may make sense to run them one after another. However, an unconstrained load model may attempt to break this relationship between the scripts because it is not aware of it. The tree structure 220 enables constraining the load model so as to maintain these relationships.
  • Group nodes 225 are utilized to group related scripts and/or other groups, and to specify targeting information utilized to constrain the execution rates of scripts associated with the group.
  • For example, a “Test Plan” group node 200 may be utilized to specify targeting information for all the scripts below the “Test Plan” group node 200 in the tree structure 220, which in this example includes those scripts below the “Admin App” group node 205, the “Batch Related” group node 207, and the “Client App” group node 209.
  • Each of these group nodes in turn may be utilized to specify targeting information for scripts below the respective group.
  • For example, the “Admin App” group node 205 may be utilized to specify targeting information for all the scripts below it, which includes script nodes 210 for disabling an account, searching an account, updating an account, and logging into an account, by an administrator.
  • The previously mentioned target information includes a list of metrics, weights for each metric, values for dimensions associated with each metric, and growth factors for each metric. Weights are utilized to define the relative importance of the metrics.
  • The target-dimension values may be based on historic or expected values.
  • Each metric in the profile may be multiplied by a growth factor to predict some future state, or weighted to guide the model when making compromises.
  • The growth factors are expressed as a multiple of historic values.
  • Script nodes 210 are utilized to specify a script footprint.
  • A script corresponds to a code listing, executed by a processor, that enables testing functionality associated with a computer system that is under test.
  • A script footprint is a measure of the way in which a script affects the computer system, as represented by the number of times per time period the script exercises operations of the computer system.
  • A script footprint includes a list of dimensions that may be affected by the script, numeric values for each dimension that measure the effect of an individual execution of that script, and the minimum duration expected when running the script alone, without any other scripts running.
  • The footprint of a script may be determined by a combination of application knowledge and experimentation. It may be important to determine which metrics apply to a script. Often it takes access to an analyst with specific knowledge of the application to determine which web servers, databases, and backend systems are touched when running a script. If that is not available, it may be possible to determine this information by running the script alone, without any other scripts running, and logging and/or monitoring its behavior.
  • Running a script multiple times may help identify and eliminate errors or unintended traffic. For example, if a script is run twenty times in succession, then the resulting traffic should occur in multiples of twenty. Metrics that are not in even multiples indicate either interfering background traffic (which can often be ignored and subtracted) or variable (non-deterministic) behavior. This type of traffic may be accounted for with fractional values, as in the sketch below.
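  • A minimal sketch of this bookkeeping in Python (the function, its names, and the twenty-run convention are illustrative assumptions, not part of the patent):

      # Estimate a script's footprint from operation counts gathered while
      # the script ran n_runs times in an otherwise idle environment.
      def estimate_footprint(counts, baseline, n_runs=20):
          """counts: dimension -> total observed operations.
          baseline: dimension -> background traffic over the same window,
          measured with no script running, which is subtracted out."""
          footprint = {}
          for dimension, total in counts.items():
              net = max(0.0, total - baseline.get(dimension, 0.0))
              # Fractional results capture variable (non-deterministic) behavior.
              footprint[dimension] = net / n_runs
          return footprint

      observed = {"ID Authentication": 20, "Privileges": 42, "Audit Log": 3}
      background = {"Audit Log": 3}  # interfering traffic, subtracted
      print(estimate_footprint(observed, background))
      # -> {'ID Authentication': 1.0, 'Privileges': 2.1, 'Audit Log': 0.0}
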
  • FIG. 3 is a portion of an exemplary table 325 generated by the load model application that enables specifying the footprint of various scripts utilized to test computer systems, as described above.
  • The table 325 includes scripts 310, dimensions 315 that may be affected by a script, and dimension-effect fields 320.
  • Each dimension 315 corresponds to an operation performed by the system under test.
  • For example, an authentication server of the system under test may perform operations such as identity authentication, alternate identity authentication, password updating, group authentication, and/or operations for setting privileges with the system. These operations are represented by the dimensions 315 at the top of the table 325.
  • The dimension-effect fields 320 are input fields of the exemplary table that are utilized to specify a value associated with the effect of an individual execution of a script on a given dimension 315.
  • For example, the “Account Disable” script may affect the “ID Authentication,” “Password Update,” and “Group Authentication” dimensions 315 one time per execution, and the “Privileges” dimension two times per execution, as illustrated by the dimension-effect values 1.0 and 2.0 in the exemplary table 325.
  • The “Admin Login” script may affect the “ID Authentication” and “Group Authentication” dimensions one time per execution.
  • The scripts 310 in the exemplary table 325 are grouped together according to the tree structure 220 of FIG. 2.
  • For example, the scripts 310 “Account Disable,” “Account Search,” “Account Updating,” and “Admin Login” are grouped below the “Admin App” group 305, which in turn is below the “Test Plan” group 300.
  • FIG. 4 is a portion of an exemplary table 420 generated by the load model application that enables specifying target information for groups as described above.
  • The table 420 includes group rows 402, dimensions 410, and target-dimension fields 415.
  • The group rows 402 shown in the exemplary table 420 may correspond to groups shown in the exemplary tree structure 220 of FIG. 2.
  • For example, the table 420 includes a “Test Plan” group row 400 and an “Admin App” group row 405.
  • The dimensions 410 may correspond to dimensions that are affected by the various scripts, such as the dimensions “ID Authentication,” “Alt Authentication,” “Password Update,” “Group Authentication,” and “Privileges” described above with reference to FIG. 3.
  • The target-dimension fields 415 are utilized to specify target-dimension values for each dimension.
  • The target dimensions 415 specified in the “Test Plan” group row 400 and the “Admin App” group row 405 are utilized to constrain the number of times per time period the dimensions 410 are affected by scripts associated with those groups. For example, referring to the table 420, the number of times per minute the dimensions “ID Authentication,” “Alt Authentication,” and “Password Update” may be affected by scripts that are part of the “Test Plan” group 400 may be constrained to 5003, 2345, and 45.3 times per minute, respectively.
  • Similarly, the number of times per minute the dimensions “Alt Authentication” and “Group Authentication” may be affected by scripts that are part of the “Admin App” group 405 may be constrained to 45.0 and 121 times per minute, respectively.
  • Tables 1-10 below describe how the load model application determines the optimal execution rates of the individual scripts necessary to meet the defined targets.
  • Table 1 shows exemplary script footprints, or script profiles, associated with various scripts. In this case, the table defines the footprints of scripts named “Script 1” and “Script 2.”
  • The dimensions of a system under test that may or may not be affected by the scripts are listed across the top of the table and include “Query 1,” “Query 2,” “URL 1,” “URL 2,” and “URL 3.”
  • The query dimensions may correspond to operations performed by a database and are therefore included under the heading “Database metric.”
  • The URL dimensions may correspond to operations performed by a web server and are therefore included under the heading “Web metric.”
  • The script footprint is defined by the script-dimension values in the table.
  • The script-dimension values correspond to the number of times the various dimensions are affected by the script each time the script executes. For example, as shown in Table 1, “Script 1” affects dimension “Query 2” one time per execution, “URL 1” two times per execution, and “URL 3” one time per execution. “Script 2” affects dimension “Query 1” one time per execution, “URL 2” one time per execution, and “URL 3” one time per execution.
  • Table 2 shows target-dimension values corresponding to the number of times per minute that the dimensions in Table 1 should ideally be affected.
  • The table specifies that the dimensions “Query 1,” “Query 2,” “URL 1,” “URL 2,” and “URL 3” should be affected 10, 8, 16, 10, and 18 times per minute, respectively. That is, when executing “Script 1” and “Script 2” simultaneously, the number of times that a given dimension is affected should be constrained to the target-dimension values in Table 2.
  • The target-dimension values of Table 2 and the script-dimension values of Table 1 are represented on the left side of Table 3.
  • The middle portion of Table 3 represents the number of times per minute that each script is executed. For example, “×8” indicates that “Script 1” is executed 8 times per minute, and “×10” indicates that “Script 2” is executed 10 times per minute.
  • The right side of Table 3 shows the actual number of times per minute that the dimensions are affected by the scripts. For example, when “Script 1” is executed 8 times in one minute, the dimensions “Q2,” “URL 1,” and “URL 3” are affected 8, 16, and 8 times per minute, respectively. When “Script 2” is executed 10 times in one minute, the dimensions “Q1,” “URL 2,” and “URL 3” are each affected 10 times per minute.
  • In this case, executing “Script 1” eight times per minute and “Script 2” ten times per minute provides a perfect solution to the problem, because the actual number of times the dimensions are affected, on the right side of the table, equals the target-dimension values on the left side of the table.
  • For example, the actual number of times the dimension “Q1” is affected equals 10 times per minute, which is equal to the target-dimension value for “Q1.”
  • Likewise, the actual numbers of times the other dimensions are affected correspond to the respective target-dimension values. This means that executing “Script 1” 8 times per minute and “Script 2” 10 times per minute is the best combination of execution rates; a worked version of this computation is sketched below.
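  • A minimal worked version of this least-squares solve in Python with NumPy (illustrative; the patent does not specify an implementation, and the matrix simply transcribes Tables 1-3):

      import numpy as np

      # Rows are dimensions (Q1, Q2, URL 1, URL 2, URL 3); columns are
      # scripts. Each entry is the effect of one execution of that script.
      A = np.array([
          [0.0, 1.0],  # Query 1: affected once per "Script 2" execution
          [1.0, 0.0],  # Query 2: affected once per "Script 1" execution
          [2.0, 0.0],  # URL 1: affected twice per "Script 1" execution
          [0.0, 1.0],  # URL 2: affected once per "Script 2" execution
          [1.0, 1.0],  # URL 3: affected once by each script
      ])
      b = np.array([10.0, 8.0, 16.0, 10.0, 18.0])  # targets per minute

      # Linear least squares: rates minimizing ||A @ rates - b||^2.
      rates, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(rates)          # -> [ 8. 10.]: Script 1 at 8/min, Script 2 at 10/min
      print(A @ rates - b)  # -> zeros: a perfect fit in this case
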
  • In some cases, the target-dimension values are based on historic values.
  • In such cases, a growth factor may be applied to the target-dimension values to provide more current target-dimension values.
  • For example, all the target-dimension values may be doubled to bring historic target-dimension values up to more current values.
  • Table 4 represents a case where a perfect fit cannot be found. In this case, both scripts are executed six times per minute, but this results in an error as shown on the right side.
  • The actual values for dimensions A, B, and C are off by 4, −2, and 4, respectively. In other words, dimensions A and C are each affected four fewer times per minute than targeted, and dimension B is affected two more times per minute than targeted.
  • The model may have different metrics that are used simultaneously to define a load. Oftentimes those metrics have completely different units and scales. For example, database calls may occur at a rate of 5000 times per minute, while a web server may be serving a page at a rate of 30 times per minute.
  • In that case, a 10% error on the database end is 500, which squares to 250,000, whereas a 10% error on the web server end is 3, which squares to 9. Since the error value for the database is much larger, the model will make extreme sacrifices at the cost of web server traffic to meet database targets.
  • An example where the units and scales are completely different is shown in Table 5 below.
  • In that example, the combination of execution rates that minimizes the total error may correspond to executing “Script 1” 10 times per minute, “Script 2” 19.62 times per minute, and “Script 3” 9.81 times per minute.
  • Executing the scripts at these rates, while minimizing the overall error, results in an error of 96.2% in the actual dimension values associated with dimensions B and C, as opposed to an error of 1.9% for the actual dimension values associated with dimensions E and F.
  • In other words, the error is not balanced across the dimensions. This occurs because the target-dimension values for the “Web metric” are an order of magnitude below those of the “Database metric.”
  • To remedy this, weightings may be provided. There are several ways to define weights between metrics: manual weights and automatic normalization. Manual weights are values that may be assigned to each metric dimension value in a group. The target-dimension values and script-dimension values may be multiplied by this weight, which may result in a different best-fit solution.
  • Table 6 illustrates the results when weighting values are applied.
  • In this case, the dimension values for A, B, and C are multiplied by the weighting value 10, and the dimension values for D, E, and F are multiplied by the weighting value 1.
  • The combination of execution rates that minimizes the total error now corresponds to executing “Script 1” 10 times per minute, “Script 2” 12 times per minute, and “Script 3” 6 times per minute.
  • The error associated with dimension values B and C decreases from 96.2% to 20%, and the error associated with dimension values E and F increases from 1.9% to 40%.
  • In other words, the error between dimensions is now more balanced.
  • While weightings may be used to even out the effects of differently scaled metrics, they may also be used to introduce an intended preferential treatment of metrics. For example, in Table 7 below, preference is given to the “Web metric” dimensions rather than the “Database metric” dimensions by weighting the “Web metric” dimensions by a factor of 15 and weighting the “Database metric” dimensions by a factor of 1. The mechanics of such a weighted solve are sketched below.
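  • A minimal sketch of a weighted least-squares solve in Python (an assumption consistent with the description: weights are applied by scaling each dimension's row of the footprint matrix and its target, the standard weighted-least-squares construction; the small system below is made up for demonstration and is not Table 5):

      import numpy as np

      def weighted_rates(A, b, weights):
          """Scale each dimension row by its metric's weight, then solve.
          Larger weights make errors on those dimensions more costly."""
          w = np.asarray(weights, dtype=float)
          rates, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
          return rates

      # Two scripts, three dimensions; no exact fit exists.
      A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      b = np.array([10.0, 10.0, 25.0])

      print(weighted_rates(A, b, [1, 1, 1]))   # -> approx [11.67 11.67]
      print(weighted_rates(A, b, [1, 1, 10]))  # -> approx [12.49 12.49]; third row favored
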
  • Normalized weighting values may also be applied. Normalized weights attempt to find a fit that considers all metrics equally. Tables 8-10 illustrate various ways in which the weights may be normalized.
  • Table 8 illustrates weighting based on the peak metric value within a group.
  • Target-dimension values in the first group are divided by the maximum target-dimension value of 10. This effectively changes the target-dimension values in the first group from 10, 1, and 1 to 1, 0.1, and 0.1, respectively.
  • Target-dimension values in the second group are divided by 100, which is the maximum target-dimension value in that group. This effectively changes the target-dimension values in the second group from 100, 24, and 4 to 1, 0.24, and 0.04, respectively.
  • Table 9 below illustrates weighting based on the average of the target-dimension values within a group.
  • The first group of target-dimension values is divided by 4.6, which corresponds to the average of the target-dimension values 10, 3, and 1.
  • The second group of target-dimension values is divided by 42.6, which is the average of the target-dimension values 100, 24, and 4. This effectively changes the target-dimension values of the first group to 2.2, 0.65, and 0.22, and the target-dimension values of the second group to 2.35, 0.56, and 0.09, respectively.
  • Table 10 below illustrates weighting based on the sum of the target-dimension values within a group.
  • The first group of target-dimension values is divided by 45, which corresponds to the sum of the target-dimension values 12, 10, 8, 7, and 4.
  • The second group of target-dimension values is divided by 25, which is the sum of the target-dimension values 2, 2, 12, 9, and 4. This effectively changes the target-dimension values of the first group to 0.27, 0.22, 0.18, 0.16, and 0.09, respectively, and the target-dimension values of the second group to 0.08, 0.08, 0.48, 0.36, and 0.16, respectively. The arithmetic for each normalization scheme is sketched below.
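  • A minimal sketch of the normalization arithmetic in Python (the function name is an illustrative assumption; the printed values reproduce the Table 9 example, noting that the patent divides by the rounded averages 4.6 and 42.6):

      import numpy as np

      def normalize_targets(targets, mode="max"):
          """Divide a group's target-dimension values by its peak, average,
          or sum, so that differently scaled metrics are weighted equally."""
          t = np.asarray(targets, dtype=float)
          scale = {"max": t.max(), "avg": t.mean(), "sum": t.sum()}[mode]
          return t / scale

      print(normalize_targets([10, 3, 1], "avg"))    # -> approx [2.14 0.64 0.21]
      print(normalize_targets([100, 24, 4], "avg"))  # -> approx [2.34 0.56 0.09]
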
  • FIG. 5 is a portion of an exemplary result table 500 that is generated by the load model application after completing the steps of defining metrics, building the component tree, and inputting target and script profiles.
  • The portion of the exemplary result table 500 may be communicated to a display, so that an operator may assess the effectiveness of the load model in testing a system under test.
  • The left side of the result table 500 includes groups 510 and scripts 512.
  • The top of the result table 500 includes an execution-rate column 517 and dimensions 515. Each group 510 includes a row for the targeted-dimension values.
  • For example, the targeted-dimension values for the “ID Authentication,” “Alt Authentication,” “Password Update,” “Group Authentication,” and “Privileges” dimensions as related to the “Test Plan” group correspond to 5003, 2345, 45.3, 0, and 0, respectively.
  • The targeted-dimension values for the same dimensions as related to the “Admin App” group correspond to 0, 45.0, 0, 121, and 0, respectively.
  • Each script 512 row includes values corresponding to the actual dimension-effect values of a script.
  • The actual dimension-effect value of a script corresponds to the dimension-effect values of the script multiplied by the execution-rate value in the corresponding execution-rate column 517.
  • The values in the execution-rate column 517 correspond to the combination of execution rates that minimizes the total error, as described above. In this case, executing the scripts “Account Disable,” “Account Search,” “Account Updating,” and “Admin Login” 28.6, 10.1, 2.21, and 107 times per minute, respectively, minimizes the total error.
  • As a result, the “Account Disable” script affects the dimensions “ID Authentication,” “Password Update,” and “Group Authentication” 28.6 times per minute, and the “Privileges” dimension 57.2 times per minute.
  • The “Account Search” script affects the dimensions “Group Authentication” and “Privileges” 10.1 times per minute each.
  • The “Account Updating” script affects the dimensions “ID Authentication” and “Group Authentication” 2.21 times per minute, and the “Privileges” dimension 4.42 times per minute.
  • The “Admin Login” script affects the dimensions “ID Authentication,” “Group Authentication,” and “Privileges” 107 times per minute each.
  • Each group 510 also includes a fitness row 505 .
  • The fitness row 505 includes values that describe how close the traffic generated by the model is to the target traffic. A score of 0% means that the target was completely missed, and a value of 100% is a perfect match. There are a number of ways to interpret the data in the result table 500. Instances where the fitness score is 0% may indicate that no script touches a metric dimension. When this is the case, the load model application will be unable to find a solution that generates traffic for that dimension. This problem may be corrected by introducing new scripts or expanding existing scripts.
  • Instances where the fitness score is low for a number of metric dimensions shared by a script may indicate that the real-world users, from whom the target information comes, are utilizing a particular application in an unexpected way. For example, if a script goes from page A to page B and then to page C every time, but real-world users skip page B 90% of the time, there may not be a good solution. One plausible per-dimension fitness measure is sketched below.
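  • The patent does not give a formula for the fitness score, so the following Python sketch is only one plausible definition consistent with the 0%-miss/100%-match description (the function name and the overshoot handling are assumptions):

      def fitness(actual, target):
          """0% for a complete miss, 100% for a perfect match; overshoot
          is penalized the same way as undershoot."""
          if target == 0:
              return 100.0 if actual == 0 else 0.0
          return max(0.0, 100.0 * (1.0 - abs(actual - target) / target))

      print(fitness(121.0, 121.0))  # -> 100.0 (perfect match)
      print(fitness(0.0, 45.3))     # -> 0.0 (target completely missed)
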
  • FIG. 6 is a flow diagram that describes the operations of the load model application described above.
  • First, the footprint information for the various scripts utilized to test a computer system may be specified.
  • A script footprint is a measure of the way in which a script affects the computer system, as represented by the number of times per time period the script exercises operations of the computer system.
  • Receiving the footprint information may involve specifying information such as a list of metric dimensions that may be affected by the script, numeric values for each metric dimension that measure the effect of an individual execution of that script, and the minimum duration expected when running the script in a clean environment. This information may be specified via the exemplary table 325 of FIG. 3 or a different user interface.
  • Next, target information may be specified.
  • The target information includes target-dimension values corresponding to a desired number of times per time period each dimension above should be affected.
  • The information may include a list of metrics, weights for each metric, values for dimensions associated with each metric, and growth factors for each metric. This information may be specified via the exemplary table 420 of FIG. 4 or a different user interface.
  • A growth factor may be applied to all the target-dimension values, so as to predict some future state to guide the model when making compromises.
  • The growth factor may be applied when the target-dimension values are based on historic values.
  • The growth factor may enable converting the historic target-dimension values to target-dimension values that are more representative of the current state of the computer system. For example, all the target-dimension values may be doubled to bring historic target-dimension values up to more current values.
  • Weights may be applied to all the target-dimension values for a given metric.
  • The weights may be utilized to define the relative importance of the metrics.
  • The weights may be manually specified or automatically normalized. Manual weights may be utilized to provide specialized treatment for a group of metrics. Normalized weights may be utilized to even out the effects of differently scaled metrics. In this case, the weightings may be based on the maximum target-dimension value, the average of the target-dimension values, or the sum of the target-dimension values within a metric.
  • The load model application may then determine the optimal number of times to execute the scripts, so as to minimize the differences between the number of times the dimensions are affected and the target-dimension values.
  • The load model application may utilize a linear-least-squares (LLS) algorithm or another suitable algorithm for determining a set of coefficients that minimizes the error of a system of equations.
  • The LLS algorithm may determine the optimal number of times to execute the various scripts, so as to minimize the overall difference between the target-dimension values and the actual dimension values.
  • The load model application may then generate a result table, such as the exemplary result table 500 of FIG. 5.
  • The result table may be communicated to a display, so that an operator may interpret the results.
  • Finally, the scripts may be executed according to the execution rates determined at block 620.
  • For example, the processor 115 may execute the test scripts 105 according to the load model 120, so as to test the functionality of various computer systems, such as an e-mail server 125, web server 130, and database server 135.
  • Alternatively, the load model may be communicated to another system that is operative to test the system under test. A minimal pacing loop for executing a script at a computed rate is sketched below.
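  • A minimal sketch of pacing a script at a computed rate in Python (illustrative only; commercial tools such as HP LoadRunner® schedule iterations themselves, and this single-threaded loop assumes each execution finishes before the next is due):

      import time

      def run_at_rate(script, rate_per_minute, duration_minutes=1.0):
          """Invoke `script` (a no-argument callable) at a fixed cadence,
          e.g. rate_per_minute=28.6 starts it roughly every 2.1 seconds."""
          interval = 60.0 / rate_per_minute
          deadline = time.monotonic() + 60.0 * duration_minutes
          next_start = time.monotonic()
          while next_start < deadline:
              script()
              next_start += interval
              time.sleep(max(0.0, next_start - time.monotonic()))
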
  • FIG. 7 illustrates a general computer system 700, which may represent the processor 115 of the test system 100 shown in FIG. 1; the e-mail server 125, web server 130, or database server 135 shown in FIG. 1; or any of the other computing devices referenced herein.
  • The computer system 700 may include a set of instructions 745 that may be executed to cause the computer system 700 to perform any one or more of the methods or computer-based functions disclosed herein.
  • The computer system 700 may operate as a stand-alone device or may be connected, e.g., using a network, to other computer systems or peripheral devices.
  • The computer system may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment.
  • The computer system 700 may also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a control system, a camera, a scanner, a facsimile machine, a printer, a pager, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions 745 (sequential or otherwise) that specify actions to be taken by that machine.
  • The computer system 700 may be implemented using electronic devices that provide voice, video, or data communication. Further, while a single computer system 700 may be illustrated, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
  • The computer system 700 may include a processor 705, such as a central processing unit (CPU), a graphics processing unit (GPU), or both.
  • The processor 705 may correspond to the processor 115 of the test system 100.
  • The processor 705 may be a component in a variety of systems.
  • For example, the processor 705 may be part of a standard personal computer or a workstation.
  • The processor 705 may be one or more general processors, digital signal processors, application-specific integrated circuits, field-programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now-known or later-developed devices for analyzing and processing data.
  • The processor 705 may implement a software program, such as code generated manually (i.e., programmed).
  • The computer system 700 may include a memory 710 that can communicate via a bus 720.
  • The memory 710 may be a main memory, a static memory, or a dynamic memory.
  • The memory 710 may include computer-readable storage media such as various types of volatile and non-volatile storage media, including, but not limited to, random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media, and the like.
  • For example, the memory 710 may include a cache or random access memory for the processor 705.
  • Alternatively, the memory 710 may be separate from the processor 705, such as a cache memory of a processor, the system memory, or other memory.
  • The memory 710 may be an external storage device or database for storing data. Examples may include a hard drive, compact disc (“CD”), digital video disc (“DVD”), memory card, memory stick, floppy disc, universal serial bus (“USB”) memory device, or any other device operative to store data.
  • The memory 710 may be operable to store instructions 745 executable by the processor 705.
  • The functions, acts, or tasks illustrated in the figures or described herein may be performed by the programmed processor 705 executing the instructions 745 stored in the memory 710.
  • Processing strategies may include multiprocessing, multitasking, parallel processing, and the like.
  • The computer system 700 may further include a display 730, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a flat panel display, a solid state display, a cathode ray tube (CRT), a projector, a printer, or other now-known or later-developed display device for outputting determined information.
  • The display 730 may act as an interface for the user to see the functioning of the processor 705, or specifically as an interface with the software stored in the memory 710 or in a drive unit 715.
  • The computer system 700 may include an input device 725 configured to allow a user to interact with any of the components of the system 700.
  • The input device 725 may be a number pad, a keyboard, or a cursor control device, such as a mouse, joystick, touch screen display, remote control, or any other device operative to interact with the system 700.
  • The computer system 700 may also include a disk or optical drive unit 715.
  • The disk drive unit 715 may include a computer-readable medium 740 in which one or more sets of instructions 745, e.g., software, can be embedded. Further, the instructions 745 may perform one or more of the methods or logic as described herein.
  • The instructions 745 may reside completely, or at least partially, within the memory 710 and/or within the processor 705 during execution by the computer system 700.
  • The memory 710 and the processor 705 also may include computer-readable media as discussed above.
  • The present disclosure contemplates a computer-readable medium 740 that includes instructions 745, or receives and executes instructions 745 responsive to a propagated signal, so that a device connected to a network 750 may communicate voice, video, audio, images, or any other data over the network 750.
  • The instructions 745 may be implemented with hardware, software, and/or firmware, or any combination thereof. Further, the instructions 745 may be transmitted or received over the network 750 via a communication interface 735.
  • The communication interface 735 may be a part of the processor 705 or may be a separate component.
  • The communication interface 735 may be created in software or may be a physical connection in hardware.
  • The communication interface 735 may be configured to connect with the network 750, external media, the display 730, or any other components in the system 700, or combinations thereof.
  • The connection with the network 750 may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly as discussed below.
  • The additional connections with other components of the system 700 may be physical connections or may be established wirelessly.
  • The network 750 may include wired networks, wireless networks, or combinations thereof.
  • The wireless network may be a cellular telephone network, or an 802.11, 802.16, 802.20, or WiMax network.
  • Further, the network 750 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed, including, but not limited to, TCP/IP-based networking protocols.
  • The computer-readable medium 740 may be a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions.
  • The term “computer-readable medium” may also include any medium that may be capable of storing, encoding, or carrying a set of instructions for execution by a processor, or that may cause a computer system to perform any one or more of the methods or operations disclosed herein.
  • The computer-readable medium 740 may include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories.
  • The computer-readable medium 740 also may be a random access memory or other volatile re-writable memory.
  • Additionally, the computer-readable medium 740 may include a magneto-optical or optical medium, such as a disk or tape, or another storage device to capture carrier-wave signals, such as a signal communicated over a transmission medium.
  • A digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that may be a tangible storage medium. Accordingly, the disclosure may be considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media in which data or instructions may be stored.
  • Alternatively, dedicated hardware implementations, such as application-specific integrated circuits, programmable logic arrays, and other hardware devices, may be constructed to implement one or more of the methods described herein.
  • Applications that may include the apparatus and systems of various embodiments may broadly include a variety of electronic and computer systems.
  • One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that may be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system may encompass software, firmware, and hardware implementations.
  • The method and system may be realized in hardware, software, or a combination of hardware and software.
  • The method and system may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited.
  • A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when loaded and executed, controls the computer system such that it carries out the methods described herein.
  • The method and system may also be embedded in a computer program product, which includes all the features enabling the implementation of the methods described herein and which, when loaded in a computer system, is able to carry out these methods.
  • Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • In summary, the embodiments disclosed herein provide an approach for optimizing the testing of computer systems. For example, metrics and dimensions for a system may be determined. Then the footprints of the test scripts may be determined. Each footprint may include dimension-effect values corresponding to the number of times a dimension is affected by a script. Next, target-dimension values corresponding to a desired number of times per time period each dimension should be affected are specified. Then an LLS algorithm is utilized to determine the number of times to execute each script within the time period, so as to minimize the difference between the actual number of times the dimensions are affected and the target-dimension values. Finally, the scripts are executed on the computer system the determined number of times within the time period. The computer system in turn exercises operations of other computer systems according to the scripts.

Abstract

A method and system for testing a computer system is provided. In one implementation, the method and system may include receiving a script footprint that includes dimension-effect values corresponding to the number of times a computer system dimension is affected by the script. Target information may also be received. The target information includes target-dimension values corresponding to a desired number of times per time period each dimension should be affected. The method and system may determine the number of times to execute the scripts within the time period so as to minimize the difference between the actual number of times dimensions are affected and the target-dimension values per time period. The method and system may also execute the script on the computer system the determined number of times within the time period.

Description

BACKGROUND
1. Field
The present embodiment relates to computer performance testing. More specifically, the present embodiment relates to a method and system for optimizing the testing of computers systems.
2. Background Information
As computer systems have evolved, so have the complexity of applications that operate on these systems. Oftentimes teams of developers create these complex applications and systems. Testing these systems becomes more and more difficult as the complexity increases. In some cases, applications and systems undergo alpha and beta testing. This typically involves allowing select groups of individuals to exercise various functions supported by the systems in an attempt to identify any deficiencies the systems may have.
A system characterized by large number of users may also undergo performance testing to assess the readiness or real world performance of the system. This involves determining whether the real world operation of the running system meets set expectations. To make this determination, the system under test should: 1) match or correlate directly to a production system being certified, 2) be monitored continuously throughout testing to determine whether the results meet or fail appropriate targets, and 3) be operating on a realistic workload which reflects real world usage. However, reaching this third requirement can be challenging.
SUMMARY OF INVENTION
To address the problems outlined above, a method and system for testing a computer system is provided. In one implementation, the method may receive a script footprint that includes dimension-effect values corresponding to the number of times a computer system dimension is affected by the script. A script corresponds to a code listing, executed by a processor that enables testing functionality associated with the computer system. A script footprint is a measure of the way in which a script affects the computer system as represented by the number of times per time period the script exercises operations of the computer system. A dimension corresponds to an operation performed by the computer system that a script may or may not exercise. For example, an authentication server may perform operations such as identity authentication, alternate identity authentication, password updating, group authentication, and/or operations for setting privileges with the system. Target information may also be received. The target information includes target dimension values corresponding to a desired number of times per time period each dimension should be affected. The method and system may determine the number of times to execute the scripts within the time period, so as to minimize the difference between the actual number of times dimensions are affected and the target-dimension during the time period. The method and system may also execute the script on the computer system the determined number of times within the time period.
In one aspect of the present invention, the differences between the actual number of times dimensions are affected, and the target-dimension values, are minimized via a linear-least-squares algorithm.
In another aspect of the present invention, a weighting factor may be applied to the dimension-effect values and the target-dimension values, so as to define the relative importance of the metrics and/or to even out the effects of different scaled metrics. The weighting factor may correspond to the reciprocal of a highest of the target-dimension values, the reciprocal of an average of all of the target-dimension values, and/or the reciprocal of a sum of all of the target-dimension values.
In yet another aspect of the present invention, the target-dimension values may be scaled by a growth factor to provide more current target-dimension values in cases where the target-dimension values are based on historical data.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is an embodiment of a test system for testing the robustness of computer systems in accordance with the present invention;
FIG. 2 is an embodiment of a tree structure of a test plan for testing a computer system in accordance with the present invention;
FIG. 3 is a portion of an exemplary table generated by an embodiment of a load model application that enables characterizing various test scripts utilized to test a computer system in accordance with the present invention;
FIG. 4 is a portion of an exemplary table generated by the load model application used in FIG. 3 that enables specifying target information for groups in accordance with the present invention;
FIG. 5 is a portion of an exemplary result table generated by the load model application of FIG. 3 in accordance with the present invention;
FIG. 6 is a flow diagram that describes the operations of the load model application of FIG. 3; and
FIG. 7 schematically illustrates an embodiment of a computer system using the load model application of FIG. 3 in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is a test system 100 for testing the robustness of computer systems. The test system 100 includes a processor 115, test scripts 105, and a load model 120.
The processor 115 may correspond to any conventional computer or other data processing device capable of executing applications for verifying the functionality of computer systems. For example, the processor 115 may correspond to an Intel®, AMD®, or PowerPC® based processor operating, for example, a Microsoft Windows®, Linux, or other Unix® based operating system. The processor 115 may be adapted to execute applications such as a load model application and/or a load testing application, such as HP Load Runner®. The load testing application executes test scripts for testing the computer systems according to the load model. The processor 115 may also be adapted to communicate with the computer systems via an interface, such as a network interface.
The test scripts 105 correspond to code listings, executed by the processor 115, that enable testing functionality associated with the various computer systems. The code listings may correspond to a scripting language, such as Java® and/or Microsoft Visual Basic®. The code listings may also correspond to programming languages, such a C or C++. The test scripts 105 may include code listings designed to simulate human interactions associated with the various computer systems to be tested by test system 100. For example, a first test script may include code that enables simulating human interactions associated with reading and writing e-mail messages. In this case, the test script may include code that generates an e-mail message, selects a recipient for the e-mail message, and communicates the e-mail message to an e-mail server 125. A second test script may include code that enables simulating human interactions associated with browsing web pages. For example, the script may include code that requests a web page from a web server 130, specifies fields on the web page, and communicates the fields back to the web server 130. A third test script may include code that enables simulating human interactions associated with database interactions. For example, the script may include code that retrieves and stores data to and from a database server 135. Other systems and scripts for testing the systems may exist as well.
In operation, the processor 115 executes the test scripts 105 according to a load model 120. The load model 120 is generated by a load model application and specifies how many times per time period each of the test scripts 105 is executed. For example, a first script may be executed 159.24 times per minute and a second script may be executed 26.29 times per minute. How often to execute a given script is determined by the load model application. The load model application determines the execution rates of the scripts based on a variety of factors including various metrics that include dimensions. A metric corresponds to a collection of dimensions associated with a particular aspect of a system under test. A dimension corresponds to an operation performed within that particular metric or aspect of the system. For example, authentication metrics may be associated with an authentication server that performs operations, such as identity authentication, alternate identity authentication, password updating, group authentication, and/or operations for setting privileges with the system.
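For illustration only, the relationships just described might be sketched in Python as simple mappings; the script names and the metric/dimension names below are hypothetical, and the rates echo the example above:

# Illustrative sketch only; script names and metric/dimension names are
# hypothetical. A load model maps each test script to an execution rate.
load_model = {
    "read_write_email_script": 159.24,  # executions per minute
    "browse_web_pages_script": 26.29,
}

# A metric is a collection of dimensions, each dimension being an operation
# performed within one aspect of the system under test.
metrics = {
    "authentication_metric": [
        "identity_authentication",
        "alternate_identity_authentication",
        "password_updating",
        "group_authentication",
        "setting_privileges",
    ],
}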
FIGS. 2-4 describe various user interfaces and operations associated with the load model application.
FIG. 2 is a tree structure 220 of a test plan for testing a computer system by the test system 100. The test plan may be created via the load model application described above. The tree structure 220 includes various group nodes 225 and several script nodes 210. The tree structure 220 may be utilized to define the relationships between various scripts utilized to test the computer system. The internal functionality of the load model application is guided by the grouping of the scripts. Oftentimes there are important relationships between the scripts that need to be taken into consideration. These relationships may be inherent to the way the software or computer systems operate. For example, in many web applications, after a user successfully logs in, the user is presented with a welcome screen. If there are two scripts, one for logging in and one for navigating the welcome screen, it may make sense to run them one after another. However, an unconstrained load model may attempt to break this relationship between the scripts because it is not aware of it. The tree structure 220 enables constraining the load model, so as to maintain these relationships.
Group nodes 225 are utilized to group related scripts and/or other groups, and to specify targeting information utilized to constrain the execution rates of scripts associated with the group. For example, a “Test Plan” group node 200 may be utilized to specify targeting information for all the scripts below the “Test Plan” group node 200 in the tree structure 220, which in this example includes those scripts below the “Admin App” group node 205, “Batch Related” group node 207, and “Client App” group node 209. Each of these group nodes in turn may be utilized to specify targeting information for scripts below the respective group. For example, the “Admin App” group node 205 may be utilized to specify targeting information for all the scripts below the “Admin App” group node 205, which includes script nodes 210 for disabling an account, searching an account, updating an account, and logging into an account, by an administrator.
The previously mentioned target information includes a list of metrics, weights for each metric, values for dimensions associated with each metric, and growth factors for each metric. Weights are utilized to define the relative importance of the metrics. The target dimension values may be based on historic or expected values. Each metric in the profile may be multiplied by a growth factor to predict some future state or weighted to guide the model when making compromises. The growth factors are expressed as a multiple of historic values.
Script nodes 210 are utilized to specify a script footprint. A script corresponds to a code listing, executed by a processor that enables testing functionality associated with a computer system that is under test. A script footprint is a measure of the way in which a script affects the computer system as represented by the number of times per time period the script exercises operations of the computer system. A script footprint includes a list of dimensions that may be affected by the script, numeric values for each dimension that measures the effect of an individual execution of that script, and the minimum duration expected when running the script alone, without any other scripts running.
The footprint of a script may be determined by a combination of application knowledge and experimentation. It may be important to determine which metrics apply to a script. Often it takes access to an analyst with specific knowledge of the application to determine which web servers, databases and backend systems are touched when running a script. If that is not available, it may be possible to determine this information by running the script alone, without any other scripts running, and logging and/or monitoring the behavior of the script.
After determining the list of metrics that apply to a script, it may be best to measure the effects of the script in a clean environment with logging/monitoring enabled for those system metrics that have been targeted. Running a script multiple times may help identify and eliminate errors or unintended traffic. For example, if a script is run twenty times in succession, then the resulting traffic may occur in multiples of twenty. Metrics that are not in even multiples either show interfering background traffic (which can often be ignored and subtracted) or variable (non-deterministic) behavior. This type of traffic may be accounted for with fractional values.
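As a sketch of the measurement technique just described (the monitored counts and dimension names here are hypothetical): running the script a fixed number of times in succession and dividing each observed count by the run count yields the per-execution footprint, while counts that are not even multiples flag background traffic or non-deterministic behavior:

# Hypothetical sketch of deriving a script footprint from monitored counts.
RUNS = 20  # execute the script twenty times in succession, alone

# Observed per-dimension operation counts from logging/monitoring.
observed = {"id_authentication": 20, "password_update": 20, "privileges": 41}

footprint = {}
for dimension, count in observed.items():
    footprint[dimension] = count / RUNS  # per-execution effect
    if count % RUNS != 0:
        # Not an even multiple of twenty: interfering background traffic to
        # subtract, or variable behavior recorded as a fractional value.
        print(f"{dimension}: non-even multiple ({count / RUNS:.2f} per run)")

print(footprint)  # {'id_authentication': 1.0, 'password_update': 1.0, 'privileges': 2.05}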
FIG. 3 is a portion of an exemplary table 325 generated by the load model application that enables specifying the footprint of various scripts utilized to test computer systems, as described above. The table 325 includes scripts 310, dimensions 315 that may be affected by a script, and dimension-effect-fields 320. Each dimension 315 corresponds to an operation performed by the system under test. For example, an authentication server of the system under test may perform operations such as identity authentication, alternate identity authentication, password updating, group authentication, and/or operations for setting privileges with the system. These operations are represented by the dimensions 315 at the top of the table 325.
The dimension-effect-fields 320 are input fields of the exemplary table that are utilized to specify a value associated with the effect of an individual execution of a script on a given dimension 315. For example, referring to FIG. 3, the “Account Disable” script may affect the “ID Authentication,” “Password Update,” and “Group Authentication” dimensions 315 one time per execution, and the “Privileges” dimension two times per execution, as illustrated by the dimension-effect values 1.0 and 2.0 in the exemplary table 325. The “Admin Login” script may affect the “ID Authentication” dimension and “Group Authentication” dimension one time per execution.
The scripts 310 in the exemplary table 325 are grouped together according to the tree structure 220 of FIG. 2. For example, in the exemplary table 325, the scripts 310 “Account Disable,” “Account Search,” “Account Updating,” and “Admin Login” are grouped below the “Admin App” group 305, which in turn is below the “Test Plan” group 300.
FIG. 4 is a portion of an exemplary table 420 generated by the load model application that enables specifying target information for groups as described above. The table 420 includes group rows 402, dimensions 410, and target-dimension fields 415. The group rows 402 shown in the exemplary table 420 may correspond to groups shown in the exemplary tree structure 220 of FIG. 2. For example, the table 420 includes a “Test Plan” group row 400 and an “Admin App” group row 405.
The dimensions 410 may correspond to dimensions that are affected by the various scripts, such as the dimensions “ID Authentication,” “Alt Authentication,” “Password Update,” “Group Authentication,” and “Privileges” as described above with reference to FIG. 3.
The target-dimension fields 415 are utilized to specify target-dimension values for each dimension. In the exemplary table 420, the target-dimension values specified in the “Test Plan” group row 400 and “Admin App” group row 405 are utilized to constrain the number of times per time period the dimensions 410 are affected by scripts associated with those groups. For example, referring to the table 420, scripts that are part of the “Test Plan” group 400 may be constrained to affecting the dimensions “ID Authentication,” “Alt Authentication,” and “Password Update” 5003, 2345, and 45.3 times per minute, respectively. Similarly, scripts that are part of the “Admin App” group 405 may be constrained to affecting the dimensions “Alt Authentication” and “Group Authentication” 45.0 and 121 times per minute, respectively.
Tables 1-10 below describe how the load model application determines the optimal execution rates of the individual scripts necessary to meet the defined targets. Table 1 shows exemplary script footprints, or script profiles, associated with various scripts. In this case, the table defines the footprints of scripts named “Script 1” and “Script 2”. The dimensions of a system under test that may or may not be affected by the scripts are listed across the top of the table and include “Query 1,” “Query 2,” “URL 1,” “URL 2,” and “URL 3.” The query dimensions may correspond to operations performed by a database and are therefore included under the heading “Database metric.” Similarly, the URL dimensions may correspond to operations performed by a web server and are therefore included under the heading “Web metric.”
The script footprint is defined by the script-dimension values in the table. The script-dimension values correspond to the number of times the various dimensions are affected by the script each time the script executes. For example, as shown in Table 1, “Script 1” affects dimension “Query 2” one time per execution, “URL 1” two times per execution, and “URL 3” one time per execution. “Script 2” affects dimension “Query 1” one time per execution, “URL 2” one time per execution, and “URL 3” one time per execution.
TABLE 1
Exemplary script profiles

              Database metric        Web metric
           Query 1   Query 2   URL 1   URL 2   URL 3
Script 1      0         1        2       0       1
Script 2      1         0        0       1       1
Table 2 below shows target-dimension values corresponding to the number of times per minute that the dimensions in Table 1 should ideally be affected.
TABLE 2
Exemplary target function execution counts

              Database metric        Web metric
           Query 1   Query 2   URL 1   URL 2   URL 3
Target       10         8       16      10      18
In this case, the table specifies that the dimensions “Query 1,” “Query 2,” “URL 1,” “URL 2,” and “URL 3” should be affected 10, 8, 16, 10, and 18 times per minute, respectively. That is, when executing “Script 1” and “Script 2” simultaneously, the number of times that a given dimension is affected should be constrained to the target-dimension values in Table 2.
The question becomes: what is the optimal number of times to execute “Script 1” and “Script 2” of Table 1, so as to best match the target-dimension values of Table 2? Table 3 represents a solution to this problem.
TABLE 3
Exemplary perfect fit solution

          Database     Web metric                     Database     Web metric
          Q1   Q2   URL 1  URL 2  URL 3           Q1   Q2   URL 1  URL 2  URL 3
Target    10    8    16     10     18    Target   10    8    16     10     18
Script 1   0    1     2      0      1    ×8        0    8    16      0      8
Script 2   1    0     0      1      1    ×10      10    0     0     10     10
The target-dimension values of Table 2 and the script-dimension values of Table 1 are represented on the left side of Table 3. The middle portion of Table 3 represents the number of times per minute that each script is executed. For example, “×8” indicates that “Script 1” is executed 8 times per minute, and “×10” indicates that “Script 2” is executed 10 times per minute.
The right side of Table 3 shows the actual number of times per minute that the dimensions are affected by the scripts. For example, when “Script 1” is executed 8 times in one minute, the dimensions “Q2,” “URL 1,” and “URL 3” are affected 8, 16, and 8 times per minute respectively. When “Script 2” is executed 10 times in one minute, the dimensions “Q1,” “URL 2,” and “URL 3” are each affected 10 times per minute.
In this case, executing “Script 1” eight times per minute and “Script 2” ten times per minute provides a perfect solution to the problem because the actual number of times the dimensions are affected on the right side of the table equals the target-dimension values on the left side of the table. For example, the actual number of times the dimension “Q1” is affected equals 10 times per minute, which is equal to the target-dimension value for “Q1.” Likewise, the actual numbers of times the other dimensions are affected correspond to the respective target-dimension values. This means that executing “Script 1” 8 times per minute and “Script 2” 10 times per minute is the best combination of execution rates.
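The perfect fit of Table 3 can be checked with a short calculation; in the sketch below, the matrix X holds the Table 1 footprints (one row per dimension, one column per script) and y holds the Table 2 targets:

import numpy as np

# Rows: Query 1, Query 2, URL 1, URL 2, URL 3. Columns: Script 1, Script 2.
X = np.array([[0, 1],
              [1, 0],
              [2, 0],
              [0, 1],
              [1, 1]], dtype=float)
y = np.array([10, 8, 16, 10, 18], dtype=float)  # targets from Table 2

rates = np.array([8, 10], dtype=float)  # Script 1 ×8, Script 2 ×10 per minute
actual = X @ rates                      # actual per-minute dimension counts
print(actual)                           # [10.  8. 16. 10. 18.]
print(np.allclose(actual, y))           # True: zero total error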
Oftentimes, the target-dimension values are based on historic values. In this case, a growth factor may be applied to the target-dimension values to provide more current target-dimension values. For example, all the target-dimension values may be doubled to bring historic target-dimension values up to more current values.
The previous example was a perfect solution because all the target-dimension values were met. This may happen under real-world conditions where the scripts are faithful to user behaviors. Oftentimes, however, user behaviors are too complex, or the target-dimension values are estimated, making it difficult, if not impossible, to find a perfect fit. This results in an error between the actual and target-dimension values. The goal in these situations is to find a combination of script execution rates that minimizes this error. The following paragraphs explain how this is accomplished.
TABLE 4
Best fit solution

           A    B    C    ×                 A    B    C
Target    10   10   10          Target     10   10   10
Script 1   1    1    0    6     Script 1    6    6    0
Script 2   0    1    1    6     Script 2    0    6    6
                                Totals      6   12    6
                                Errors      4   −2    4
Table 4 represents a case where a perfect fit cannot be found. In this case, both scripts are executed six times per minute, but this results in an error, as shown on the right side of the table. Here, the actual values for dimensions A, B, and C are off by 4, −2, and 4, respectively. In other words, dimensions A and C are affected four times per minute too few, and dimension B is affected two times per minute too many.
Since each error, negative or positive, represents some level of failure to meet the target, all the error values should be counted. Directly adding these numbers together is not possible since the negative and positive errors will cancel out. Instead, the total error is computed according to the following equation:
$$\text{Total Error} = 4^2 + (-2)^2 + 4^2 = 36$$
The advantage of representing the error this way is that it lends itself to the application of the mathematical approach known as Linear Least Squares (LLS). This approach enables finding combinations of script execution rates that minimize the total error.
Generally, application of the LLS approach begins by considering an overdetermined system of the form

$$\sum_{j=1}^{n} X_{ij}\beta_j = y_i, \qquad (i = 1, 2, \ldots, m),$$

of $m$ linear equations in $n$ unknowns $\beta_1, \beta_2, \ldots, \beta_n$, with $m > n$, written in matrix form as $X\beta = y$. The Linear Least Squares approach has a unique solution, provided that the $n$ columns of the matrix $X$ are linearly independent. The solution may be obtained by solving the normal equations $(X^{T}X)\hat{\beta} = X^{T}y$.
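In code, the normal equations may be solved directly, or, equivalently, a library least-squares routine may be used; applied as a sketch to the system of Tables 1 and 2, this recovers the perfect-fit rates found above:

import numpy as np

X = np.array([[0, 1], [1, 0], [2, 0], [0, 1], [1, 1]], dtype=float)
y = np.array([10, 8, 16, 10, 18], dtype=float)

# Solving the normal equations (X^T X) beta = X^T y directly:
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)  # [ 8. 10.]

# The same solution via a numerically robust least-squares routine:
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # [ 8. 10.]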
When finding a best combination, the model may have different metrics that are used simultaneously to define a load. Oftentimes, those metrics have completely different units and scales. For example, database calls may occur at a rate of 5000 times per minute, while a web server may be serving a page at a rate of 30 times per minute. When these two metrics are processed by the LLS algorithm, a 10% error on the database end is 500, which squares to 250,000, whereas a 10% error on the web server end is 3, which squares to 9. Since the error value for the database is much larger, the model will make extreme sacrifices at the cost of web server traffic to meet database targets. An example where the units and scale are completely different is shown in Table 5 below.
TABLE 5
Unweighted best fit solution

           Web metric      Database metric                  Web metric          Database metric
           A    B    C     D     E     F       ×         A      B      C       D      E      F
Target    10   10   10    100   100   100               10     10     10     100    100    100
Script 1   1    0    0     10     0     0     10        10      0      0     100      0      0
Script 2   0    1    0      0     5     0     19.62      0   19.62     0       0     98.1    0
Script 3   0    0    2      0     0    10      9.81      0      0   19.62      0      0     98.1
                                             Errors      0   −9.62  −9.62      0     1.92   1.92
                                             %         0.0%  96.2%  96.2%    0.0%    1.9%   1.9%
In this example, the combination of execution rates that minimizes the total error corresponds to executing “Script 1” 10 times per minute, “Script 2” 19.62 times per minute, and “Script 3” 9.81 times per minute. However, executing the scripts according to these rates, while minimizing the overall error, results in an error of 96.2% in the actual dimension values associated with dimensions B and C, as opposed to an error of 1.9% for the actual dimension values associated with dimensions E and F. In other words, the error is not balanced across the dimensions. This occurs because the target-dimension values for the “Web metric” are an order of magnitude below those of the “Database metric.”
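A least-squares solve reproduces the rates and the unbalanced errors of Table 5 (a sketch; the matrix rows are dimensions A through F, the columns are the three scripts):

import numpy as np

X = np.array([[1,  0,  0],
              [0,  1,  0],
              [0,  0,  2],
              [10, 0,  0],
              [0,  5,  0],
              [0,  0, 10]], dtype=float)
y = np.array([10, 10, 10, 100, 100, 100], dtype=float)

rates, *_ = np.linalg.lstsq(X, y, rcond=None)
print(rates.round(2))               # [10.   19.62  9.81]

errors = y - X @ rates              # per-dimension error (target minus actual)
print((100 * errors / y).round(1))  # [0. -96.2 -96.2 0. 1.9 1.9] percent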
As this may not be the intended result, weightings may be provided. There are two ways to define weights between metrics: manual weights and automatic normalization. Manual weights are values that may be assigned to each metric dimension value in a group. The target-dimension values and script-dimension values may be multiplied by this weight, which may result in a different best-fit solution.
Table 6 below illustrates the results when weighting values are applied. In this case, the dimension values for A, B, and C are multiplied by the weighting value 10, and the dimension values for D, E, and F are multiplied by the weighting value 1. With these weightings applied, the combination of execution rates that minimizes the total error now corresponds to executing “Script 1” 10 times per minute, “Script 2” 12 times per minute, and “Script 3” 6 times per minute. When executing the scripts according to these rates, the error associated with dimension values B and C decreases from 96.2% to 20%, and the error associated with dimension values E and F increases from 1.9% to 40%. The error between dimensions is now more balanced.
TABLE 6
Manually weighted best fit solution

           Web metric      Database metric                Web metric        Database metric
           A    B    C     D     E     F      ×         A     B     C      D     E     F
Target    10   10   10    100   100   100              10    10    10    100   100   100
Script 1   1    0    0     10     0     0    10        10     0     0    100     0     0
Script 2   0    1    0      0     5     0    12         0    12     0      0    60     0
Script 3   0    0    2      0     0    10     6         0     0    12      0     0    60
Weights       10                1            Errors     0    −2    −2      0    40    40
                                             %        0.0% 20.0% 20.0%   0.0% 40.0% 40.0%
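The manual weighting of Table 6 can be sketched by scaling both the footprint matrix rows and the target vector by the per-dimension weights before solving:

import numpy as np

X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 2],
              [10, 0, 0], [0, 5, 0], [0, 0, 10]], dtype=float)
y = np.array([10, 10, 10, 100, 100, 100], dtype=float)

# Weight 10 for the Web-metric dimensions (A, B, C) and weight 1 for the
# Database-metric dimensions (D, E, F), as in Table 6.
w = np.array([10, 10, 10, 1, 1, 1], dtype=float)

rates, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(rates.round(2))            # [10. 12.  6.] -- the Table 6 rates

print((y - X @ rates).round(1))  # [ 0. -2. -2.  0. 40. 40.] -- the Table 6 errors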
While weightings may be used to even out the effects of differently scaled metrics, they may also be used to introduce an intended preferential treatment of metrics. For example, in Table 7 below, preference is given to the “Web metric” dimensions over the “Database metric” dimensions by weighting the “Web metric” dimensions by a factor of 15 and weighting the “Database metric” dimensions by a factor of 1.
TABLE 7
Manually weighted to provide preferential results

           Web metric      Database metric                Web metric        Database metric
           A    B    C     D     E     F      ×         A     B     C      D     E     F
Target    10   10   10    100   100   100              10    10    10    100   100   100
Script 1   1    0    0     10     0     0    10        10     0     0    100     0     0
Script 2   0    1    0      0     5     0    11         0    11     0      0    55     0
Script 3   0    0    2      0     0    10     5.5       0     0    11      0     0    55
Weights       15                1            Errors     0    −1    −1      0    45    45
                                             %        0.0% 10.0% 10.0%   0.0% 45.0% 45.0%
Normalized weighting values may also be applied. Normalized weights attempt to find a fit that considers all metrics equally. Tables 8-10 illustrate various ways in which the weights may be normalized.
Table 8 illustrates weighting based on the peak metric value within a group. In this case, target-dimension values in the first group are divided by the maximum target-dimension value of 10. This effectively changes the target-dimension values in the first group from 10, 1, and 1 to 1, 0.1, and 0.1, respectively. Target-dimension values in the second group are divided by 100, which is the maximum target-dimension value in that group. This effectively changes the target-dimension values in the second group from 100, 24, and 4 to 1, 0.24, and 0.04, respectively.
TABLE 8
Peak value weighting

Target    10     1     1       100    24     4
Weights       1/10                 1/100
Table 9 below illustrates weighting based on the average of the target-dimension values within a group. In this case, the first group of target-dimension values is divided by 4.6, which corresponds to the average of the target-dimension values 10, 3, and 1, and the second group of target-dimension values is divided by 42.6, which is the average of the target-dimension values 100, 24, and 4. This effectively changes the target-dimension values of the first group to 2.2, 0.65, and 0.22, and the target-dimension values of the second group to 2.35, 0.56, and 0.09 respectively.
TABLE 9
Average value weighting

Target    10     3     1       100    24     4
Weights       1/4.6                1/42.6
Table 10 below illustrates weighting based on the sum of the target-dimension values within a group. In this case, the first group of target-dimension values is divided by 41, which corresponds to the sum of the target-dimension values 12, 10, 8, 7, and 4, and the second group of target-dimension values is divided by 29, which is the sum of the target-dimension values 2, 2, 12, 9, and 4. This effectively changes the target-dimension values of the first group to 0.29, 0.24, 0.20, 0.17, and 0.10, respectively, and the target-dimension values of the second group to 0.07, 0.07, 0.41, 0.31, and 0.14, respectively.
TABLE 10
Total value weighting

Target    12   10    8    7    4      2    2   12    9    4
Weights            1/41                      1/29
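Each normalization scheme reduces to computing one reciprocal per metric group; a brief sketch using the group values of Tables 8 and 10:

# Sketch of the three automatic normalization options for one metric group.
def peak_weight(targets):     # Table 8: reciprocal of the group maximum
    return 1 / max(targets)

def average_weight(targets):  # Table 9: reciprocal of the group average
    return 1 / (sum(targets) / len(targets))

def total_weight(targets):    # Table 10: reciprocal of the group sum
    return 1 / sum(targets)

print(peak_weight([10, 1, 1]), peak_weight([100, 24, 4]))  # 0.1, 0.01
print(total_weight([12, 10, 8, 7, 4]))                     # 1/41, about 0.0244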
FIG. 5 is a portion of an exemplary result table 500 that is generated by the load model application after completing the steps of defining metrics, building the component tree, and inputting target values and script profiles. The portion of the exemplary result table 500 may be communicated to a display, so that an operator may assess the effectiveness of the load model in testing a system under test. The left side of the result table 500 includes groups 510 and scripts 512. The top of the result table 500 includes an execution rate column 517 and dimensions 515. Each group 510 includes a row for the targeted-dimension values. For example, as shown in the table, the targeted-dimension values for the “ID Authentication,” “Alt Authentication,” “Password Update,” “Group Authentication,” and “Privileges” dimensions as related to the “Test Plan” group correspond to 5003, 2345, 45.3, 0, and 0, respectively. The targeted-dimension values for the same dimensions as related to the “Admin App” group correspond to 0, 45.0, 0, 121, and 0, respectively.
Each script 512 row includes values corresponding to the actual dimension-effect values of the script. The actual dimension-effect values of a script correspond to the dimension-effect values of the script multiplied by the execution-rate value in the corresponding execution rate column 517. The values in the execution rate column 517 correspond to the combination of execution rates that minimizes the total error, as described above. In this case, executing the scripts “Account Disable,” “Account Search,” “Account Updating,” and “Admin Login” 28.6, 10.1, 2.21, and 107 times per minute, respectively, minimizes the total error. When executed according to these rates, the “Account Disable” script affects the dimensions “ID Authentication,” “Password Update,” and “Group Authentication” 28.6 times per minute, and the “Privileges” dimension 57.2 times per minute. The “Account Search” script affects the dimensions “Group Authentication” and “Privileges” 10.1 times per minute. The “Account Updating” script affects the dimensions “ID Authentication” and “Group Authentication” 2.21 times per minute, and the “Privileges” dimension 4.42 times per minute. Finally, the “Admin Login” script affects the dimensions “ID Authentication,” “Group Authentication,” and “Privileges” 107 times per minute.
Each group 510 also includes a fitness row 505. The fitness row 505 includes values that describe how close the traffic generated by the model may be to the target traffic. A score of 0% means that the target was completely missed, and a value of 100% is a perfect match. There are a number of ways to interpret the data in the result table 500. Instances where the fitness score is 0% may indicate that no script touches on a metric dimension. When this is the case, the load model application will be unable to generate a solution that generates traffic for that dimension. This problem may be corrected by introducing new scripts or expanding existing scripts.
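The specification does not spell out the fitness formula; the sketch below is offered only as an assumption, treating fitness as the fraction of the target rate actually achieved, clamped between the 0% (target completely missed) and 100% (perfect match) endpoints described above:

# ASSUMPTION: the fitness formula is not given in the text; this is one
# plausible interpretation consistent with the 0% and 100% endpoints.
def fitness(actual, target):
    if target == 0:
        return 100.0 if actual == 0 else 0.0
    return max(0.0, 100.0 * (1 - abs(actual - target) / target))

print(fitness(0.0, 121.0))    # 0.0   -- no script touches the dimension
print(fitness(121.0, 121.0))  # 100.0 -- generated traffic matches the target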
Instances where the fitness score is low for a number of metric dimensions shared by a script may indicate that real-world users, from whom the target information comes, are utilizing a particular application in an unexpected way. For example, if a script goes from page A to page B and then to page C every time, but real-world users skip page B 90% of the time, there may not be a good solution.
If there are very few metrics, and these metrics are affected equally by a number of scripts, then there may be many possible solutions. While any of these solutions may meet the target load, only one is correct. Providing additional target data or estimates may fix the situation. Unfortunately, it may not be obvious when this problem occurs, as the fit % may be very good. It may need to be proactively identified. One clue may be a repeating pattern in the load generated, as the model tries to distribute the values equally, but this does not always occur.
When there is limited data shared by a number of scripts and the scripts have invalid proportions, the model may come up with a bizarre solution that involves a negative number of executions per minute. This occurs when the model attempts to solve an invalid proportion but does not have enough data to limit it to a realistic compromise. Normally, the model would not consider a negative solution because it would create a large amount of error, as it will never approach the positive target values. However, if the target metrics are shared by other scripts and are already too high due to invalid proportions, then running a script a negative number of times may seem, mathematically, like a good idea. This is not possible in the real world, however, so this problem should be addressed with new scripts and additional data.
FIG. 6 is a flow diagram that describes the operations of the load model application described above. At block 600, the footprint information for the various scripts utilized to test a computer system may be specified. As described above, a script footprint is a measure of the way in which a script affects the computer system, as represented by the number of times per time period the script exercises operations of the computer system. Specifying the footprint information may involve providing a list of metric dimensions that may be affected by the script, numeric values for each metric dimension that measure the effect of an individual execution of the script, and the minimum duration expected when running the script in a clean environment. This information may be specified via the exemplary table 325 of FIG. 3 or a different user interface.
At block 605, target information may be specified. As described above, target information includes target-dimension values corresponding to a desired number of times per time period each dimension should be affected. The information may include a list of metrics, weights for each metric, values for dimensions associated with each metric, and growth factors for each metric. This information may be specified via the exemplary table 420 of FIG. 4 or a different user interface.
At block 610, a growth factor may be applied to all the target-dimension values, so as to predict some future state to guide the model when making compromises. The growth factor may be applied when the target-dimension values are based on historic values. The growth factor may enable converting the historic target-dimension values to target-dimension values that are more representative of the current state of the computer system. For example, all the target-dimension values may be doubled to bring historic target-dimension values up to more current values.
At block 615, weights may be applied to all the target-dimension values for a given metric. The weights may be utilized to define the relative importance of the metrics. The weights may be manually specified or automatically normalized. Manual weights may be utilized to provide specialized treatment for a group of metrics. Normalized weights may be utilized to even out the effects of different scaled metrics. In this case the weightings may be based on the maximum target-dimension value, average target-dimension values, or sum of target-dimension values within a metric.
At block 620, the load model application may determine the optimal number of times to execute the scripts, so as to minimize the differences between the number of times the dimensions are affected and the target-dimension values. The load model application may utilize an LLS algorithm or another suitable algorithm for determining a set of coefficients that minimizes the error of a system of equations. For example, the LLS algorithm may determine the optimal number of times to execute the various scripts, so as to minimize the overall difference between the target-dimension values and the actual dimension values. After determining the execution rates, the load model application may generate a result table, such as the exemplary result table 500 of FIG. 5. The result table may be communicated to a display, so that an operator may interpret the results.
At block 625, the scripts may be executed according to the execution rates determined at block 620. For example, referring to FIG. 1, the processor 115 may execute test scripts 105 according to a load model 120, so as to test the functionality of various computer systems, such as an e-mail server 125, web server 130, and database server 135. Alternatively, the load model may be communicated to another system that is operative to test the system under test.
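Blocks 600 through 620 can be summarized in one short sketch (the helper name is hypothetical; the data reuses the Table 6 example):

import numpy as np

def build_load_model(footprints, targets, growth=1.0, weights=None):
    # Blocks 600/605: footprints is a (dimensions x scripts) matrix of
    # dimension-effect values; targets is the per-dimension target vector.
    X = np.asarray(footprints, dtype=float)
    y = np.asarray(targets, dtype=float) * growth  # block 610: growth factor
    if weights is not None:                        # block 615: weighting
        w = np.asarray(weights, dtype=float)
        X, y = X * w[:, None], y * w
    rates, *_ = np.linalg.lstsq(X, y, rcond=None)  # block 620: LLS solve
    return rates

X = [[1, 0, 0], [0, 1, 0], [0, 0, 2], [10, 0, 0], [0, 5, 0], [0, 0, 10]]
y = [10, 10, 10, 100, 100, 100]
print(build_load_model(X, y, weights=[10, 10, 10, 1, 1, 1]).round(2))
# [10. 12.  6.] -- block 625 then executes the scripts at these rates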
FIG. 7 illustrates a general computer system 700, which may represent the processor 115 of the test system 100 shown in FIG. 1; the e-mail server 125, web server 130, or database server 135 shown in FIG. 1; or any of the other computing devices referenced herein. The computer system 700 may include a set of instructions 745 that may be executed to cause the computer system 700 to perform any one or more of the methods or computer-based functions disclosed herein. The computer system 700 may operate as a stand-alone device or may be connected, e.g., using a network, to other computer systems or peripheral devices.
In a networked deployment, the computer system may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 700 may also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a control system, a camera, a scanner, a facsimile machine, a printer, a pager, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions 745 (sequential or otherwise) that specify actions to be taken by that machine. In one embodiment, the computer system 700 may be implemented using electronic devices that provide voice, video or data communication. Further, while a single computer system 700 may be illustrated, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
As illustrated in FIG. 7, the computer system 700, may include a processor 705, such as a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor 705 may correspond to the processor 115 of the test system 100. The processor 705 may be a component in a variety of systems. For example, the processor 705 may be part of a standard personal computer or a workstation. The processor 705 may be one or more general processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now-known or later-developed devices for analyzing and processing data. The processor 705 may implement a software program, such as code generated manually (i.e., programmed).
The computer system 700 may include a memory 710 that can communicate via a bus 720. The memory 710 may be a main memory, a static memory, or a dynamic memory. The memory 710 may include, but may not be limited to, computer-readable storage media such as various types of volatile and non-volatile storage media including, but not limited to, random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one case, the memory 710 may include a cache or random access memory for the processor 705. Alternatively or in addition, the memory 710 may be separate from the processor 705, such as a cache memory of a processor, the system memory, or other memory. The memory 710 may be an external storage device or database for storing data. Examples may include a hard drive, compact disc (“CD”), digital video disc (“DVD”), memory card, memory stick, floppy disc, universal serial bus (“USB”) memory device, or any other device operative to store data. The memory 710 may be operable to store instructions 745 executable by the processor 705. The functions, acts or tasks illustrated in the figures or described herein may be performed by the programmed processor 705 executing the instructions 745 stored in the memory 710. The functions, acts or tasks may be independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firm-ware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.
The computer system 700 may further include a display 730, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, a cathode ray tube (CRT), a projector, a printer or other now-known or later-developed display device for outputting determined information. The display 730 may act as an interface for the user to see the functioning of the processor 705, or specifically as an interface with the software stored in the memory 710 or in a drive unit 715.
Additionally, the computer system 700 may include an input device 725 configured to allow a user to interact with any of the components of system 700. The input device 725 may be a number pad, a keyboard, or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control or any other device operative to interact with the system 700.
The computer system 700 may also include a disk or optical drive unit 715. The disk drive unit 715 may include a computer-readable medium 740 in which one or more sets of instructions 745, e.g., software, can be embedded. Further, the instructions 745 may perform one or more of the methods or logic as described herein. The instructions 745 may reside completely, or at least partially, within the memory 710 and/or within the processor 705 during execution by the computer system 700. The memory 710 and the processor 705 also may include computer-readable media as discussed above.
The present disclosure contemplates a computer-readable medium 740 that includes instructions 745 or receives and executes instructions 745 responsive to a propagated signal; so that a device connected to a network 750 may communicate voice, video, audio, images or any other data over the network 750. The instructions 745 may be implemented with hardware, software and/or firmware, or any combination thereof. Further, the instructions 745 may be transmitted or received over the network 750 via a communication interface 735. The communication interface 735 may be a part of the processor 705 or may be a separate component. The communication interface 735 may be created in software or may be a physical connection in hardware. The communication interface 735 may be configured to connect with a network 750, external media, the display 730, or any other components in system 700, or combinations thereof. The connection with the network 750 may be a physical connection, such as a wired Ethernet connection or may be established wirelessly as discussed below. Likewise, the additional connections with other components of the system 700 may be physical connections or may be established wirelessly.
The network 750 may include wired networks, wireless networks, or combinations thereof. The wireless network may be a cellular telephone network, an 802.11, 802.16, 802.20, or WiMax network. Further, the network 750 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols.
The computer-readable medium 740 may be a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” may also include any medium that may be capable of storing, encoding or carrying a set of instructions for execution by a processor or that may cause a computer system to perform any one or more of the methods or operations disclosed herein.
The computer-readable medium 740 may include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. The computer-readable medium 740 also may be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium 740 may include a magneto-optical or optical medium, such as a disk or tapes or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium. A digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that may be a tangible storage medium. Accordingly, the disclosure may be considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions may be stored.
Alternatively or in addition, dedicated hardware implementations, such as application-specific integrated circuits, programmable logic arrays and other hardware devices, may be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems of various embodiments may broadly include a variety of electronic and computer systems. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that may be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system may encompass software, firmware, and hardware implementations.
Accordingly, the method and system may be realized in hardware, software, or a combination of hardware and software. The method and system may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The method and system may also be embedded in a computer program product, which includes all the features enabling the implementation of the methods described herein and which, when loaded in a computer system, is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
From the foregoing, it may be seen that the embodiments disclosed herein provide an approach for optimizing the testing of computer systems. For example, metrics and dimensions for a system may be determined. Then the footprint of the test scripts may be determined. The footprint may include dimension-effect values corresponding to the number of times a dimension is affected by a script. Next, target-dimension values corresponding to a desired number of times per time period each dimension should be affected are specified. Then an LLS algorithm is utilized to determine the number of times to execute the scripts within the time period so as to minimize the difference between the actual number of times the dimensions are affected and the target-dimension values. Finally, the scripts are executed on the computer system the determined number of times within the time period. The computer system in turn exercises operations of other computer systems according to the scripts.
While the method and system has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings without departing from its scope. Therefore, it is intended that the present method and system not be limited to the particular embodiment disclosed, but that the method and system include all embodiments falling within the scope of the appended claims.

Claims (23)

1. A method for testing a computer system, the method comprising:
receiving script information that includes a dimension-effect value for each of multiple dimensions, each dimension-effect value corresponding to a number of times a dimension is affected by a script;
receiving, based on user input, a target-dimension value for each of the multiple dimensions, each target-dimension value corresponding to a desired number of times per time period each dimension should be affected by the script;
determining, by a processor and based on the dimension-effect value for each of the multiple dimensions and the target-dimension value for each of the multiple dimensions, a number of times to execute the script within the time period that minimizes a total error across the multiple dimensions, the total error accounting for differences between dimension-effect value and target-dimension value for each of the multiple dimensions; and
executing the script on the computer system the determined number of times within the time period.
2. The method according to claim 1, further comprising minimizing the total error across the multiple dimensions via a linear-least-squares algorithm.
3. The method according to claim 1, further comprising scaling at least one dimension-effect value and at least one target-dimension value by a weighting factor.
4. The method according to claim 3, wherein the weighting factor corresponds to the reciprocal of a highest of the target-dimension values.
5. The method according to claim 3, wherein the weighting factor corresponds to the reciprocal of an average of all of the target-dimension values.
6. The method according to claim 3, wherein the weighting factor corresponds to the reciprocal of a sum of all of the target-dimension values.
7. The method according to claim 1, further comprising multiplying at least one target-dimension value by a growth factor.
8. The method according to claim 1, wherein minimizing the total error across the multiple dimensions comprises determining a percentage of error and communicating the percentage of error to a display.
9. A non-transitory machine-readable storage medium having stored thereon a computer program comprising at least one code section for testing a computer system, the at least one code section being executable by a machine for causing the machine to perform acts of:
receiving script information that includes a dimension-effect value for each of multiple dimensions, each dimension-effect value corresponding to a number of times a dimension is affected by a script;
receiving, based on user input, a target-dimension value for each of the multiple dimensions, each target-dimension value corresponding to a desired number of times per time period each dimension should be affected by the script;
determining, based on the dimension-effect value for each of the multiple dimensions and the target-dimension value for each of the multiple dimensions, a number of times to execute the script within the time period that minimizes a total error across the multiple dimensions, the total error accounting for differences between dimension-effect value and target-dimension value for each of the multiple dimensions; and
executing the script on the computer system the determined number of times within the time period.
10. The machine-readable storage according to claim 9, wherein the code section is executable to cause the machine to minimize the total error via a linear-least-squares algorithm.
11. A system for testing a computer system, the system comprising:
a processor operable to receive script information that includes a dimension-effect value for each of multiple dimensions, each dimension-effect value corresponding to a number of times a dimension is affected by a script; receive, based on user input, a target-dimension value for each of the multiple dimensions, each target-dimension value corresponding to a desired number of times per time period each dimension should be affected by the script; determine, based on the dimension-effect value for each of the multiple dimensions and the target-dimension value for each of the multiple dimensions, a number of times to execute the script within the time period that minimizes a total error across the multiple dimensions, the total error accounting for differences between dimension-effect value and target-dimension value for each of the multiple dimensions; and execute the script on the computer system the determined number of times within the time period.
12. The system according to claim 11, wherein the processor is operable to minimize the total error via a linear-least-squares algorithm.
13. The system according to claim 11, wherein the processor is operable to scale at least one dimension-effect value and at least one target-dimension value by a weighting factor.
14. The system according to claim 13, wherein the weighting factor corresponds to the reciprocal of a highest of the target-dimension values.
15. The system according to claim 13, wherein the weighting factor corresponds to the reciprocal of an average of all of the target-dimension values.
16. The system according to claim 13, wherein the weighting factor corresponds to the reciprocal of a sum of all of the target-dimension values.
17. The system according to claim 11, wherein the processor is operable to multiply at least one target-dimension value by a growth factor.
18. The method of claim 1:
wherein receiving script information comprises receiving script information for multiple scripts with at least one dimension-effect value for each of the multiple scripts;
wherein determining the number of times to execute the script within the time period that minimizes the total error across the multiple dimensions comprises determining, by the processor, a load model for the multiple scripts that minimizes the total error across the multiple dimensions, the load model defining an execution rate for each of the multiple scripts; and
wherein executing the script on the computer system the determined number of times within the time period comprises executing the multiple scripts in accordance with the load model.
19. The method of claim 18:
wherein the multiple scripts are grouped in a manner that defines relationships between groups of the multiple scripts; and
wherein determining the load model for the multiple scripts that minimizes the total error across the multiple dimensions comprises constraining the load model to minimize the total error across the multiple dimensions while maintaining the relationships between the groups of the multiple scripts.
20. The method of claim 18, wherein determining the load model for the multiple scripts that minimizes the total error across the multiple dimensions comprises finding, by the processor, combinations of script execution rates for the multiple scripts that minimize the total error.
21. The method of claim 1, wherein minimizing the total error across the multiple dimensions comprises minimizing a summation of errors for each of the multiple dimensions while allowing a first error for a first dimension in the multiple dimensions to be higher than a second error for a second dimension in the multiple dimensions, thereby sacrificing the first dimension for the second dimension to better reduce the total error across the multiple dimensions.
22. The method of claim 1, wherein minimizing the total error across the multiple dimensions comprises minimizing the total error across the multiple dimensions without balancing error across the multiple dimensions.
23. The method of claim 1, wherein determining the number of times to execute the script within the time period that minimizes the total error across the multiple dimensions comprises:
applying a first weighting factor to values for a first dimension in the multiple dimensions;
applying a second weighting factor to values for a second dimension in the multiple dimensions, the second weighting factor being different than the first weighting factor; and
determining, based on the weighted values for the first dimension and the weighted values for the second dimension, the number of times to execute the script within the time period that minimizes the total error across the multiple dimensions.
US12/261,519 2008-10-30 2008-10-30 Automated load model Active 2031-10-12 US8332820B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US12/261,519 US8332820B2 (en) 2008-10-30 2008-10-30 Automated load model
CA2681887A CA2681887C (en) 2008-10-30 2009-10-08 Automated load model
EP09252500.5A EP2189907B1 (en) 2008-10-30 2009-10-29 Automated load model for computer performance testing
CN200910174909.8A CN101727372B (en) 2008-10-30 2009-10-29 Method, device and system for testing computer system
BRPI0904262-8A BRPI0904262B1 (en) 2008-10-30 2009-10-30 METHOD AND SYSTEM FOR TESTING A COMPUTER SYSTEM AND STORAGE MEDIA

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/261,519 US8332820B2 (en) 2008-10-30 2008-10-30 Automated load model

Publications (2)

Publication Number Publication Date
US20100115339A1 US20100115339A1 (en) 2010-05-06
US8332820B2 true US8332820B2 (en) 2012-12-11

Family

ID=42096498

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/261,519 Active 2031-10-12 US8332820B2 (en) 2008-10-30 2008-10-30 Automated load model

Country Status (5)

Country Link
US (1) US8332820B2 (en)
EP (1) EP2189907B1 (en)
CN (1) CN101727372B (en)
BR (1) BRPI0904262B1 (en)
CA (1) CA2681887C (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9710364B2 (en) * 2015-09-04 2017-07-18 Micron Technology Licensing, Llc Method of detecting false test alarms using test step failure analysis
US9720818B2 (en) * 2015-09-03 2017-08-01 Netapp, Inc. Scalable, distributed, fault-tolerant test framework
US10310961B1 (en) * 2017-11-29 2019-06-04 International Business Machines Corporation Cognitive dynamic script language builder

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075789B (en) * 2010-12-31 2012-10-10 上海全景数字技术有限公司 Method and system for quickly testing set top box
CN109857431B (en) * 2019-01-11 2022-06-03 平安科技(深圳)有限公司 Code modification method and device, computer readable medium and electronic equipment
CN112631643A (en) * 2019-10-08 2021-04-09 华晨宝马汽车有限公司 Comprehensive operation and maintenance management method, system, equipment and medium
CN112632105B (en) * 2020-01-17 2021-09-10 华东师范大学 System and method for verifying correctness of large-scale transaction load generation and database isolation level

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5136529A (en) * 1989-09-29 1992-08-04 Hitachi, Ltd. Digital signal weighting processing apparatus and method
US5657438A (en) * 1990-11-27 1997-08-12 Mercury Interactive (Israel) Ltd. Interactive system for developing tests of system under test allowing independent positioning of execution start and stop markers to execute subportion of test script
US20050107997A1 (en) * 2002-03-14 2005-05-19 Julian Watts System and method for resource usage estimation
US20060248529A1 (en) * 2002-12-27 2006-11-02 Loboz Charles Z System and method for estimation of computer resource usage by transaction types
US20070006162A1 (en) * 2005-06-30 2007-01-04 Nokia Corporation Method, terminal device and computer software for changing the appearance of a visual program representative
US20080022159A1 (en) * 2006-07-19 2008-01-24 Sei Kato Method for detecting abnormal information processing apparatus
US20080040707A1 (en) * 2006-08-09 2008-02-14 Fujitsu Limited Program monitoring method, computer, and abnormal monitoring program product
US20080172581A1 (en) * 2007-01-11 2008-07-17 Microsoft Corporation Load test load modeling based on rates of user operations
US20090106600A1 (en) * 2007-10-17 2009-04-23 Sun Microsystems, Inc. Optimal stress exerciser for computer servers
US20090265681A1 (en) * 2008-04-21 2009-10-22 Microsoft Corporation Ranking and optimizing automated test scripts
US7627452B2 (en) * 2003-07-10 2009-12-01 Daimler Ag Method and device for predicting a failure frequency
US20100083049A1 (en) * 2008-09-29 2010-04-01 Hitachi, Ltd. Computer system, method of detecting symptom of failure in computer system, and program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990001196A1 (en) * 1988-07-20 1990-02-08 Ishizaka Shoji Co., Ltd. Textile color design simulator
US5805795A (en) * 1996-01-05 1998-09-08 Sun Microsystems, Inc. Method and computer program product for generating a computer program product test that includes an optimized set of computer program product test cases, and method for selecting same
CN100465918C (en) * 2004-08-02 2009-03-04 微软公司 Automatic configuration of transaction-based performance models
US20070022142A1 (en) * 2005-07-20 2007-01-25 International Business Machines Corporation System and method to generate domain knowledge for automated system management by combining designer specifications with data mining activity
US20070083630A1 (en) * 2005-09-27 2007-04-12 Bea Systems, Inc. System and method for performance testing framework

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5136529A (en) * 1989-09-29 1992-08-04 Hitachi, Ltd. Digital signal weighting processing apparatus and method
US5657438A (en) * 1990-11-27 1997-08-12 Mercury Interactive (Israel) Ltd. Interactive system for developing tests of system under test allowing independent positioning of execution start and stop markers to execute subportion of test script
US20050107997A1 (en) * 2002-03-14 2005-05-19 Julian Watts System and method for resource usage estimation
US20060248529A1 (en) * 2002-12-27 2006-11-02 Loboz Charles Z System and method for estimation of computer resource usage by transaction types
US7627452B2 (en) * 2003-07-10 2009-12-01 Daimler Ag Method and device for predicting a failure frequency
US20070006162A1 (en) * 2005-06-30 2007-01-04 Nokia Corporation Method, terminal device and computer software for changing the appearance of a visual program representative
US20080022159A1 (en) * 2006-07-19 2008-01-24 Sei Kato Method for detecting abnormal information processing apparatus
US20080040707A1 (en) * 2006-08-09 2008-02-14 Fujitsu Limited Program monitoring method, computer, and abnormal monitoring program product
US7516042B2 (en) * 2007-01-11 2009-04-07 Microsoft Corporation Load test load modeling based on rates of user operations
US20080172581A1 (en) * 2007-01-11 2008-07-17 Microsoft Corporation Load test load modeling based on rates of user operations
US20090106600A1 (en) * 2007-10-17 2009-04-23 Sun Microsystems, Inc. Optimal stress exerciser for computer servers
US20090265681A1 (en) * 2008-04-21 2009-10-22 Microsoft Corporation Ranking and optimizing automated test scripts
US20100083049A1 (en) * 2008-09-29 2010-04-01 Hitachi, Ltd. Computer system, method of detecting symptom of failure in computer system, and program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Myers, "Scripting Graphical Applications by Demonstration", Apr. 18-23, 1998, Carneigie Mellon University. *
Steward, "Measuring Execution Time and Real-Time Performance",Sep. 2006, Embedded Systems Conference. *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9720818B2 (en) * 2015-09-03 2017-08-01 Netapp, Inc. Scalable, distributed, fault-tolerant test framework
US9710364B2 (en) * 2015-09-04 2017-07-18 Microsoft Technology Licensing, Llc Method of detecting false test alarms using test step failure analysis
US10235277B2 (en) 2015-09-04 2019-03-19 Microsoft Technology Licensing, Llc Method of detecting false test alarms using test step failure analysis
US10310961B1 (en) * 2017-11-29 2019-06-04 International Business Machines Corporation Cognitive dynamic script language builder

Also Published As

Publication number Publication date
EP2189907A3 (en) 2017-04-12
EP2189907A2 (en) 2010-05-26
CN101727372B (en) 2014-04-16
EP2189907B1 (en) 2020-07-22
CA2681887A1 (en) 2010-04-30
CN101727372A (en) 2010-06-09
US20100115339A1 (en) 2010-05-06
BRPI0904262A2 (en) 2011-02-01
CA2681887C (en) 2017-11-21
BRPI0904262B1 (en) 2020-03-31

Similar Documents

Publication Publication Date Title
US8332820B2 (en) Automated load model
US20090259526A1 (en) System for valuating users and user generated content in a collaborative environment
CN106716958B (en) Lateral movement detection
US9529699B2 (en) System and method for test data generation and optimization for data driven testing
US10360574B2 (en) Systems and methods for response rate determination and offer selection
US20100262457A1 (en) Computer-Implemented Systems And Methods For Behavioral Identification Of Non-Human Web Sessions
US10353703B1 (en) Automated evaluation of computer programming
US11650903B2 (en) Computer programming assessment
US10810106B1 (en) Automated application security maturity modeling
US9135259B2 (en) Multi-tenancy storage node
US8626479B2 (en) Client load simulation framework
US20130174258A1 (en) Execution of Multiple Execution Paths
WO2017007488A1 (en) Staged application rollout
Liew et al. Cloudguide: Helping users estimate cloud deployment cost and performance for legacy web applications
Caminero et al. Choosing the right LMS: A performance evaluation of three open-source LMS
Singh et al. Improving the quality of software by quantifying the code change metric and predicting the bugs
van Riet et al. Optimize along the way: An industrial case study on web performance
US10313262B1 (en) System for management of content changes and detection of novelty effects
US11551271B2 (en) Feedback service in cloud application
Horn et al. Native vs web apps: Comparing the energy consumption and performance of android apps and their web counterparts
CN111694753B (en) Application program testing method and device and computer storage medium
Ahmed et al. Organizational learning on bug bounty platforms
US20200311745A1 (en) Personalize and optimize decision parameters using heterogeneous effects
Lim et al. Understanding the interplay between hardware errors and user job characteristics on the Titan supercomputer
Patsakis et al. The role of weighted entropy in security quantification

Legal Events

Date Code Title Description
AS Assignment

Owner name: ACCENTURE GLOBAL SERVICES GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HUMMEL, DAVID M., JR.;REEL/FRAME:021809/0242

Effective date: 20081028

AS Assignment

Owner name: ACCENTURE GLOBAL SERVICES LIMITED, IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ACCENTURE GLOBAL SERVICES GMBH;REEL/FRAME:025700/0287

Effective date: 20100901

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8