WO2002031657A2 - Automatic performance test generation - Google Patents

Automatic performance test generation

Info

Publication number
WO2002031657A2
PCT/US2001/031886
Authority
WO
WIPO (PCT)
Prior art keywords
parameter
subprogram
performance
processing system
data processing
Application number
PCT/US2001/031886
Other languages
French (fr)
Other versions
WO2002031657A3 (en)
Inventor
Paul J. Hinker
Original Assignee
Sun Microsystems, Inc.
Application filed by Sun Microsystems, Inc.
Priority to AU2002213142A1
Publication of WO2002031657A2
Publication of WO2002031657A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3414Workload generation, e.g. scripts, playback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3428Benchmarking


Abstract

In accordance with methods and systems consistent with the present invention, a system that automatically generates a performance test to determine the performance of a subprogram is provided. An interface file including statements indicating characteristics of the parameters of the subprogram is automatically generated. A performance test generator then determines characteristics of the subprogram and automatically generates a set of performance tests with a varying number of parameters and varying parameter values. The performance tests invoke execution of the subprogram and, during execution of the subprogram, collect performance data of the subprogram. The performance test generator iteratively executes the performance tests.

Description

AUTOMATIC PERFORMANCE TEST GENERATION
FIELD OF THE INVENTION
This invention relates generally to data processing systems and, more particularly, to the automatic generation of performance tests.
BACKGROUND OF THE INVENTION
Performance tests are run on an executable subprogram to determine how the subprogram performs and to identify ways of improving the subprogram's performance. A performance test may be run on a subprogram repeatedly, with different parameters, to monitor performance. For example, a subprogram may multiply a square matrix by another square matrix. A complete set of performance tests for this subprogram would include tests that multiply not only square matrices but also matrices that have various aspect ratios, e.g., long, thin matrices times short, wide matrices and vice versa. The performance test may also be used to determine the performance of a subprogram executing in various hardware environments, compiler versions, and source file versions. Each time a parameter or environment changes, the performance test is run to determine the impact of such a change. Further, the performance test may need to be rewritten between successive runs so that it accurately captures the relevant performance data.
Conventional systems require a programmer to manually create a performance test and run it iteratively to monitor performance. The efficiency of such a manual process decreases as the number of subprograms increases. This process is time-intensive and requires a programmer skilled in the area of performance testing to create appropriate performance tests.
Accordingly, a need exists for a more efficient manner of generating performance tests to collect performance analysis data of a subprogram.
SUMMARY OF THE INVENTION
Methods and systems operating in accordance with the principles of the present invention provide an automatic performance test generator which generates performance tests for determining the performance level of a subprogram. In accordance with an implementation of methods consistent with the present invention, a method is provided in a data processing system having a subprogram that automatically generates a performance routine that determines a performance level of the subprogram and that invokes execution of the subprogram.
In accordance with another implementation, a method is provided in a data processing system having source code with a subprogram with a parameter. The method creates an interface file for the subprogram, adds to the interface file a comment for the parameter, the comment indicating characteristics of the parameter, reads the interface file to obtain the characteristics of the parameter, and uses the characteristics of the parameter to automatically generate a performance test that collects performance analysis data of the subprogram.
In yet another implementation, a method is provided in a data processing system that receives source code for an executable subprogram and automatically generates a performance test that runs the executable subprogram and that measures the performance characteristics of the executable subprogram while the executable subprogram is running.
In accordance with another implementation consistent with the present invention, a computer-readable memory device encoded with a program having instructions for execution by a processor is provided. The program includes source code with a subprogram having a parameter and a performance test generator that automatically develops a performance test which reads the source code, runs the subprogram, and collects performance analysis data of the subprogram while the subprogram is running.
In an implementation of systems consistent with the present invention, a data processing system contains a storage device and a processor. The storage device comprises source code for a subprogram having a parameter, an interface generator that reads the subprogram and that generates an interface file with indications of characteristics of the required parameters, and a performance test generator that reads the interface file, that generates a performance test which collects performance analysis data of the subprogram by using the characteristics of the parameter, and that runs the subprogram.
BRIEF DESCRIPTION OF THE DRAWINGS
This invention is pointed out with particularity in the appended claims. The above and further advantages of this invention may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1A depicts a data processing system suitable for use with methods and systems consistent with the present invention;
Fig. 1B depicts a block diagram of a data processing system suitable for use with methods and systems consistent with the present invention;
Fig. 2 depicts a flow chart of the steps performed to automatically generate performance tests in accordance with methods and systems consistent with the present invention;
Figs. 3A and 3B depict a flow chart of the steps performed by the interface generator depicted in Fig. 1B; and
Fig. 4 depicts a flow chart of the steps performed by the performance test generator depicted in Fig. 1B.
DETAILED DESCRIPTION
Methods and systems operating in accordance with the principles of the present invention provide a performance test generator which generates performance tests to determine the performance of a subprogram, wherein the subprogram could be, for example, written in the Fortran 77 language. The performance tests may be run repeatedly on a subprogram while varying subprogram parameters to collect performance analysis data of the subprogram. Thus, programmers may analyze the performance of a subprogram and the impact of changes, for example, compiler changes, memory allocation, or hardware changes, on such performance, without expending the time required to manually develop and execute a performance test repeatedly.
Overview
Methods and systems operating in accordance with the principles of the present invention provide a subprogram script that scans the source code of a subprogram and that generates an interface description language file. The interface description language file defines the signature for a subprogram, including its name, its parameters, and each parameter's type. The script then scans the source code again and inserts code-generator statements into the interface file. The code-generator statements provide meaningful information, such as characteristics of the parameters, to facilitate the automatic generation of a performance test. The generated performance test automatically invokes the related subprogram and collects performance data as the subprogram executes. The performance test generator iteratively runs the performance tests with varying parameters.
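The two scans described above can be sketched in miniature. The following Python fragment is a rough illustration only, assuming a simplified Fortran layout with one declaration per line; the regular expressions and function name are hypothetical stand-ins for the actual script, and the second pass that appends the !#-style code-generator statements is omitted.

import re

SIG_RE = re.compile(r"^\s*SUBROUTINE\s+(\w+)\s*\(([^)]*)\)", re.IGNORECASE)
DECL_RE = re.compile(r"^\s*(INTEGER|LOGICAL|REAL|CHARACTER|COMPLEX)\b[^!]*?::\s*(\w+)", re.IGNORECASE)

def generate_interface(source: str) -> str:
    # First scan: recover the subprogram's name, parameters, and parameter types.
    name, params, types = None, [], {}
    for line in source.splitlines():
        if m := SIG_RE.match(line):
            name, params = m.group(1), [p.strip() for p in m.group(2).split(",")]
        elif m := DECL_RE.match(line):
            types[m.group(2).upper()] = m.group(1).upper()
    # Emit an interface file skeleton in the shape of Table 1 below.
    body = [f"INTERFACE {name}", f"SUBROUTINE {name} ({', '.join(params)})"]
    body += [f"{types.get(p.upper(), 'INTEGER')} :: {p}" for p in params]
    body += ["END SUBROUTINE", "END INTERFACE"]
    return "\n".join(body)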
Implementation Details
Fig. 1A shows an example of a conventional system in which the present invention may be implemented. The system comprises an input device 10, a display 12, and a computer 14.
Figure 1B depicts a block diagram of computer 14. Data processing system 100, suitable for use with methods and systems consistent with the present invention, includes a memory 124, a secondary storage device 110, an input device 122, a processor 114, and an output device 118. In the memory 124 reside an interface generator 134 and a performance test generator 138. Interface generator 134 reads subprogram code 130 and generates an interface file 140. Performance test generator 138 reads the interface file 140 and generates performance test code 144, which invokes the execution of subprogram code 130.
Table 1 is a definition of an interface file, where the words INTERFACE, SUBROUTINE, FUNCTION, and END are keywords, and the word TYPE represents any valid type (i.e., INTEGER, LOGICAL, REAL, CHARACTER, or COMPLEX):
Table 1
INTERFACE Interface Name
{SUBROUTINE | TYPE FUNCTION} (Parameter1 [, Parameter2, ..., ParameterN])
TYPE Parameter1
TYPE Parameter2
...
TYPE ParameterN
END {SUBROUTINE | FUNCTION}
END INTERFACE
Table 2 is an example of an interface file for the "CAXPY" subprogram, which multiplies a constant ALPHA by a vector X and adds a vector Y, where X and Y are one-dimensional arrays and ALPHA is a scalar.
Table 2
INTERFACE
SUBROUTINE CAXPY (N, ALPHA, X, INCX, Y, INCY)
INTEGER :: N !#D(#SIZE(X)), #READREQ
COMPLEX :: ALPHA !#SCALAR((1.0e0,0.0e0))
COMPLEX :: X (*) !#INIT(Initialize(N,X))
INTEGER :: INCX !#D(#STRIDE(X)), #READOPT
COMPLEX :: Y (*) !#INIT(Initialize(N,Y))
INTEGER :: INCY !#D(#STRIDE(Y)), #READOPT
END SUBROUTINE
END INTERFACE
In this interface file, the first parameter is an integer, "N." The #D statement indicates that the default value of "N" is the size of array "X." By association, this also indicates that the size of array "X" is "N." The #READREQ statement indicates that the performance test must read the value of the parameter "N" from input. If the performance test reads the input file and does not find a value for this parameter, it should generate an error message and exit. The default value of the parameter ALPHA is a scalar complex constant value described by the #SCALAR statement. The parameter "X" can be initialized by making a subroutine call to the 'Initialize' routine with the parameters "N" and "X," as indicated by the #INIT statement. The "INCX" parameter is described as having a default value that is the stride on the "X" parameter. The term 'stride' describes the spacing, in elements, between successive elements of interest in an array. For example, if the stride of an array is '1', then every element is of interest. If the stride of an array is '2', then every other element is of interest. If the stride of an array is '3', then every third element is of interest, etc. The "INCX" parameter is further described with the #READOPT statement. This indicates that the performance test should generate code that will assign the value of this parameter if it exists in the input stream. If it does not exist, then the default value should be used.
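To make the stride definition concrete, the following throwaway Python lines show which elements of a six-element array are of interest for strides 1, 2, and 3:

x = ['a', 'b', 'c', 'd', 'e', 'f']
for stride in (1, 2, 3):
    print(stride, x[::stride])
# 1 ['a', 'b', 'c', 'd', 'e', 'f']   (every element)
# 2 ['a', 'c', 'e']                  (every other element)
# 3 ['a', 'd']                       (every third element)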
Given the above interface file, a performance test generator in accordance with this invention will produce the following performance test as shown in Table 3.
Table 3
1   PROGRAM Test_CAXPY
2   USE PERFORMANCE_INTERFACE
3   IMPLICIT NONE
4   *
5   * Routine Parameters
6   *
7   INTEGER :: N
8   COMPLEX :: ALPHA
9   COMPLEX, DIMENSION(:), POINTER :: X
10  INTEGER :: INCX
11  COMPLEX, DIMENSION(:), POINTER :: Y
12  INTEGER :: INCY
13  COMMON /PARAMETERS/ N, INCX, INCY
14  *
15  * Timing variables
16  *
17  REAL(8) :: TIMER, T1, T2, Overhead
18  *
19  * Other variables
20  *
21  INTEGER :: MEMSTAT
22  INTEGER :: Calls, I
23  CHARACTER *(20) ParamName, ParamVal
24  *
25  * Assign default values to parameters that have them
26  *
27  N = -1
28  INCX = 1
29  INCY = 1
30  *
31  * Read in Parameters
32  *
33  100 READ(*,*,END=102) ParamName, ParamVal
34  CALL AssignParameters(ParamName, ParamVal)
35  GOTO 100
36  *
37  * Check that all the READREQ parameters got read in
38  *
39  102 IF (N .EQ. -1) THEN
40  PRINT *, "Error, did not read in variable 'N'"
41  STOP 'Error Exit'
42  ENDIF
43  *
44  * Calculate timer overhead
45  *
46  T1 = TIMER()
47  T2 = TIMER()
48  Overhead = T2 - T1
49  *
50  * Initialize Parameters
51  *
52  ALPHA = (1.0E0,0.0E0)
53  ALLOCATE(X(N),STAT=MEMSTAT)
54  IF (MEMSTAT .NE. 0) THEN
55  STOP 'Error Allocating memory for X'
56  ENDIF
57  CALL INITIALIZE(N,X)
58  ALLOCATE(Y(N),STAT=MEMSTAT)
59  IF (MEMSTAT .NE. 0) THEN
60  STOP 'Error Allocating memory for Y'
61  ENDIF
62  CALL INITIALIZE(N,Y)
63  *
64  * Call routine to be timed
65  *
66  T1 = TIMER()
67  CALL CAXPY(N,ALPHA,X,INCX,Y,INCY)
68  T2 = TIMER()
69  *
70  * Calculate number of calls depending on overhead percentage
71  *
72  Calls = (Overhead / (T2 - (T1 + Overhead))) * 200 + 1
73  *
74  T1 = TIMER()
75  DO I = 1, Calls
76  CALL CAXPY(N,ALPHA,X,INCX,Y,INCY)
77  END DO
78  T2 = TIMER()
79  *
80  PRINT *, "Elapsed Time = ", (T2 - (T1 + Overhead))/Calls
81  STOP 'Normal Exit'
82  END
83  *
84  * Routine to interpret parameter values from input files
85  *
86  SUBROUTINE AssignParameters(ParamName, ParamVal)
87  IMPLICIT NONE
88  CHARACTER *(*) ParamName, ParamVal
89  INTEGER N
90  INTEGER INCX
91  INTEGER INCY
92
93  COMMON /PARAMETERS/ N, INCX, INCY
94
95  IF (ParamName .eq. "N") THEN
96  READ(ParamVal,'(I10)',ERR=900) N
97  ELSEIF (ParamName .eq. "INCX") THEN
98  READ(ParamVal,'(I10)',ERR=900) INCX
99  ELSEIF (ParamName .eq. "INCY") THEN
100 READ(ParamVal,'(I10)',ERR=900) INCY
101 ELSE
102 PRINT *, "Error in input. ParamName = ", ParamName
103 PRINT *, "                 ParamVal = ", ParamVal
104 STOP "Error Exit"
105 ENDIF
106
107 RETURN
108
109 900 PRINT *, "Error in input. ParamName = ", ParamName
110 PRINT *, "                 ParamVal = ", ParamVal
111 STOP "Error Exit"
112
113 END
As shown in Table 3:
Lines 1-13: Variable declarations and program statement.
Lines 14-17: Variable declarations to perform timing.
Lines 18-23: Local variables used to gather statistics and read input.
Lines 24-29: Initialize variables that have default values.
Lines 30-35: Loop to read parameter values from the input file.
Lines 36-42: Checks to verify that all the READREQ parameter values were read in.
Lines 43-48: Calculate the overhead of calling the timing routine.
Lines 49-62: Allocate memory and initialize arrays.
Lines 63-68: Call the routine and record the time required.
Lines 69-73: Calculate the number of calls necessary so that the overall time spent in the timed routine is large compared to timer overhead (see the worked example after this list).
Lines 74-79: Call the timed routine the calculated number of times and record the total elapsed time spent in the timed routine.
Lines 80-82: Report the average time spent in the timed routine and stop.
Lines 83-113: Routine to read the input file and assign values to parameters.
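The calibration performed at lines 43-48 and 69-73 can be checked with a small numeric example. The figures below are assumed for illustration only:

# Suppose reading the timer costs 2 microseconds, and the single timed call to
# CAXPY at lines 63-68 showed t2 - t1 = 52 microseconds (about 50 of real work).
overhead = 2.0e-6
t1, t2 = 0.0, 52.0e-6
calls = int(overhead / (t2 - (t1 + overhead)) * 200 + 1)   # formula from line 72
print(calls)   # 9: with nine repetitions, timer overhead is under 0.5% of the total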
Additional statements may be added to the performance test of Table 3 to determine performance aspects of the CAXPY subprogram, including, for example, statements to perform the following functions: a statement to ensure that the data cache is invalid before calling the timing test; a statement to ensure that the data is in the cache before the test for timing is called; and a series of calls to the test and an average of the results. Performance tests may also be used to compare the timing of runs when the memory is dynamically allocated, as in Table 3 above, to runs when memory is statically allocated. Thus, the automatic generation of performance tests allows a programmer to run many different performance tests under many different circumstances and with varying parameters to determine detailed performance analysis data of a subprogram.
Figure 2 depicts a flowchart of the steps performed by methods and systems consistent with the present invention when creating a performance test for an executable subprogram. The first step is to create an interface file by invoking the interface generator (step 210). In this step, the interface generator scans the source code and creates an interface file according to the definition provided above. The interface generator then adds code-generator statements to the interface file to assist the test generator in creating performance tests for the subprogram. The interface generator parses the arguments of the subprogram and adds a comment line that provides meaningful information used by the performance test generator to generate a performance test. For example, such meaningful information may include how to generate a value for a given parameter if the value for the parameter is missing. A programmer may manually supplement the code-generator statements generated by the interface generator. After the code-generator statements have been added to the interface file, the user invokes the test generator (step 220). The test generator reads the interface file and the code-generator statements included in it to create a performance test. The performance test automatically invokes execution of the subprogram, and the process is timed to indicate performance.
Figures 3A and 3B depict a flowchart of the steps performed by the interface generator. The first step performed by the interface generator is to create an interface file for the subprogram (step 304). In this step, the interface generator generates a definition for the subprogram similar to that described above in Table 1.
Next, the interface generator selects a parameter within the subprogram (step 308). The arguments for the subprogram parameters contain comments that provide information about that parameter, such as whether the parameter is an input, output, or input/output parameter; its type; and the meaning associated with its values. In accordance with methods and systems consistent with the present invention, the parameter descriptions closely conform to the following form in Table 4 below:
Table 4
Parameter Name    Comment Line in Source Code
N (input) INTEGER
The order of the matrix A. N >= 0.
D (input/output) COMPLEX array, dimension (N)
On entry, the diagonal elements of A.
On exit, the diagonal elements DD.
L (input/output) COMPLEX array, dimension (N-1)
On entry, the subdiagonal elements of A.
On exit, the subdiagonal elements of LL and DD.
SUBL (output) COMPLEX array, dimension (N-2)
On exit, the second subdiagonal elements of LL.
NRHS (input) INTEGER
The number of right hand sides, i.e., the number of columns of matrix B. NRHS >= 0.
B (input/output) COMPLEX array, dimension (LDB, NRHS)
On entry, the N-by-NRHS right hand side matrix B.
On exit, the N-by-NRHS solution matrix X.
LDB (input) INTEGER
The leading dimension of the array B. LDB >= max (1, N).
IPIV (input) INTEGER array, dimension (N)
Details of the interchanges and block pivot. If IPIV (K) > 0, 1 by 1 pivot, and if IPIV (K) = K + 1 an interchange was done; if IPIV (K) < 0, 2 by 2 pivot, no interchange required.
Thus, when an argument "N" appears in a subprogram, its associated comment indicates that it is an input parameter, that it is of type integer, and that its purpose is to define the order of the matrix A.
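A parser for comment lines in the Table 4 form might look like the Python sketch below. The grammar the actual interface generator accepts is not spelled out here, so the regular expression is an assumption:

import re

# Matches lines such as "B (input/output) COMPLEX array, dimension (LDB, NRHS)".
PARAM_RE = re.compile(
    r"^\s*(\w+)\s+\((input|output|input\s*/\s*output)\)\s+"
    r"(INTEGER|COMPLEX|REAL|LOGICAL|CHARACTER)"
    r"(?:\s+array,\s*dimension\s*\(([^)]*)\))?",
    re.IGNORECASE)

def parse_comment(line):
    if m := PARAM_RE.match(line):
        name, intent, ftype, dims = m.groups()
        return {"name": name, "intent": intent.replace(" ", ""),
                "type": ftype.upper(), "dimension": dims}

print(parse_comment("B (input/output) COMPLEX array, dimension (LDB, NRHS)"))
# {'name': 'B', 'intent': 'input/output', 'type': 'COMPLEX', 'dimension': 'LDB, NRHS'}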
After selecting a parameter, the interface generator determines whether the value of this parameter can be calculated from either the other parameters or another source (step 312). For example, if the selected parameter were a length parameter or a stride parameter, the value of the parameter can be obtained through a system call to identify the size or stride of the parameter. If the value of the parameter is calculatable, the interface generator inserts a code-generator statement "D" as a comment next to the parameter declaration (step 316). The "D" code-generator statement indicates that the parameter is optional because its value can be derived from another source. The "D" code-generator statement takes the form of D(expression), where "expression" indicates how to derive the value of the parameter. A valid "expression" could include any constant term or expression, any expression including constant terms and/or actual parameters, or other code-generator statements. An example of the "D" code-generator statement follows in Table 5:
Table 5
INTERFACE D
SUBROUTINE _D (N, X, M)
INTEGER N !#D (#SIZE (X))
COMPLEX X (*)
INTEGER M !#D (N)
END SUBROUTINE
END INTERFACE
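How a generated test might consume such #D defaults at run time is sketched below; the expression syntax and dependency ordering are illustrative assumptions mirroring Table 5, where N defaults to the size of X and M defaults to N:

defaults = {"N": "len(X)", "M": "N"}   # stand-ins for #D(#SIZE(X)) and #D(N)
values = {"X": [0j] * 100}             # X was initialized; N and M not read from input

for p in ("N", "M"):                   # evaluate in dependency order
    if p not in values:
        values[p] = eval(defaults[p], {"len": len}, values)
print(values["N"], values["M"])        # 100 100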
Next, the interface generator determines if the argument has a conditional value (step 328). If so, the interface generator inserts the IF code-generator statement (step 332). In this step, the programmer has indicated in the source code the conditional requirement, and thus, the interface generator inserts an appropriate expression indicating the conditionality of the argument. The "IF" code-generator statement is defined as IF(expression, default1 {ELSE default2}), where if "expression" evaluates to true, then the value of this argument is default1. Otherwise, the value is default2. An example of the IF code-generator statement follows in Table 6:
Table 6
INTERFACE IF
SUBROUTINE _IF (FLAG1, N, ARRAY)
CHARACTER :: FLAG1 !#D('Y')
INTEGER :: N !#IF((FLAG1 .EQ. 'Y'), #D(100), #ELSE(#D(200)))
INTEGER :: ARRAY (:)
END SUBROUTINE
END INTERFACE
Next, the interface generator inserts the directionality of the parameter into the interface file (step 344). In this step, the interface generator determines whether the parameter is an inout, input, or output parameter by examining the comments in the source code. After making this determination, an INOUT, INPUT, or OUTPUT code-generator statement is inserted into the interface file. If the parameter is an input parameter, the variable needs to be assigned before calling the test. If the parameter is an output parameter, the variable will be modified by the test. If the parameter is an inout parameter, it is passed with input and output semantics. In the case of C interfaces, this means that the C interface passes a scalar parameter by reference. This information allows the compiler to perform optimizations across subprogram boundaries. An example of the INOUT code-generator statement follows in Table 7:
Table 7
INTERFACE INOUT
SUBROUTINE _INOUT (N, A, RCOND)
INTEGER :: N !#D(#SIZE(A))
INTEGER :: A (*)
REAL :: RCOND !#INOUT
END SUBROUTINE
END INTERFACE
If the parameter is an input parameter, it is passed with input semantics. In the case of C interfaces, this means that the C interface can pass the parameter by value. This allows the compiler to perform optimizations across subprogram boundaries, as seen in Table 8.
Table 8
INTERFACE INPUT
SUBROUTINE _INPUT (N, A, RCOND)
INTEGER :: N !#D(#SIZE(A))
INTEGER :: A (*)
REAL :: RCOND !#INPUT
END SUBROUTINE
END INTERFACE
If the parameter is an output parameter, it is passed with output semantics. In the case of C interfaces, this means that the C interface needs to pass a scalar parameter by reference. This allows the compiler to perform optimizations across subprogram boundaries, as seen in Table 9.
Table 9
INTERFACE OUTPUT
SUBROUTINE _OUTPUT (N, A, RCOND)
INTEGER :: N !#D(#SIZE(A))
INTEGER :: A (*)
REAL :: RCOND !#OUTPUT
END SUBROUTINE
END INTERFACE
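The C-interface rule stated for the three directionality statements reduces to a small decision, sketched here; the helper function is hypothetical, but the rule itself is as described above:

def c_passing(intent, is_scalar):
    # Arrays are passed by reference regardless of intent. A scalar INPUT
    # parameter may be passed by value, which is what enables optimization
    # across subprogram boundaries; INOUT and OUTPUT scalars need a reference.
    if not is_scalar:
        return "by reference"
    return "by value" if intent == "input" else "by reference"

print(c_passing("input", True))    # by value
print(c_passing("inout", True))    # by reference
print(c_passing("output", True))   # by reference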
After inserting the directionality, the interface generator determines if the argument will return a multi-dimensional variable (step 348 in Fig. 3B). If so, it inserts a RANK code-generator statement indicating that the performance test should generate both a multi-dimensional array and a single-dimensional variable, in the event that the programmer was only expecting a one-dimensional variable (step 352). The RANK code-generator statement is defined as RANK(list), where "list" indicates the possible dimensions of the parameter. An example follows in Table 10:
Table 10
INTERFACE RANK
SUBROUTINE _RANK (N, ARRAY)
INTEGER :: N
COMPLEX :: ARRAY (:,:) !#RANK(1)
END SUBROUTINE
END INTERFACE
Next, the interface generator determines if the size of the argument is declared in terms of another argument (step 356), and if so, it adds the SIZE code-generator statement (step 360). The SIZE code-generator statement is defined as SIZE(name, [#DIM=d]), where "name" is the name of the variable that this parameter acts as the size of and "DIM" indicates which dimension of a multidimensional variable describes the size of the variable. Following in Table 11 is an example of the SIZE code-generator statement:
Table 11
INTERFACE SIZE
SUBROUTINE _SIZE (N, ARRAY)
INTEGER :: N !#D(#SIZE(ARRAY,#DIM=1))
DOUBLE PRECISION :: ARRAY (:)
END SUBROUTINE
END INTERFACE
The interface generator then determines if this parameter is a stride parameter indicating the stride of another parameter by examining the comments associated with the parameter (step 364). If the comments indicate that this parameter is a stride for another parameter, the interface generator inserts the STRIDE code-generator statement (step 368). The STRIDE code-generator statement is defined as STRIDE(name,[#DIM=d]), where "name" indicates the parameter that this parameter is the stride for and "DIM" indicates which dimension of that parameter the stride applies to, as seen in Table 12.
Table 12
INTERFACE STRIDE
SUBROUTINE _STRIDE (N, X, INCX, Y, INCY)
INTEGER :: N !#D(#SIZE(X))
COMPLEX :: X (*)
INTEGER :: INCX !#D(#STRIDE(X))
COMPLEX :: Y (*)
INTEGER :: INCY !#D(#STRIDE(Y))
END SUBROUTINE
END INTERFACE
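One plausible way the size and stride information lets the generator allocate memory correctly is the BLAS-style sizing rule below; the text does not spell out the exact formula, so treat this as an assumption:

def elements_to_allocate(n, incx):
    # A vector of logical length N walked with stride INCX touches elements
    # 0, INCX, 2*INCX, ..., so 1 + (N - 1) * |INCX| storage elements suffice.
    return 1 + (n - 1) * abs(incx) if n > 0 else 0

print(elements_to_allocate(100, 1))   # 100
print(elements_to_allocate(100, 2))   # 199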
Next, the interface generator determines if this parameter is a workspace parameter (step 372). A workspace parameter provides memory that will be used by the underlying subprogram. This determination is made by examining the comments of the parameter in the source code. If this parameter is a workspace parameter, the interface generator inserts the WORK code-generator statement into the interface file (step 376). The WORK code-generator statement is defined as WORK(expression{,save}), where "expression" indicates the size of the workspace and "save" indicates where the workspace should be saved, as seen in Table 13.
Table 13
INTERFACE WORK
SUBROUTINE _WORK (N, ARRAY, WORK, IWORK)
INTEGER :: N !#D(#SIZE(ARRAY,#DIM=1))
COMPLEX :: ARRAY
REAL :: WORK (:) !#WORK(N*2)
REAL :: IWORK (:) !#WORK(N)
END SUBROUTINE
END INTERFACE
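A generated test would honor a #WORK statement by allocating the workspace before the timed call and releasing it afterwards, as step 432 below also notes. This Python analogue of the generated Fortran is illustrative, and the names are hypothetical:

def timed_call(routine, n, timer):
    work = [0.0] * (n * 2)    # workspace sized by the #WORK(N*2) expression
    t1 = timer()
    routine(n, work)          # workspace is passed through to the subprogram
    t2 = timer()
    del work                  # deallocated after the call
    return t2 - t1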
Next, the interface generator determines if more parameters remain to be processed (step 380), and if so, processing continues to step 308. If no more parameters remain to be processed, processing ends.
For an example of inserting code-generator statements into an interface file, consider the following. The CSTSV subprogram computes the solution to a complex system of linear equations A * X = B, where A is an N-by-N symmetric tridiagonal matrix and X and B are N-by-NRHS matrices. The following interface in Table 14 is generated by examining the CSTSV source to extract the parameter list and the parameter declarations.
Table 14
INTERFACE STSV
SUBROUTINE _STSV (N, NRHS, L, D, SUBL, B, LDB, IPIV)
INTEGER :: N
INTEGER :: NRHS
COMPLEX :: L (*)
COMPLEX :: D (*)
COMPLEX :: SUBL (*)
COMPLEX :: B (LDB, *)
INTEGER :: LDB
INTEGER :: IPIV (*)
END SUBROUTINE
END INTERFACE
By parsing the comments in the source code, the interface generator can add code-generator statements to the interface file. For instance, the following example line in the source code:
N (input) INTEGER
allows the interface generator to insert the #INPUT code-generator statement into the interface file, associated with the parameter N.
Also, the following exemplary source code declarations:
D (input / output) COMPLEX array, dimension (N)
L (input / output) COMPLEX array, dimension (N-1)
SUBL (output) COMPLEX array, dimension (N-2)
NRHS (input) INTEGER
allow the interface generator not only to associate the #INOUT statement with the parameters D and L, but also to associate the #OUTPUT statement with the SUBL parameter and the #INPUT statement with the NRHS parameter. In addition, the declaration of D gives the interface generator enough information to construct a default value for the parameter N.
Furthermore, the following exemplary declaration for B:
B (input / output) COMPLEX array, dimension (LDB, NRHS)
provides enough information to associate the #INOUT statement with B and to create default values for the LDB and NRHS parameters.
This process continues until all the comments have been examined and code-generator statements generated. The final result is an interface file, as shown in Table 15, populated with code-generator statements.
Table 15
INTERFACE
SUBROUTINE CSTSV (N, NRHS, L, D, SUBL, B, LDB, IPIV)
INTEGER :: N !#INPUT, #D(#SIZE(D,#DIM=1))
INTEGER :: NRHS !#D(#SIZE(B,#DIM=2))
COMPLEX :: L (*) !#INOUT
COMPLEX :: D (*) !#INOUT
COMPLEX :: SUBL (*) !#OUTPUT
COMPLEX :: B (LDB, *) !#INOUT
INTEGER :: LDB !#D(#STRIDE(B,#DIM=2))
INTEGER :: IPIV (*) !#OUTPUT
END SUBROUTINE
END INTERFACE
Figure 4 depicts a flowchart of the steps performed by the performance test generator. The performance test generator performs two passes through the interface file that has been marked up with the code-generator statements. The first pass discovers information regarding a subprogram and its parameters and begins to populate a hash table with such information. The second pass adds more detailed information to the hash table. Once the hash table has been populated, the performance test generator generates performance tests using this information. The first step performed by the performance test generator is to determine whether the subprogram is a subroutine (i.e., does not return a return code) or a function (i.e., returns a return code) (step 404). Next, the performance test generator records the name of the subprogram in a hash table (step 408). The hash table entry has the following items of information, where items 2-10 are specified for each parameter of the subprogram (a sketch of such an entry follows the list):
1) Subprogram Name
2) Parameter Name
3) Type (logical, real, double, etc.)
4) Rank (shape)
5) Work: expression indicating amount of memory needed for this parameter.
6) Sizer: if this parameter describes the size of another parameter, the name of that parameter is stored in this field.
7) Mysizer: if another parameter is a sizer for this parameter, that parameter's name is stored in this field.
8) Strider: if this parameter is a strider for another parameter, then its name is stored in this field.
9) Mystrider: if another parameter acts as the strider for this parameter, then its name is stored in this entry.
10) Intent: undefined, input, output, or i/o.
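A minimal Python sketch of one such hash-table entry follows; the field names mirror items 2-10 above, plus a default field used by the D statement of Table 16, and the concrete layout is an assumption:

from dataclasses import dataclass

@dataclass
class ParamInfo:                  # items 2-10, one instance per parameter
    name: str
    type: str = "undefined"       # logical, real, double, etc.
    rank: int = 0                 # shape
    work: str = None              # expression for workspace memory
    default: str = None           # default expression recorded by a #D statement
    sizer: str = None             # parameter whose size this parameter gives
    mysizer: str = None           # parameter that gives this parameter's size
    strider: str = None           # parameter whose stride this parameter gives
    mystrider: str = None         # parameter that gives this parameter's stride
    intent: str = "undefined"     # input, output, or i/o

# Item 1, the subprogram name, keys the table:
hash_table = {"CAXPY": [ParamInfo("N", "integer"), ParamInfo("ALPHA", "complex")]}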
After recording the name, the performance test generator examines the parameter list to determine the number of parameters for the subprogram, as well as their names, and stores this information in the hash table (step 412). The performance test generator then identifies the details of each parameter, including its shape and type, and stores them in the hash table (step 416). After identifying the parameter details, the performance test generator processes the code-generator statements by inserting various information into the hash table (step 420). The following table indicates the code-generator statements and the processing that occurs for each one:
Table 16
Code-Generator Statement: Processing That Occurs
D (default expression): Record the expression as the default value for the parameter. If the value for the parameter is not read from input, use the default value to initialize the parameter.
IF (expression, default1, ELSE, default2): Save the expression and the two possible default values. Include code in the performance test that tests the expression and chooses one of the default values if the value for the parameter is not read from input.
INOUT, INPUT, OUTPUT: Set the intent field for the parameter to the appropriate value.
RANK: Set the rank field to the value in the RANK statement.
SIZE (expression): Assign the size value for the parameter to the expression. Given this information, the performance test generator can correctly allocate memory for the parameter and initialize the parameter.
STRIDE (expression): Assign the stride value for the parameter to the expression. Given this information, the performance test generator can correctly allocate memory for the parameter and initialize the parameter.
WORK (expression): Assign the work value for the parameter to the expression. Given this information, the performance test generator can correctly allocate memory for the parameter.
After processing the code-generator statements, the performance test generator generates the performance test (step 432). Table 3, above, includes an exemplary performance test. If one of the parameters is a work parameter, then the performance test generator allocates the appropriate memory before the call and deallocates the memory afterwards. After generating the performance test code, processing ends.
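The second-pass dispatch of Table 16 could be written over the ParamInfo sketch given earlier; the statement spellings and argument handling here are assumptions:

def process_statement(p, stmt, arg):
    # p is a ParamInfo entry; stmt and arg come from a parsed !#STMT(arg) comment.
    if stmt == "D":
        p.default = arg                    # record the default expression
    elif stmt in ("INOUT", "INPUT", "OUTPUT"):
        p.intent = {"INPUT": "input", "OUTPUT": "output", "INOUT": "i/o"}[stmt]
    elif stmt == "RANK":
        p.rank = int(arg)                  # simplified: RANK(list) may list several
    elif stmt == "SIZE":
        p.sizer = arg                      # this parameter sizes parameter `arg`
    elif stmt == "STRIDE":
        p.strider = arg                    # this parameter strides parameter `arg`
    elif stmt == "WORK":
        p.work = arg                       # workspace-size expression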
Although the present invention has been described with reference to a preferred embodiment thereof, those skilled in the art will know of various changes in form and detail which may be made without departing from the spirit and scope of the present invention as defined in the appended claims and their full scope of equivalents.

Claims

WHAT IS CLAIMED IS:
1. A method, in a data processing system having a subprogram with a parameter, for automatically generating a performance test, comprising the steps of:
creating an interface file for the subprogram;
adding to the interface file a comment for the parameter, the comment indicating characteristics of the parameter;
reading the interface file to obtain the characteristics of the parameter; and
using the characteristics of the parameter to automatically generate a performance test that collects performance analysis data of the subprogram.
2. The method of claim 1, further including the step of automatically running the performance test.
3. The method of claim 1, wherein the step of adding to the interface file a comment for the parameter includes adding information indicating how to generate a value for the parameter.
4. A method in a data processing system for automatically generating a performance test, comprising the steps of:
receiving source code for an executable subprogram; and
automatically generating a performance test that runs the executable subprogram and that measures performance characteristics of the executable subprogram while the executable subprogram is running.
5. The method of claim 4, wherein the executable subprogram has a parameter and wherein the automatically generating step further includes the steps of:
creating an interface file for the executable subprogram;
inserting a comment statement into the interface file, the comment statement describing the parameter of the executable subprogram; and
using the interface file to create a subprogram that collects performance analysis data of the executable subprogram.
6. The method of claim 4, wherein the automatically generating step includes generating a plurality of performance tests with parameters having varying values.
7. A computer-readable memory device encoded with a program having instructions for execution by a processor, the program comprising:
source code with a subprogram having a parameter; and
a performance test generator that automatically generates a performance test that reads the source code, that runs the subprogram, and that collects performance analysis data of the subprogram while the subprogram is running.
8. The computer-readable memory device of claim 7, wherein the source code has a parameter and the performance test generator receives a set of parameter values and creates a value for a parameter from the received set of parameter values.
9. A data processing system, comprising:
a storage device, comprising:
source code for a subprogram having a parameter;
an interface generator that reads the subprogram and generates an interface file including an indication of characteristics of the parameter; and
a performance test generator that reads the interface file, that generates a performance test which collects performance analysis data of the subprogram by using the characteristics of the parameter, and that runs the subprogram; and
a processor for running the interface generator and the performance test generator.
10. The data processing system of claim 9, wherein the source code contains a comment statement indicating the characteristics of the parameter.
11. The data processing system of claim 9, wherein the performance test generator receives a set of parameter values and generates a value for the parameter from the received set of parameter values.
12. The data processing system of claim 9, wherein the characteristics include an indication that the value of the parameter is calculatable from another source.
13. The data processing system of claim 9, wherein the characteristics include an indication of a conditional value for the parameter.
14. The data processing system of claim 9, wherein the characteristics include an indication of whether the parameter is used to contain a return value.
15. The data processing system of claim 9, wherein the characteristics include a directionality of the parameter.
16. The data processing system of claim 9, wherein the characteristics include an indication of whether the parameter returns a multidimensional variable.
17. The data processing system of claim 9, wherein the characteristics include an indication of whether a size of the parameter is based on another parameter.
18. The data processing system of claim 9, wherein the characteristics include an indication of whether a value of the parameter indicates a stride of another parameter.
19. The data processing system of claim 9, wherein the characteristics include an indication of whether the parameter is a work space parameter.
20. The data processing system of claim 9, wherein the source code contains an indication of a number of performance tests that the performance test generator generates.
21. A computer-readable medium containing instructions for controlling a data processing system to perform a method, comprising the steps of:
receiving an executable subprogram; and
automatically generating an interface file used for developing a performance routine that determines a performance level of the executable subprogram.
22. The computer-readable medium of claim 21, wherein the automatically generating step further includes the steps of:
creating an interface for the subprogram;
inserting into the interface a code-generator statement describing a characteristic of a parameter of the subprogram; and
using the interface to create the performance routine.
23. The computer-readable medium of claim 21, wherein the automatically generating step further includes the step of:
regenerating a plurality of performance subprograms, wherein at least one of the plurality of performance subprograms includes varying parameter values.
24. A computer-readable medium containing instructions for controlling a data processing system to perform a method, the data processing system having an executable subprogram, the method comprising the steps of:
reading the executable subprogram;
invoking the executable subprogram; and
generating a performance routine that determines a performance level of the subprogram.
25. A data processing system comprising:
means for receiving source code; and
means for automatically generating a performance routine that invokes execution of the source code and collects data reflecting a performance of the source code.
PCT/US2001/031886 2000-10-12 2001-10-12 Automatic performance test generation WO2002031657A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002213142A AU2002213142A1 (en) 2000-10-12 2001-10-12 Automatic performance test generation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US68636900A 2000-10-12 2000-10-12
US09/686,369 2000-10-12

Publications (2)

Publication Number Publication Date
WO2002031657A2 true WO2002031657A2 (en) 2002-04-18
WO2002031657A3 WO2002031657A3 (en) 2003-01-09

Family ID=24756028

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/031886 WO2002031657A2 (en) 2000-10-12 2001-10-12 Automatic performance test generation

Country Status (2)

Country Link
AU (1) AU2002213142A1 (en)
WO (1) WO2002031657A2 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5301312A (en) * 1991-08-21 1994-04-05 International Business Machines Corporation Method and system for utilizing benign fault occurrence to measure interrupt-blocking times
US5905856A (en) * 1996-02-29 1999-05-18 Bankers Trust Australia Limited Determination of software functionality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Kropp, N. P., et al., "Automated robustness testing of off-the-shelf software components", Fault-Tolerant Computing, 1998: Digest of Papers, Twenty-Eighth Annual International Symposium, Munich, Germany, 23-25 June 1998, Los Alamitos, CA, USA: IEEE Comput. Soc., pp. 230-239, XP010291290, ISBN 0-8186-8470-4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8732005B2 (en) 2000-07-26 2014-05-20 Rewards Network Incorporated System and method for providing consumer rewards
CN102043624A (en) * 2010-12-17 2011-05-04 青岛海信网络科技股份有限公司 Method for automatically programming intermediate communication interface and device
CN112579438A (en) * 2020-12-01 2021-03-30 河南芯盾网安科技发展有限公司 Multifunctional automatic test tool and test method

Also Published As

Publication number Publication date
AU2002213142A1 (en) 2002-04-22
WO2002031657A3 (en) 2003-01-09

Similar Documents

Publication Publication Date Title
US5394347A (en) Method and apparatus for generating tests for structures expressed as extended finite state machines
US5784553A (en) Method and system for generating a computer program test suite using dynamic symbolic execution of JAVA programs
EP1388064B1 (en) System and method for combinatorial test generation in a compatibility testing environment
US6286133B1 (en) Method and apparatus for strategic compilation of source programs into two or more target languages
US5446900A (en) Method and apparatus for statement level debugging of a computer program
US5651111A (en) Method and apparatus for producing a software test system using complementary code to resolve external dependencies
EP1004961B1 (en) Method and system for correlating profile data dynamically generated from an optimized executable program with source code statements
US7240343B2 (en) System and method for handling an exception in a program
US6553565B2 (en) Method and apparatus for debugging optimized code
EP0406602B1 (en) Method and apparatus for debugging parallel programs by serialization
EP0428084A2 (en) Method and apparatus for compiling computer programs with interprocedural register allocation
US6647546B1 (en) Avoiding gather and scatter when calling Fortran 77 code from Fortran 90 code
US7089535B2 (en) Code coverage with an integrated development environment
EP1170661A2 (en) Method and system for improving performance of applications that employ a cross-language interface
US6912708B2 (en) Method and apparatus to facilitate debugging a platform-independent virtual machine
EP0646864A1 (en) Optimising compiler
JPH06110734A (en) Method and device for generating automatic inspection test function
JPH1097430A (en) Method and system for inserting assembly code routine into source code routine for optimization
US7793277B2 (en) Compiler apparatus and method for devirtualizing virtual method calls
JPH04330527A (en) Optimization method for compiler
US6360360B1 (en) Object-oriented compiler mechanism for automatically selecting among multiple implementations of objects
US20020174418A1 (en) Constant return optimization transforming indirect calls to data fetches
US6330714B1 (en) Method and computer program product for implementing redundant lock avoidance
US7406681B1 (en) Automatic conversion of source code from 32-bit to 64-bit
EP0520708A2 (en) Method and apparatus for converting high level form abstract syntaxes into an intermediate form

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP