WO2016140654A1 - Performance testing using service call executors - Google Patents

Performance testing using service call executors

Info

Publication number
WO2016140654A1
Authority
WO
WIPO (PCT)
Prior art keywords
service call
executors
entries
database
service
Prior art date
Application number
PCT/US2015/018564
Other languages
French (fr)
Inventor
Hugh HAMILL
Original Assignee
Hewlett Packard Enterprise Development LP
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development LP
Priority to PCT/US2015/018564
Publication of WO2016140654A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003Managing SLA; Interaction between SLA and QoS
    • H04L41/5009Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3414Workload generation, e.g. scripts, playback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3433Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment for load management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3419Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment by assessing time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/508Network service management, e.g. ensuring proper service fulfilment according to agreements based on type of value added network service under agreement
    • H04L41/5087Network service management, e.g. ensuring proper service fulfilment according to agreements based on type of value added network service under agreement wherein the managed service relates to voice services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/50Testing arrangements

Abstract

Examples disclosed herein comprise performance testing instructions to create a plurality of service call executors, assign a database entry interlace number to each of the plurality of service call executors, receive a ready signal from each of the plurality of service call executors, signal each of the plurality of service call executors to begin a performance test, create a service call result listener to receive a plurality of service call results associated with the performance test, receive a complete signal from each of the plurality of service call executors, and generate a performance test log comprising the plurality of service call results received by the service call result listener.

Description

PERFORMANCE TESTING USING SERVICE CALL EXECUTORS
BACKGROUND
[0001] A typical performance test often uses historical data to recreate service calls against the application or service being tested. For example, a web server performance test may use a large number of prior web page requests against other web servers in order to simulate normal operations the web server is expected to handle.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] In the accompanying drawings, like numerals refer to like components or blocks. The following detailed description references the drawings, wherein:
[0003] FIG. 1 is a block diagram of an example performance testing device consistent with disclosed implementations;
[0004] FIG. 2 is a flowchart of an embodiment of a method for performance testing consistent with disclosed implementations;
[0005] FIG. 3 is a block diagram of a system for performance testing consistent with disclosed implementations;
[0006] FIG. 4 is a block diagram of an example performance test system consistent with disclosed implementations; and
[0007] FIG. 5 is a block diagram of an example service call executor engine consistent with disclosed implementations.
DETAILED DESCRIPTION
[0008] As described above, a performance test of a service and/or application often relies on re-processing historical service call data to re-create typical loads on the service. Such historical service call data may be stored in a database and may include metadata such as a remote procedure call (RPC) name, parameters, a relative start time, past performance test results, etc. This metadata allows the service call to be re-created and run against the service to be tested.
[0009] In the description that follows, reference is made to the term, "machine- readable storage medium." As used herein, the term "machine-readable storage medium" refers to any electronic, magnetic, optical, or other physical storage device that stores executable instructions or other data (e.g., a hard disk drive, random access memory, flash memory, etc.).
[0010] Referring now to the drawings, FIG. 1 is a block diagram of an example performance testing device 100 consistent with disclosed implementations. Performance testing device 100 may comprise a processor 110 and a non-transitory machine-readable storage medium 120. Performance testing device 100 may comprise a computing device such as a server computer, a desktop computer, a laptop computer, a handheld computing device, a mobile phone, or the like.
[0011] Processor 110 may comprise a central processing unit (CPU), a semiconductor-based microprocessor, or any other hardware device suitable for retrieval and execution of instructions stored in machine-readable storage medium 120. In particular, processor 110 may fetch, decode, and execute a plurality of create executor instructions 130, assign database interlace number instructions 132, receive ready signal instructions 134, create result listener instructions 136, receive complete signal instructions 138, and generate performance test log instructions 140 to implement the functionality described in detail below.
[0012] Executable instructions such as create executor instructions 130, assign database interlace number instructions 132, receive ready signal instructions 134, create result listener instructions 136, receive complete signal instructions 138, and generate performance test log instructions 140 may be stored in any portion and/or component of machine-readable storage medium 120. The machine-readable storage medium 120 may comprise both volatile and/or nonvolatile memory and data storage components. Volatile components are those that do not retain data values upon loss of power. Nonvolatile components are those that retain data upon a loss of power.
[0013] The machine-readable storage medium 120 may comprise, for example, random access memory (RAM), read-only memory (ROM), hard disk drives, solid-state drives, USB flash drives, memory cards accessed via a memory card reader, floppy disks accessed via an associated floppy disk drive, optical discs accessed via an optical disc drive, magnetic tapes accessed via an appropriate tape drive, and/or other memory components, and/or a combination of any two and/or more of these memory components. In addition, the RAM may comprise, for example, static random access memory (SRAM), dynamic random access memory (DRAM), and/or magnetic random access memory (MRAM) and other such devices. The ROM may comprise, for example, a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and/or other like memory device.
[0014] Create executor instructions 130 may cause the processor to create a plurality of service call executors. In some implementations, create executor instructions 130 may calculate a number of needed service call executors based on a configured service call execution rate for each of the plurality of service call executors. In some implementations, the calculated number of needed service call executors may be based on a database read rate for each of the plurality of service call executors.
[0015] The number of needed service call executors may be calculated according to a total number of service calls to be executed, a total time of the performance test, and a desired service call execution rate. Each of the historic service call requests may be associated with a service call entry stored as a row in a database. The desired execution rate, which may be configured for each performance test as needed, may comprise, for example, three service calls per second per service call executor. In some implementations, the service call execution rate may depend upon a read rate from the database of historical service call data, such that each executor may continually retrieve additional service call data from the database faster than the service calls are executed.
[0016] For example, a performance test may comprise 30,000 historical web page requests to be run against a web server. If the desired time for the performance test is 1000 seconds, then the needed number of service call executors to handle the performance test is 10, each executing three service calls per second. The create executor instructions 130 may thus spawn a plurality of executor processes to execute the service calls of the performance test according to the calculated needed number of executors.
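As a non-limiting illustration of the calculation in the preceding paragraphs, the following Python sketch computes the needed number of executors; the function name, signature, and default rate are assumptions made for illustration and are not part of the disclosure.

```python
import math

def needed_executors(total_calls: int, test_seconds: float,
                     calls_per_second_per_executor: float = 3.0) -> int:
    """Number of service call executors needed to replay total_calls within
    test_seconds at the configured per-executor execution rate."""
    required_rate = total_calls / test_seconds          # e.g., 30,000 / 1,000 = 30 calls/s
    return math.ceil(required_rate / calls_per_second_per_executor)

# The example above: 30,000 historical requests in 1000 seconds at three
# calls per second per executor yields 10 executors.
print(needed_executors(30_000, 1_000))  # 10
```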
[0017] The assign database interlace number instructions 132 may assign a database entry interlace number to each of the plurality of service call executors. Each of the service call executors may connect to a database to retrieve the historical service call data. The executors may read a full memory page of data from the database with each request, and each page may comprise a plurality of rows of service call data.
[0018] The database interlace number may be based on the number of executors, and may be used by each executor to determine which rows of service calls to retrieve. For example, when ten service call executors have been spawned by the create executor instructions 130, each executor may be given a starting row and a database interlace number of ten so that each executor retrieves every tenth service call. The first executor retrieves the service calls in rows 1, 11, 21, 31, etc., the second executor retrieves the service calls in rows 2, 12, 22, 32, etc., and so on for each of the service call executors. In some implementations, the database rows may be ordered according to an order in which the service calls are to be executed.
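The interlacing scheme described above can be made concrete with a short Python sketch; the generator below and its row numbering are hypothetical and shown only to illustrate the every-Nth-row selection.

```python
def interlaced_rows(start_row: int, interlace: int, total_rows: int):
    """Yield the database row numbers one executor is responsible for."""
    row = start_row
    while row <= total_rows:
        yield row
        row += interlace

# With ten executors and an interlace number of ten, the first executor
# (starting at row 1) reads rows 1, 11, 21, 31, ... and the second executor
# (starting at row 2) reads rows 2, 12, 22, 32, ...
print(list(interlaced_rows(1, 10, 41)))  # [1, 11, 21, 31, 41]
print(list(interlaced_rows(2, 10, 42)))  # [2, 12, 22, 32, 42]
```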
[0019] Each of the service call executors may retrieve a plurality of rows, each comprising a service call entry from the database and load a service call queue with the entries. The size of the queue may be large enough to hold several reads from the database, and each read may comprise multiple database rows. For example, each database read may retrieve 10 service call entries, and the service call queue may store 30 of these entries. An additional buffer queue may also be used by each executor to store additional service call data entries.
[0020] Each of the service call entries may comprise data associated with executing the service call such as a procedure name (e.g., a remote procedure call, or RPC), at least one parameter for the procedure, a relative start time, and an event identifier. The relative start time may comprise an amount of time after the performance test is started when the service call is to be executed. The event identifier may comprise a unique identification number and/or string for the service call that may be used to track the individual performance of that service call.
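A service call entry with these fields might be modeled as follows; the class and field names are illustrative assumptions, since the disclosure specifies only the kinds of data an entry may comprise.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class ServiceCallEntry:
    procedure_name: str          # e.g., the RPC to invoke against the tested service
    parameters: List[Any]        # at least one parameter for the procedure
    relative_start_time: float   # seconds after the performance test starts
    event_id: str                # unique identifier used to track this call's performance

entry = ServiceCallEntry("getAccountBalance", ["user-42"], 12.5, "evt-0001")
```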
[0021] The receive ready signal instructions 134 may receive a ready signal from each of the plurality of service call executors. The ready signal may indicate a full service call queue for each of the plurality of service call executors. For example, each executor may read enough service call entries from the database to fill its respective service call queue, then provide a ready signal to receive ready signal instructions 134 to indicate that the service call executor is ready to begin the performance test. Once the ready signal has been received from each of the executors, receive ready signal instructions 134 may signal each of the plurality of service call executors to begin the performance test by executing the service call entries in their respective service call queues.
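One possible realization of this ready/start handshake, sketched with Python multiprocessing events, is shown below; the use of processes and events is an assumption, as the disclosure does not prescribe a particular signaling mechanism.

```python
import multiprocessing as mp

def executor_proc(executor_id, ready, start):
    # ... read enough service call entries from the database to fill the queue ...
    ready.set()    # ready signal: this executor's service call queue is full
    start.wait()   # block until signaled to begin the performance test
    # ... execute the queued service call entries ...

if __name__ == "__main__":
    num_executors = 10
    start = mp.Event()
    ready_flags = [mp.Event() for _ in range(num_executors)]
    executors = [mp.Process(target=executor_proc, args=(i, r, start))
                 for i, r in enumerate(ready_flags)]
    for p in executors:
        p.start()
    for r in ready_flags:   # wait for a ready signal from every executor
        r.wait()
    start.set()             # signal all executors to begin the performance test
    for p in executors:
        p.join()
```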
[0022] The create result listener instructions 136 may create a service call result listener to receive a plurality of service call results associated with the performance test. For example, each of the service call executors may execute the service call entries on their respective service call queues without waiting for the results from the execution of those service call entries. The service call result listener may receive the results instead and may collect performance data such as amount of time taken to complete each of the service call entries.
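A minimal sketch of such a result listener, assuming a thread that drains results from an in-process queue, follows; the result tuple layout and the sentinel used to stop the listener are assumptions made for illustration.

```python
import queue
import threading

def result_listener(result_queue, collected, stop_token=None):
    """Collect service call results until the stop token arrives."""
    while True:
        result = result_queue.get()
        if result is stop_token:
            break
        event_id, seconds_to_complete = result
        collected[event_id] = seconds_to_complete   # per-call completion time

results_in = queue.Queue()
collected = {}
listener = threading.Thread(target=result_listener, args=(results_in, collected))
listener.start()
results_in.put(("evt-0001", 0.120))   # executors post results as calls complete
results_in.put(None)                  # end of the performance test
listener.join()
print(collected)                      # {'evt-0001': 0.12}
```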
[0023] The receive complete signal instructions 138 may receive a complete signal from each of the plurality of service call executors. For example, each of the plurality of service executors may continually read additional service call entries from the database and load them into their respective service call queues and/or their respective buffer queues. In some implementations, the service call entries in the buffer queues may be used to re-load the service call queues. Once no more service call entries are available for a service call executor from the database and each of the retrieved service call entries has been executed, the service call executor may send a signal to the receive complete signal instructions 138 that it has finished executing its assigned service call entries.
[0024] The generate performance test log instructions 140 may generate a performance test log comprising the plurality of service call results received by the service call result listener. For example, the performance test log may comprise an amount of time each service call took to complete, a comparison between the amount of time taken to complete and past completion times, aggregations of such completion time data across multiple service calls, etc.
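The kind of aggregation such a performance test log might contain is sketched below; the result fields and the statistics chosen are assumptions, since the disclosure lists only completion times, comparisons with past completion times, and aggregations as examples.

```python
from statistics import mean

def build_test_log(results):
    """results: iterable of (event_id, seconds_to_complete, past_seconds)."""
    lines, durations = [], []
    for event_id, took, past in results:
        durations.append(took)
        lines.append(f"{event_id}: {took:.3f}s ({took - past:+.3f}s vs. past run)")
    lines.append(f"calls={len(durations)} mean={mean(durations):.3f}s max={max(durations):.3f}s")
    return "\n".join(lines)

print(build_test_log([("evt-0001", 0.120, 0.100), ("evt-0002", 0.340, 0.360)]))
```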
[0025] FIG. 2 is a flowchart of an embodiment of a method 200 for performance testing consistent with disclosed implementations. Although execution of method 200 is described below with reference to the components of performance testing device 100, other suitable components for execution of method 200 may be used.
[0026] Method 200 may start in block 205 and proceed to block 210 where device 100 may calculate a needed number of service call executors according to a configured service call execution rate. The number of needed service call executors may be calculated according to a total number of service calls to be executed, a total time of the performance test, and a desired service call execution rate. Each of the historic service call requests may be associated with a service call entry stored as a row in a database. The desired execution rate, which may be configured for each performance test as needed, may comprise, for example, three service calls per second per service call executor. In some implementations, the service call execution rate may depend upon a read rate from the database of historical service call data, such that each executor may continually retrieve additional service call data from the database faster than the service calls are executed.
[0027] For example, a performance test may comprise 30,000 historical web page requests to be run against a web server. If the desired time for the performance test is 1000 seconds, then the needed number of service call executors to handle the performance test is 10, each executing three service calls per second. The create executor instructions 130 may thus spawn a plurality of executor processes to execute the service calls of the performance test according to the calculated needed number of executors.
[0028] Method 200 may proceed to block 215 where device 100 may create a plurality of service call executors according to the calculated needed number of service call executors. In some implementations, create executor instructions 130 may calculate a number of needed service call executors based on a configured service call execution rate for each of the plurality of service call executors. In some implementations, the calculated number of needed service call executors may be based on a database read rate for each of the plurality of service call executors.
[0029] Method 200 may proceed to block 220 where device 100 may assign a database interlace number to each of the plurality of service call executors. Each of the service call executors may connect to a database to retrieve the historical service call data. The executors may read a full memory page of data from the database with each request, and each page may comprise a plurality of rows of service call data.
[0030] The database interlace number may be based on the number of executors, and may be used by each executor to determine which rows of service calls to retrieve. For example, when ten service call executors have been spawned by the create executor instructions 130, each executor may be given a starting row and a database interlace number of ten so that each executor retrieves every tenth service call. The first executor retrieves the service calls in rows 1, 11, 21, 31, etc., the second executor retrieves the service calls in rows 2, 12, 22, 32, etc., and so on for each of the service call executors. In some implementations, the database rows may be ordered according to an order in which the service calls are to be executed.
[0031] Each of the service call executors may retrieve a plurality of rows, each comprising a service call entry from the database and load a service call queue with the entries. The size of the queue may be large enough to hold several reads from the database, and each read may comprise multiple database rows. For example, each database read may retrieve 10 service call entries, and the service call queue may store 30 of these entries. An additional buffer queue may also be used by each executor to store additional service call data entries.
[0032] Method 200 may proceed to block 225 where device 100 may load a service call queue for each of the plurality of service call executors. The service call queue may be loaded from an interlaced number of rows each comprising a prior service call entry from a database according to the database interlace number. For example, each of the service call entries may comprise data associated with executing the service call such as a procedure name (e.g., a remote procedure call, or RPC), a parameter, a relative start time, and an event identifier. The relative start time may comprise an amount of time after the performance test is started when the service call is to be executed. The event identifier may comprise a unique identification number and/or string for the service call that may be used to track the individual performance of that service call.
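A hedged sketch of loading one executor's service call queue from interlaced database rows is given below, using SQLite purely as a stand-in; the table name, column names, and use of rowid ordering are assumptions rather than details from the disclosure.

```python
import sqlite3
from collections import deque

def load_queue(db_path, start_row, interlace, queue_capacity=30):
    """Fill a service call queue with this executor's interlaced rows."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT procedure_name, params, relative_start_time, event_id "
        "FROM service_calls "
        "WHERE rowid >= ? AND (rowid - ?) % ? = 0 "
        "ORDER BY rowid LIMIT ?",
        (start_row, start_row, interlace, queue_capacity),
    ).fetchall()
    conn.close()
    return deque(rows)   # rows are already ordered by intended execution order
```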
[0033] Method 200 may proceed to block 230 where device 100 may initiate a performance test. The performance test may comprise concurrent execution of a plurality of prior service call entries from the respective service call queues of each of the plurality of service call executors. For example, a ready signal may be received from each of the service call executors to indicate a full service call queue. Each executor may read enough service call entries from the database to fill its respective service call queue, then provide the ready signal to receive ready signal instructions 134 to indicate that the service call executor is ready to begin the performance test. Once the ready signal has been received from each of the executors, receive ready signal instructions 134 may signal each of the plurality of service call executors to begin the performance test by executing the service call entries in their respective service call queues.
[0034] In some implementations, the service call executors may execute each of the plurality of prior service call entries from the respective service call queue according to the relative start time for each of the plurality of prior service call entries. For example, each service call entry may comprise a relative start time comprising a number of seconds after the performance test has been initiated at which the service call should be executed.
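Pacing execution by relative start time could look like the following sketch; the sleep-based loop and the entry attribute name are assumptions, and the disclosure requires only that each entry runs at its relative offset from the start of the test.

```python
import time

def run_queue(service_call_queue, execute):
    """Execute each queued entry when its relative start time arrives."""
    test_start = time.monotonic()
    for entry in service_call_queue:
        delay = entry.relative_start_time - (time.monotonic() - test_start)
        if delay > 0:
            time.sleep(delay)   # wait until the entry's scheduled offset
        execute(entry)          # fire the call without waiting for its result
```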
[0035] Method 200 may proceed to block 235 where device 100 may receive a plurality of service call results from the plurality of service calls. For example, each of the service call executors may execute the service call entries on their respective service call queues without waiting for the results from the execution of those service call entries. A service call result listener may be spawned by create result listener instructions 136 to receive the results instead and may collect performance data such as amount of time taken to complete each of the service call entries.
[0036] Method 200 may proceed to block 240 where device 100 may determine whether each of the plurality of service call executors has executed all of the service calls in the respective service call queue. For example, each of the plurality of service executors may determine whether a plurality of additional service call entries are available from the database. The service executors may read additional service call entries from the database and load them into their respective service call queues and/or their respective buffer queues. In some implementations, the service call entries in the buffer queues may be used to re-load the service call queues.
[0037] In some implementations, the service call executors may determine whether the plurality of additional service call entries are available from the database on a rotating basis according to the database interlace number assigned to each of the plurality of service call executors. For example, each service call executor may perform a read, such as a full memory page of rows of service call entries, from the database in turn and/or in order of the assigned database interlace number. For example, a first service call executor may read rows 1, 11, and 21, then other service call executors may read in their respective rows before the first service call executor reads rows 31, 41, and 51, and so on.
[0038] In some implementations, the service call executors may determine whether the plurality of additional service call entries are available from the database independently of one another. For example, each service call executor may perform ongoing reads of service call entries from the database simultaneously with the other executors to keep their respective service call and/or buffer queues full.
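The rotating-basis strategy can be illustrated with a small, self-contained sketch; the stub executor, the one-page-per-turn policy, and the method names are all hypothetical.

```python
from collections import deque

class StubExecutor:
    def __init__(self, rows):
        self._rows = deque(rows)        # this executor's remaining interlaced rows
        self.buffer_queue = deque()

    def read_next_page(self, page_size):
        return [self._rows.popleft()
                for _ in range(min(page_size, len(self._rows)))]

def rotating_reads(executors, page_size=3):
    """Each executor reads one page in turn until no rows remain for any of them."""
    active = list(executors)
    while active:
        for executor in list(active):
            page = executor.read_next_page(page_size)
            if page:
                executor.buffer_queue.extend(page)
            else:
                active.remove(executor)

executors = [StubExecutor(range(1, 31, 10)), StubExecutor(range(2, 32, 10))]
rotating_reads(executors)
print([list(e.buffer_queue) for e in executors])  # [[1, 11, 21], [2, 12, 22]]
```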
[0039] Once no more service call entries are available for a service call executor from the database and each of the retrieved service call entries has been executed, the service call executor may send a signal to the receive complete signal instructions 138 that it has finished executing its assigned service call entries. If device 100 determines that any of the service call executors has not executed all of the service call entries in its service call queue, method 200 may return to stage 235 where device 100 may continue to receive the service call results.
[0040] In response to determining that each of the plurality of service call executors has executed all of the service calls in the respective service call queue, method 200 may proceed to block 245 where device 100 may generate a performance test log according to the received plurality of service call results. For example, the performance test log may comprise an amount of time each service call took to complete, a comparison between the amount of time taken to complete and past completion times, aggregations of such completion time data across multiple service calls, etc. Method 200 may then end at block 250.
[0041] FIG. 3 is a block diagram of a system 300 for performance testing consistent with disclosed implementations. System 300 may comprise a computing device 310 and a database 340. Computing device 310 may comprise, for example, a general and/or special purpose computer, server, mainframe, desktop, laptop, tablet, smart phone, game console, and/or any other system capable of providing computing capability consistent with providing the implementations described herein. Database 340 may comprise a local and/or remote structured data storage device.
[0042] Computing device 310 may comprise a performance test engine 320 and a service call executor engine 330. Performance test engine 320 and service call executor engine 330 may each comprise, for example, instructions stored on a machine-readable medium executable by a processor, logic circuitry, and/or other implementations of hardware and/or software.
[0043] Performance test engine 320 may calculate a needed number of service call executors according to a configured service call execution rate, spawn a plurality of service call executors according to the calculated needed number of service call executors, assign a database interlace number to each of the plurality of service call executors, cause each of the plurality of service call executors to begin a performance test in response to a ready signal received from each of the plurality of service call executors, and create a service call result listener to receive results from a respective plurality of prior service call entries performed by each of the plurality of service call executors. Performance test engine 320 may initiate the performance test by signaling each of the plurality of service call executors to begin execution of their respective service call queues. Service call executor engine 330 may load a service call queue with a first plurality of prior service call entries, load a buffer queue with a second plurality of prior service call entries, execute each of the first plurality of prior service call entries from the service call queue according to the relative start time for each of the first plurality of prior service call entries, and refill the service call queue with the second plurality of prior service call entries.
[0044] In some implementations, each of the first plurality of prior service call entries comprises a row in a service call database comprising a procedure name, at least one parameter, a relative start time, and an event identifier. In some implementations, the first plurality of prior service call entries and the second plurality of prior service call entries are retrieved from the service call database according to the assigned database interlace number and a row number of the first plurality of prior service call entries and the second plurality of prior service call entries.
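For illustration only, the row layout described in paragraph [0044] might be modeled as follows; the field names are assumptions that simply mirror the four columns named in the text.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ServiceCallEntry:
    procedure_name: str          # the service procedure to invoke
    parameters: List[str]        # at least one parameter for the call
    relative_start_time: float   # seconds after test start at which to replay the call
    event_identifier: str        # correlates the replayed call with its result
```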
[0045] Although one computing device 310 and one database 340 are depicted in FIG. 3, certain implementations of system 300 may comprise more than one computing device 310 and more than one database 340. At least one of the computing devices may be employed and arranged, for example, in at least one server bank, computer bank, data center, and/or other arrangements. For example, the computing devices together may include a cloud computing resource, a grid computing resource, and/or any other distributed computing arrangement. Such computing devices may be located in a single installation and/or may be distributed among many different geographical locations.
[0046] FIG. 4 is a block diagram of an example performance test system 400 consistent with disclosed implementations. Performance test system 400 may comprise a performance test engine 410, a plurality of service call executors 420(A)-(C), a service call result listener 430, and a database 440. In some implementations, each of the plurality of service call executors 420(A)-(C) may be associated with one of a plurality of separate service call result listeners. The separate service call result listeners may aggregate their results at the end of the performance test to produce a performance test log. Database 440 may comprise a plurality of rows, each comprising one of a plurality of prior service call entries 450(A)-(N).
[0047] Performance test engine 410 may calculate a number of needed service call executors based on a configured service call execution rate (e.g., three per second for each service call executor). In some implementations, the calculated number of needed service call executors may be based on a database read rate for each of the plurality of service call executors. The needed number of service call executors may be spawned and assigned a database interlace number.
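A minimal sketch of the sizing arithmetic in paragraph [0047], using illustrative rate values: the executor count must be large enough to cover both the configured execution rate and each executor's database read rate.

```python
import math

def needed_executors(target_calls_per_s, calls_per_executor_per_s, reads_per_executor_per_s):
    by_execution = math.ceil(target_calls_per_s / calls_per_executor_per_s)
    by_read_rate = math.ceil(target_calls_per_s / reads_per_executor_per_s)
    return max(by_execution, by_read_rate)

# e.g. a 30 call/s test with executors that execute 3 calls/s and read 10 rows/s
# would need max(10, 3) = 10 executors.
```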
[0048] Each of the service call executors 420(A)-(C) may load a respective service call queue from database 440 by reading the prior service call entries according to its assigned database interlace number. For example, the three service call executors 420(A)-(C) illustrated may be assigned interlace numbers of 1, 2, and 3. First service call executor 420(A), for example, may perform a first interlaced database read 460 beginning with a first prior service call entry 450(A) and continuing to read every third row. Second service call executor 420(B) and third service call executor 420(C) may similarly perform a second interlaced database read 470 and a third interlaced database read 480, with each reading every third row.

[0049] FIG. 5 is a block diagram of an example service call executor engine 500 consistent with disclosed implementations. Service call executor engine 500 may comprise a service call queue 520, a buffer queue 530, and a queue loader 540. Queue loader 540 may read interlaced rows from database 440 comprising prior service call entries and copy those entries into a first plurality of service call entries 525(A)-(E) in service call queue 520. Once service call queue 520 is full, queue loader 540 may continue to read interlaced rows from database 440 to load buffer queue 530 with a second plurality of service call entries 535(A)-(E).
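The every-Nth-row reads in paragraphs [0048] and [0049] could be expressed as a single interlaced query. The sketch below assumes a SQLite table named service_calls with an explicit integer row_number column; that schema is an illustrative assumption rather than the layout of database 440.

```python
import sqlite3

def interlaced_read(db_path, interlace, num_executors, limit):
    """Read up to `limit` rows belonging to this executor's interlace (rows i, i+N, i+2N, ...)."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT procedure_name, parameters, relative_start_time, event_identifier "
        "FROM service_calls "
        "WHERE (row_number - ?) % ? = 0 "
        "ORDER BY row_number LIMIT ?",
        (interlace, num_executors, limit),
    ).fetchall()
    conn.close()
    return rows
```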
[0050] Once a performance test has been initiated, service call executor engine 500 may begin executing the first plurality of service call entries 525(A)-(E) in order. Queue loader 540 may move the second plurality of service call entries 535(A)-(E) into service call queue 520 to keep service call queue 520 full. Queue loader 540 may further continue to read interlaced rows from database 440 to refill buffer queue 530 with service call entries.
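The double-buffered refill of paragraphs [0049] and [0050] might look like the following sketch, where read_more(n) is an assumed callback that returns up to n further interlaced rows and an empty list once the executor's rows are exhausted.

```python
from collections import deque

class QueueLoader:
    def __init__(self, read_more, queue_depth=5):
        self.read_more = read_more
        self.queue_depth = queue_depth
        self.service_call_queue = deque(read_more(queue_depth))  # first plurality of entries
        self.buffer_queue = deque(read_more(queue_depth))        # second plurality of entries

    def next_entry(self):
        """Pop the next entry to execute, keeping both queues topped up."""
        if not self.service_call_queue and not self.buffer_queue:
            return None                                          # nothing left to execute
        # Move buffered entries into the service call queue to keep it full.
        while len(self.service_call_queue) < self.queue_depth and self.buffer_queue:
            self.service_call_queue.append(self.buffer_queue.popleft())
        # Refill the buffer queue with further interlaced rows, if any remain.
        if len(self.buffer_queue) < self.queue_depth:
            self.buffer_queue.extend(self.read_more(self.queue_depth - len(self.buffer_queue)))
        return self.service_call_queue.popleft() if self.service_call_queue else None
```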
[0051] The disclosed examples may include systems, devices, computer-readable storage media, and methods for performance testing using service call executors. For purposes of explanation, certain examples are described with reference to the components illustrated in FIGs. 1-5. The functionality of the illustrated components may overlap, however, and may be present in a fewer or greater number of elements and components. Further, all or part of the functionality of illustrated elements may co-exist or be distributed among several geographically dispersed locations. Moreover, the disclosed examples may be implemented in various environments and are not limited to the illustrated examples.
[0052] Moreover, as used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. Additionally, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. Instead, these terms are only used to distinguish one element from another.

[0053] Further, the sequence of operations described in connection with FIGs. 1-5 is an example and is not intended to be limiting. Additional or fewer operations or combinations of operations may be used or may vary without departing from the scope of the disclosed examples. Thus, the present disclosure merely sets forth possible examples of implementations, and many variations and modifications may be made to the described examples. All such modifications and variations are intended to be included within the scope of this disclosure and protected by the following claims.

Claims

We claim:
1. A non-transitory machine-readable storage medium including instructions for performance testing which, when executed by a processor, cause the processor to:
assign a database entry interlace number to each of a plurality of service call executors;
receive a ready signal from each of the plurality of service call executors;
signal each of the plurality of service call executors to begin a performance test;
receive a complete signal from each of the plurality of service call executors; and
generate a performance test log comprising a plurality of service call results.
2. The non-transitory machine-readable medium of claim 1, wherein the ready signal indicates a full service call queue for each of the plurality of service call executors.
3. The non-transitory machine-readable medium of claim 2, wherein each of the plurality of service call executors loads a service call queue from a database comprising a plurality of prior service call entries.
4. The non-transitory machine-readable medium of claim 3, wherein each of the plurality of prior service call entries comprises a procedure name, at least one parameter, a relative start time, and an event identifier.
5. The non-transitory machine-readable medium of claim 3, wherein each of the plurality of service call executors loads the service call queue from the database according to the database entry interlace number.
6. The non-transitory machine-readable medium of claim 1, wherein the processor creates the plurality of service call executors according to a calculated number of needed service call executors, wherein the calculated number of needed service call executors is based on a configured service call execution rate for each of the plurality of service call executors.
7. The non-transitory machine-readable medium of claim 6, wherein the calculated number of needed service call executors is further based on a database read rate for each of the plurality of service call executors.
8. A computer-implemented method for performance testing comprising:
calculating a needed number of service call executors according to a configured service call execution rate;
creating a plurality of service call executors according to the calculated needed number of service call executors;
assigning a database interlace number to each of the plurality of service call executors;
loading a service call queue for each of the plurality of service call executors, wherein the service call queue is loaded from an interlaced number of rows each comprising a prior service call entry from a database according to the database interlace number;
initiating a performance test, wherein the performance test comprises concurrent execution of a plurality of prior service call entries from the respective service call queues of each of the plurality of service call executors;
receiving a plurality of service call results from the plurality of service calls;
determining whether each of the plurality of service call executors has executed all of the service calls in the respective service call queue; and
in response to determining that each of the plurality of service call executors has executed all of the service calls in the respective service call queue, generating a performance test log according to the received plurality of service call results.
9. The computer-implemented method of claim 8, wherein each of the plurality of service call executors comprises a buffer service call queue.
10. The computer-implemented method of claim 9, wherein each of the plurality of service call executors determines whether a plurality of additional service call entries are available from the database; and
wherein each of the plurality of service call executors, in response to determining that the plurality of additional service call entries are available from the database, loads the buffer service call queue with the plurality of additional service call entries from the database.
11. The computer-implemented method of claim 10, wherein each of the plurality of service call executors reloads the respective service call queue from the buffer service call queue.
12. The computer-implemented method of claim 10, wherein each of the plurality of service call executors determines whether the plurality of additional service call entries are available from the database on a rotating basis according to the database interlace number assigned to each of the plurality of service call executors.
13. The computer-implemented method of claim 8, wherein each of the prior service call entries comprises a procedure name, at least one parameter, a relative start time, and an event identifier.
14. The computer-implemented method of claim 12, wherein each of the plurality of service call executors executes each of the plurality of prior service call entries from the respective service call queue according to the relative start time for each of the plurality of prior service call entries.
15. A system for performance testing, comprising:
a performance test engine to:
calculate a needed number of service call executors according to a configured service call execution rate,
spawn a plurality of service call executors according to the calculated needed number of service call executors,
assign a database interlace number to each of the plurality of service call executors,
cause each of the plurality of service call executors to begin a performance test in response to a ready signal received from each of the plurality of service call executors, and
create a service call result listener to receive results from a respective plurality of prior service call entries performed by each of the plurality of service call executors; and
a service call executor engine to:
load a service call queue with a first plurality of prior service call entries, wherein each of the first plurality of prior service call entries comprises a row in a service call database comprising a procedure name, at least one parameter, a relative start time, and an event identifier,
load a buffer queue with a second plurality of prior service call entries, wherein the first plurality of prior service call entries and the second plurality of prior service call entries are retrieved from the service call database according to the assigned database interlace number and a row number of the first plurality of prior service call entries and the second plurality of prior service call entries,
execute each of the first plurality of prior service call entries from the service call queue according to the relative start time for each of the first plurality of prior service call entries, and
refill the service call queue with the second plurality of prior service call entries.
Priority Application (1): PCT/US2015/018564, filed 2015-03-04, priority date 2015-03-04: Performance testing using service call executors.

Publication (1): WO2016140654A1, published 2016-09-09.

Family ID: 56848444.

Legal Events

121 (EP): The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 15884119; Country of ref document: EP; Kind code of ref document: A1.

NENP: Non-entry into the national phase. Ref country code: DE.

122 (EP): PCT application non-entry in European phase. Ref document number: 15884119; Country of ref document: EP; Kind code of ref document: A1.