US20070168734A1 - Apparatus, system, and method for persistent testing with progressive environment sterilization - Google Patents

Apparatus, system, and method for persistent testing with progressive environment sterilization

Info

Publication number
US20070168734A1
Authority
US
United States
Prior art keywords
test
cases
module
execution
environment
Prior art date
Legal status
Abandoned
Application number
US11/281,646
Inventor
Phil Vasile
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to US11/281,646
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION (assignment of assignors interest; see document for details). Assignors: VASILE, PHIL
Publication of US20070168734A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management

Definitions

  • FIG. 1 is a schematic block diagram illustrating one embodiment of a test system in accordance with the present invention.
  • FIG. 2 is a schematic block diagram illustrating one embodiment of a test environment in accordance with the present invention.
  • FIG. 3 is a schematic block diagram illustrating one embodiment of a test id in accordance with the present invention.
  • FIG. 4 is a schematic block diagram illustrating one embodiment of a test system in accordance with the present invention.
  • FIG. 5 is a schematic block diagram illustrating one embodiment of the progression of test case classifications in accordance with the present invention.
  • FIG. 6 is a schematic flow chart diagram illustrating one embodiment of a test case execution method in accordance with the present invention.
  • A module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, or off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
  • a module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
  • Modules may also be implemented in software for execution by various types of processors.
  • An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
  • a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices.
  • operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
  • Reference to a signal bearing medium may take any form capable of generating a signal, causing a signal to be generated, or causing execution of a program of machine-readable instructions on a digital processing apparatus.
  • a signal bearing medium may be embodied by a transmission line, a compact disk, digital-video disk, a magnetic tape, a Bernoulli drive, a magnetic disk, a punch card, flash memory, integrated circuits, or other digital processing apparatus memory device.
  • FIG. 1 illustrates a schematic block diagram of one embodiment of a test system 100 used for testing software.
  • Software testers use the test system 100 to test newly developed software and for regression testing of released software.
  • the test system 100 is configured to automatically test software with little or no human intervention.
  • Software testing with the automated test system 100 improves software quality and customer satisfaction.
  • Using the automated test system 100 reduces the time required to test a software product, reduces testing expenses, and shortens software development and delivery times.
  • the test system 100 comprises one or more computing devices 112 , a test environment 120 , a test suite 130 , a control module 140 , and a watch module 160 .
  • the test system 100 further comprises a piece of software to be tested or a software under test (SUT) 110 .
  • the computing device 112 may be a desktop computer, a specialized test computer, a mainframe, or other type of computing device.
  • the various modules of the test system 100 may all execute on one computing device 112 or on multiple computing devices 112 .
  • the SUT 110 is a piece of software to be tested. For example, IBM tests a new IMS version before the product is released to customers. While the new version of IMS is undergoing new release testing and regression testing, the new version of IMS is a SUT 110 .
  • the SUT 110 may be the complete new IMS version.
  • the SUT 110 may be a module of IMS such as a transaction module or a database module. Defects found in the SUT 110 are termed software bugs or bugs.
  • the overarching purpose of the test system 100 is to assist software engineers to find and eliminate bugs in the SUT 110 .
  • the test environment 120 provides a controllable, reproducible simulation of a computing environment in which the SUT 110 may execute.
  • the test environment 120 is controllable in that each element of the test environment 120 is under the control of the test system 100 .
  • the test environment 120 is reproducible in that each element of the test environment 120 is carefully defined to include specific elements and configurations.
  • the test system 100 may recreate the test environment 120 using the same definitions and configurations to reproduce an identical test environment 120 .
  • the test environment 120 comprises a set of test IDs 150 , the computing device 112 , the files, and other software products that will interact with the SUT 110 . Identifying and understanding the limits of the test environment 120 assists the software tester to correctly isolate test failures and determine whether the test failure resulted from a bug in the SUT 110 , a defect in the test environment 120 , or a defect in a test case.
  • the test IDs 150 are userids on a single computing device 112 .
  • the test IDs 150 may be userids on a plurality of virtual machines running on a single computing device 112 or they may be separate physical computing devices 112 .
  • the test suite 130 comprises the test cases to be executed by the test system 100 during a particular test run.
  • a test case comprises a series of commands to execute in the test environment to test the SUT 110 .
  • a command may directly instruct the SUT 110 to perform an action or a command may instruct another application running in the test environment 120 to perform an action that will impact on the SUT 110 .
  • the test case may further comprise expected outputs and delay parameters. For example, a test case may issue a command to cause VTAM to display its active logical units (LUs).
  • the expected output might comprise a list of expected active LUs.
  • a delay parameter may indicate that VTAM should be allowed 0.5 seconds to display its active LUs. If VTAM displays the expected LUs within the delay parameter's timeframe, then the SUT 110 passes the test; otherwise it fails.
  • a test case may comprise hundreds or thousands of commands and expected outputs.
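  • As a sketch of this structure only (not the patent's implementation), the following Python fragment models a test case as a list of scripted commands, each carrying an expected output and a delay parameter. The TestCommand, TestCase, issue_command, and run_command names are hypothetical, and the canned command output is a placeholder.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TestCommand:
    """One scripted step: a command, its expected output, and a delay parameter."""
    command: str          # operator command issued to the SUT or another product
    expected_output: str  # output the test case expects to observe
    delay_seconds: float  # maximum time allowed for the expected output to appear

@dataclass
class TestCase:
    name: str
    commands: list = field(default_factory=list)

def issue_command(command: str) -> str:
    """Stand-in for issuing an operator command on a test ID and capturing its output."""
    return "DISPLAY RESULTS: ACTIVE LUS LU1 LU2"  # canned output for illustration only

def run_command(step: TestCommand) -> bool:
    """Issue one command and wait up to the delay parameter for the expected output."""
    deadline = time.monotonic() + step.delay_seconds
    while time.monotonic() < deadline:
        if step.expected_output in issue_command(step.command):
            return True   # expected output observed in time: the step passes
        time.sleep(0.05)
    return False          # delay parameter exceeded: the step, and the test case, fail

def run_test_case(case: TestCase) -> bool:
    """A test case passes only if every scripted step passes."""
    return all(run_command(step) for step in case.commands)
```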
  • the test suite 130 is often a subset of a larger test library of test cases.
  • the test suite 130 generally comprises test cases that require the same or similar test environments 120 . If all of the test cases in a single test suite 130 use the same test environment 120 , then the test system 100 need only configure the test environment 120 one time for execution of the entire test suite 130 . This eliminates redundant setup processing and accelerates test case execution.
  • Test cases in one test suite 130 may also be selected to test specific functions of the SUT 110 . For instance, a series of fifty test cases in a test suite 130 may test various aspects of a database backup function.
  • the control module 140 controls test case execution.
  • the control module 140 comprises logic to load a test suite 130 and execute individual test cases in the test environment 120 .
  • the control module 140 maintains an execution status for each test case by tracking whether each test case completes successfully, resulting in a test case pass, or completes unsuccessfully, resulting in a test case failure (also known as a failed test case).
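  • A minimal sketch of such record keeping follows, assuming three phases named quick, adjusted, and sterilized; the class and method names are illustrative rather than taken from the patent.

```python
from enum import Enum

class Status(Enum):
    NOT_RUN = "not run"
    PASSED = "passed"
    FAILED = "failed"

class ExecutionTracker:
    """Minimal record keeping: one status per test case per test phase."""
    PHASES = ("quick", "adjusted", "sterilized")

    def __init__(self, test_case_names):
        self.status = {name: dict.fromkeys(self.PHASES, Status.NOT_RUN)
                       for name in test_case_names}

    def record(self, name, phase, passed):
        self.status[name][phase] = Status.PASSED if passed else Status.FAILED

    def is_passed(self, name):
        # a test case is passed once it passes under any phase
        return any(s is Status.PASSED for s in self.status[name].values())

    def is_broken(self, name):
        # a test case is broken only after failing under all three phases
        return all(s is Status.FAILED for s in self.status[name].values())
```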
  • the control module 140 further comprises logic to initialize the test environment 120 and modify the test environment 120 when appropriate in response to test case failures.
  • the control module 140 may notify the operator of the test system 100 of important events prior to the completion of test suite execution, including a test case failure rate that exceeds a predefined level.
  • the control module 140 may comprise logical sub-modules that perform the functionality of the control module 140 .
  • the watch module 160 is an independent process or module that monitors various aspects of the test system 100 .
  • the control module 140 , the test environment 120 , or the SUT 110 may behave irregularly.
  • Irregular behavior or a testing irregularity comprises behavior by the SUT 110 or any module of the test system 100 including the control module 140 and the test environment 120 which delays or frustrates the execution of test cases.
  • Irregular behavior does not include a test case failure that does not prevent the continued operation of the test system.
  • a single test ID 150 may stop responding.
  • the control module 140 may hang or crash.
  • the control module 140 normally will monitor the test IDs 150 and reinitialize the test environment 120 in response to a test ID 150 hang. However, the control module 140 may not detect the crash of the control module 140 .
  • the watch module 160 monitors the control module 140 as well as other test system 100 modules and restarts the test system 100 upon detecting a testing irregularity such as a crash or a non-responsive module or test ID 150 .
  • the watch module 160 may also notify the operator of the test system 100 and/or log the testing irregularity event.
  • the watch module 160 ensures that the test system 100 does not hang indefinitely.
  • the watch module 160 may also track test case completion in coordination with the control module 140 .
  • Upon detecting an irregular condition, the watch module 160 restarts the test system 100 . Following the restart, the control module 140 continues execution of the test cases according to the execution status of each test case.
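  • The watch module's role can be sketched as a simple watchdog loop; the callables, polling interval, and timeout below are assumptions for illustration, not details taken from the patent (the described embodiment monitors a VM control ID rather than a Python callback).

```python
import time

def watch(control_is_responding, restart_test_system,
          poll_seconds=30, timeout_seconds=300):
    """Run as an independent process: restart the test system if the control
    module (or another monitored component) stops responding."""
    last_ok = time.monotonic()
    while True:
        if control_is_responding():
            last_ok = time.monotonic()            # heartbeat seen: reset the timer
        elif time.monotonic() - last_ok > timeout_seconds:
            restart_test_system()                 # testing irregularity: restart, then resume
            last_ok = time.monotonic()
        time.sleep(poll_seconds)
```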
  • FIG. 2 illustrates a schematic block diagram of one embodiment of a test environment 120 in communication with a SUT 110 .
  • the test environment 120 comprises test IDs 150 running on MVS (Multiple Virtual Storage) guest machines 250 .
  • the term “test ID” may refer to a userid or logon for a computing device 112 .
  • a test ID 150 refers to one userid on an MVS guest machine 250 from the group of MVS guest machines 250 a - n.
  • an MVS guest machine 250 runs as a virtual machine under a VM (Virtual Machine) operating system on an IBM mainframe.
  • a tester may configure a test environment 120 comprising a plurality of MVS guest machines 250 running on a single IBM mainframe. In fact, a tester may configure hundreds of MVS guest machines 250 on a single IBM mainframe and execute several test suites 130 simultaneously.
  • FIG. 2 illustrates a single test environment 120 comprising a plurality of test IDs 150 running on MVS guest machines 250 .
  • the test environment 120 and the test IDs 150 may access and/or load the SUT 110 to test the SUT 110 according to the test cases in the test suite 130 .
  • Carefully defining the precise configuration of the test environment 120 aids testers in determining the causes of test case failures.
  • Paramount in the design of test cases and the test environment 120 is the ability to reproduce the same inputs to the SUT 110 each time the same test case is executed. Any variation in the test environment 120 from one test case to another makes it more difficult to determine whether a test case failure resulted from a bug in the SUT 110 , a defect in the test case, or a variation in the test environment 120 .
  • FIG. 3 is a schematic block diagram illustrating one embodiment of an MVS guest machine 250 in accordance with the present invention.
  • One or more MVS guest machines 250 may comprise the test environment 120 .
  • the MVS guest machine 250 executes the software under test 110 .
  • a test developer designs and configures the MVS guest machine 250 such that a reproducible MVS guest machine 250 is created each time a particular test environment 120 is initialized.
  • One MVS guest machine 250 may vary from another MVS guest machine 250 in a test environment 120 , according to the planned design of the test environment 120 and the test cases. However, each time the test system 100 executes a particular test case, a particular MVS guest machine 250 should be configured in the same way.
  • the MVS guest machine 250 comprises test machine files 310 , an MVS operating system 320 , a VTAM software product 330 , an IMS software product 340 , and one or more test IDs 150 , as well as other application software particular to a specific test environment 120 or test case.
  • the components of the MVS guest machine 250 in FIG. 3 are simply given for illustrative purposes.
  • Other MVS guest machines 250 and indeed other test environments 120 without MVS guest machines 250 may be designed by those of skill in the art utilizing different modules and components to achieve the purposes of the test system 100 .
  • the test machine files 310 provide initialization and configuration files for the software running in the MVS guest machine 250 .
  • the test machine files 310 may comprise configuration files for the MVS 320 operating system and also for the VTAM 330 communications product.
  • the execution of one test case modifies the test machine files 310 and thus changes the configuration of the MVS guest machine 250 and the test environment 120 .
  • Execution of a subsequent test case may be affected by such a modification to the test environment 120 .
  • Re-initialization of the test environment 120 and the MVS guest machines 250 overwrites the modified test machine files 310 and returns the test environment 120 and the MVS guest machines 250 to an initial or pristine state.
  • the test system 100 may execute a test case without re-initializing the test environment 120 . Such a decision may accelerate test case execution; however, such a decision may cause a test case failure due to an environmental defect.
  • the test system 100 tracks such failures and re-tests such test cases according to logic described below.
  • the MVS guest machine 250 uses the MVS operating system 320 .
  • the MVS operating system 320 runs as a process in a virtual machine under the VM operating system.
  • the test environment 120 may initialize the MVS operating system 320 for each MVS guest machine 250 as part of initializing of the test environment 120 .
  • the MVS operating system 320 provides to the MVS guest machine 250 the standard MVS functionality.
  • the MVS operating system 320 relies on the test machine files 310 as well as operator commands issued by the control module 140 for proper initialization. Operator commands may be scripted as part of a test case in order to ensure uniform initialization.
  • the VTAM software product 330 provides communications services to the MVS guest machine 250 .
  • VTAM 330 relies on the test machine files 310 as well as scripted initialization commands to ensure uniform initialization.
  • the IMS software product 340 relies on the test machine files 310 as well as initialization commands to ensure uniform initialization.
  • Other software applications or modules may also run on the MVS guest machine 250 , requiring use of the test machine files 310 and also requiring initialization commands.
  • the initialization commands may be issued by an operator through the control module 140 . However, preferably, the initialization commands are scripted in an automated form to ensure uniform initialization of the test environment 120 .
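  • A hypothetical sketch of such a scripted initialization sequence is shown below; the command strings, their ordering, and the issue callback are placeholders and do not reflect actual MVS, VTAM, or IMS command syntax.

```python
def reinitialize_guest(guest, issue):
    """Bring one MVS guest machine back to a pristine state.

    `guest` names the guest machine and `issue(guest, command)` sends an operator
    command to it; both are stand-ins supplied by the caller.
    """
    issue(guest, "LOGOFF TEST IDS")              # shut down the test IDs
    issue(guest, "SHUTDOWN MVS")                 # stop the guest operating system
    issue(guest, "RESTORE TEST MACHINE FILES")   # overwrite any modified configuration files
    issue(guest, "IPL MVS")                      # re-initialize the MVS operating system
    issue(guest, "START VTAM")                   # bring up communications
    issue(guest, "START IMS")                    # bring up the product environment
    issue(guest, "LOGON TEST IDS")               # log the test IDs back on

def reinitialize_environment(guests, issue):
    """Reinitialize every MVS guest machine in the test environment."""
    for guest in guests:
        reinitialize_guest(guest, issue)
```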
  • Although MVS guest machine 250 a (see FIG. 2 ) may differ from MVS guest machine 250 b, for a given test run MVS guest machine 250 a is preferably configured identically for each execution of the same test case in order to properly isolate defects and their causes.
  • FIG. 4 is a schematic block diagram illustrating one embodiment of a test system 100 comprising a control module 140 , a test environment 120 , a test suite 130 , and a watch module 160 .
  • the control module 140 communicates with three test modules: a quick test module 410 , an adjusted test module 420 , and a sterilized test module 430 .
  • the test modules 410 , 420 , 430 may be modules separate from the control module 140 or the test modules 410 , 420 , 430 may be sub-modules contained within the control module 140 .
  • the logic of the test modules 410 , 420 , 430 may be comprised by other modules of the test system 100 or the test modules 410 , 420 , 430 may exist as separate modules.
  • control module 140 selectively executes test cases using the logic of the test modules 410 , 420 , 430 .
  • the control module 140 may pass control of test case execution to individual test modules 410 , 420 , 430 which then control the execution of the sequential steps of each test case and maintain complete control of the test environment 120 .
  • the control module may completely control execution of each test case and may completely control the test environment, calling the test modules 410 , 420 , 430 simply as subroutines or procedures to tailor the successive execution of certain test cases.
  • the execution of a test case may be carried out by the control module 140 , by the individual test modules 410 , 420 , 430 , or by the test environment 120 .
  • the control module 140 reads script commands from a test case and sequentially executes those commands by issuing a command on a test ID 150 running on an MVS virtual machine 250 in the test environment 120 .
  • the first test instruction in a test case may instruct the test system 100 to execute an operator command on a specific MVS guest machine 250 to initialize the IMS product.
  • the control module 140 may enter the operator command on the test ID 150 on an MVS guest machine 250 .
  • the next test instruction may require the initialization of a second IMS product and so forth.
  • test modules 410 , 420 , 430 may read the test instructions and execute the test instructions on the test IDs 150 .
  • Typically, only one test case is run in one test environment 120 at a time.
  • a single control module 140 may control execution of multiple test cases in a plurality of test environments 120 , one test case per environment.
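  • The command-dispatch loop might look like the following sketch, where each scripted step is a (test ID, command, expected output, delay) tuple and the issue and check callables stand in for the control module's interaction with the test environment; these names are illustrative only.

```python
def execute_test_case(steps, issue, check):
    """Sequentially execute one test case's scripted instructions on its test IDs.

    Each step is (test_id, command, expected_output, delay_seconds). Only one test
    case runs in a given test environment at a time.
    """
    for test_id, command, expected_output, delay_seconds in steps:
        issue(test_id, command)                          # enter the operator command on the test ID
        if not check(test_id, expected_output, delay_seconds):
            return False                                 # step failed: the test case fails
    return True                                          # every scripted step produced its expected output
```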
  • Each of the test modules 410 , 420 , 430 may comprise distinct logic to handle test environment 120 initialization and logic to modify test case execution within certain parameters.
  • the test modules 410 , 420 , 430 work in coordination with the control module 140 to ensure that each test case executes under very specific test environment 120 conditions.
  • control module 140 is a program named LCTRUN which executes under VM.
  • An operator logs onto a control ID representing the control module 140 and starts the LCTRUN program.
  • the LCTRUN program creates a separate watch module 160 executing a watch program.
  • the LCTRUN program then begins execution of a test case from a test suite 130 .
  • the test case contains a script of instructions which the LCTRUN program executes. Each instruction may comprise individual operator commands to be executed on specific test IDs 150 running on specific MVS guest machines 250 . As the LCTRUN program executes, it monitors test case execution and records failures and successes for each test case.
  • an operator may execute an LCTSTART command which communicates with a plurality of control IDs.
  • Executing the LCTSTART program causes each of the plurality of control IDs to execute an LCTRUN program.
  • the LCTSTART program may cause dozens of control modules 140 to execute dozens of test suites 130 simultaneously in dozens of test environments 120 .
  • the LCTSTART program may control the execution of LCTRUN on separate VM machines running on separate computing devices 112 or mainframes.
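  • A rough analogue of LCTSTART fanning work out to many control IDs is sketched below; Python threads stand in for the separate VM control IDs of the described embodiment, and run_lctrun is a hypothetical callable representing logon to a control ID and start of its control program.

```python
from concurrent.futures import ThreadPoolExecutor

def lctstart(control_ids, run_lctrun):
    """Start one control module per control ID, each driving its own test suite
    in its own test environment."""
    with ThreadPoolExecutor(max_workers=max(1, len(control_ids))) as pool:
        # each worker plays the part of one control ID running an LCTRUN-style program
        return list(pool.map(run_lctrun, control_ids))
```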
  • the control module 140 starts execution of a set of test cases by accessing a test suite 130 .
  • the test suite 130 typically is a set of test cases which require similar or identical test environments 120 .
  • the test suite 130 may be a subset of test cases from a larger test case library.
  • the test suite 130 comprises a set of initial test cases 402 that the control module 140 executes.
  • control module 140 tracks the completion of test case execution.
  • a test case completes test case execution only after the control module 140 marks the test case as passed or broken.
  • the control module 140 marks a test case as passed if the test case successfully completes execution under the quick test module 410 , the adjusted test module 420 , or the sterilized test module 430 .
  • the control module 140 marks a test case as broken only after the test case has failed execution under all three test modules 410 , 420 , 430 .
  • the control module 140 successively executes test cases using the quick test module 410 , the adjusted test module 420 , and the sterilized test module 430 in a waterfall approach.
  • the adjusted test module 420 tests only those test cases that fail the quick test module 410 .
  • the sterilized test module 430 tests only those test cases that fail the adjusted test module 420 .
  • the control module 140 marks those test cases that pass the quick test module 410 as passed and does not continue executing a passed test case.
  • the control module also marks as passed those test cases that fail the quick test module 410 and then pass the adjusted test module 420 .
  • the control module 140 marks as passed those test cases that fail the quick test module 410 and the adjusted test module 420 and then pass the sterilized test module 430 .
  • the control module marks the test cases that fail all of the test modules 410 , 420 , 430 as broken.
  • the system 100 may experience a testing irregularity at any time during test execution. If the watch module restarts the system 100 due to a testing irregularity or if the system 100 stops test case execution for any reason, the system 100 may continue test case execution upon a subsequent initialization. Execution of an uncompleted test suite 130 continues after initialization of the test system 100 according to the execution status of each test case; however, the control module sets the status of the test case that was executing at the time of the testing irregularity to failed for the particular test module 410 , 420 , 430 under which the test case was executing.
  • test case execution for one test suite 130 continues until the control module 140 sets the execution status for each test case in the test suite 130 as passed or broken.
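  • The waterfall described above reduces to a short control loop. The sketch below assumes each phase is a callable that executes a list of test cases and returns the subset that failed; it illustrates the flow only and is not the LCTRUN program itself.

```python
def run_suite(test_suite, quick, adjusted, sterilized):
    """Waterfall execution: each phase retests only the failures of the previous phase."""
    questionable = quick(list(test_suite))   # failures of the quick test
    suspect = adjusted(questionable)         # failures of the adjusted test
    broken = sterilized(suspect)             # failures of the sterilized test
    passed = [case for case in test_suite if case not in broken]
    return passed, broken                    # broken cases failed all three phases
```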
  • the control module 140 creates a set of initial test cases 402 comprising the test cases from the test suite 130 .
  • the quick test module 410 may execute the initial test cases 402 in a relatively expedited manner.
  • the quick test module 410 initializes the test environment 120 and starts execution of the initial test cases 402 , one test case at a time.
  • the quick test module 410 tracks test case passes and test case failures, and may reinitialize the test environment 120 after each test case failure. However, the quick test module 410 preferably does not reinitialize the test environment 120 after successful test cases.
  • the quick test module 410 favors speed of test case execution over a higher pass rate and avoids reinitializing the test environment 120 except following test case failures.
  • a test case may cause the SUT 110 to hang.
  • the quick test module 410 may hang, the control module 140 may hang, VTAM 330 in one MVS guest machine 250 may stop responding, or the test system 100 may otherwise exhibit a testing irregularity.
  • the control module 140 monitors the test IDs 150 and various modules in the test system 100 for signs of test irregularities.
  • the watch module 160 monitors the control module 140 and additionally may monitor individual test IDs 150 .
  • the control module 140 or the watch module 160 may restart the test system 100 to recover from a testing irregularity. Testing of the test suite 130 automatically continues after a restart.
  • control module 140 may detect that an MVS guest machine 250 no longer responds to operator commands.
  • the control module 140 may mark the execution status of the currently executing test case as failing and restart the test environment 120 .
  • Restarting the test environment 120 may include shutting down the test IDs 150 , shutting down the MVS guest machines 250 , restoring test machine files 310 on each MVS guest machine 250 , initializing MVS 320 on each MVS guest machine 250 , bringing up VTAM 330 on each MVS guest machine, and logging onto each test ID 150 .
  • the watch module 160 may detect that the control module 140 no longer responds to operator display commands. The watch module 160 may then restart the control module 140 and allow the control module 140 to initialize the test environment 120 as described above. Following a restart of the control module 140 , the control module continues execution of the test cases according to the recorded execution status of each test case.
  • a test suite 130 may experience an unusually high failure rate.
  • a severe software bug in the SUT 110 , a test environment 120 defect, or a test case defect common to several test cases in a single test suite 130 may cause a high failure rate.
  • a tester may abort test case execution to determine the cause of the high failure rate. Aborting test case execution as early as possible may save days of wasted testing and conserve valuable testing resources.
  • the control module 140 may track test case failures during the execution of the test modules 410 , 420 , 430 and notify an operator if the failure rate exceeds a certain threshold. For example, the control module 140 may compare the failure rate for the first ten test cases executed by the quick test module 410 and notify an operator if the failure rate exceeds fifty percent. The control module 140 may continue monitoring failure rates throughout the testing process and notify the operator of predetermined failure rates or other events that may warrant operator intervention. Preferably, the control module 140 always reports the current pass/failure status of each test case. However, the operator may configure the control module 140 to notify the operator using an audible alert, a flashing console message, or other mechanism to highlight certain failure rates or conditions which may warrant immediate action.
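  • A sketch of the failure-rate check, using the fifty percent threshold and ten-case sample from the example above; the notify callback and the exact rule are assumptions offered only to illustrate the idea.

```python
def check_failure_rate(results, threshold=0.5, minimum_cases=10, notify=print):
    """Notify the operator when the running failure rate exceeds a threshold.

    `results` is the list of pass/fail booleans recorded so far (True = pass).
    """
    if len(results) < minimum_cases:
        return False                                     # too few cases to judge yet
    failure_rate = results.count(False) / len(results)
    if failure_rate > threshold:
        notify(f"Failure rate {failure_rate:.0%} exceeds {threshold:.0%}; "
               "a severe SUT or test environment defect may exist.")
        return True
    return False
```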
  • the control module 140 marks the passing test cases as passed and compiles the failing test cases into a set of questionable test cases 404 .
  • the control module 140 continues execution of the questionable test cases 404 using the adjusted test module 420 . Because the quick test module 410 does not test each test case in a pristine test environment 120 and because system load may have contributed to some of the test case failures, the test system 100 does not yet mark the failing test cases as broken.
  • the adjusted test module 420 receives the questionable test cases 404 for further testing.
  • the adjusted test module 420 reinitializes the test environment 120 .
  • the adjusted test module 420 determines whether system load during the execution of the initial test cases 402 in the quick test module 410 may have contributed to the failure of the questionable test cases 404 .
  • Some test cases are more sensitive to timing considerations and system load.
  • Other test cases may be more sensitive to network load.
  • the adjusted test module 420 may consider the percentage of test case failures from the quick test module 410 , system load, network load, and the sensitivities of the individual test cases to various timing situations, as well as other factors.
  • the adjusted test module 420 may adjust delay parameters used by the questionable test cases 404 .
  • Delay parameters are wait times prescribed by each test case. For instance, a test case may require an IMS software product 340 to respond to a database query in one second. If the test case failed waiting for a database response from the IMS software product 340 , the adjusted test module 420 may increase a delay parameter allowing the IMS software product 340 two seconds to respond to a database query.
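  • One way such an adjustment could be computed is sketched below. The scaling heuristic (growing the wait times with the measured system load and with the quick-test failure rate, capped at a maximum factor) is an assumption used only to illustrate load-dependent delay adjustment, not a rule stated in the patent.

```python
def adjust_delays(test_case_delays, quick_failure_rate, system_load, max_factor=4.0):
    """Scale each test case's delay parameters before the adjusted test.

    `test_case_delays` maps a test case name to its list of delay parameters in
    seconds; `quick_failure_rate` and `system_load` are fractions between 0 and 1.
    """
    factor = min(max_factor, 1.0 + system_load + quick_failure_rate)
    return {name: [delay * factor for delay in delays]
            for name, delays in test_case_delays.items()}
```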
  • the adjusted test module 420 executes the questionable test cases 404 . All other aspects of the testing related to execution by the quick test module 410 apply to the testing carried out by the adjusted test module 420 . In other words, the adjusted test module 420 reinitializes the test environment 120 only after a test case fails.
  • the test system 100 restarts itself if the test system 100 experiences a testing irregularity. Following a restart of the test system 100 during execution of the adjusted test module 420 , the test system 100 resumes execution with the adjusted test module 420 .
  • the test system 100 may mark the test case that was executing prior to the restart as failed and continue with the questionable test cases 404 that were not yet executed by the adjusted test module 420 .
  • the control module 140 compiles the failed test cases from the adjusted test module into a set of suspect test cases 406 . However, the suspect test cases 406 are not yet marked as broken because they did not all fail in a pristine test environment 120 .
  • the sterilized test module 430 receives the suspect test cases 406 for further testing.
  • the sterilized test module 430 initializes the test environment 120 and executes each of the suspect test cases 406 . Following each test case execution, regardless of success or failure, the sterilized test module 430 reinitializes the test environment 120 . If any of the modules of the test system 100 hang or crash, the test system 100 may restart itself. Following a restart, the previously executing test case is marked as failed and the sterilized test module 430 reinitializes the test environment and executes the suspect test cases 406 that have not yet been tested.
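  • The sterilized pass can be sketched as a loop that reinitializes the test environment around every execution; run_case and reinitialize_environment are hypothetical callables supplied by the caller.

```python
def sterilized_test(suspect_cases, run_case, reinitialize_environment):
    """Execute each suspect test case in a freshly initialized (pristine) environment.

    `run_case` executes one test case and returns True on a pass. The environment is
    reinitialized for every case regardless of outcome, trading speed for certainty
    that no environment defect remains.
    """
    broken = []
    for case in suspect_cases:
        reinitialize_environment()        # pristine environment for every case
        if not run_case(case):
            broken.append(case)           # failed even in a sterilized environment
    reinitialize_environment()            # leave the environment clean afterwards
    return broken
```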
  • the control module 140 compiles the set of failed test cases from the sterilized test module 430 as broken test cases 408 .
  • the test system 100 may generate a report detailing the passed and broken test cases.
  • the report may comprise a final execution status for each test case including the execution status of each test case for each test module 410 , 420 , 430 .
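  • Assuming the per-phase status tracking sketched earlier (tracker.status[name][phase] holding an enum value and tracker.is_passed reporting a pass under any phase), report generation might look like the following; the report format itself is illustrative.

```python
def generate_report(tracker):
    """Emit one line per test case with its final status and its status per phase."""
    lines = []
    for name, phases in sorted(tracker.status.items()):
        final = "PASSED" if tracker.is_passed(name) else "BROKEN"
        detail = ", ".join(f"{phase}: {status.value}" for phase, status in phases.items())
        lines.append(f"{name}: {final} ({detail})")
    return "\n".join(lines)
```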
  • the design of the test system 100 creates a high degree of confidence that the broken test cases 408 are broken due to software bugs or defects in the test cases rather than defects in the test environment 120 .
  • the test system 100 systematically executes each test case in an expedited fashion, in a delayed fashion as needed, and in a pristine test environment 120 .
  • FIG. 5 is a schematic block diagram summarizing the progression of test cases 500 through the test system 100 as controlled and monitored by the control module 140 and the test modules 410 , 420 , 430 (see FIG. 4 ).
  • the test system 100 selects a test suite 130 comprising a set of test cases that require a similar test environment 120 .
  • the quick test module 410 executes the test suite 130 .
  • the control module 140 groups test cases that pass the quick test module 410 into quick test passing test cases 510 while marking failing test cases as questionable test cases 404 .
  • the adjusted test module 420 executes the questionable test cases 404 .
  • the control module 140 groups test cases that pass the adjusted test module 420 into adjusted test passing test cases 520 while marking failing test cases as suspect test cases 406 .
  • the sterilized test module 430 executes the suspect test cases 406 .
  • the control module 140 groups test cases that pass the sterilized test module 430 into sterilized test passing test cases 530 while marking failing test cases as broken test cases 408 .
  • FIG. 6 is a schematic flow chart diagram illustrating one embodiment of a test case execution method 600 for executing a test suite 130 of test cases in accordance with the present invention.
  • Initializing 604 the test environment 120 brings the test environment 120 to an initial or pristine state.
  • the test modules 410 , 420 , 430 may also initialize the test environment 120 according to module specific logic described below.
  • the test environment 120 is initialized only after test case failures.
  • the quick test module 410 tracks test case failures and determines 608 if the test case failure rate exceeds a predetermined threshold. As an example, the operator may configure the threshold rate to be fifty percent. If the failure rate exceeds a predetermined threshold, the quick test module 410 notifies 610 the test system operator of the high failure rate. Notification alerts the operator that a severe defect in the SUT 110 or the test environment 120 may exist. Typically, a single test suite 130 may execute for several hours or several days. Timely notification of a potential severe defect may avert several days of wasted testing time and may accelerate the removal of the defect. The operator may abort the test case execution method 600 at any time.
  • the test system 100 compiles 612 a set of questionable test cases 404 from the test cases that failed during the execution 606 of the quick test module 410 . Based on the system load during the execution of the quick test module 410 and the current system load, the test system 100 determines 614 if delay parameters should be adjusted and updates 616 the delay parameters accordingly.
  • the adjusted test module 420 executes 618 the questionable test cases 404 . Following each test failure, the adjusted test module 420 reinitializes the test environment 120 . The adjusted test module 420 compiles 620 the failing test cases into a set of suspect test cases 406 .
  • the sterilized test module 430 executes 622 the suspect test cases 406 .
  • the sterilized test module 430 reinitializes the test environment 120 prior to each test case execution.
  • Failed test cases are compiled 624 into a set of broken test cases 408 .
  • test system 100 may generate 626 a report based on the test case passes and failures.
  • the test system 100 progressively sterilizes the test environment 120 throughout the testing process.
  • the design of the test system 100 balances the need to verify quickly that test cases run correctly against the need to rule out test environment 120 defects before marking a test case as broken. Once the test system 100 marks a test case as broken, testers and developers can, with a high degree of certainty, look for either a defect in the test case or a bug in the SUT 110 rather than blaming the failure on a test environment 120 defect.

Abstract

An apparatus, system, and method are disclosed for automatically testing a plurality of software test cases. The testing executes a quick test of the test cases which executes each test case in a test environment that is initialized just prior to the first test case and after subsequent test case failures. The testing further executes an adjusted test of the failing test cases in which delay parameters associated with the failing test cases are increased in accordance with a system load recorded during the quick test. Finally, the testing executes a sterilized test of the remaining failing test cases in a test environment that is initialized prior to each test case execution.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • This invention relates to software testing and more particularly relates to software testing using an automated software testing system.
  • 2. Description of the Related Art
  • With the advent of software development came the need for software testing. Software developers write programs which control computing devices as simple as an alarm clock and as complex as the space shuttle. Despite the best efforts of software writers, bugs creep into the code.
  • Software bugs must be found and fixed. In an atmosphere of job specialization and finger pointing, the job of finding bugs is often assigned to software testers. Software testers create special test systems to test software in an effort to identify bugs in software. Software engineers use many terms to identify the software being tested and the software test system. For purposes of this application, the software being tested is “the software” or “the software under test” and the test system comprising computing devices, test cases, test setup software, and the like is “the test system.”
  • In testing the software, testers write test cases that define a specific scenario through which the software must pass. The test case may define inputs to the software and outputs that the software must produce. The test case may include operating system configuration requirements as well as interactions with other devices and systems. For example, a tester may design a test case to test a new version of the IBM (International Business Machines) IMS (Information Management System) software product. In this example, IMS is the software under test. The tester may specify that the software under test will run on an IBM mainframe running the z/OS Version 8 operating system. The test case may test whether the software under test can successfully receive a database query from a web service client, correctly retrieve a response from an IMS database, and send the response to the web service client. The test case may define the web service client as an Apache Axis web service client running on a second mainframe, running a specific version of Linux.
  • After writing a test case, the tester follows the steps outlined by the test case to configure the test case environment, execute the test case steps, and determine whether the software under test properly responds as predicted by the test case. If the tester detects discrepancies between the predicted outcome and the actual outcome, then the tester flags the test case as failing. A failing test case may indicate that one of three problems exists: 1) the software under test has a bug, 2) the test case is defective, or 3) the test environment is defective. Testers and developers work together to find and fix software bugs and defective test cases. Solving these problems results in better software and more robust test cases.
  • However, problems caused by a defective test environment often are not true bugs or test case defects. A software engineer may spend countless hours isolating a test environment defect rather than tracking down and fixing an actual software bug. Environment defects may include failure to initialize all file systems before running a test case. To save time, a software tester may run two successive test cases without initializing all file systems to a predetermined initial state. The second test may fail because the first test case modified a critical file. Reinitializing the test environment prior to running each test case may eliminate similar environmental defects. However, reinitializing the test environment may slow down the testing process.
  • Another defective test environment problem relates to timing issues. Test cases often define specific outputs that the software must exhibit within specific time periods. For instance, the test case may expect IMS to respond to a web service client request within 0.2 seconds. A tester may flag the test case as failing if IMS responds in 0.3 seconds. However, IMS may respond more slowly than on previous occasions simply due to an increased system load on the mainframe. This type of test environment induced test case failure may warrant a longer wait time for the response depending on the system load during test case execution.
  • In many cases, a software tester automates a group of test cases using a test automation system. With a single command, a tester may start a test suite of fifty test cases. The automation system may run for several hours, using valuable computing resources to execute the entire test suite. At the conclusion of the test suite execution, the automation system reports the failed test cases. Software developers and testers must carefully track down the cause of each test case failure. Software engineers may waste valuable time examining test case failures caused by test environment defects rather than resolving software code defects.
  • To reduce the number of test case failures due to test environment defects, the software tester may program the test automation system to reinitialize the test environment after the execution of each test case. Additionally, the tester may program extremely long wait times for each test case to alleviate system load problems. However, these adjustments may double or triple the time required to execute the entire test suite. The software tester faces a dilemma: reduce test environment caused failures or reduce the time required to execute the test suite.
  • In addition, current test automation systems often generate a report with a disproportionate number of test case failures. In some instances, a single environment defect or a single software bug may cause a fifty percent test case failure rate. Knowing that a test case failure rate exceeds a certain threshold level after a limited number of test cases have been executed may cause a software tester to abort the execution of a test suite and conserve valuable computing resources. A software tester may determine the cause of the high failure rate or enlist software developers to assist in finding the cause after only a few test case failures rather than waiting several hours or days for the test suite to finish executing.
  • From the foregoing discussion, it should be apparent that a need exists for an apparatus, system, and method for automated test case execution that reduces the time required to execute a test suite of test cases while simultaneously eliminating test case failures caused by test environment defects. Additionally, a need exists for an apparatus, system, and method for automated test case execution that notifies testers of unusually high test case failure rates early in the execution of a test suite. Beneficially, such an apparatus, system, and method would reduce or eliminate test case failures caused by test environment defects, reduce the number of hours wasted tracking down test environment defects, and conserve test computing resources.
  • SUMMARY OF THE INVENTION
  • The present invention has been developed in response to the present state of the art, and in particular, in response to the problems and needs in the art that have not yet been fully solved by currently available software testing systems. Accordingly, the present invention has been developed to provide an apparatus, system, and method for automatically executing a plurality of test cases that overcome many or all of the above-discussed shortcomings in the art.
  • A method for automating the execution of a plurality of test cases is presented. In one embodiment, the method includes executing a quick test of a test suite of test cases. The test cases that fail the quick test are compiled into a set of questionable test cases. The method further includes executing an adjusted test of the questionable test cases. The test cases that fail the adjusted test are compiled into a set of suspect test cases. The method further includes executing a sterilized test of the suspect test cases. The test cases that fail the sterilized test are compiled into a set of broken test cases.
  • In another embodiment, executing the adjusted test case further comprises adjusting delay parameters associated with each test case. The adjustment of the delay parameters may depend on the system load at the time of the quick test and also may depend on the number of test cases that failed during execution of the quick test.
  • A signal bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform an operation to test a computer application is also presented. The operation of the program substantially comprises the same functions as described above with respect to the described method. The operation of the program further discloses the execution of the quick test, the adjusted test, and the sterilized test in conjunction with a test environment comprising Multiple Virtual Storage (MVS) guest machines running on a Virtual Machine (VM) operating system. The embodied program typically runs on an International Business Machines (IBM) mainframe.
  • A system of the present invention is also presented to progressively test a plurality of test cases in a progressively sterilized environment. The system may be embodied in software running on a single computing device or on a plurality of computing devices. The system in the disclosed embodiments substantially includes the modules and structures necessary to carry out the functions presented above with respect to the described method. In particular, the system, in one embodiment, includes a computing device, a test environment, a test suite, a control module, a quick test module, an adjusted test module, a sterilized test module, and a watch module configured to carry out the functions of the described method.
  • The test environment may comprise a plurality of userids running on the computing device. The test suite comprises a plurality of test cases. The quick test module is configured to execute the test suite using the test environment and compile a set of questionable test cases from the set of test cases failed by the quick test module. The adjusted test module is configured to execute the set of questionable test cases in the test environment and compile a set of suspect test cases from the set of test cases failed by the adjusted test module. The sterilized test module is configured to execute the set of suspect test cases and compile a set of broken test cases from the set of test cases failed by the sterilized test module. The watch module is configured to detect testing irregularities and reinitialize the test environment and the control module in response to detected irregularities. After a re-initialization, the control module is configured to continue execution of the test cases.
  • The system, in one embodiment, is configured to track the execution of each test case and maintain an execution status for each test case. The system is further configured, in one embodiment, to notify an operator during the execution of the test cases if the test case failure rate exceeds a predefined threshold.
  • In a further embodiment, the apparatus may be configured to reinitialize itself if one of its modules behaves irregularly and to continue testing the test cases that have not yet executed.
  • Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but does not necessarily, refer to the same embodiment.
  • Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the invention may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.
  • These features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
  • FIG. 1 is a schematic block diagram illustrating one embodiment of a test system in accordance with the present invention;
  • FIG. 2 is a schematic block diagram illustrating one embodiment of a test environment in accordance with the present invention;
  • FIG. 3 is a schematic block diagram illustrating one embodiment of a test ID in accordance with the present invention;
  • FIG. 4 is a schematic block diagram illustrating one embodiment of a test system in accordance with the present invention;
  • FIG. 5 is a schematic block diagram illustrating one embodiment of the progression of test case classifications in accordance with the present invention; and
  • FIG. 6 is a schematic flow chart diagram illustrating one embodiment of a test case execution method in accordance with the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
  • Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
  • Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
  • Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
  • Reference to a signal bearing medium may take any form capable of generating a signal, causing a signal to be generated, or causing execution of a program of machine-readable instructions on a digital processing apparatus. A signal bearing medium may be embodied by a transmission line, a compact disk, digital-video disk, a magnetic tape, a Bernoulli drive, a magnetic disk, a punch card, flash memory, integrated circuits, or other digital processing apparatus memory device.
  • Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
  • The schematic flow chart diagrams that follow are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
  • FIG. 1 illustrates a schematic block diagram of one embodiment of a test system 100 used for testing software. Software testers use the test system 100 to test newly developed software and for regression testing of released software. In a preferred embodiment, the test system 100 is configured to automatically test software with little or no human intervention. Software testing with the automated test system 100 improves software quality and customer satisfaction. Using the automated test system 100 reduces the time required to test a software product, reduces testing expenses, and shortens software development and delivery times.
  • The test system 100 comprises one or more computing devices 112, a test environment 120, a test suite 130, a control module 140, and a watch module 160. The test system 100 further comprises a piece of software to be tested or a software under test (SUT) 110. The computing device 112 may be a desktop computer, a specialized test computer, a mainframe, or other type of computing device. The various modules of the test system 100 may all execute on one computing device 112 or on multiple computing devices 112.
  • The SUT 110 is a piece of software to be tested. For example, IBM tests a new IMS version before the product is released to customers. While the new version of IMS is undergoing new release testing and regression testing, the new version of IMS is a SUT 110. The SUT 110 may be the complete new IMS version. Alternatively, the SUT 110 may be a module of IMS such as a transaction module or a database module. Defects found in the SUT 110 are termed software bugs or bugs. The overarching purpose of the test system 100 is to assist software engineers to find and eliminate bugs in the SUT 110.
  • The test environment 120 provides a controllable, reproducible simulation of a computing environment in which the SUT 110 may execute. The test environment 120 is controllable in that each element of the test environment 120 is under the control of the test system 100. The test environment 120 is reproducible in that each element of the test environment 120 is carefully defined to include specific elements and configurations. The test system 100 may recreate the test environment 120 using the same definitions and configurations to reproduce an identical test environment 120.
  • The test environment 120 comprises a set of test IDs 150, the computing device 112, the files, and other software products that will interact with the SUT 110. Identifying and understanding the limits of the test environment 120 assists the software tester to correctly isolate test failures and determine whether the test failure resulted from a bug in the SUT 110, a defect in the test environment 120, or a defect in a test case. In one embodiment, the test IDs 150 are userids on a single computing device 112. Alternatively, the test IDs 150 may be userids on a plurality of virtual machines running on a single computing device 112 or they may be separate physical computing devices 112.
  • The test suite 130 comprises the test cases to be executed by the test system 100 during a particular test run. A test case comprises a series of commands to execute in the test environment 120 to test the SUT 110. A command may directly instruct the SUT 110 to perform an action, or a command may instruct another application running in the test environment 120 to perform an action that will impact the SUT 110. The test case may further comprise expected outputs and delay parameters. For example, a test case may issue a command to cause VTAM to display its active logical units (LUs). The expected output might comprise a list of expected active LUs. A delay parameter may indicate that VTAM should be allowed 0.5 seconds to display its active LUs. If VTAM displays the expected LUs within the delay parameter's timeframe, then the SUT 110 passes the test; otherwise, it fails. A test case may comprise hundreds or thousands of commands and expected outputs.
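  • The structure of such a test case can be pictured with a small, purely illustrative data model. The field names below (command, expected_output, delay_seconds) and the placeholder display command are assumptions made for this sketch, not the format used by the test system 100.

```python
# Hypothetical data model for a test case: scripted commands, expected
# outputs, and delay parameters. Field names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    command: str            # operator command issued on a test ID
    expected_output: str    # text that must appear in the response
    delay_seconds: float    # how long the product is allowed to respond

@dataclass
class TestCase:
    name: str
    steps: List[TestStep] = field(default_factory=list)

# Example mirroring the VTAM scenario above; the command string is a
# placeholder, not real VTAM syntax.
vtam_lu_check = TestCase(
    name="vtam_active_lus",
    steps=[TestStep(command="<display active LUs>",
                    expected_output="LU001",
                    delay_seconds=0.5)],
)
print(vtam_lu_check)
```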
  • The test suite 130 is often a subset of a larger test library of test cases. The test suite 130 generally comprises test cases that require the same or similar test environments 120. If all of the test cases in a single test suite 130 use the same test environment 120, then the test system 100 need only configure the test environment 120 one time for execution of the entire test suite 130. This eliminates redundant setup processing and accelerates test case execution. Test cases in one test suite 130 may also be selected to test specific functions of the SUT 110. For instance, a series of fifty test cases in a test suite 130 may test various aspects of a database backup function.
  • The control module 140 controls test case execution. The control module 140 comprises logic to load a test suite 130 and execute individual test cases in the test environment 120. The control module 140 maintains an execution status for each test case by tracking whether each test case completes successfully resulting in a test case pass or completes unsuccessfully resulting in a test case failure also known as a failed test case. The control module 140 further comprises logic to initialize the test environment 120 and modify the test environment 120 when appropriate in response to test case failures. The control module 140 may notify the operator of the test system 100 of important events prior to the completion of test suite execution, including a test case failure rate that exceeds a predefined level. The control module 140 may comprise logical sub-modules that perform the functionality of the control module 140.
  • In one embodiment, the watch module 160 is an independent process or module that monitors various aspects of the test system 100. Under certain circumstances, the control module 140, the test environment 120, or the SUT 110 may behave irregularly. Irregular behavior, or a testing irregularity, comprises behavior by the SUT 110 or by any module of the test system 100, including the control module 140 and the test environment 120, that delays or frustrates the execution of test cases. Irregular behavior does not include a test case failure that does not prevent the continued operation of the test system 100. As an example of irregular behavior, a single test ID 150 may stop responding. Alternatively, the control module 140 may hang or crash.
  • The control module 140 normally will monitor the test IDs 150 and reinitialize the test environment 120 in response to a test ID 150 hang. However, the control module 140 may not detect the crash of the control module 140. The watch module 160 monitors the control module 140 as well as other test system 100 modules and restarts the test system 100 upon detecting a testing irregularity such as a crash or a non-responsive module or test ID 150. The watch module 160 may also notify the operator of the test system 100 and/or log the testing irregularity event. The watch module 160 ensures that the test system 100 does not hang indefinitely. The watch module 160 may also track test case completion in coordination with the control module 140. Upon detecting an irregular condition, the watch module 160 restarts the test system 100. Following the restart, the control module 140 continues execution of the test cases according to the execution status of each test case.
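  • The relationship between the control module 140 and the watch module 160 resembles a conventional watchdog pattern. The Python sketch below is an analogy only; the heartbeat queue, the five-second poll, and the thirty-second timeout are assumptions for the sketch, not the VM-based mechanism of the test system 100.

```python
# Illustrative watchdog analogy: a separate process restarts the test driver
# when heartbeats stop arriving. The queue-based heartbeat, the 5-second poll,
# and the 30-second timeout are assumptions for this sketch.
import multiprocessing as mp
import queue
import subprocess
import time

HEARTBEAT_TIMEOUT = 30.0            # seconds of silence before declaring a hang

def watch(heartbeats: "mp.Queue", restart_cmd: list):
    """Restart the test driver whenever heartbeats stop for too long."""
    last_beat = time.monotonic()
    while True:
        try:
            heartbeats.get(timeout=5.0)      # the driver sends one beat per step
            last_beat = time.monotonic()
        except queue.Empty:
            if time.monotonic() - last_beat > HEARTBEAT_TIMEOUT:
                subprocess.run(restart_cmd)  # e.g. a script that relaunches the driver
                last_beat = time.monotonic() # give the restarted driver time to settle
```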
  • FIG. 2 illustrates a schematic block diagram of one embodiment of a test environment 120 in communication with a SUT 110. The test environment 120 comprises test IDs 150 running on MVS (Multiple Virtual Storage) guest machines 250. The term “test ID” may refer to a userid or logon for a computing device 112. In FIG. 2, a test ID 150 refers to one userid on an MVS guest machine 250 from the group of MVS guest machines 250 a-n. Typically, an MVS guest machine 250 runs as a virtual machine under a VM (Virtual Machine) operating system on an IBM mainframe. Using VM, a tester may configure a test environment 120 comprising a plurality of MVS guest machines 250 running on a single IBM mainframe. In fact, a tester may configure hundreds of MVS guest machines 250 on a single IBM mainframe and execute several test suites 130 simultaneously.
  • FIG. 2 illustrates a single test environment 120 comprising a plurality of test IDs 150 running on MVS guest machines 250. The test environment 120 and the test IDs 150 may access and/or load the SUT 110 to test the SUT 110 according to the test cases in the test suite 130. Carefully defining the precise configuration of the test environment 120 aids testers in determining the causes of test case failures. Paramount in the design of test cases and the test environment 120 is the ability to reproduce the same inputs to the SUT 110 each time the same test case is executed. Any variation in the test environment 120 from one test case to another makes it more difficult to determine whether a test case failure resulted from a bug in the SUT 110, a defect in the test case, or a variation in the test environment 120.
  • FIG. 3 is a schematic block diagram illustrating one embodiment of an MVS guest machine 250 in accordance with the present invention. One or more MVS guest machines 250 may comprise the test environment 120. Typically, the MVS guest machine 250 executes the software under test 110. Preferably, a test developer designs and configures the MVS guest machine 250 such that a reproducible MVS guest machine 250 is created each time a particular test environment 120 is initialized. One MVS guest machine 250 may vary from another MVS guest machine 250 in a test environment 120, according to the planned design of the test environment 120 and the test cases. However, each time the test system 100 executes a particular test case, a particular MVS guest machine 250 should be configured in the same way.
  • Typically, the MVS guest machine 250 comprises test machine files 310, an MVS operating system 320, a VTAM software product 330, an IMS software product 340, and one or more test IDs 150, as well as other application software particular to a specific test environment 120 or test case. The components of the MVS guest machine 250 in FIG. 3 are given simply for illustrative purposes. Other MVS guest machines 250, and indeed other test environments 120 without MVS guest machines 250, may be designed by those of skill in the art utilizing different modules and components to achieve the purposes of the test system 100.
  • The test machine files 310 provide initialization and configuration files for the software running in the MVS guest machine 250. For example, the test machine files 310 may comprise configuration files for the MVS 320 operating system and also for the VTAM 330 communications product. Occasionally, the execution of one test case modifies the test machine files 310 and thus changes the configuration of the MVS guest machine 250 and the test environment 120. Execution of a subsequent test case may be affected by such a modification to the test environment 120. Re-initialization of the test environment 120 and the MVS guest machines 250 overwrites the modified test machine files 310 and returns the test environment 120 and the MVS guest machines 250 to an initial or pristine state. In some situations, the test system 100 may execute a test case without re-initializing the test environment 120. Such a decision may accelerate test case execution; however, such a decision may cause a test case failure due to an environmental defect. The test system 100 tracks such failures and re-tests such test cases according to logic described below.
  • The MVS guest machine 250 uses the MVS operating system 320. In one embodiment, the MVS operating system 320 runs as a process in a virtual machine under the VM operating system. The test environment 120 may initialize the MVS operating system 320 for each MVS guest machine 250 as part of initializing of the test environment 120. The MVS operating system 320 provides to the MVS guest machine 250 the standard MVS functionality. The MVS operating system 320 relies on the test machine files 310 as well as operator commands issued by the control module 140 for proper initialization. Operator commands may be scripted as part of a test case in order to ensure uniform initialization.
  • The VTAM software product 330 provides communications services to the MVS guest machine 250. As with MVS 320, VTAM 330 relies on the test machine files 310 as well as scripted initialization commands to ensure uniform initialization. Similarly, the IMS software product 340 relies on the test machine files 310 as well as initialization commands to ensure uniform initialization. Other software applications or modules may also run on the MVS guest machine 250, requiring use of the test machine files 310 and also requiring initialization commands. The initialization commands may be issued by an operator through the control module 140. However, preferably, the initialization commands are scripted in an automated form to ensure uniform initialization of the test environment 120. Although MVS guest machine 250 a (see FIG. 2) may differ from MVS guest machine 250 b, for a given test run, MVS guest machine 250 a is preferably configured identically for each execution of the same test case in order to properly isolate defects and their causes.
  • FIG. 4 is a schematic block diagram illustrating one embodiment of a test system 100 comprising a control module 140, a test environment 120, a test suite 130, and a watch module 160. The control module 140 communicates with three test modules: a quick test module 410, an adjusted test module 420, and a sterilized test module 430. The test modules 410, 420, 430 may be modules separate from the control module 140 or the test modules 410, 420, 430 may be sub-modules contained within the control module 140. Those of skill in the art will understand that the logic of the test modules 410, 420, 430 may be comprised by other modules of the test system 100 or the test modules 410, 420, 430 may exist as separate modules.
  • In one embodiment, the control module 140 selectively executes test cases using the logic of the test modules 410, 420, 430. The control module 140 may pass control of test case execution to individual test modules 410, 420, 430, which then control the execution of the sequential steps of each test case and maintain complete control of the test environment 120. Alternatively, the control module 140 may completely control execution of each test case and may completely control the test environment 120, calling the test modules 410, 420, 430 simply as subroutines or procedures to tailor the successive execution of certain test cases.
  • The execution of a test case may be carried out by the control module 140, by the individual test modules 410, 420, 430, or by the test environment 120. In one embodiment, the control module 140 reads script commands from a test case and sequentially executes those commands by issuing a command on a test ID 150 running on an MVS guest machine 250 in the test environment 120. For example, the first test instruction in a test case may instruct the test system 100 to execute an operator command on a specific MVS guest machine 250 to initialize the IMS product. The control module 140 may enter the operator command on the test ID 150 on an MVS guest machine 250. The next test instruction may require the initialization of a second IMS product, and so forth. Alternatively, the individual test modules 410, 420, 430 may read the test instructions and execute the test instructions on the test IDs 150. Typically, only one test case is run in one test environment 120 at a time. However, a single control module 140 may control execution of multiple test cases in a plurality of test environments 120, one test case per environment.
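  • As a rough illustration of such a control loop, the sketch below walks a scripted test case and issues each command against a named test ID; send_command() is a hypothetical stand-in for whatever transport actually drives a test ID 150.

```python
# Rough sketch of a control loop that executes a scripted test case, one
# command per step, against named test IDs. send_command() is a placeholder.
import time

def send_command(test_id: str, command: str) -> str:
    """Stand-in transport; a real harness would drive the test ID here."""
    return f"{test_id} accepted {command}"

def run_test_case(steps) -> bool:
    """steps: iterable of (test_id, command, expected_output, delay_seconds)."""
    for test_id, command, expected, delay in steps:
        output = send_command(test_id, command)
        time.sleep(delay)                 # allow the product time to respond
        if expected not in output:
            return False                  # first unexpected response fails the case
    return True

# Example: two scripted initialization steps on two test IDs.
script = [
    ("TESTID01", "initialize first IMS",  "accepted", 0.1),
    ("TESTID02", "initialize second IMS", "accepted", 0.1),
]
print("pass" if run_test_case(script) else "fail")
```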
  • Each of the test modules 410, 420, 430 may comprise distinct logic to handle test environment 120 initialization and logic to modify test case execution within certain parameters. The test modules 410, 420, 430 work in coordination with the control module 140 to ensure that each test case executes under very specific test environment 120 conditions.
  • In one embodiment, the control module 140 is a program named LCTRUN which executes under VM. An operator logs onto a control ID representing the control module 140 and starts the LCTRUN program. The LCTRUN program creates a separate watch module 160 executing a watch program. The LCTRUN program then begins execution of a test case from a test suite 130. The test case contains a script of instructions which the LCTRUN program executes. Each instruction may comprise individual operator commands to be executed on specific test IDs 150 running on specific MVS guest machines 250. As the LCTRUN program executes, it monitors test case execution and records failures and successes for each test case.
  • In an alternative embodiment, an operator may execute an LCTSTART command which communicates with a plurality of control IDs. Executing the LCTSTART program causes each of the plurality of control IDs to execute an LCTRUN program. In this manner, the LCTSTART program may cause dozens of control modules 140 to execute dozens of test suites 130 simultaneously in dozens of test environments 120. In one embodiment, the LCTSTART program may control the execution of LCTRUN on separate VM machines running on separate computing devices 112 or mainframes.
  • Typically, the control module 140 starts execution of a set of test cases by accessing a test suite 130. The test suite 130 typically is a set of test cases which require similar or identical test environments 120. The test suite 130 may be a subset of test cases from a larger test case library. The test suite 130 comprises a set of initial test cases 402 that the control module 140 executes.
  • In one embodiment, the control module 140 tracks the completion of test case execution. A test case completes test case execution only after the control module 140 marks the test case as passed or broken. The control module 140 marks a test case as passed if the test case successfully completes execution under the quick test module 410, the adjusted test module 420, or the sterilized test module 430. The control module 140 marks a test case as broken only after the test case has failed execution under all three test modules 410, 420, 430.
  • The control module 140 successively executes test cases using the quick test module 410, the adjusted test module 420, and the sterilized test module 430 in a waterfall approach. The adjusted test module 420 tests only those test cases that fail the quick test module 410. Similarly, the sterilized test module 430 tests only those test cases that fail the adjusted test module 420. The control module 140 marks those test cases that pass the quick test module 410 as passed and does not continue executing a passed test case. The control module 140 also marks as passed those test cases that fail the quick test module 410 and then pass the adjusted test module 420. Similarly, the control module 140 marks as passed those test cases that fail the quick test module 410 and the adjusted test module 420 and then pass the sterilized test module 430. The control module 140 marks the test cases that fail all of the test modules 410, 420, 430 as broken.
  • As mentioned above, the system 100 may experience a testing irregularity at any time during test execution. If the watch module 160 restarts the system 100 due to a testing irregularity, or if the system 100 stops test case execution for any reason, the system 100 may continue test case execution upon a subsequent initialization. Execution of an uncompleted test suite 130 continues after initialization of the test system 100 according to the execution status of each test case; however, the control module 140 sets the status of the test case that was executing at the time of the testing irregularity to failed for the particular test module 410, 420, 430 under which the test case was executing.
  • In one embodiment, upon restarting the test system 100, the control module 140 continues execution of the test suite 130 until each test case passes one of the test modules 410, 420, 430 or fails all three test modules 410, 420, 430. Thus, test case execution for one test suite 130 continues until the control module 140 sets the execution status of each test case in the test suite 130 to passed or broken.
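  • One way to picture resuming from a recorded execution status is sketched below. The JSON status file and its field names are assumptions made for this example; pessimistically marking the in-flight case as failed before it runs mirrors the behavior described above, where the case executing at the time of an irregularity counts as a failure.

```python
# Sketch of resuming from a persisted execution status after a restart.
# The JSON file and its layout are assumptions made for this example.
import json
import os

STATUS_FILE = "execution_status.json"     # hypothetical location

def load_status() -> dict:
    if os.path.exists(STATUS_FILE):
        with open(STATUS_FILE) as fh:
            return json.load(fh)
    return {}

def save_status(status: dict) -> None:
    with open(STATUS_FILE, "w") as fh:
        json.dump(status, fh)

def resume(test_suite, run_case, phase):
    """Run only the cases not already marked passed or failed for this phase."""
    status = load_status()
    for case in test_suite:
        if status.get(case, {}).get(phase) in ("passed", "failed"):
            continue                                    # already completed
        status.setdefault(case, {})[phase] = "failed"   # pessimistic mark first, so a
        save_status(status)                             # crash counts as a failure
        status[case][phase] = "passed" if run_case(case) else "failed"
        save_status(status)
    return status
```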
  • At the start of test case execution, the control module 140 creates a set of initial test cases 402 comprising the test cases from the test suite 130. The quick test module 410 may execute the initial test cases 402 in a relatively expedited manner. The quick test module 410 initializes the test environment 120 and starts execution of the initial test cases 402, one test case at a time. The quick test module 410 tracks test case passes and test case failures, and may reinitialize the test environment 120 after each test case failure. However, the quick test module 410 preferably does not reinitialize the test environment 120 after successful test cases. Although forgoing re-initialization of the test environment 120 after each test case execution may result in a higher number of failures, the quick test module 410 favors speed of test case execution over a higher pass rate and avoids reinitializing the test environment 120 except following test case failures.
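  • The quick-test policy of reinitializing only after failures can be summarized in a few lines; reinitialize_environment() below is a stub standing in for the real setup procedure, and the example harness is hypothetical.

```python
# Sketch of the quick-test policy: reinitialize only after a failure.
# reinitialize_environment() is a stub for the real setup procedure.
def quick_test(cases, run_case, reinitialize_environment):
    reinitialize_environment()             # start from a pristine state
    questionable = []
    for case in cases:
        if run_case(case):
            continue                       # pass: keep going without a reset
        questionable.append(case)          # fail: record it, then sterilize
        reinitialize_environment()
    return questionable

# Example: case "b" always fails and ends up questionable.
print(quick_test(["a", "b", "c"],
                 run_case=lambda case: case != "b",
                 reinitialize_environment=lambda: None))   # -> ['b']
```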
  • On occasion, a test case may cause the SUT 110 to hang. Alternatively, the quick test module 410 may hang, the control module 140 may hang, VTAM 330 in one MVS guest machine 250 may stop responding, or the test system 100 may otherwise exhibit a testing irregularity. The control module 140 monitors the test IDs 150 and various modules in the test system 100 for signs of test irregularities. The watch module 160 monitors the control module 140 and additionally may monitor individual test IDs 150. The control module 140 or the watch module 160 may restart the test system 100 to recover from a testing irregularity. Testing of the test suite 130 automatically continues after a restart.
  • As an example, in one embodiment, the control module 140 may detect that an MVS guest machine 250 no longer responds to operator commands. The control module 140 may mark the execution status of the currently executing test case as failed and restart the test environment 120.
  • Restarting the test environment 120 may include shutting down the test IDs 150, shutting down the MVS guest machines 250, restoring test machine files 310 on each MVS guest machine 250, initializing MVS 320 on each MVS guest machine 250, bringing up VTAM 330 on each MVS guest machine, and logging onto each test ID 150.
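  • Purely for illustration, the restart sequence above could be ordered as in the following sketch; every helper in it is a stub that prints a message instead of issuing real operator commands, and the machine and test ID names are invented.

```python
# Purely illustrative ordering of the restart steps listed above; every helper
# is a stub that prints instead of issuing real operator commands.
def _op(action: str, target: str) -> None:
    print(f"{action}: {target}")

def reinitialize_environment(guest_machines, test_ids):
    for tid in test_ids:
        _op("log off test ID", tid)               # shut down the test IDs
    for mvs in guest_machines:
        _op("shut down MVS guest machine", mvs)
        _op("restore test machine files on", mvs)
        _op("initialize MVS on", mvs)
        _op("bring up VTAM on", mvs)
    for tid in test_ids:
        _op("log onto test ID", tid)              # log back onto each test ID

reinitialize_environment(["GUEST250A", "GUEST250B"], ["TESTID01", "TESTID02"])
```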
  • In another alternative scenario of the same embodiment, the watch module 160 may detect that the control module 140 no longer responds to operator display commands. The watch module 160 may then restart the control module 140 and allow the control module 140 to initialize the test environment 120 as described above. Following a restart of the control module 140, the control module continues execution of the test cases according to the recorded execution status of each test case.
  • From time to time, a test suite 130 may experience an unusually high failure rate. A severe software bug in the SUT 110, a test environment 120 defect, or a test case defect common to several test cases in a single test suite 130 may cause a high failure rate. Upon recognizing the occurrence of a high failure rate, a tester may abort test case execution to determine the cause of the high failure rate. Aborting test case execution as early as possible may save days of wasted testing and conserve valuable testing resources.
  • In one embodiment of the test system 100, the control module 140 may track test case failures during the execution of the test modules 410, 420, 430 and notify an operator if the failure rate exceeds a certain threshold. For example, the control module 140 may compute the failure rate for the first ten test cases executed by the quick test module 410 and notify an operator if that rate exceeds fifty percent. The control module 140 may continue monitoring failure rates throughout the testing process and notify the operator of predetermined failure rates or other events that may warrant operator intervention. Preferably, the control module 140 always reports the current pass/failure status of each test case. However, the operator may configure the control module 140 to notify the operator using an audible alert, a flashing console message, or another mechanism that highlights failure rates or conditions warranting immediate action.
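  • A minimal sketch of this early failure-rate check follows; the ten-case window and fifty percent threshold track the example above, while the notify() hook is a hypothetical stand-in for a console message or audible alert.

```python
# Sketch of an early failure-rate check over the cases executed so far.
def monitor_failure_rate(results, notify, minimum_cases=10, threshold=0.5):
    """results: list of booleans, True for a pass and False for a failure."""
    executed = len(results)
    if executed < minimum_cases:
        return                                   # too early to judge the rate
    failure_rate = results.count(False) / executed
    if failure_rate > threshold:
        notify(f"{failure_rate:.0%} of the first {executed} test cases failed")

# Example: six failures in the first ten cases triggers a notification.
monitor_failure_rate([False] * 6 + [True] * 4, notify=print)
```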
  • At the conclusion of the quick test module 410 execution, the control module 140 marks the passing test cases as passed and compiles the failing test cases into a set of questionable test cases 404. The control module 140 continues execution of the questionable test cases 404 using the adjusted test module 420. Because the quick test module 410 does not test each test case in a pristine test environment 120, and because system load may have contributed to some of the test case failures, the test system 100 does not yet mark the failing test cases as broken.
  • The adjusted test module 420 receives the questionable test cases 404 for further testing. The adjusted test module 420 reinitializes the test environment 120. Prior to executing the questionable test cases 404, the adjusted test module 420 determines whether system load during the execution of the initial test cases 402 in the quick test module 410 may have contributed to the failure of the questionable test cases 404. Some test cases are more sensitive to timing considerations and system load. Other test cases may be more sensitive to network load. The adjusted test module 420 may consider the percentage of test case failures from the quick test module 410, system load, network load, and the sensitivities of the individual test cases to various timing situations, as well as other factors. If the adjusted test module 420 determines that system load, the network load, or another timing situation may have contributed to the test case failures in the quick test module 410 or if system load is high enough to affect the upcoming testing, the adjusted test module 420 may adjust delay parameters used by the questionable test cases 404.
  • Delay parameters are wait times prescribed by each test case. For instance, a test case may require an IMS software product 340 to respond to a database query in one second. If the test case failed waiting for a database response from the IMS software product 340, the adjusted test module 420 may increase a delay parameter allowing the IMS software product 340 two seconds to respond to a database query.
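  • One possible adjustment rule, shown only as a sketch, scales each delay by a factor derived from the earlier failure rate and the current system load; the formula and the cap are assumptions, not the rule actually used by the adjusted test module 420.

```python
# Sketch of one possible delay adjustment: scale every wait by a factor built
# from the earlier failure rate and the current system load, with a cap.
def adjust_delays(delay_seconds, failure_rate, system_load, cap=4.0):
    """delay_seconds: dict of step name -> seconds; returns an adjusted copy."""
    factor = min(1.0 + failure_rate + system_load, cap)
    return {step: seconds * factor for step, seconds in delay_seconds.items()}

# Example: a one-second IMS query wait roughly doubles under heavy load.
print(adjust_delays({"ims_query": 1.0}, failure_rate=0.5, system_load=0.5))
```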
  • After initializing the test environment 120 and adjusting delay parameters in accordance with system load measurements, the adjusted test module 420 executes the questionable test cases 404. All other aspects of the testing performed by the quick test module 410 apply to the testing carried out by the adjusted test module 420. In other words, the adjusted test module 420 reinitializes the test environment 120 only after a test case fails.
  • In addition, the test system 100 restarts itself if it experiences a testing irregularity. Following a restart of the test system 100 during execution of the adjusted test module 420, the test system 100 resumes execution with the adjusted test module 420. The test system 100 may mark the test case that was executing prior to the restart as failed and continue with the questionable test cases 404 that the adjusted test module 420 has not yet executed. At the conclusion of the execution of the adjusted test module 420, the control module 140 compiles the failed test cases from the adjusted test module 420 into a set of suspect test cases 406. However, the suspect test cases 406 are not yet marked as broken because they did not all fail in a pristine test environment 120.
  • The sterilized test module 430 receives the suspect test cases 406 for further testing. The sterilized test module 430 initializes the test environment 120 and executes each of the suspect test cases 406. Following each test case execution, regardless of success or failure, the sterilized test module 430 reinitializes the test environment 120. If any module of the test system 100 hangs or crashes, the test system 100 may restart itself. Following a restart, the previously executing test case is marked as failed, and the sterilized test module 430 reinitializes the test environment 120 and executes the suspect test cases 406 that have not yet been tested. The control module 140 compiles the set of failed test cases from the sterilized test module 430 as broken test cases 408.
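  • The sterilized-test policy differs from the earlier passes only in when the environment is reinitialized, as the short sketch below illustrates; the helpers and the example harness are stubs, not the claimed implementation.

```python
# Sketch of the sterilized-test policy: a pristine environment before every
# suspect case, regardless of the previous outcome. Helpers are stubs.
def sterilized_test(suspect_cases, run_case, reinitialize_environment):
    broken = []
    for case in suspect_cases:
        reinitialize_environment()        # reinitialize before each case
        if not run_case(case):
            broken.append(case)           # a failure here marks the case broken
    return broken

print(sterilized_test(["x", "y"],
                      run_case=lambda case: case == "x",
                      reinitialize_environment=lambda: None))   # -> ['y']
```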
  • After completion of testing, the test system 100 may generate a report detailing the passed and broken test cases. The report may comprise a final execution status for each test case including the execution status of each test case for each test module 410, 420, 430. The design of the test system 100 creates a high degree of confidence that the broken test cases 408 are broken due to software bugs or defects in the test cases rather than defects in the test environment 120. The test system 100 systematically executes each test case in an expedited fashion, in a delayed fashion as needed, and in a pristine test environment 120.
  • FIG. 5 is a schematic block diagram summarizing the progression of test cases 500 through the test system 100 as controlled and monitored by the control module 140 and the test modules 410, 420, 430 (see FIG. 4). The test system 100 selects a test suite 130 comprising a set of test cases that require a similar test environment 120. The quick test module 410 executes the test suite 130. The control module 140 groups test cases that pass the quick test module 410 into quick test passing test cases 510 while marking failing test cases as questionable test cases 404. The adjusted test module 420 executes the questionable test cases 404. The control module 140 groups test cases that pass the adjusted test module 420 into adjusted test passing test cases 520 while marking failing test cases as suspect test cases 406. The sterilized test module 430 executes the suspect test cases 406. The control module 140 groups test cases that pass the sterilized test module 430 into sterilized test passing test cases 530 while marking failing test cases as broken test cases 408.
  • FIG. 6 is a schematic flow chart diagram illustrating one embodiment of a test case execution method 600 for executing a test suite 130 of test cases in accordance with the present invention. Initializing 604 the test environment 120 brings the test environment 120 to an initial or pristine state. The test modules 410, 420, 430 may also initialize the test environment 120 according to module specific logic described below.
  • During execution 606 of the quick test module 410, the test environment 120 is initialized only after test case failures. The quick test module 410 tracks test case failures and determines 608 if the test case failure rate exceeds a predetermined threshold. As an example, the operator may configure the threshold rate to be fifty percent. If the failure rate exceeds a predetermined threshold, the quick test module 410 notifies 610 the test system operator of the high failure rate. Notification alerts the operator that a severe defect in the SUT 110 or the test environment 120 may exist. Typically, a single test suite 130 may execute for several hours or several days. Timely notification of a potential severe defect may avert several days of wasted testing time and may accelerate the removal of the defect. The operator may abort the test case execution method 600 at any time.
  • The test system 100 compiles 612 a set of questionable test cases 404 from the test cases that failed during the execution 606 of the quick test module 410. Based on the system load during the execution of the quick test module 410 and the current system load, the test system 100 determines 614 if delay parameters should be adjusted and updates 616 the delay parameters accordingly.
  • The adjusted test module 420 executes 618 the questionable test cases 404. Following each test failure, the adjusted test module 420 reinitializes the test environment 120. The adjusted test module 420 compiles 620 the failing test cases into a set of suspect test cases 406.
  • The sterilized test module 430 executes 622 the suspect test cases 406. The sterilized test module 430 reinitializes the test environment 120 prior to each test case execution. Failed test cases are compiled 624 into a set of broken test cases 408.
  • Finally, the test system 100 may generate 626 a report based on the test case passes and failures. The test system 100 progressively sterilizes the test environment 120 throughout the testing process. The design of the test system 100 balances the need to verify quickly that test cases run correctly against the need to rule out test environment 120 defects before marking a test case as broken. Once the test system 100 marks a test case as broken, testers and developers can, with a high degree of certainty, look for either a defect in the test case or a bug in the SUT 110 rather than blaming the failure on a test environment 120 defect.
  • The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (20)

1. A method for automating execution of a plurality of test cases, the method comprising:
executing a quick test of a test suite comprising a plurality of test cases;
compiling a set of questionable test cases that failed the quick test;
executing an adjusted test of the questionable test cases;
compiling a set of suspect test cases that failed the adjusted test;
executing a sterilized test of the suspect test cases; and
compiling a set of broken test cases that failed the sterilized test.
2. The method of claim 1, wherein executing an adjusted test further comprises adjusting delay parameters associated with the set of questionable test cases based on the percentage of test cases that failed the quick test.
3. The method of claim 2, wherein adjusting delay parameters comprises increasing delay parameters in response to a system load of a computer system executing the quick test, adjusted test, and sterilized test.
4. A system to automate the execution of a plurality of test cases and systematically identify a set of broken test cases, the system comprising:
at least one computing device;
a test environment comprising a plurality of userids on the at least one computing device;
a test suite comprising a plurality of test cases that utilize at least one of the userids;
a quick test module configured to execute the test suite using the test environment and compile a set of questionable test cases comprising the failed test cases executed by the quick test module;
an adjusted test module configured to execute the set of questionable test cases using the test environment and compile a set of suspect test cases comprising the failed test cases executed by the adjusted test module, wherein the adjusted test module initializes the test environment prior to execution of the set of questionable test cases and subsequent to the failed execution of a questionable test case and wherein the adjusted test module increases delay parameters associated with the set of questionable test cases based on a percentage of failed test cases from the execution of the test suite by the quick test module;
a sterilized test module configured to execute the set of suspect test cases using the test environment and compile a set of broken test cases comprising the failed test cases executed by the sterilized test module, wherein the sterilized test module initializes the test environment prior to executing each suspect test case;
a control module configured to control execution of the quick test module, the adjusted test module, and the sterilized test module; and
a watch module configured to detect a testing irregularity and reinitialize the control module in response to the detected irregularity in the control module, such that the control module continues execution.
5. The system of claim 4, the control module further configured to track an execution status of each test case based on the results of the execution of the test case by the quick test module, the adjusted test module, and the sterilized test module.
6. The system of claim 5, the control module further configured to notify a system operator of test case failures that exceed a threshold, wherein the notification is sent prior to the completion of the execution of the test suite.
7. The system of claim 5, the control module further configured to generate a report of broken test cases.
8. The system of claim 5, the control module further configured to continue execution of the test suite in response to the re-initialization of the control module according to the execution status of each test case.
9. The system of claim 4, wherein the at least one computing device comprises at least one International Business Machines (IBM) mainframe running the Virtual Machine (VM) operating system and the test environment comprises Multiple Virtual Storage (MVS) guest machines running under the VM operating system.
10. The system of claim 9, wherein the control module is further configured to initialize the test environment to an initial state prior to execution of test cases by the quick test module, the adjusted test module, and the sterilized test module.
11. The system of claim 10 wherein initializing the test environment comprises copying a set of initialization files to each MVS guest machine.
12. The system of claim 11, wherein initializing the test environment further comprises initializing each MVS guest machine by initializing MVS, Virtual Telecommunications Access Method (VTAM) and Information Management System (IMS) according to specifications associated with the test suite.
13. A signal bearing medium tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform an operation to test a computer application, the operation comprising:
executing a quick test of a test suite comprising a plurality of test cases configured to execute in a test environment comprising Multiple Virtual Storage (MVS) guest machines running on a Virtual Machine (VM) operating system on a mainframe;
compiling a set of questionable test cases that failed the quick test;
increasing delay parameters in the questionable test cases in accordance with the percentage of questionable test cases compared to the plurality of test cases;
executing an adjusted test of the questionable test cases;
compiling a set of suspect test cases that failed the adjusted test;
executing a sterilized test of the suspect test cases;
compiling a set of broken test cases that failed the sterilized test;
maintaining an execution status for each test case; and
monitoring the execution of the quick test, the adjusted test, and the sterilized test for a testing irregularity and restarting the execution of the quick test, the adjusted test, and the sterilized test in response to a detected testing irregularity according to the execution status of each test case.
14. The signal bearing medium of claim 13, wherein the instructions further comprise lengthening the delay parameters in accordance with a system load during the execution of the quick test.
15. The signal bearing medium of claim 13, wherein the instructions further comprise lengthening the delay parameters in accordance with a system load during the execution of the adjusted test.
16. The signal bearing medium of claim 13, wherein maintaining the execution status of each test case comprises tracking, for each test case, successful and failed completion of the execution of the quick test, the adjusted test, and the sterilized test, and wherein restarting the execution of the quick test, the adjusted test, and the sterilized test comprises completing the execution of each test case having no execution status.
17. The signal bearing medium of claim 13, wherein a service person causes the instructions to be executed to validate the integrity of a software installation.
18. The signal bearing medium of claim 13, wherein executing an adjusted test further comprises initializing the test environment to an initial state prior to executing the questionable test cases.
19. The signal bearing medium of claim 18, wherein executing an adjusted test further comprises detecting a failure of a questionable test case and initializing the test environment to the initial state prior to executing a next questionable test case.
20. The signal bearing medium of claim 19, wherein executing a sterilized test further comprises initializing the test environment to the initial state prior to executing each test case from the set of suspect test cases.
US11/281,646 2005-11-17 2005-11-17 Apparatus, system, and method for persistent testing with progressive environment sterilzation Abandoned US20070168734A1 (en)


Patent Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5412801A (en) * 1990-01-17 1995-05-02 E-Net Gap recovery for off-site data storage and recovery systems
US5130936A (en) * 1990-09-14 1992-07-14 Arinc Research Corporation Method and apparatus for diagnostic testing including a neural network for determining testing sufficiency
US5193178A (en) * 1990-10-31 1993-03-09 International Business Machines Corporation Self-testing probe system to reveal software errors
US5421004A (en) * 1992-09-24 1995-05-30 International Business Machines Corporation Hierarchical testing environment
US5371883A (en) * 1993-03-26 1994-12-06 International Business Machines Corporation Method of testing programs in a distributed environment
US5673387A (en) * 1994-05-16 1997-09-30 Lucent Technologies Inc. System and method for selecting test units to be re-run in software regression testing
US5557539A (en) * 1994-06-13 1996-09-17 Centigram Communications Corporation Apparatus and method for testing an interactive voice messaging system
US5671351A (en) * 1995-04-13 1997-09-23 Texas Instruments Incorporated System and method for automated testing and monitoring of software applications
US6148427A (en) * 1995-09-11 2000-11-14 Compaq Computer Corporation Method and apparatus for test data generation
US5751941A (en) * 1996-04-04 1998-05-12 Hewlett-Packard Company Object oriented framework for testing software
US5892947A (en) * 1996-07-01 1999-04-06 Sun Microsystems, Inc. Test support tool system and method
US6023580A (en) * 1996-07-03 2000-02-08 Objectswitch Corporation Apparatus and method for testing computer systems
US6011830A (en) * 1996-12-10 2000-01-04 Telefonaktiebolaget Lm Ericsson Operational test device and method of performing an operational test for a system under test
US5831998A (en) * 1996-12-13 1998-11-03 Northern Telecom Limited Method of testcase optimization
US6031990A (en) * 1997-04-15 2000-02-29 Compuware Corporation Computer software testing management
US6219829B1 (en) * 1997-04-15 2001-04-17 Compuware Corporation Computer software testing management
US5960457A (en) * 1997-05-01 1999-09-28 Advanced Micro Devices, Inc. Cache coherency test system and methodology for testing cache operation in the presence of an external snoop
US6088690A (en) * 1997-06-27 2000-07-11 Microsoft Method and apparatus for adaptively solving sequential problems in a target system utilizing evolutionary computation techniques
US6182245B1 (en) * 1998-08-31 2001-01-30 Lsi Logic Corporation Software test case client/server system and method
US6715108B1 (en) * 1999-10-12 2004-03-30 Worldcom, Inc. Method of and system for managing test case versions
US20010052089A1 (en) * 2000-04-27 2001-12-13 Microsoft Corporation Automated testing
US20030018932A1 (en) * 2001-07-21 2003-01-23 International Business Machines Corporation Method and system for performing automated regression tests in a state-dependent data processing system
US20030037314A1 (en) * 2001-08-01 2003-02-20 International Business Machines Corporation Method and apparatus for testing and evaluating a software component using an abstraction matrix
US6782518B2 (en) * 2002-03-28 2004-08-24 International Business Machines Corporation System and method for facilitating coverage feedback testcase generation reproducibility
US20040025088A1 (en) * 2002-08-01 2004-02-05 Sun Microsystems, Inc. Software application test coverage analyzer
US20040205436A1 (en) * 2002-09-27 2004-10-14 Sandip Kundu Generalized fault model for defects and circuit marginalities
US20040088677A1 (en) * 2002-11-04 2004-05-06 International Business Machines Corporation Method and system for generating an optimized suite of test cases
US20040181713A1 (en) * 2003-03-10 2004-09-16 Lambert John Robert Automatic identification of input values that expose output failures in software object
US20050154939A1 (en) * 2003-11-26 2005-07-14 International Business Machines Corporation Methods and apparatus for adaptive problem determination in distributed service-based applications

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070220341A1 (en) * 2006-02-28 2007-09-20 International Business Machines Corporation Software testing automation framework
US8914679B2 (en) * 2006-02-28 2014-12-16 International Business Machines Corporation Software testing automation framework
US7809988B1 (en) * 2006-12-29 2010-10-05 The Mathworks, Inc. Test environment having synchronous and asynchronous distributed testing
US7685472B1 (en) * 2007-01-30 2010-03-23 Intuit, Inc. Method and apparatus for testing object-oriented-programming methods
US20080263523A1 (en) * 2007-04-19 2008-10-23 Rupert Fruth Method for testing engineering software
US20090276663A1 (en) * 2007-05-02 2009-11-05 Rauli Ensio Kaksonen Method and arrangement for optimizing test case execution
US20080275944A1 (en) * 2007-05-04 2008-11-06 International Business Machines Corporation Transaction-initiated batch processing
US7958188B2 (en) * 2007-05-04 2011-06-07 International Business Machines Corporation Transaction-initiated batch processing
US20110197194A1 (en) * 2007-05-04 2011-08-11 International Business Machines Corporation Transaction-initiated batch processing
US8495136B2 (en) 2007-05-04 2013-07-23 International Business Machines Corporation Transaction-initiated batch processing
US20080280278A1 (en) * 2007-05-10 2008-11-13 Raindrop Network Ltd. Method and Apparatus for Affecting Behavioral Patterns of a Child
US8965735B2 (en) * 2007-12-21 2015-02-24 Phoenix Contact Gmbh & Co. Kg Signal processing device
US20100318325A1 (en) * 2007-12-21 2010-12-16 Phoenix Contact Gmbh & Co. Kg Signal processing device
US20090265681A1 (en) * 2008-04-21 2009-10-22 Microsoft Corporation Ranking and optimizing automated test scripts
US8266592B2 (en) 2008-04-21 2012-09-11 Microsoft Corporation Ranking and optimizing automated test scripts
JP5104958B2 (en) * 2008-10-03 2012-12-19 富士通株式会社 Virtual computer system test method, test program, recording medium thereof, and virtual computer system
US8359284B2 (en) 2010-05-13 2013-01-22 Bank Of America Corporation Organization-segment-based risk analysis model
US8230268B2 (en) 2010-05-13 2012-07-24 Bank Of America Corporation Technology infrastructure failure predictor
WO2011142985A1 (en) * 2010-05-13 2011-11-17 Bank Of America Technology infrastructure-change risk model
US8533537B2 (en) 2010-05-13 2013-09-10 Bank Of America Corporation Technology infrastructure failure probability predictor
US20120005537A1 (en) * 2010-05-28 2012-01-05 Salesforce.Com, Inc. Identifying bugs in a database system environment
US8583964B2 (en) * 2010-05-28 2013-11-12 Salesforce.Com, Inc. Identifying bugs in a database system environment
US8499286B2 (en) * 2010-07-27 2013-07-30 Salesforce.Com, Inc. Module testing adjustment and configuration
US8661412B2 (en) 2010-11-19 2014-02-25 Microsoft Corporation Managing automated and manual application testing
US9201763B1 (en) * 2013-05-31 2015-12-01 The Mathworks, Inc. Efficient sharing of test fixtures and ordering of test suites
US20150254173A1 (en) * 2014-03-07 2015-09-10 Ca, Inc. Automated generation of test cases for regression testing
US9361211B2 (en) * 2014-03-07 2016-06-07 Ca, Inc. Automated generation of test cases for regression testing
US11392404B2 (en) 2015-03-16 2022-07-19 Bmc Software, Inc. Maintaining virtual machine templates
US11061705B2 (en) * 2015-03-16 2021-07-13 Bmc Software, Inc. Maintaining virtual machine templates
US10776341B2 (en) * 2015-03-25 2020-09-15 Sartorius Stedim Biotech Gmbh Method for quality assurance of filtration processes
WO2017017691A1 (en) * 2015-07-27 2017-02-02 Hewlett Packard Enterprise Development Lp Testing computing devices
CN105182860A (en) * 2015-10-08 2015-12-23 赵风雪 Disinfection supply chamber management system management method
WO2017127047A1 (en) * 2016-01-19 2017-07-27 Entit Software Llc Impairment in an application test execution
US10705947B2 (en) 2016-01-19 2020-07-07 Micro Focus Llc Impairment in an application test execution
CN105718326A (en) * 2016-01-27 2016-06-29 惠州市德赛西威汽车电子股份有限公司 Restorability testing method of embedded system
US10366367B2 (en) 2016-02-24 2019-07-30 Bank Of America Corporation Computerized system for evaluating and modifying technology change events
US10366338B2 (en) 2016-02-24 2019-07-30 Bank Of America Corporation Computerized system for evaluating the impact of technology change incidents
US10275182B2 (en) 2016-02-24 2019-04-30 Bank Of America Corporation System for categorical data encoding
US10216798B2 (en) 2016-02-24 2019-02-26 Bank Of America Corporation Technical language processor
US10067984B2 (en) 2016-02-24 2018-09-04 Bank Of America Corporation Computerized system for evaluating technology stability
US10366337B2 (en) 2016-02-24 2019-07-30 Bank Of America Corporation Computerized system for evaluating the likelihood of technology change incidents
US10019486B2 (en) 2016-02-24 2018-07-10 Bank Of America Corporation Computerized system for analyzing operational event data
US10275183B2 (en) 2016-02-24 2019-04-30 Bank Of America Corporation System for categorical data dynamic decoding
US10387230B2 (en) 2016-02-24 2019-08-20 Bank Of America Corporation Technical language processor administration
US10430743B2 (en) 2016-02-24 2019-10-01 Bank Of America Corporation Computerized system for simulating the likelihood of technology change incidents
US10474683B2 (en) 2016-02-24 2019-11-12 Bank Of America Corporation Computerized system for evaluating technology stability
US10838969B2 (en) 2016-02-24 2020-11-17 Bank Of America Corporation Computerized system for evaluating technology stability
US10223425B2 (en) 2016-02-24 2019-03-05 Bank Of America Corporation Operational data processor
US20180181456A1 (en) * 2016-12-26 2018-06-28 Samsung Electronics Co., Ltd. Internet of things framework and method of operating the same
US11481295B2 (en) * 2017-02-10 2022-10-25 Optofidelity Oy Method, an all-in-one tester and computer program product
US10719427B1 (en) * 2017-05-04 2020-07-21 Amazon Technologies, Inc. Contributed test management in deployment pipelines
US10831647B2 (en) 2017-09-20 2020-11-10 Sap Se Flaky test systems and methods
US10747653B2 (en) * 2017-12-21 2020-08-18 Sap Se Software testing systems and methods
US20190196946A1 (en) * 2017-12-21 2019-06-27 Sap Se Software testing systems and methods
WO2020055475A1 (en) * 2018-09-12 2020-03-19 Microsoft Technology Licensing, Llc Detection of code defects via analysis of telemetry data across internal validation rings
US10956307B2 (en) 2018-09-12 2021-03-23 Microsoft Technology Licensing, Llc Detection of code defects via analysis of telemetry data across internal validation rings
CN110022244A (en) * 2019-04-03 2019-07-16 北京字节跳动网络技术有限公司 Method and apparatus for sending information
US11615016B2 (en) * 2020-01-23 2023-03-28 Hcl Technologies Limited System and method for executing a test case
CN111708697A (en) * 2020-06-15 2020-09-25 楚天科技股份有限公司 Isolator biological purification development verification method and system
US11630749B2 (en) 2021-04-09 2023-04-18 Bank Of America Corporation Electronic system for application monitoring and preemptive remediation of associated events

Similar Documents

Publication Publication Date Title
US20070168734A1 (en) Apparatus, system, and method for persistent testing with progressive environment sterilzation
US6532552B1 (en) Method and system for performing problem determination procedures in hierarchically organized computer systems
US7519866B2 (en) Computer boot operation utilizing targeted boot diagnostics
US20070220370A1 (en) Mechanism to generate functional test cases for service oriented architecture (SOA) applications from errors encountered in development and runtime
Cotroneo et al. Fault triggers in open-source software: An experience report
US6560721B1 (en) Testcase selection by the exclusion of disapproved, non-tested and defect testcases
US8726225B2 (en) Testing of a software system using instrumentation at a logging module
US7702966B2 (en) Method and apparatus for managing software errors in a computer system
US7284237B2 (en) Testing flow control at test assertion level
US20040060045A1 (en) Programmatic application installation diagnosis and cleaning
US20040220774A1 (en) Early warning mechanism for enhancing enterprise availability
US20130047036A1 (en) Self validating applications
US11175902B2 (en) Testing an upgrade to a microservice in a production environment
US6550019B1 (en) Method and apparatus for problem identification during initial program load in a multiprocessor system
US20200310779A1 (en) Validating a firmware compliance policy prior to use in a production system
CN105718340A (en) Crontab based CPU stability testing method
US20080172579A1 (en) Test Device For Verifying A Batch Processing
US7478283B2 (en) Provisional application management with automated acceptance tests and decision criteria
US7325227B2 (en) System, method, and computer program product for identifying code development errors
US9250942B2 (en) Hardware emulation using on-the-fly virtualization
US9507690B2 (en) Method and device for analyzing an execution of a predetermined program flow on a physical computer system
US20200104244A1 (en) Scriptless software test automation
CN115599645A (en) Method and device for testing stability of Linux driver module
JPH02294739A (en) Fault detecting system
US20070028218A1 (en) Apparatus, system, and method for a software test coverage analyzer using embedded hardware

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VASILE, PHIL;REEL/FRAME:017032/0090

Effective date: 20051117

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION