METHOD FOR USING STATISTICAL ANALYSIS TO MONITOR AND ANALYZE PERFORMANCE
OF NEW NETWORK INFRASTRUCTURE OR SOFTWARE APPLICATIONS FOR DEPLOYMENT THEREOF
CROSS REFERENCE TO RELATED APPLICATIONS
 This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 60/579,984, filed on Jun. 15, 2004, entitled Methods and Systems for Determining and Using a Software Footprint, which is incorporated herein by reference in its entirety.
 This application is related to the following U.S. patent applications (Ser. Nos. TBA), filed on an even date herewith, entitled as follows:
 System and Method for Monitoring Performance of Arbitrary Groupings of Network Infrastructure and Applications;
 System and Method for Monitoring Performance of Network Infrastructure and Applications by Automatically Identifying System Variables or Components Constructed from Such Variables that Dominate Variance of Performance; and
 Method for Using Statistical Analysis to Monitor and Analyze Performance of New Network Infrastructure or Software Applications Before Deployment Thereof.
BACKGROUND
 1. Technical Field
 This invention generally relates to the field of software and network systems management and more specifically to monitoring performance of groupings of network infrastructure and applications using statistical analysis.
 2. Discussion of Related Art
 In today's information technology (IT) operating environments, software applications are changing with increasing frequency. This is in response to security vulnerabilities, rapidly evolving end-user business requirements and the increased speed of software development cycles. Furthermore, the production environments into which these software applications are being deployed have also increased in complexity and are often interlinked and interrelated with other 'shared' components.
 Software application change is one of the primary reasons for application downtime or failure. For example, roughly half of all software patches and updates within enterprise environments fail when being applied and require some form of IT operator intervention. The issues are even worse when dealing with large-scale applications that are designed and written by many different people, and when operating environments need to support large numbers of live users and transactions.
 The core of the problem is rooted in the software release decision itself and the tradeoff that is made between the risks of downtime and application vulnerability. All changes to the software code can have unintended consequences for other applications or infrastructure components. Thus far, the inability to quantify that risk in the deployment of software means that most decisions are made blindly, oftentimes with significant implications.
 The current approach to increasing confidence in a software release decision is testing. There are a number of tools and techniques that address the various stages of the quality assurance process. The tools range from code verification and compiler technology to automated test scripts to load/demand generators that can be applied against software. The problem is: how much testing is enough?
 Ultimately, the complication is that testing environments are simply different from production environments. In addition to being physically distinct, with different devices and topologies, testing environments also differ with regard to both aggregate load and load curve characteristics. Furthermore, when infrastructure components are shared across multiple software applications, when customers consume different combinations of components within a service environment, or when third-party applications are utilized or embedded within an application, current testing environments are rendered particularly insufficient.
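 The significance of differing load curve characteristics can be illustrated statistically. The following sketch is not part of this disclosure and uses purely synthetic data; it applies a standard two-sample Kolmogorov-Smirnov test (via SciPy) to quantify how far a hypothetical test environment's load distribution departs from production's:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical requests-per-second samples. Production carries a heavier,
# more variable load than the load generator reproduces in test.
production_rps = rng.gamma(shape=9.0, scale=120.0, size=5000)
test_rps = rng.gamma(shape=9.0, scale=70.0, size=5000)

# The KS statistic is the largest gap between the two empirical load
# distributions; a small p-value indicates the test load curve does not
# reproduce the production load curve.
statistic, p_value = ks_2samp(test_rps, production_rps)
print(f"KS statistic: {statistic:.3f}, p-value: {p_value:.3g}")
```

 A large statistic with a small p-value would flag that results obtained in the test environment may not predict production behavior, which is the mismatch described above.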
 As the usage of software applications has matured, corporations have grown increasingly reliant upon software systems to support mission-critical business processes. As these applications have evolved and grown increasingly complex, so have the difficulties and expenses associated with managing and supporting them. This is especially true of distributed applications delivered over the Internet to multiple types of clients and end-users.
 Software delivered over the Internet (versus over a closed network) is characterized by frequent change, by code deployed into high-volume, variable-load production environments, and by end-user functionality composed of multiple 'applications' served from different operating infrastructures and potentially different physical networks. Managing availability, performance and problem resolution in this setting requires new capabilities and approaches.
 The current state of the technology in application performance management is characterized by several categories of solutions.
 The first category is the monitoring platform; it provides a near real-time environment focused on alerting an operator when a particular variable within a monitored device has exceeded a pre-determined performance threshold. Data is gathered from the monitored device (network, server or software application) via agents (or via agentless techniques, or output directly by the code) and aggregated in a single database. In situations where data volumes are large, the monitoring information may be reduced, filtered or summarized and/or stored across a set of coordinated databases. Different datatypes are usually normalized into a common format and rendered through a viewable console. Most major systems management tool vendors, such as BMC, NetIQ, CA (Unicenter), IBM (Tivoli), HP (HPOV), Micromuse, Quest, Veritas and Smarts, provide these capabilities.
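 By way of illustration only (this sketch is not drawn from the present disclosure or from any particular vendor's product), the threshold-alerting pattern just described reduces to a few steps: normalize raw agent readings into a common format, aggregate them, and compare each monitored variable against its pre-determined threshold. All device names, variable names and threshold values here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    device: str     # monitored device (network, server, or application)
    variable: str   # e.g. "cpu_util_pct", "latency_ms"
    value: float

# Hypothetical pre-determined performance thresholds, keyed by variable.
THRESHOLDS = {"cpu_util_pct": 90.0, "latency_ms": 500.0}

def normalize(raw: dict) -> Metric:
    """Normalize one agent's raw reading into the common format."""
    return Metric(device=raw["host"], variable=raw["name"],
                  value=float(raw["val"]))

def check(metrics: list[Metric]) -> list[str]:
    """Return an alert for each variable exceeding its threshold."""
    return [
        f"ALERT: {m.device} {m.variable}={m.value} "
        f"exceeds {THRESHOLDS[m.variable]}"
        for m in metrics
        if m.variable in THRESHOLDS and m.value > THRESHOLDS[m.variable]
    ]

# Aggregated readings as they might arrive from two different agents.
raw_readings = [
    {"host": "web-01", "name": "cpu_util_pct", "val": "97.2"},
    {"host": "db-01", "name": "latency_ms", "val": "123.0"},
]
for alert in check([normalize(r) for r in raw_readings]):
    print(alert)
```

 Note that this pattern only reports static threshold crossings per variable; it is exactly the limitation the analytical modules discussed next attempt to address.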
 A second category consists of various analytical modules that are designed to work in concert with a monitoring environment. These consist of (i) correlation, impact