WO2001026008A1 - Method and estimator for event/fault monitoring - Google Patents

Method and estimator for event/fault monitoring

Info

Publication number
WO2001026008A1
Authority
WO
WIPO (PCT)
Prior art keywords
monitoring
task
infrastructure
technology infrastructure
designing
Prior art date
Application number
PCT/US2000/027629
Other languages
French (fr)
Inventor
Ovee Rahman
William C. Bond
Original Assignee
Accenture Llp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Accenture Llp filed Critical Accenture Llp
Priority to AU78666/00A priority Critical patent/AU7866600A/en
Publication of WO2001026008A1 publication Critical patent/WO2001026008A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/80Management or planning

Definitions

  • IT Information Technology
  • Such a framework needs to be a single framework describing an entire IT capability, whether as functions, systems or tasks.
  • the IT framework should be a framework of functions, a representation of a complete checklist of all relevant activities performed in an IT enterprise.
  • a single IT Framework should represent all functions operative in an IT enterprise.
  • an event and fault management or monitoring function handles incidents sent from system components such as hardware, application software, system software, and communications systems. Incidents could be interpreted as either faults (failures) or events (warnings).
  • An event/fault monitoring function should coordinate with other function categories to provide input and should aim to continuously improve current IT services and offerings. Such a function is also known as event and fault management.
  • one embodiment of the invention is a method for providing for an event and fault monitoring function that receives, logs, classifies, analyzes and presents incidents based upon pre-established filters or thresholds.
  • the method includes planning, designing, building, testing and deploying an event and fault monitoring function in an IT organization.
  • the method preferably includes designing business processes, skills, and user interaction for the design phase.
  • the method further includes designing an organization infrastructure and a performance enhancement infrastructure for monitoring. The method also includes designing technology infrastructure and operations architecture for the design phase of monitoring.
  • In the building phase of the method, the technology infrastructure and the operations architecture are built. Also, business policies, procedures, performance support, and learning products for monitoring are built.
  • In the testing phase, the technology infrastructure and the operations architecture are tested.
  • In the deploying stage, the technology infrastructure for the IT organization is deployed.
  • Another aspect of the present invention is a method for providing an estimate for building an event/fault monitoring function in an information technology organization.
  • This aspect of the present invention allows an IT consultant to give on-site estimates to a client within minutes.
  • the estimator produces a detailed breakdown of cost and time to complete a project by displaying the costs and time corresponding to each stage of a project along with each task.
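  • Such a per-stage, per-task breakdown can be sketched as follows. This is a minimal illustrative sketch, not the patent's worksheet: all task names, hour figures, and the hourly rate are hypothetical assumptions.

```python
# Hypothetical sketch of a project estimator that breaks cost and time
# down per stage and per task. Hours and the blended hourly rate are
# illustrative assumptions, not figures from the patent.
HOURLY_RATE = 150.0  # assumed blended consulting rate (USD/hour)

# stage -> list of (task name, estimated hours); stages follow the
# plan / design / build & test / deploy phases described in the text
STAGES = {
    "plan":         [("define business performance model", 40)],
    "design":       [("design business processes", 80),
                     ("design technology infrastructure", 60)],
    "build & test": [("build operations architecture", 120),
                     ("test technology infrastructure", 50)],
    "deploy":       [("deploy technology infrastructure", 30)],
}

def estimate(stages, rate):
    """Return (per-stage breakdown, total hours, total cost)."""
    breakdown = {}
    total_hours = 0
    for stage, tasks in stages.items():
        hours = sum(h for _, h in tasks)
        breakdown[stage] = (hours, hours * rate)
        total_hours += hours
    return breakdown, total_hours, total_hours * rate

breakdown, hours, cost = estimate(STAGES, HOURLY_RATE)
for stage, (h, c) in breakdown.items():
    print(f"{stage:12s} {h:4d} h  ${c:,.0f}")
print(f"{'total':12s} {hours:4d} h  ${cost:,.0f}")
```

  • Displaying the cost and time next to each stage and task, as above, is what lets the consultant hand over an itemized on-site estimate quickly.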
  • Another aspect of the present invention is a computer system for allocating time and computing cost for building an event/fault monitoring function in an information technology organization.
  • Figure 1a is a representation of a network/systems management function including monitoring functions.
  • Figure 1b is a representation of an event/fault monitoring function including sub-elements of the function.
  • Figure 2 shows a representation of a method for providing a monitoring function according to the presently preferred embodiment of the invention.
  • Figure 3 shows a representation of a task for defining a business performance model for monitoring.
  • Figure 4 shows a representation of a task for designing business processes, skills, and user interaction for monitoring.
  • Figure 5 shows a representation of a task for designing technology infrastructure requirements for monitoring.
  • Figure 6 shows a representation of a task for designing an organization infrastructure for monitoring.
  • Figure 7 shows a representation of a task for designing a performance enhancement infrastructure for monitoring.
  • Figure 8 shows a representation of a task for designing operations architecture for monitoring.
  • Figure 9 shows a representation of a task for validating a technology infrastructure for monitoring.
  • Figure 10 shows a representation of a task for acquiring a technology infrastructure for monitoring.
  • Figure 11 shows a representation of a task for building and testing operations architecture for monitoring.
  • Figure 12 shows a representation of a task for developing business policies, procedures, and performance support architecture for monitoring.
  • Figure 13 shows a representation of a task for developing learning products for monitoring.
  • Figure 14 shows a representation of a task for testing a technology infrastructure product for monitoring.
  • Figure 15 shows a representation of a task for deploying a technology infrastructure for monitoring.
  • Figure 16 shows a flow chart for obtaining an estimate of cost and time allocation for a project.
  • Figures 17a and 17b show one embodiment of an estimating worksheet for an event/fault monitoring estimating guide.
  • an information technology (“IT”) enterprise may be considered to be a business organization, charitable organization, government organization, etc., that uses an information technology system with or to support its activities.
  • An IT organization is the group, associated systems and processes within the enterprise that are responsible for the management and delivery of information technology services to users in the enterprise.
  • multiple functions may be organized and categorized to provide comprehensive service to the user.
  • the various operations management functionalities within the IT framework include a customer service management function; a service integration function; a service delivery function; a capability development function; a change administration function; a strategy, architecture and planning function; a management and administration function; a human performance management function; and a governance and strategic relationships function.
  • monitoring plays an important role.
  • the present invention includes a method for providing a monitoring system or function for an information technology organization. Before describing the method for providing a monitoring function, a brief explanation is in order concerning event/fault monitoring, and its systems, functions and tasks.
  • Event/fault Monitoring is a group of tasks or functions within a network or systems management function.
  • Such a network/systems management function 31 is depicted in Figure 1a, with several functions, including production scheduling 311, output/print management 312, network/systems operations 313, operations architecture management 314, network addressing management 315, storage management 316, backup/restore management 317, archiving 318, and event/fault management 319.
  • Other functions may include system performance management 3110, security management 3111, and disaster recovery maintenance and testing 3112.
  • the scope of Monitoring 319 includes four groups: monitoring 3191, analyzing 3192, classifying, and displaying.
  • Event/Fault Management: A group of functions useful in information technology may be termed Event/Fault Management. These functions receive, log, classify, analyze, and present incidents based upon pre-established filters or thresholds. Incidents are interpreted as either faults (failures) or events (warnings). Event and fault information is sent from system components such as hardware, application/system software, and communications resources. Systems, groups, or functions within event/fault management may include those for monitoring, analyzing, classifying, and displaying.
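  • The receive/log/classify flow against pre-established thresholds can be sketched as a small pipeline. This is an illustrative sketch only: the metric names, threshold values, and the `Incident` structure are assumptions, not details from the patent.

```python
# Illustrative sketch of receiving, logging, and classifying incidents
# against pre-established thresholds. Metric names and threshold
# values are assumed for the example.
from dataclasses import dataclass

@dataclass
class Incident:
    component: str   # e.g. hardware, application/system software, comms
    metric: str
    value: float

# pre-established filters: (warning level, failure level) per metric
THRESHOLDS = {"cpu_util": (0.80, 0.95), "disk_util": (0.85, 0.98)}

log = []  # every received incident is logged before classification

def receive(incident):
    """Log the incident, then classify it against the thresholds."""
    log.append(incident)
    warn, fail = THRESHOLDS[incident.metric]
    if incident.value >= fail:
        return "fault"    # interpreted as a failure
    if incident.value >= warn:
        return "event"    # interpreted as a warning
    return "ignore"       # below all pre-established filters

print(receive(Incident("hardware", "cpu_util", 0.97)))   # fault
print(receive(Incident("software", "disk_util", 0.90)))  # event
```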
  • Monitoring Requirements Management manages the requirements for new monitors and adjusts existing monitors.
  • the requirements will typically identify resources and components that will be monitored and map the threshold levels into the event and fault categories.
  • Incident classification determines whether to promote an incident to an event or a fault, or to ignore it. This group assigns severity levels and assesses impact. Once the data is pulled in, the incident is defined or classified. A severity level, system impact, and notification are then determined.
  • a fault is defined as a failure of a device or a critical component of that device.
  • the groups correlate faults or events from multiple devices to assist in problem analysis if applicable.
  • An event is defined as a tripping of a significant threshold or warning, and could be based on performance or indications of a potential failure of a device or critical component of that device.
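  • The fault and event definitions above can be sketched as a classifier. The severity names follow the examples given later in the text ("fatal", "critical", "minor", "warning"), but the mapping rules from incident kind and component criticality to a severity level are illustrative assumptions, not the patent's rules.

```python
# Sketch of incident classification: promote an incident to a fault
# or an event, assign a severity level, or ignore it. The mapping
# rules below are assumed for illustration.
def classify(kind, component_critical):
    """kind: 'failure' or 'warning'; component_critical: bool."""
    if kind == "failure":
        # a fault is a failure of a device or a critical component of it
        severity = "fatal" if component_critical else "critical"
        return ("fault", severity)
    if kind == "warning":
        # an event is the tripping of a significant threshold or warning,
        # possibly indicating a potential failure of a critical component
        severity = "minor" if component_critical else "warning"
        return ("event", severity)
    return ("ignore", None)

print(classify("failure", True))    # ('fault', 'fatal')
print(classify("warning", False))   # ('event', 'warning')
```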
  • Part of the fault/event series of functions may be a traffic analysis group. This group identifies critical nodes that are representative of enterprise performance. These probes are then used to gather information on protocols and stations communicating on an enterprise segment. Event/fault trend reporting functions report on event/fault alerts over a time period. This function provides trending information on frequency of events/faults and potential sources of future problems and feedback into the adjustments of thresholds. Finally, under event/fault management, there is desirably a function for display management. This function maintains an effective and ergonomically correct view of the event and fault alerts presented to the operations staff.
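  • The trend-reporting function described above (alert frequency over a time period, feeding back into threshold adjustment) can be sketched as a simple counting pass. The sample alert records and the flagging threshold are hypothetical.

```python
# Sketch of event/fault trend reporting: count alerts per source over
# a time window to surface frequency trends and flag potential sources
# of future problems. Sample data and flag threshold are assumed.
from collections import Counter

alerts = [  # (day, source, kind) -- illustrative sample data
    (1, "router-a", "event"), (1, "router-a", "event"),
    (2, "router-a", "fault"), (2, "server-b", "event"),
    (3, "router-a", "event"),
]

def trend_report(alerts, flag_at=3):
    """Return alert counts per source; sources at/above flag_at are flagged."""
    counts = Counter(src for _, src, _ in alerts)
    flagged = [s for s, n in counts.items() if n >= flag_at]
    return counts, flagged

counts, flagged = trend_report(alerts)
print(counts)   # per-source alert frequencies
print(flagged)  # candidates for threshold adjustment / investigation
```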
  • the method for providing Operations Management (“OM”) event/fault monitoring includes the tasks involved in building a particular OM function. These specific tasks are described in reference to the Operations Management Planning Chart ("OMPC") that is shown on Figure 2.
  • OMPC Operations Management Planning Chart
  • This chart provides a methodology for capability delivery, which includes tasks such as planning analysis, design, build & test, and deployment.
  • Each OM function includes process, organization, and technology elements that are addressed throughout the description of the corresponding OM function.
  • the method comprises four phases, as described below in connection with Figure 2.
  • the first phase, "plan delivery" 102, or planning includes the step of defining a business performance model 2110.
  • the second phase, design, 104 has a plurality of steps, including design of business processes, skills and user interactions 2410, design of organizational infrastructure 2710, design of performance enhancement infrastructure 2750, analyze technology infrastructure requirements 3510, select and design operations architecture 3550, and validate technology infrastructure 3590.
  • a third phase, build and test 106 has a second plurality of steps, acquire technology infrastructure 5510, build and test operations architecture 5550, develop policies, procedures and performance support 6220, develop learning products 6260 and prepare and execute technology infrastructure product tests 5590.
  • the fourth phase 108 includes the step of deploying 7170. In the following description, the details of the tasks within each step are discussed.
  • Monitoring delivery and deployment focuses on improving business capability.
  • One such improvement may be to upgrade the monitoring capability of an information technology system within an enterprise.
  • One of the key steps in defining business and performance requirements is identifying all of the types of support and levels of support that end users and other stakeholders should receive from monitoring. While monitoring personnel may be responsible for performing other OM functions in the organization, this set of task packages is limited to analysis of functions which are nearly always associated with monitoring. They include monitoring, classifying, analyzing and displaying.
  • Step 2110 Refine Business Performance Model
  • step 2110 the business model requirements for monitoring are defined, and the scope of the delivery and deployment effort for any upgraded capability is determined.
  • Figure 3 shows a representation of the tasks for carrying out these functions according to the presently preferred embodiment of the invention.
  • Figure 3 is a more detailed look at the business performance model 2110, which may include the functions of confirming business architecture 2111, analyzing operating constraints 2113, analyzing current business capabilities 2115, identifying best operating practices 2117, refining business capability requirements 2118, and updating the business performance model 2119.
  • Task 2111 includes assessing the current business architecture, confirming the goals and objectives, and refining the components of the business architecture. Preferably, the task includes reviewing the planning stage documentation, confirming or refining the overall monitoring architecture, and ensuring management commitment to the project. The amount of analysis performed in this task depends on the work previously performed in the planning phase of the project. Process, technology, organization, and performance issues are included in the analysis. As part of a business integration project, monitoring delivery and deployment focuses on enhancing a business capability, whereas an enterprise-wide monitoring deployment requires analysis of multiple applications rather than a single business capability. Monitoring covers the functions of event management, fault management, and system performance management. Monitoring terminology can mean different things in different organizations.
  • Terminology to be defined includes, but is not limited to, organizational groups responsible for the monitoring process, and severity levels, e.g., "fatal", "critical", "minor" and "warning".
  • Task 2113 Analyze Operating Constraints
  • Task 2113 includes identifying the operating constraints and limitations, and assessing their potential impact on the operations environment.
  • the task includes assessing the organization's strategy and culture and its potential impact on the project, and assesses organization, technology, process, equipment, and facilities for the constraints.
  • the task includes assessing the organization's ability to adapt to changes as part of the constraints analysis. It is desirable to identify scheduled maintenance times for servers, network devices, and other infrastructure equipment.
  • Analyzing the current monitoring capability 2115 is the next task in the process.
  • One way to accomplish this is to document current activities and procedures to establish a performance baseline, if there is an existing system.
  • An estimator may also assess strengths and weaknesses of any existing Monitoring capability in order to better plan and design for the future.
  • Important considerations include understanding the Monitoring processes before looking into how they are currently measured. Another important consideration is to perform this task to the level of detail needed to understand the degree of change required to move to a new monitoring capability.
  • Task 2117 Identify Monitoring Best Practices: this task includes identifying the best operating practices 2117 for the operation and identifying the Monitoring areas that could benefit from application of best practices. In one embodiment, the user will research and identify the optimum best practices to meet the environment and objectives.
  • Task 2118 Refine Monitoring Requirements: the next task in planning 102 may be to refine monitoring capability requirements 2118.
  • Capability requirements define what the Monitoring infrastructure will do; capability performance requirements define how well it will operate.
  • Monitoring requirements should be defined and requirements should be allocated across changes to human performance, business processes, and technology. The requirements should be defined with reference both to the performance and to monitoring interfaces with other OM components. The requirements should be developed by integrating operating constraints, current capabilities, and best practices information.
  • Task 2119 Update Business Performance Model
  • the last block in Figure 3 calls for updating the business performance model 2119. To accomplish this, it is necessary to understand the performance and operational objectives previously defined.
  • the provider will align the metrics and target service levels with performance provisions for batch Monitoring and processing as outlined in service level agreements. Considerations may include a business performance model to define the overall design requirements for the Monitoring capability. It is advantageous to keep the metrics as simple and straightforward as possible and to consider the Monitoring infrastructure's suppliers and customers in defining the metrics.
  • the step of designing 104 may proceed simultaneously along two or more tracks. One track focuses on the business aspects of the task, while the other focuses on technology.
  • function block 2410 calls for designing business processes, skills and user interactions, while block 3510 calls for analyzing the technology and infrastructure requirements.
  • step 2410 the business processes, skills, and user interaction are taken into account, as shown in Figure 4.
  • the provider designs the new monitoring processes, and develops the framework and formats for monitoring.
  • Figure 4 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention.
  • One task 2411 is to design workflows, or to create the workflows diagrams and define the workloads for all monitoring activities.
  • Other tasks include defining the physical environment interactions 2412, identifying skills requirements for performing monitoring tasks 2413, defining application interactions, that is, the human-computer interactions necessary to fulfill key monitoring activities 2415.
  • Still other tasks include identifying performance support requirements 2416, developing a capability interaction model 2417, and verifying and validating business processes, skills and user interaction 2419.
  • Task 2411 Design Workflows for Processes, Activities and Tasks
  • relationships are defined between core and supporting processes, activities, and tasks, and the metrics associated with the processes and activities are also defined. Considerations may include whether or not packaged software has already been selected for monitoring. If so, the business processes implied by that package or selection should be used. These should be the starting point for developing the process elements. Reporting requirements should be analyzed and documented in as much detail as possible.
  • a next step is to define the physical environment interaction 2412.
  • the objective of this function is to understand the implications of the monitoring processes on the physical environment; mainly this involves location, layout and equipment requirements.
  • the provider will want to take into account a physical environment interaction model.
  • Costing elements may include identifying the workflow/physical environment interfaces, designing the facilities, layout and equipment required for monitoring, and identifying distributed monitoring physical requirements, if any, as well as central needs. Considerations may include the interaction model that defines the layout and co-location implications of the monitoring workflows and the physical environment.
  • Monitoring processes and tools should be designed to interface with other processes, such as asset management, service control, and the like.
  • the next task for a comprehensive look at the design is to identify skill requirements 2413.
  • the goal is to identify the skill and behavior requirements for performing monitoring tasks.
  • the deliverables from this task may include both a role interaction model and skills definition.
  • a planner should identify critical tasks from the workflow designs, define the skills needed for the critical tasks and identify supporting skills needed and appropriate behavioral characteristics.
  • the next task is to define application interactions 2415, or to identify the human-computer interactions necessary to fulfill key monitoring activities. This will most often involve identifying required monitoring features not supported by the monitoring software and defining the human-computer interactions needed to meet the requirements. It should be recognized that packaged software has a pre-defined application interaction. This task may only be performed for activities that are not supported by packaged software. All monitoring personnel will normally require familiarity with the tracking software in order to log incidents, track them while they are open, close them once they are complete, forward them to specialists as needed, or review and analyze incidents to identify underlying system problems.
  • Task 2416 Identify Performance Support Requirements: identifying performance support requirements 2416 is the next task block for the planner.
  • the planner will want to analyze the Monitoring processes and determine how to support human performance within these processes.
  • the task is to analyze the critical performance factors for each Monitoring task and to select a mixture of training and support aids to maximize workforce performance in completing each task. These can include Monitoring policies and detailed procedures, on-line help screens of various kinds, checklists, etc. If the design process is a change from a present system, it is important to understand what has changed from the current processes, and use this to determine the support requirements.
  • Task 2417 Develop Capability Interaction Model: the next task is to develop a capability interaction model 2417.
  • the provider will identify the relationships between the tasks in the workflow diagrams, the physical location, skills required, human-computer interactions and performance support needs.
  • a provider will develop a capability interaction model by understanding the interactions within each process for physical environment, skills, application and performance support, and unifying these models. The goal is an integrated interaction model that will integrate workflows, the physical environment model, role and skill definitions, the application interactions, and support requirements to develop the capability interaction models.
  • the tasks should be mapped into a Swimlane diagram format to depict the interdependencies between the different elements.
  • the workflow diagram may be visually divided into "swimlanes" each separated from neighboring lanes by vertical solid lines on both sides.
  • Each lane represents responsibility for tasks which are part of the overall workflow, and may eventually be implemented by one or more support organizations.
  • Each task is assigned to one swimlane.
  • Such a model should illustrate how the process is performed, what roles fulfill the activities involved, and how the roles will be supported to maintain the monitoring capability.
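  • The swimlane rules above (every task assigned to exactly one lane, each lane owned by a responsible role or support organization) can be sketched as a small data structure with a consistency check. The lane and task names are hypothetical examples, not from the patent.

```python
# Sketch of a swimlane assignment: each task in the workflow belongs
# to exactly one lane (a responsible role or support organization).
# Lane and task names are illustrative assumptions.
SWIMLANES = {
    "monitoring operator": ["watch consoles", "log incidents"],
    "analyst":             ["classify incidents", "correlate faults"],
    "specialist":          ["resolve escalations"],
}

def validate_lanes(lanes):
    """Verify no task appears in more than one swimlane; return all tasks."""
    seen = set()
    for lane, tasks in lanes.items():
        for task in tasks:
            if task in seen:
                raise ValueError(f"task {task!r} assigned to multiple lanes")
            seen.add(task)
    return seen

all_tasks = validate_lanes(SWIMLANES)
print(sorted(all_tasks))
```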
  • Task 2419 Verify and Validate Business Processes, Skills and User Interaction
  • the final task of step 2410 is to verify and validate business processes, skills & user interaction 2419.
  • a provider will want to verify and validate that the process designs and the Capability Interaction Models meet the monitoring requirements and are internally consistent.
  • the end result is a business performance model that will help the design team and guide the project manager in both the technical and business aspects of the project.
  • a provider will use stakeholders in the monitoring domain and outside experts as well as the design teams to do the validation.
  • the provider will then verify and validate workflow diagrams in order to confirm that each process, activity, and task and its associated workflow fit together, and that the workflows meet the business capability requirements.
  • Step 2710 Design Organization Infrastructure
  • the method includes defining the structures for managing human performance, and defining what is expected of people who participate in the monitoring function, the required competencies, and how performance is managed.
  • Figure 6 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention.
  • Task 2711 Design Roles, Jobs and Teams
  • the task will include the design for the roles, jobs and teams. As an example, the design may wrestle with the issue of whether the monitoring function will be centralized, distributed, or decentralized. Not only will this affect the capital costs, but it may also help to determine the reporting relationships and to identify the performance measurement factors. Monitoring roles and jobs will typically be based on the breadth of functions assigned. The monitoring organization structure should be designed around all these business requirements.
  • the next task 2713 may be to design a competency model.
  • the designer can define the skills, knowledge, and behavior that people need to accomplish their roles in the monitoring process.
  • the goal of this task is a Competency Model for Skill/Knowledge/Behavior, that is, to determine the characteristics required of the individuals/teams that will fill the roles.
  • Sub-tasks or portions may include defining the individual capabilities necessary for success in these roles.
  • the manager may then organize the capabilities along a proficiency scale and relate them to the jobs and teams. Attitude and personality are factors that will impact the performance of Monitoring personnel nearly as much as technical training and expertise.
  • Task 2715 Design Performance Management Infrastructure: the preceding tasks define the people and teams that will perform monitoring. The next task may be to design a performance management infrastructure 2715. The design here will define how individual performance will be measured, developed, and rewarded. There may be implications here on both the design and capital costs. The design here may also determine a performance management approach and appraisal criteria. The goal of the design effort may be to deliver a performance management infrastructure or design, and to develop standards for individuals and teams involved in the monitoring process. If management wishes also to identify a system to monitor the individuals' and teams' ability to perform up to the standards, the infrastructure to accomplish this is desirably included "in the ground floor," that is, when the system is designed and the cost is determined, rather than later.
  • the next task of determining the organization mobilization approach may be necessary primarily if monitoring is a new function within an organization, or of course, if the organization itself is new.
  • the function must be staffed, or put another way, the organization must determine an infrastructure mobilization approach 2717. This is not normally a factor in capital costs, since personnel tend to be ongoing expenses. However, any peculiarities or changes from a "standard" design should be considered when costing a project or establishing a budget.
  • the project manager may want to consider at some point how to mobilize the resources required to staff the new Monitoring capability. In staffing, the manager should identify profiles of the ideal candidates for each position, identify the sourcing approaches and timing requirements, and determine the selection and recruiting approaches.
  • Task 2719 Verify and Validate Organization Infrastructure: once designed and costed, it may be prudent to verify and validate the organizational infrastructure 2719. The goal of this task is to verify and validate that the monitoring organization meets the needs of the monitoring capability and is internally consistent. A designer will want to confirm the organization with subject matter experts. The end result is that the designer will verify that the organization structure satisfies monitoring capability requirements.
  • Step 2750 Design Performance Enhancement Infrastructure: In this step, a performance enhancement infrastructure is designed.
  • Figure 7 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention. Tasks include employee assessment 2751 , any performance enhancement needs 2753, investigation into performance enhancement products 2755, and verification and validation of the performance 2759.
  • Task 2751 Assess Employee Competency and Performance.
  • This task is to refine the information about the current monitoring staff's competency, proficiency, and performance levels in specific areas, and assess the gaps in competencies and performance levels that drive the design of the performance enhancement infrastructure.
  • the task includes assessing the competency of the current monitoring staff based on the competency model previously developed.
  • This task is to assess the performance support and training requirements necessary to close the competency and performance gaps in the workforce.
  • the task includes using the employee assessment to determine the type of performance enhancement required to close the gaps and reach the desired competency levels.
  • This task includes defining the number and structure of performance support and learning products.
  • the designer determines the delivery approaches for training and performance support, designs the learning and performance support products, and defines the support systems for delivering training and performance support.
  • Typical training and performance support design issues will revolve around the software tools to be used and the associated procedures for analysis, notification triggering, escalation and resolution.
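The analysis, notification triggering, and escalation procedures mentioned above can be sketched as a simple severity-driven rule table. This is an illustrative sketch only; the severity thresholds, rule entries, and action names are hypothetical and not part of the described method:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str    # e.g. "disk" or "network" (illustrative)
    severity: int  # 1 (informational) through 5 (critical)

# Hypothetical escalation table, checked from most to least severe;
# the first matching threshold determines the notification action.
ESCALATION_RULES = [
    (5, "page on-call engineer"),
    (3, "email operations team"),
    (1, "log only"),
]

def escalate(event: Event) -> str:
    """Return the notification action triggered by an incoming event."""
    for min_severity, action in ESCALATION_RULES:
        if event.severity >= min_severity:
            return action
    return "log only"
```

Procedural training would then cover both the rule table itself and the resolution steps each action implies.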
  • the most economical plan for software training will normally be vendor-supplied materials and instruction.
  • the scope of procedural training will be dependent on the requirements and activities set up for the monitoring function in the prior analysis and design tasks.
  • the next step in a design may be to define a learning test approach 2757.
  • the objective is to develop a comprehensive approach for testing the learning products with respect to achieving each product's learning objectives.
  • the testing process will include identification of which learning objectives will be tested and identification of the data capture methods that will be used to test those objectives.
  • One approach is to concentrate on learning objectives which focus on knowledge gain and relate directly to the Monitoring Performance Model and Employee Competency Model 2713.
  • In the final task 2759, the performance enhancement infrastructure is validated.
  • the task includes verifying the performance enhancement infrastructure and the learning test deliverables to determine how well they fit together to support the new monitoring capability.
  • the method simulates the processes and activities performed by the members of the monitoring team in order to identify performance enhancement weaknesses.
  • the method identifies the problems and repeats the appropriate tasks necessary to address the problems.
  • stakeholders and subject matter experts are included in this process.
  • the first functional block 3510 is analyzing technology infrastructure requirements, and is shown in more detail in Figure 5.
  • the task here is to prepare for the selection and design of the technology infrastructure and to establish preliminary plans for technology infrastructure product testing.
  • the project deliverables here will include operations architecture component requirements, a physical model of the operations architecture, and a product test approach and plan. Other functions shown in Figure 5 include tasks of analyzing technology infrastructure requirements 3511, analyzing component requirements 3515, and planning their tests 3517.
  • Task 3511 Prepare Technology Infrastructure Performance Model The first task block is to prepare a technology infrastructure performance model 3511. The goal here is simple: analyze the functional, technical, and performance requirements for the Monitoring infrastructure. In this task, the project manager or planner seeks to identify key performance parameters for Monitoring, and to establish baseline project estimates, setting measurable targets for the performance indicators. This phase of the project should also include developing functional and physical models, and a performance model as well.
  • the focus here is on the technology, and the goal should be to resolve all open issues as soon as possible, whether in this step or the next (selection and design 3550). If the organization has already purchased a Monitoring package, this is a strong indicator for reuse. If the business capability requirements suggest a change to other software, a strong business case will be needed to support the recommendation.
  • the next task 3513 is to analyze technology infrastructure component requirements. This portion of the project begins to get into hardware and software required, as the project manager analyzes and documents requirements for Monitoring components, and defines additional needs. Tasks to be accomplished include identifying any constraints imposed by the environment and refining functional, physical, and performance requirements developed in the models previously built. In order to insure a "fit" with other aspects of the enterprise, the manager or planner should also assess the interfaces to other system components to avoid redundancy and ensure consistency/ compatibility.
  • the key component of the monitoring infrastructure is the actual monitoring software itself. In cases where automated event monitoring and tracking is required, a packaged solution will most likely be used. There are many different monitoring packages available, some of which can handle cross-platform use. Depending on the scope of the monitoring requirements, one or more packaged tools may be considered.
  • the goal of this task should be to assess the ability of the current monitoring infrastructure to support the new component requirements 3515.
  • this task is simply a system analysis step, in which a project manager or planner will consider the components described above in 3513, and see whether they are consistent with the desired infrastructure.
  • the steps should include identifying current standards for the technology infrastructure and noting any gaps in the analysis or the capability. Details desired may include documenting and analyzing the current Monitoring technology environment. It is important to identify the areas where gaps exist between the current infrastructure and the new requirements.
  • Managers and planners will ideally be aware of constraints and limitations, in order to avoid repeating or re-doing work, or using the wrong infrastructure or components in planning the monitoring function.
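The gap analysis between the current infrastructure and the new component requirements can be illustrated with a short sketch. The component names and version numbers below are invented for illustration only:

```python
def gap_analysis(required, current):
    """Compare required component versions against the current
    infrastructure and report what is missing or needs upgrading.

    Both arguments map a component name to a version number
    (names and versions here are purely illustrative)."""
    gaps = {}
    for component, needed in required.items():
        have = current.get(component)
        if have is None:
            gaps[component] = "missing"
        elif have < needed:
            gaps[component] = f"upgrade {have} -> {needed}"
    return gaps
```

The resulting gap list feeds directly into the selection and design step that follows.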
  • the next step may be to plan a product test for the technology infrastructure 3517.
  • the results of this task will provide the basis on which the product test will be performed as well as the environment in which it is run.
  • the task includes defining the test objectives, scope, environment, test conditions, and expected results.
  • Sub-tasks may include defining a product test approach, designing a product test plan, and generating a deployment plan. It is important to remember that monitoring is not an island, and that all elements of monitoring need to be implemented for this test.
  • the product test is a test of the infrastructure, not just the monitoring technology components. Therefore, the organizational and process elements are within the scope of the test.
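As a rough illustration, a product test plan of the kind described above might organize its objectives, environment, and test conditions so that untested objectives are easy to spot. All field names and values here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ProductTestPlan:
    """Skeleton of a product test plan (illustrative fields only)."""
    objectives: set   # what the test must demonstrate
    environment: str  # where the test runs
    # Each condition records the objective it exercises, the test
    # condition itself, and the expected result.
    conditions: list

    def untested_objectives(self):
        """Objectives not yet exercised by any test condition."""
        return self.objectives - {obj for obj, _, _ in self.conditions}
```

A planner can iterate on the plan until `untested_objectives()` is empty before scheduling the test.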
  • Step 3550 Select and Design Operations Architecture
  • the manager will select and design the components 3551 required to support a high-level Monitoring architecture, including re-use 3552, packaged 3553, and custom components 3555. After selection and design, the architecture is validated 3557. This is the module where the manager designs monitoring and formulates component and assembly test approaches and plans 3558.
  • Task 3551 Identify Operations Architecture Component Options A first task is to identify operations architecture component options 3551. It is important to identify specific component options that will be needed to support the production environment. Tools used in this task may include an
  • the manager will be sure to identify all risks and gaps that exist in the current Monitoring environment, select components that will support the Monitoring architecture, and consider current software resources, packaged software and custom software alternatives during the selection process. If packaged software is part of the solution, the manager should submit RFPs to vendors for software products that meet basic requirements. Some packages can usually be eliminated quickly, based on such things as lack of fit with the operating system(s), server(s), or other operations architecture components already in place.
  • Task 3552 Select Reuse Operations Architecture Components A potentially useful task in costing and designing a system is whether one can select reuse operations architecture components 3552. If existing architecture components can be reused without extensive hardware, or more importantly, software changes, it may be possible to save on purchase and installation expense. This step should finalize the component selection and may be done in tandem with the package and custom tasks. The manager should evaluate component reuse options, determine gaps where (typically) software will not satisfy requirements, and select any components for reuse.
  • Packaged software 3553 may well be the primary alternative for Monitoring component requirements. The manager should make his or her selection based on how well the options fit the requirements, the level of vendor support and cooperation, and cost factors. Organizational biases for or against particular products or vendors may be issues to be addressed. Site visits to other organizations using the software components are desirable to verify the vendors' claims of functionality and to obtain independent opinions about vendor support and cooperation.
  • Task 3555 Design Custom Operations Architecture Components If custom-designed components 3555 are considered, then any custom components may have to be designed, rather than merely purchased. On the other hand, it may be possible to customize a reuse or packaged component.
  • a manager should evaluate the time, cost, and risk associated with custom development. Areas in monitoring where custom design may be needed typically include three situations. The first is the design of custom reports. The second is the scripting or parameterization needed to install the software. The last is the design of interfaces to other components to facilitate automated transfers of data or other communications. These may include, but are not limited to, network software, asset management software, application databases, e-mail software, pagers, and the like. This portion of the task may be reiterated as necessary until the manager is satisfied with the choices made.
  • the next task may be to develop a high-level design for the architecture, or to design and validate an operations architecture 3557.
  • This portion of the design is primarily concerned with combining the reused, packaged and custom components into an integrated design and ensuring that the selected architecture meets the requirements for monitoring of the enterprise.
  • One portion of the task may be to define the standards and procedures for component building and testing. The manager may even consider prototyping if there are any complex interfaces to other components of the operations architecture. The end result of this task is to finish with a design for monitoring, complete with standards and procedures.
  • Task 3558 Develop Operations Architecture Component and Assembly Test Approach and Plan
  • a component and assembly test approach and plan 3558 is needed.
  • the outputs may include separate plans for a test approach and plan for components, assemblies, and acceptance procedures.
  • the plans should define the objectives, scope, metrics, regression test approach, and risks associated with each test. Testing may include component testing for the components selected above, whether new or reused. These tests are tests of the monitoring software components only, not the process and organization elements.
  • the manager validates the chosen technology infrastructure 3590, as shown in Figure 9.
  • An analysis is undertaken of the monitoring design 3591 , the technology infrastructure is validated 3593, the infrastructure design is validated 3595, and the plans for deploying the system and its test approach are reviewed and revised as necessary 3597.
  • the manager will verify that the Monitoring design is integrated, compatible, and consistent with the other components of the Technology Infrastructure Design, and meets the Business Performance Model and Business Capability Requirements.
  • Task 3591 Review and Refine Technology Infrastructure Design
  • a first sub-task may be to review and refine the technology infrastructure design 3591. This task is undertaken to ensure that the Monitoring infrastructure design is compatible with other elements of the technology infrastructure. The manager may want to ensure that the monitoring function is integrated and consistent with the other components of the technology infrastructure. It may also be prudent to develop an issue list, or "punch list", for design items that conflict with the infrastructure or items that don't meet performance goals or requirements. This "punch list" may be subsequently used to refine the Monitoring infrastructure if needed.
  • the next step in the design process may be to establish a technology infrastructure validation environment 3593.
  • the manager designs, builds, and implements the validation environment for the technology infrastructure, and may deliver a validation schedule.
  • Specific tasks may include establishing the environment, that is, the timing, and selecting and training participants. It may be valuable in the validation task to include designers and architects of OM components that will interface with monitoring in the evaluation. Task 3595 Validate Technology Infrastructure Design Having established the environment, the next task is to validate the technology infrastructure design 3595.
  • the manager at this point will desirably identify gaps between the design and the technology infrastructure requirements defined earlier. Projects will proceed smoothly if the manager records issues as they arise during this phase for corrective action. The manager should also, during this phase, identify and resolve any remaining gaps between the design and the expected or required service.
  • Part of the process is to iterate through the validation until all critical issues have been resolved and to develop action plans for less critical issues.
  • Where Monitoring is being installed as part of a larger business capability, this phase may serve as a checkpoint to verify that the most current requirements from the business capability release are being considered. Monitoring may be only one component of the infrastructure being tested at this point. Monitoring will typically be deployed in a single release. A manager may want to confirm that this is still appropriate by validating the monitoring interfaces to other elements of the technology infrastructure.
  • the final task sub-block in the task of validating the technology infrastructure is to analyze the impact of the system and to revise plans 3597 as necessary. Tasks to be accomplished during this phase include analyzing the extent and scope of the work required for modifications and enhancements, analyzing the impact of validation outcomes on costs and benefits, and refining the plans for deployment testing. The result of this task should be a deployment plan, a test approach, a test plan and an infrastructure design.
  • the point of this task is to update the appropriate technology infrastructure delivery plans based on the outcome of the validation process. Since the point of the information technology group is to service an enterprise, monitoring itself may only be part of the validation scope. Confirm also that a single release is appropriate. After designing the event/fault monitoring function and obtaining authorization for build and test 112, the project may proceed along three timelines in the build and test portion 106 of Figure 2. One time-line continues in the technical vein, that is, acquiring the technology infrastructure 5510 and building and testing the selected operations architecture 5550. At the same time, other groups or personnel may develop learning products 6260 and develop policies, procedures and performance support 6220 for the new system. With these tasks completed, the project manager will proceed to prepare and execute a test of the new system, that is, a technology infrastructure product test 5590. All that then remains is to deploy the new system 7170.
  • Acquiring the technology infrastructure 5510 (Figure 10) is the first step in build and test 106.
  • Tasks forming a part of this block include planning and executing the acquisition of components 5511, determining which suppliers will supply the components and services 5513, and how they will be supplied.
  • This task package is primarily required if new packaged software is to be procured and installed as part of the project.
  • the economic impact or implications are evaluated 5515, and the organization prepares and executes acceptance tests 5517 for the new components.
  • the first task may be to initiate acquisition of the technology infrastructure components, primarily packaged software 5511.
  • a "normal" procurement plan will suffice, so long as it includes RFP/RFQ documentation, defined vendor selection criteria, selecting from among the offering vendors, and so on.
  • the process is smoothed if component capability and performance requirements are clearly defined in the documentation provided to vendors. Task 5513 Select and Appoint Vendors
  • the next task may include selecting and appointing vendors 5513.
  • the task may include evaluation of the several product offerings, negotiating contracts, and arranging for delivery and timing of delivery. It may be desirable if software training is negotiated as part of the contractual agreement. If multiple components and multiple vendors are involved, the project manager may find it advantageous to have delivery and installation of the components occur simultaneously so that the component interfaces can be tested with vendor representatives on site.
  • the next task is to determine the impact and deployment implications of the software and vendor selection 5515 on the project economics and the enterprise served.
  • the manager at this point may wish to compare procurement costs with project estimates, and assess the impact on the business situation. Revisions should be made and any approvals needed should be obtained. The manager should ensure that the economics of the transaction(s) are consistent with plan documentation, or changed as appropriate.
  • Task 5517 Prepare and Execute Acceptance Test of Technology Architecture Components.
  • the next task is to prepare and execute an acceptance test of the new components 5517.
  • steps are taken to ensure that the Monitoring packaged components meet the technology infrastructure requirements. Personnel in this step build the test scripts, the test drivers, the input data, and the output data to complete the Technology
  • the project then moves to a build and test stage 5550, depicted in Figure 11.
  • personnel design and program the Monitoring components. This is also the time to perform component and assembly testing. Major tasks may include detailed design of the operations architecture 5551, assembly test plan 5552, building of the system 5553, component tests 5555, and assembly and test 5557.
  • Task 5551 Perform Operations Architecture Detailed Design
  • Detailed design should include the preparation of program specifications for custom and customized components. This task also desirably includes a design of the packaged software configuration, and detailed design reviews. Consideration should include custom components with interfaces to other OM components and any special reporting requirements for monitoring. Event correlation is one of the more sophisticated mechanisms for event management user interaction. While sophisticated, the correlation rule base can be complicated to code and difficult to maintain. Special attention should be paid to this phase of the design.
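To illustrate why correlation rules carry this maintenance cost, a minimal suppression-style correlation sketch is shown below. The event names and rules are hypothetical and not drawn from any particular monitoring package:

```python
# Hypothetical suppression rules: when a root-cause event is present,
# its known downstream symptoms are hidden from the operator console.
SUPPRESSION_RULES = {
    "router_down": {"server_unreachable", "app_timeout"},
}

def correlate(events):
    """Return only the events an operator should see: root causes are
    kept, and known symptoms of an observed root cause are suppressed."""
    seen = set(events)
    suppressed = set()
    for root, symptoms in SUPPRESSION_RULES.items():
        if root in seen:
            suppressed |= symptoms
    return [e for e in events if e not in suppressed]
```

Even this toy rule set must be revised whenever the network topology changes, which is the maintenance burden the text warns about.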
  • Task 5552 Revise Operations Architecture Component and Assembly Test Approach and Plan
  • If this task shows the need for any revisions, they should be accomplished when personnel revise the operations architecture component and assembly test approach and plan 5552.
  • This task includes updating the monitoring test plans to reflect the components' detailed design, and defining revised considerations or changes to the requirements.
  • the task includes reviewing the test approaches and plans, and revising as needed for new or updated requirements. If other OM components interface with monitoring software, these interfaces should be tested, either in this task or in the product test task.
  • the project may then proceed to building the components 5553.
  • personnel will build (or program) all custom monitoring components and extensions to packaged or reuse components.
  • Some packages may have unique or proprietary languages for customizing or configuring. If so, there may be a need for training.
  • This task includes building all custom monitoring components and extensions to packaged or reuse components.
  • the task includes building the custom components, building the customized extensions to packaged or reuse components, and configuring the packaged components.
  • Task 5555 Prepare and Execute Component Test of Custom Operations Components
  • the next task is to prepare and execute tests of the custom operations components 5555. This testing will ensure that each custom Monitoring component and each customized component meets its requirements.
  • the manager verifies the component test model, sets up the test environment, executes the test, and makes component fixes and retests as required. Tests should confirm component performance as well as their functionality. System performance should not be compromised by the amount of customization. The tests are not limited to this stage, but may proceed in subsequent testing tasks.
  • Task 5557 Prepare and Execute Operations Assembly Test Following component tests, the project engineer or manager then proceeds to prepare and execute an operations assembly test 5557.
  • a full test is performed of all interactions between Monitoring components.
  • Personnel verify the assembly test model, set up a test environment, execute the test, and make fixes and retest as needed, again in an iterative fashion.
  • Shell programs or stub programs may be needed to perform the assembly test. If shell programs are used, it is important to test not only successful completion, but to build in the error conditions which would cause abnormal endings or problems.
  • personnel verify that all interfaces to other components are tested and operate correctly for successful, predictable outcomes and error conditions. This completes the build and test stage.
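A stub program of the kind described above can be scripted to return both successful results and the error conditions that would cause abnormal endings. The following sketch is illustrative only; the class and method names are invented:

```python
class StubComponent:
    """Stand-in for a component that is not yet built, used during
    assembly testing. Scripted responses let a test exercise both the
    success path and the error conditions that would cause abnormal
    endings."""

    def __init__(self, scripted_responses):
        # Responses are consumed in order, one per call.
        self._responses = iter(scripted_responses)

    def call(self, request):
        outcome = next(self._responses)
        if isinstance(outcome, Exception):
            raise outcome  # simulate an abnormal ending
        return outcome
```

An assembly test would script one stub per missing interface, covering both a normal reply and at least one failure per component.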
  • Step 6220 Develop Policies, Procedures and Performance Support: Having completed the technical aspects, the project manager now considers some longer-term portions of the project, the policies, procedures and performance support detailed design 6220, as shown in Figure 12, needed for ongoing operation of the service. The purpose of this task is to produce a finalized, detailed set of new Monitoring policies, procedures, and reference materials. It is also desirable to conduct a usability test and review to verify ease of use with both monitoring personnel and personnel from the supported enterprise. Upon successful completion of this task, the operating personnel will have Monitoring Policies & Procedures and may also have any performance support products that may be necessary or useful. Subtasks include performing the policies, procedures and performance support detailed design 6221, developing business policies and procedures 6223, user procedures 6225, reference materials and job aids 6227, and validating and testing 6229.
  • Task 6221 Perform Policies, Procedures and Performance Support Detailed Design
  • The first subtask is to perform the detailed design of the policies, procedures, and performance support products 6221.
  • This task includes providing the product structure for all the new Monitoring policies, procedures, reference materials, and job aids. It may also be desirable to provide templates for each product, and to create prototype products with reference to the overall project.
  • Task 6223 Develop Business Policies and Procedures It may also be necessary or desirable to develop a set of business policies and procedures 6223 for the operation. This is typically a rule set governing workflows and priorities.
  • Business policies in this context describe the business rules governing workflows.
  • Business procedures describe the sequential sets of tasks to follow based on the policies. Specific tasks within this task include collecting and reviewing content information, drafting policies and procedures, and planning for the production of the materials. Procedures should generally be organized into three main elements of monitoring, that is, event management, fault management, and system performance management. In developing these materials, this three- way organization is most appropriate where different people or groups will have primary responsibility for each element.
  • Task 6225 Develop User Procedures
  • a detailed set of monitoring user procedures is delivered.
  • User procedures provide the details necessary to enable smooth execution of new tasks within a given business procedure.
  • the provider collects and reviews content information, drafts the procedures, verifies consistency with business policies and procedures, and plans for the production of the materials. Outside personnel who interface with the monitoring process will generally do so on a very infrequent basis. They cannot be expected to review a procedure manual each time there is a need to interact.
  • Task 6227 Develop Reference Materials and Job Aids Along with policies and procedures, it may be useful to develop reference materials and job aids for monitoring personnel 6227.
  • the provider drafts any reference materials and job aids that make a task easier or more efficient.
  • the provider should collect and review content information, draft the performance support products, verify consistency of the material with policies and procedures, and then plan for the production of materials.
  • Performance support materials will be very desirable in environments where monitoring is a decentralized function performed by multiple groups across the organization. Such materials will help provide consistency in the handling of problem situations.
  • Task 6229 Validate and Test Policies, Procedures and Performance Support
  • the project manager may now want to test and validate them 6229. This task will confirm that the products meet the requirements of the Monitoring capability and the needs of the personnel who will use them. It is also useful as a follow-up tool to resolve open issues.
  • a desirable step may include development of learning products 6260, as shown in Figure 13.
  • a first task may include defining the needs for learning products and the environment in which they are to be used 6261.
  • Technical training in Monitoring software components may come from the package vendor or a third party training organization. Procedural training for an organization's procedures is often custom built or tailored for the situation.
  • the next tasks are to perform a learning program detailed design 6263 and to make prototypes 6265. Using the prototypes, actual learning products may then be created and produced 6267.
  • the products should be tested 6269. Testing may take place later in the cycle, as depicted in Figure 13, or earlier, using prototypes, to achieve feedback and ensure the effort is on track and useful to the students or trainees.
  • Task 6261 Develop Learning Product Standards and Development Environment
  • the environment for developing the monitoring learning products is developed.
  • the provider selects authoring and development tools, defines standards, and designs templates and procedures for product development.
  • Task 6263 Perform Learning Program Detailed Design
  • the provider specifies how each learning product identified in the learning product design is developed.
  • the task includes defining learning objectives and their context, designing the learning activities, and preparing the test plan. It may be helpful to modularize the products by separating the monitoring activities into separate learning products.
  • the monitoring software is integrated into the learning program following the completion of software technical training.
  • prototypes are completed and ease-of-use sessions on classroom-based learning components (i.e., activities, support system, instructor guide) are conducted.
  • the task includes creating the prototype components, and conducting and evaluating the prototype sessions.
  • Task 6267 Create Learning Products
  • the learning materials proposed and prototyped during the design activities are developed.
  • the provider develops activities, content, and evaluation and support materials required, develops a maintenance plan, trains instructors/facilitators, and arranges for production.
  • This task includes testing each product with the intended audience to ensure that the product meets the stated learning objectives, that the instructors are effective, and that the learning product meets the overall learning objectives for monitoring.
  • the task includes confirming the Test Plan, executing a learning test, and reviewing and making required modifications. If the target audience is small, this test serves as the formal training session for the group. Multiple sessions may be appropriate if responsibilities are split and all personnel are not responsible for knowing all activities.
  • Step 5590 Prepare and Execute Technology Infrastructure Product Test: At this point, much of the project work has been completed, and the product is ready for testing in a realistic environment 5590 to insure it is ready for deployment. A series of tests is depicted in Figure 14. The test and its design or model are first prepared 5591 , with expected results. The test is then performed 5593, by executing the tests prepared earlier. The tests should simulate actual working conditions, including any related manuals, policies and procedures produced earlier. An objective of the test should be to notice any deficiencies and make changes as required. Following these tests, a deployment test should be executed 5595, to ensure that the monitoring infrastructure can be gainfully deployed within the enterprise or organization. If this test is successful, the last stage of testing may then be executed, the technology infrastructure configuration test 5597. This test will ensure that the performance of the Technology Infrastructure, including
  • This task is to create the monitoring infrastructure test model.
  • the provider creates the test data and expected results, and creates the testing scripts for production, deployment, and configuration tests.
  • the provider also conducts the monitoring training not yet completed, and reviews and approves the test model. If a complete business capability is being deployed, this is a comprehensive test with monitoring being one piece.
  • the product test should occur in a production-ready environment and should include the hardware and software to be used in production. If monitoring is being implemented independently, then all or a portion of the production environment can be used as the "test" application.
  • Task 5593 Execute Technology Infrastructure Product Test This task is to verify that the technology infrastructure successfully supports the requirements outlined in the business capability design stage.
  • the provider executes the test scripts, verifies the results, and makes changes as required. It is helpful if the actual monitoring working conditions are used or simulated, including related manuals and procedures.
  • Task 5595 Execute Technology Infrastructure Deployment Test
  • the provider ensures that the new monitoring infrastructure is correctly deployed within the organization.
  • the provider executes the test scripts, verifies the results, and makes changes as required.
  • Deployment testing and configuration testing are usually minimal for monitoring, since it is a "behind-the-scenes" application, with limited visibility to the rest of the enterprise supported by the information technology organization.
  • Task 5597 Execute Technology Infrastructure Configuration Test This task is to ensure that the performance of the technology infrastructure, including monitoring, is consistent with the technology infrastructure performance model after the infrastructure has been deployed.
  • the provider executes the test scripts, verifies the results and makes changes as required, and updates the risk assessment.
  • the monitoring infrastructure may be deployed online 7170, Figure 15.
  • the tasks remaining include configuring the technology infrastructure 7171 to prepare for any new business capability components.
  • the technology infrastructure may then be installed 7173.
  • all documentation, performance support tools and training must be completed and in place prior to the deployment.
  • a final task may be to verify the technology infrastructure
  • the deployment unit's technology infrastructure is customized to prepare for the new business capability components.
  • the task includes reviewing the customization requirements, performing the customization, and verifying the infrastructure configuration.
  • Customizing the infrastructure is normally completed in task package 5550, Build and Test Operations Architecture. This task will generally be required if the Monitoring capability is being deployed at more than one site (i.e., individual desktops or multiple servers). In these cases, variances in the existing configurations will determine any customization required.
  • the technology infrastructure for monitoring is installed.
  • the task includes preparing installation environment, installing monitoring infrastructure, and verifying the installation.
  • the documentation, performance support and training tools are completed and put in place prior to the deployment.
  • the new technology infrastructure environment is verified and the issues raised as a result of the testing are addressed.
  • the task includes performing the infrastructure verification, making changes as required, and notifying stakeholders.
  • a follow-up audit is recommended after some period of production operations to confirm the validity and accuracy of service reports. This task should require minimal effort if monitoring is being installed independently.
  • the present invention also includes a method and apparatus for providing an estimate for building a monitoring function in an information technology system.
  • the method and apparatus generate a preliminary work estimate (time by task) and financial estimate (dollars by classification) based on input of a set of estimating factors that identify the scope and difficulty of key aspects to the system.
  • Fig. 16 is a flow chart of one embodiment of a method for providing an estimate of the time and cost to build a monitoring function in an information technology system.
  • a provider of monitoring functions such as an IT consultant, for example, Andersen Consulting, obtains estimating factors from the client 202. This is a combined effort with the provider adding expertise and knowledge to help in determining the quantity and difficulty of each factor.
  • Estimating factors represent key business drivers for a given operations management ("OM") function. Table 1 lists and defines the factors to be considered along with examples of a quantity and difficulty rating for each factor.
  • the computer program is a spreadsheet, such as EXCEL, by Microsoft Corp. of Redmond, Washington, USA.
  • the consultant and the client will continue determining the number and difficulty rating for each of the remaining estimating factors 206.
  • this information is transferred to an assumption sheet 208, and the assumptions for each factor are defined.
  • the assumption sheet 208 allows the consultant to enter in comments relating to each estimating factor, and to document the underlying reasoning for a specific estimating factor.
  • an estimating worksheet is generated and reviewed 210 by the consultant, client, or both.
  • An example of a worksheet is shown in FIGS. 17a and 17b.
  • the default estimates of the time required for each task will populate the worksheet, with time estimates based on the quantity and difficulty rating previously assigned to the estimating factors that correspond to each task.
  • the amount of time per task is based on a predetermined time per unit required for the estimating factor multiplied by a factor corresponding to the level of difficulty.
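The calculation described above can be sketched in code. This is an illustrative sketch only: the days-per-unit figure and the difficulty multipliers below are hypothetical assumptions, not values taken from the estimator.

```python
# Hypothetical difficulty multipliers; the estimator's actual values
# are not specified in this description.
DIFFICULTY_MULTIPLIER = {"low": 0.75, "medium": 1.0, "high": 1.5}

def task_time_days(units, days_per_unit, difficulty):
    """Default task estimate: units x predetermined time per unit x
    a factor corresponding to the level of difficulty."""
    return units * days_per_unit * DIFFICULTY_MULTIPLIER[difficulty]

# e.g., 8 units of an estimating factor at 0.5 days per unit, rated "high"
print(task_time_days(8, 0.5, "high"))  # 6.0
```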
  • Each task listed on the worksheet is described above in connection with details of the method for providing the monitoring function.
  • the same numbers in the description of the method above correspond to the same steps, tasks, and task packages of activities shown on the worksheet of FIGS. 17a and 17b.
  • the worksheet is reviewed 210 by the provider and the client for accuracy.
  • Adjustments can be made to task level estimates by either returning to the factors sheet and adjusting the units 212 or by entering an override estimate in the 'Used' column 214 on the worksheet.
  • This override may be used when the estimating factor produces a task estimate that is not appropriate for the task, for example, when a task is not required on a particular project.
  • In Figs. 17a and 17b, these columns are designated as Partner ("Ptnr"), Manager ("Mgr"), Consultant ("Cnslt"), and Analyst ("Anlst"), respectively. These allocations are adjusted to meet project requirements and are typically based on experience with delivering various stages of a project. It should be noted that the staffing factors should add up to 1.
  • the workplan contains the total time required in days per stage and per task required to complete the project. Tasks may be aggregated into a "task package" of subtasks or activities for convenience.
  • a worksheet as shown in FIGS. 17a and 17b, may be used, also for convenience. This worksheet may be used to adjust tasks or times as desired, from the experience of the provider, the customer, or both.
  • the total estimated payroll cost for the project will then be computed and displayed, generating final estimates.
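This roll-up can be sketched as follows. The classification names mirror the worksheet columns, but the daily rates, staffing fractions, and day counts are hypothetical assumptions used only to illustrate the computation.

```python
# Assumed daily rates per staff classification (illustrative only).
RATES = {"Ptnr": 3000, "Mgr": 2000, "Cnslt": 1200, "Anlst": 800}

def task_cost(days, staffing):
    """Cost of one task: days split across classifications by staffing
    fractions (which should add up to 1), times each daily rate."""
    assert abs(sum(staffing.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(days * frac * RATES[role] for role, frac in staffing.items())

# Two hypothetical tasks: (estimated days, staffing fractions)
tasks = [
    (6.0, {"Ptnr": 0.05, "Mgr": 0.15, "Cnslt": 0.50, "Anlst": 0.30}),
    (3.0, {"Ptnr": 0.00, "Mgr": 0.10, "Cnslt": 0.40, "Anlst": 0.50}),
]
total_payroll = sum(task_cost(d, s) for d, s in tasks)
print(round(total_payroll, 2))  # 10980.0
```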
  • a determination of out-of-pocket expenses 222 may be applied to the final estimates to determine a final project cost 224.
  • the provider will then review the final estimates with an internal functional expert 226.
  • project management costs for managing the provider's work are included in the estimator. These are task dependent and usually run between 10 and 15% of the tasks being managed, depending on the level of difficulty. These management allocations may appear on the worksheet and work plan.
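As a minimal sketch of that allocation, assuming overhead rates chosen within the stated 10-15% range and keyed to difficulty (the specific rates are assumptions):

```python
# Assumed management overhead rates within the stated 10-15% range.
MGMT_RATE = {"low": 0.10, "medium": 0.125, "high": 0.15}

def management_days(managed_task_days, difficulty):
    """Project-management time as a percentage of the tasks being managed."""
    return managed_task_days * MGMT_RATE[difficulty]

# e.g., 40 days of managed tasks on a high-difficulty project
print(round(management_days(40, "high"), 2))  # 6.0
```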
  • the time allocations for planning and managing a project are typically broken down for each of a plurality of task packages where the task packages are planning project execution 920, organizing project resources 940, controlling project work 960, and completing project 960, as shown in FIG. 17a.

Abstract

A method for monitoring in an information technology organization provides the tasks involved in building a monitoring function (100). The tasks include planning (110), analysis (3510), design (2110, 2410, 2710, 2750, 3550), build (112), test (5550), and deployment (7170) of monitoring. Each task includes process, organization, and technology infrastructure elements.

Description

METHOD AND ESTIMATOR FOR EVENT/FAULT MONITORING RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application 60/158,259, filed October 6, 1999. This application is related to Application Serial No. entitled "Organization of Information Technology
Functions," by Dove et al. (Atty. Docket No. 10022/45), filed herewith. These applications are incorporated herein by reference in their entirety.
BACKGROUND OF THE INVENTION Expenditures on information technology have risen over the past twenty years to the point where they are almost always a significant amount in the capital budget of any enterprise. These enterprises include business enterprises, and may also include not-for-profit businesses, charitable institutions, religious institutions, educational establishments, governmental agencies, non-governmental organizations, and other organizations of many types.
The expenditures are not only for computers and their software, but also for many other purposes associated with computers and information technology. These further expenses often include the cost of networking a plurality of computers. Once networks are established, servers of several varieties may be used, as well as other computers and peripherals. As the Internet and e-commerce have come of age, firewalls, intranets, and web servers are constructed and must be administered. Computer security concerns arise as well. The biggest challenges in Information Technology ("IT") development today are actually not in the technologies, but in the management of those technologies in a complex business environment. From idea conception to capability delivery, and to operation, all IT activities, including strategy development, planning, administration, coordination of project requests, change administration, and managing demand for discretionary and non- discretionary activities and operations, must be collectively managed. A shared understanding and representation of IT management is needed because today's technological and business environment demands it. The new technological management orientation should include ways for planning, assessing, and deploying technology within and across enterprises. Businesses need to balance technological capability with enterprise capability in order to remain modern organizations with a chance of survival.
There is a need, therefore, to construct a complete yet simple IT framework that would quickly convey the entire scope of IT capability in a functional decomposition. Such a framework needs to be a single framework describing an entire IT capability, whether as functions, systems or tasks. The IT framework should be a framework of functions, a representation of a complete checklist of all relevant activities performed in an IT enterprise. A single IT Framework should represent all functions operative in an IT enterprise.
Within that framework, there is also a need for an event and fault management or monitoring function that handles incident information sent from system components such as hardware, application software and system software, and communications systems. Incidents could be interpreted as either faults (failures) or events (warnings). An event/fault monitoring function should coordinate with other function categories to provide input and should aim to continuously improve current IT services and offerings. Such a function is also known as event and fault management.
BRIEF SUMMARY OF THE INVENTION
The present invention is defined by the following claims, and nothing in this section should be taken as a limitation on those claims. By way of introduction, one embodiment of the invention is a method for providing for an event and fault monitoring function that receives, logs, classifies, analyzes and presents incidents based upon pre-established filters or thresholds. The method includes planning, designing, building, testing and deploying an event and fault monitoring function in an IT organization. The method preferably includes designing business processes, skills, and user interaction for the design phase. The method further includes designing an organization infrastructure and a performance enhancement infrastructure for monitoring. The method also includes designing technology infrastructure and operations architecture for the design phase of monitoring. In the building phase of the method, the technology infrastructure and the operations architecture are built. Also, business policies, procedures, performance support, and learning products for monitoring are built. The technology infrastructure and the operations architecture are tested. In the deploying stage, the technology infrastructure for the IT organization is deployed.
Another aspect of the present invention is a method for providing an estimate for building an event/fault monitoring function in an information technology organization. This aspect of the present invention allows an IT consultant to give on-site estimates to a client within minutes. The estimator produces a detailed breakdown of cost and time to complete a project by displaying the costs and time corresponding to each stage of a project along with each task. Another aspect of the present invention is a computer system for allocating time and computing cost for building an event/fault monitoring function in an information technology organization.
These and other features and advantages of the invention will become apparent upon review of the following detailed description of the presently preferred embodiments of the invention, taken in conjunction with the appended drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the accompanying figures. In the figures, like reference numbers indicate identical or functionally similar elements.
Figure 1a is a representation of a network/systems management function including monitoring functions.
Figure 1b is a representation of an event/fault monitoring function including sub-elements of the function. Figure 2 shows a representation of a method for providing a monitoring function according to the presently preferred embodiment of the invention.
Figure 3 shows a representation of a task for defining a business performance model for monitoring. Figure 4 shows a representation of a task for designing business processes, skills, and user interaction for monitoring.
Figure 5 shows a representation of a task for designing technology infrastructure requirements for monitoring. Figure 6 shows a representation of a task for designing an organization infrastructure for monitoring.
Figure 7 shows a representation of a task for designing a performance enhancement infrastructure for monitoring.
Figure 8 shows a representation of a task for designing operations architecture for monitoring.
Figure 9 shows a representation of a task for validating a technology infrastructure for monitoring.
Figure 10 shows a representation of a task for acquiring a technology infrastructure for monitoring. Figure 11 shows a representation of a task for building and testing operations architecture for monitoring.
Figure 12 shows a representation of a task for developing business policies, procedures, and performance support architecture for monitoring.
Figure 13 shows a representation of a task for developing learning products for monitoring.
Figure 14 shows a representation of a task for testing a technology infrastructure product for monitoring.
Figure 15 shows a representation of a task for deploying a technology infrastructure for monitoring. Figure 16 shows a flow chart for obtaining an estimate of cost and time allocation for a project.
Figures 17a and 17b show one embodiment of an estimating worksheet for an event/fault monitoring estimating guide.
DETAILED DESCRIPTION OF THE INVENTION For the purposes of this invention, an information technology ("IT") enterprise may be considered to be a business organization, charitable organization, government organization, etc., that uses an information technology system with or to support its activities. An IT organization is the group, associated systems and processes within the enterprise that are responsible for the management and delivery of information technology services to users in the enterprise. In a modern IT enterprise, multiple functions may be organized and categorized to provide comprehensive service to the user. Thereby, an information technology framework for understanding the interrelationships of the various functionalities, and for managing the complex IT organization is provided.
The various operations management functionalities within the IT framework include a customer service management function; a service integration function; a service delivery function; a capability development function; a change administration function; a strategy, architecture and planning function; a management and administration function; a human performance management function; and a governance and strategic relationships function. Within the service delivery function, monitoring plays an important role. The present invention includes a method for providing a monitoring system or function for an information technology organization. Before describing the method for providing a monitoring function, a brief explanation is in order concerning event/fault monitoring, and its systems, functions and tasks. Event/fault Monitoring is a group of tasks or functions within a network or systems management function. Such a network/systems management function 31 is depicted in Figure 1a, with several functions, including production scheduling 311, output/print management 312, network/systems operations 313, operations architecture management 314, network addressing management 315, storage management 316, backup/restore management 317, archiving 318, and event/fault management 319. Other functions may include system performance management 3110, security management 3111, and disaster recovery maintenance and testing 3112. In one embodiment as depicted in Figure 1b, the scope of Monitoring 319 includes four groups: monitoring 3191, analyzing 3192, classifying 3193, and displaying 3194.
A group of functions useful in information technology may be termed Event/Fault Management. These functions receive, log, classify, analyze, and present incidents based upon pre-established filters or thresholds. Incidents are interpreted as either faults (failures) or events (warnings). Event and fault information is sent from system components such as hardware, application/system software, and communications resources. Systems, groups, or functions within event/fault management may include those for monitoring, analyzing, classifying, and displaying.
Monitoring Requirements Management manages the requirements for new monitors and adjusts existing monitors. The requirements will typically identify resources and components that will be monitored and map the threshold levels into the event and fault categories. Incident classification classifies incidents for promotion to an event or fault, or to be ignored. This group assigns severity levels and assesses impact. Once the data is pulled in, the incident is defined or classified. A severity level, system impact, and notification are then determined.
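The classification step described above can be illustrated with a short sketch. The threshold mechanics follow the description, but the metric, threshold values, and severity names chosen here are assumptions for illustration.

```python
def classify(metric, warn_threshold, fail_threshold):
    """Promote an incident to an event or fault, or ignore it, based on
    pre-established thresholds; also assign a severity level."""
    if metric >= fail_threshold:
        return ("fault", "critical")
    if metric >= warn_threshold:
        return ("event", "warning")
    return ("ignore", None)

# e.g., disk utilization with a warning threshold of 85% and failure at 95%
print(classify(0.90, 0.85, 0.95))  # ('event', 'warning')
print(classify(0.97, 0.85, 0.95))  # ('fault', 'critical')
```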
Analysis and correlation groups for both faults and events then analyze any faults to identify whether the origin is with a specific device or whether an entire segment of the enterprise is affected. A fault is defined as a failure of a device or a critical component of that device. The groups correlate faults or events from multiple devices to assist in problem analysis if applicable. An event is defined as a tripping of a significant threshold or warning, and could be based on performance or indications of a potential failure of a device or critical component of that device.
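Correlation of faults from multiple devices can be sketched as follows. The grouping key ("segment") and record layout are illustrative assumptions, not the invention's actual data model.

```python
from collections import defaultdict

def correlate_by_segment(faults):
    """Group device faults by network segment to aid problem analysis."""
    groups = defaultdict(list)
    for f in faults:
        groups[f["segment"]].append(f["device"])
    # A segment with several faulted devices suggests a segment-level
    # origin rather than a single-device failure.
    return {seg: devs for seg, devs in groups.items() if len(devs) > 1}

faults = [
    {"device": "switch-1", "segment": "floor-2"},
    {"device": "printer-7", "segment": "floor-2"},
    {"device": "server-a", "segment": "dc-1"},
]
print(correlate_by_segment(faults))  # {'floor-2': ['switch-1', 'printer-7']}
```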
Part of the fault/event series of functions may be a traffic analysis group. This group identifies critical nodes that are representative of enterprise performance. Probes at these nodes are then used to gather information on protocols and stations communicating on an enterprise segment. Event/fault trend reporting functions report on event/fault alerts over a time period. This function provides trending information on frequency of events/faults and potential sources of future problems and feedback into the adjustments of thresholds. Finally, under event/fault management, there is desirably a function for display management. This function maintains an effective and ergonomically correct view of the event and fault alerts presented to the operations staff.
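Trend reporting over a time period can be sketched similarly. The alert record layout and the 24-hour reporting window are assumptions for illustration.

```python
from collections import Counter
from datetime import datetime, timedelta

def trend_report(alerts, since):
    """Count event/fault alerts per source occurring after `since`,
    to surface potential future problem areas and feed threshold tuning."""
    return Counter(a["source"] for a in alerts if a["time"] >= since)

now = datetime(2000, 10, 6)
alerts = [
    {"source": "router-1", "time": now - timedelta(hours=2)},
    {"source": "router-1", "time": now - timedelta(hours=30)},  # outside window
    {"source": "server-a", "time": now - timedelta(hours=1)},
]
print(dict(trend_report(alerts, now - timedelta(hours=24))))
# {'router-1': 1, 'server-a': 1}
```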
METHOD FOR PROVIDING EVENT/FAULT MONITORING According to the present invention, the method for providing Operations Management ("OM") event/fault monitoring includes the tasks involved in building a particular OM function. These specific tasks are described in reference to the Operations Management Planning Chart ("OMPC") that is shown on Figure 2. This chart provides a methodology for capability delivery, which includes tasks such as planning, analysis, design, build & test, and deployment. Each OM function includes process, organization, and technology elements that are addressed throughout the description of the corresponding OM function. The method comprises four phases, as described below in connection with Figure 2. The first phase, "plan delivery" 102, or planning, includes the step of defining a business performance model 2110. The second phase, design, 104, has a plurality of steps, including design of business processes, skills and user interactions 2410, design of organizational infrastructure 2710, design of performance enhancement infrastructure 2750, analyze technology infrastructure requirements 3510, select and design operations architecture 3550, and validate technology infrastructure 3590. A third phase, build and test 106, has a second plurality of steps: acquire technology infrastructure 5510, build and test operations architecture 5550, develop policies, procedures and performance support 6220, develop learning products 6260, and prepare and execute technology infrastructure product tests 5590. The fourth phase 108 includes the step of deploying 7170. In the following description, the details of the tasks within each step are discussed.
Monitoring delivery and deployment focuses on improving business capability. One such improvement may be to upgrade the monitoring capability of an information technology system within an enterprise. One of the key steps in defining business and performance requirements is identifying all of the types of support and levels of support that end users and other stakeholders should receive from monitoring. While monitoring personnel may be responsible for performing other OM functions in the organization, this set of task packages is limited to analysis of functions which are nearly always associated with monitoring. They include monitoring, classifying, analyzing and displaying. Step 2110 - Refine Business Performance Model
In step 2110, the business model requirements for monitoring are defined, and the scope of the delivery and deployment effort for any upgraded capability is determined. Figure 3 shows a representation of the tasks for carrying out these functions according to the presently preferred embodiment of the invention. Figure 3 is a more detailed look at the business performance model 2110, which may include the functions of confirming business architecture 2111, analyzing operating constraints 2113, analyzing current business capabilities 2115, identifying best operating practices 2117, refining business capability requirements 2118, and updating the business performance model 2119.
Task 2111: Confirm Business Architecture Task 2111 includes assessing the current business architecture, confirming the goals and objectives, and refining the components of the business architecture. Preferably, the task includes reviewing the planning stage documentation, confirming or refining the overall monitoring architecture, and ensuring management commitment to the project. The amount of analysis performed in this task depends on the work previously performed in the planning phase of the project. Process, technology, organization, and performance issues are included in the analysis. As part of a business integration project, monitoring delivery and deployment focuses on enhancing a business capability, whereas an enterprise-wide monitoring deployment requires analysis of multiple applications rather than a single business capability. Monitoring covers the functions of event management, fault management, and system performance management. Monitoring terminology can mean different things in different organizations. Monitoring management teams should define and clarify terms carefully with their sponsor or management to ensure all parties understand the scope of the effort. Terminology to be defined includes, but is not limited to, organizational groups responsible for the monitoring process, and severity levels, e.g., "fatal", "critical", "minor" and "warning". Task 2113: Analyze Operating Constraints
Task 2113 includes identifying the operating constraints and limitations, and assessing their potential impact on the operations environment. Preferably, the task includes assessing the organization's strategy and culture and its potential impact on the project, and assesses organization, technology, process, equipment, and facilities for the constraints. The task includes assessing the organization's ability to adapt to changes as part of the constraints analysis. It is desirable to identify scheduled maintenance times for servers, network devices, and other infrastructure equipment.
Task 2115: Analyze Current Monitoring Capability
Analyzing the current monitoring capability 2115 is the next task in the process. One way to accomplish this is to document current activities and procedures to establish a performance baseline, if there is an existing system. An estimator may also assess strengths and weaknesses of any existing Monitoring capability in order to better plan and design for the future.
Important considerations include understanding the Monitoring processes before looking into how they are currently measured. Another important consideration is to perform this task to the level of detail needed to understand the degree of change required to move to a new monitoring capability.
Task 2117: Identify Monitoring Best Practices The task includes identifying the best operating practices 2117 for the operation and identifying the Monitoring areas that could benefit from application of best practices. In one embodiment, the user will research and identify the optimum best practices to meet the environment and objectives.
Use of best practices is most valuable in organizations where no monitoring policies and procedures exist. In a preferred embodiment, there will be a focus on meeting monitoring requirements in the most effective way possible.
Task 2118: Refine Monitoring Requirements The next task in the planning 102 may be to refine monitoring capability requirements 2118. Capability requirements define what the Monitoring infrastructure will do; capability performance requirements define how well it will operate. Monitoring requirements should be defined and requirements should be allocated across changes to human performance, business processes, and technology. The requirements should be defined with reference both to the performance and to monitoring interfaces with other OM components. The requirements should be developed by integrating operating constraints, current capabilities, and best practices information.
The requirements should indicate how operating constraints will be overcome and how current capabilities will be utilized. Any existing service level agreements should be reviewed for performance requirements related to monitoring functions. Typically, these agreements will relate to timing of responses to problems, level of overall performance, and performance levels of specific elements of the infrastructure. Naturally, these decisions also affect the cost of the design and deployment of the Monitoring function.
Task 2119: Update Business Performance Model The last block in Figure 3 calls for updating the business performance model 2119. To accomplish this, it is necessary to understand the performance and operational objectives previously defined. In a preferred embodiment, the provider will align the metrics and target service levels with performance provisions for batch Monitoring and processing as outlined in service level agreements. Considerations may include a business performance model to define the overall design requirements for the Monitoring capability. It is advantageous to keep the metrics as simple and straightforward as possible and to consider the Monitoring infrastructure's suppliers and customers in defining the metrics. After refining the business performance model 2110, and completing the planning step 102 by delivering the plan 110 to the client, the step of designing 104 may proceed simultaneously along two or more tracks. One track focuses on the business aspects of the task, while the other focuses on technology. Thus, referring to Figure 2, function block 2410 calls for designing business processes, skills and user interactions, while block 3510 calls for analyzing the technology and infrastructure requirements. Step 2410 - Design Business Processes, Skills & User Interaction:
In step 2410, the business processes, skills, and user interaction are taken into account, as shown in Figure 4. The provider designs the new monitoring processes, and develops the framework and formats for monitoring. Figure 4 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention. One task 2411 is to design workflows, or to create the workflows diagrams and define the workloads for all monitoring activities. Other tasks include defining the physical environment interactions 2412, identifying skills requirements for performing monitoring tasks 2413, defining application interactions, that is, the human-computer interactions necessary to fulfill key monitoring activities 2415. Still other tasks include identifying performance support requirements 2416, developing a capability interaction model 2417, and verifying and validating business processes, skills and user interaction 2419.
Task 2411: Design Workflows for Processes, Activities and Tasks As part of the design or capability analysis stage 104, relationships are defined between core and supporting processes, activities, and tasks, and the metrics associated with the processes and activities are also defined. Considerations may include whether or not packaged software has already been selected for monitoring. If so, the business processes implied by that package or selection should be used. These should be the starting point for developing the process elements. Reporting requirements should be analyzed and documented in as much detail as possible.
Task 2412: Define Physical Environment Interaction
A next step is to define the physical environment interaction 2412. The objective of this function is to understand the implications of the monitoring processes on the physical environment; mainly this involves location, layout and equipment requirements. As part of the analysis, the provider will want to take into account a physical environment interaction model. Costing elements may include identifying the workflow/physical environment interfaces, designing the facilities, layout and equipment required for monitoring, and identifying distributed monitoring physical requirements, if any, as well as central needs. Considerations may include the interaction model that defines the layout and co-location implications of the monitoring workflows and the physical environment. Monitoring processes and tools should be designed to interface with other processes, such as asset management, service control, and the like.
Task 2413: Identify Skill Requirements
The next task for a comprehensive look at the design is to identify skill requirements 2413. The goal is to identify the skill and behavior requirements for performing monitoring tasks. The deliverables from this task may include both a role interaction model and skills definition. A planner should identify critical tasks from the workflow designs, define the skills needed for the critical tasks and identify supporting skills needed and appropriate behavioral characteristics.
The scope and breadth of services defined in prior tasks will also assist in determining the skill requirements for Monitoring. If the scope of the project covers basic Monitoring functions, skill requirements would normally include technical skills related to platform and support software, such as operating systems, network management software, print administration, groupware, voice software, etc. Customer service skills are also valuable; these include all the skills (e.g., listening, interpersonal communication) necessary when communicating and working with customers.
Task 2415: Define Application Interaction
The next task is to define application interactions 2415, or to identify the human-computer interactions necessary to fulfill key monitoring activities. This will most often involve identifying required monitoring features not supported by the monitoring software and defining the human-computer interactions needed to meet the requirements. It should be recognized that packaged software has a pre-defined application interaction; this task may only be performed for activities that are not supported by packaged software. All monitoring personnel will normally require familiarity with the tracking software in order to log incidents, track them while they are open, close them once they are complete, forward them to specialists as needed, or review and analyze incidents to identify underlying system problems.
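The incident handling just described — log, track while open, close, forward to specialists, review — can be viewed as a small state machine. The sketch below is an illustrative reading of that lifecycle; the state names and transitions are assumptions, not a prescribed tracking-software design.

```python
# Hypothetical incident lifecycle: allowed transitions between states.
VALID_TRANSITIONS = {
    "logged":    {"open"},
    "open":      {"forwarded", "closed"},
    "forwarded": {"open", "closed"},   # a specialist may hand the incident back
    "closed":    set(),                # closed incidents are only reviewed
}

class Incident:
    def __init__(self, description):
        self.description = description
        self.state = "logged"
        self.history = ["logged"]      # full trail, kept for later review/analysis

    def transition(self, new_state):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

inc = Incident("print queue stalled")
inc.transition("open")
inc.transition("forwarded")            # escalate to a specialist
inc.transition("closed")
print(inc.history)
```

The retained history is what would later support the review-and-analysis step that identifies underlying system problems.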
Task 2416: Identify Performance Support Requirements
Identifying performance support requirements 2416 is the next task block for the planner. The planner will want to analyze the Monitoring processes and determine how to support human performance within these processes. The task is to analyze the critical performance factors for each Monitoring task and to select a mixture of training and support aids to maximize workforce performance in completing each task. These can include Monitoring policies and detailed procedures, on-line help screens of various kinds, checklists, etc. If the design process is a change from a present system, it is important to understand what has changed from the current processes, and use this to determine the support requirements.
The delivery mechanisms should be carefully considered. For example, if a group of support items undergoes frequent changes, the support aid containing these items should probably be accessible on-line so it can be more easily maintained. Depending on the applications and other functions supported, Monitoring performance support could entail a wide range of items. Policy and procedure manuals and operations manuals on each application are normal requirements. A key feature of applications documentation would be a comprehensive list of error messages that can be generated and the common problems that cause each message to appear. While this invention is not meant for ongoing, operational costing, this analysis may be useful in designing a monitoring function that avoids mistakes and takes advantage of opportunities for improvement over an existing system.
Task 2417: Develop Capability Interaction Model
The next task is to develop a capability interaction model 2417. In this task, the provider will identify the relationships between the tasks in the workflow diagrams, the physical location, skills required, human-computer interactions and performance support needs. In one embodiment, a provider will develop a capability interaction model by understanding the interactions within each process for physical environment, skills, application and performance support, and unifying these models. The goal is an integrated interaction model that will integrate workflows, the physical environment model, role and skill definitions, the application interactions, and support requirements to develop the capability interaction models. The tasks should be mapped into a Swimlane diagram format to depict the interdependencies between the different elements. The workflow diagram may be visually divided into "swimlanes," each separated from neighboring lanes by vertical solid lines on both sides. Each lane represents responsibility for tasks which are part of the overall workflow, and may eventually be implemented by one or more support organizations. Each task is assigned to one swimlane. Such a model should illustrate how the process is performed, what roles fulfill the activities involved, and how the roles will be supported to maintain the monitoring capability.
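The swimlane constraint above — each task assigned to exactly one lane — lends itself to a mechanical consistency check. In the illustrative sketch below, the lane names and tasks are hypothetical.

```python
# Hypothetical swimlane assignment: lane -> tasks owned by that lane.
swimlanes = {
    "Operations":   ["Detect event", "Log incident"],
    "Service desk": ["Notify user", "Close incident"],
    "Specialists":  ["Diagnose fault"],
}

def validate_swimlanes(lanes, all_tasks):
    """Verify each task in the workflow appears in exactly one lane."""
    seen = {}
    for lane, tasks in lanes.items():
        for task in tasks:
            seen.setdefault(task, []).append(lane)
    missing = [t for t in all_tasks if t not in seen]
    duplicated = {t: l for t, l in seen.items() if len(l) > 1}
    return missing, duplicated

all_tasks = ["Detect event", "Log incident", "Notify user",
             "Close incident", "Diagnose fault"]
missing, duplicated = validate_swimlanes(swimlanes, all_tasks)
print(missing, duplicated)   # both empty: the model is internally consistent
```

A non-empty result from either check would flag the kind of inconsistency the later verify-and-validate task 2419 is meant to catch.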
Task 2419: Verify and Validate Business Processes, Skills and User Interaction
The final task of step 2410 is to verify and validate business processes, skills & user interaction 2419. A provider will want to verify and validate that the process designs and the Capability Interaction Models meet the monitoring requirements and are internally consistent. The end result is a business performance model that will help the design team and guide the project manager in both the technical and business aspects of the project. In one embodiment, a provider will use stakeholders in the monitoring domain and outside experts as well as the design teams to do the validation. The provider will then verify and validate workflow diagrams in order to confirm that each process, activity, and task and its associated workflow fit together, and that the workflows meet the business capability requirements.
Step 2710 - Design Organization Infrastructure:
In step 2710, the method includes defining the structures for managing human performance, and defining what is expected of people who participate in the monitoring function, the required competencies, and how performance is managed. Figure 6 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention. After defining business processes, skills and user interactions, a provider may proceed to design an organizational infrastructure 2710. The design may define the structures for managing human performance and also define what is expected of people who participate in the monitoring function 2711, the required competencies 2713, and how performance is managed 2715. Other tasks in this area will include organizational infrastructure mobilization 2717, or hiring, and lastly, verifying and validating 2719 that the organization is meeting monitoring needs.
Task 2711: Design Roles, Jobs and Teams
The task will include the design for the roles, jobs and teams. As an example, the design may wrestle with the issue of whether the monitoring function will be centralized, distributed, or decentralized. Not only will this affect the capital costs, but it may also help to determine the reporting relationships and to identify the performance measurement factors. Monitoring roles and jobs will typically be based on the breadth of functions assigned. The monitoring organization structure should be designed around all these business requirements.
In this portion of the process, organizational interfaces to other support organizations should be defined, e.g., to help desk, service level reporting, service control, capacity planning, and so on. Performance measuring factors are desirably included here also. Monitoring roles and jobs are based on the breadth of functions assigned.
Task 2713: Design Competency Model
After designing the roles and teams, the next task 2713 may be to design a competency model. In this task, the designer can define the skills, knowledge, and behavior that people need to accomplish their roles in the monitoring process. The goal of this task is a Competency Model for Skill/Knowledge/Behavior, that is, to determine the characteristics required of the individuals/teams that will fill the roles. Sub-tasks or portions may include defining the individual capabilities necessary for success in these roles. The manager may then organize the capabilities along a proficiency scale and relate them to the jobs and teams. Attitude and personality are factors that will impact the performance of Monitoring personnel nearly as much as technical training and expertise.
The tasks above define the people and teams that will perform the monitoring function.
Task 2715: Design Performance Management Infrastructure
The next task may be to design a performance management infrastructure 2715. The design here will define how individual performance will be measured, developed, and rewarded. There may be implications here on both the design and capital costs. The design here may also determine a performance management approach and appraisal criteria. The goal of the design effort may be to deliver a performance management infrastructure or design, and to develop standards for individuals and teams involved in the monitoring process. If management wishes also to identify a system to monitor the individuals' and teams' ability to perform up to the standards, the infrastructure to accomplish this is desirably included "on the ground floor," that is, when the system is designed and the cost is determined, rather than later.
Task 2717: Determine Organization Infrastructure Mobilization Approach
The next task of determining the organization mobilization approach may be necessary primarily if monitoring is a new function within an organization, or of course, if the organization itself is new. The function must be staffed, or put another way, the organization must determine an infrastructure mobilization approach 2717. This is not normally a factor in capital costs, since personnel tend to be ongoing expenses. However, any peculiarities or changes from a "standard" design should be considered when costing a project or establishing a budget. In any case, the project manager may want to consider at some point how to mobilize the resources required to staff the new Monitoring capability. In staffing, the manager should identify profiles of the ideal candidates for each position, identify the sourcing approaches and timing requirements, and determine the selection and recruiting approaches.
Task 2719: Verify and Validate Organization Infrastructure
Once designed and costed, it may be prudent to verify and validate the organizational infrastructure 2719. The goal of this task is to verify and validate that the monitoring organization meets the needs of the monitoring capability and is internally consistent. A designer will want to confirm the organization with subject matter experts. The end result is that the designer will verify that the organization structure satisfies monitoring capability requirements.
Step 2750 - Design Performance Enhancement Infrastructure:
In this step, a performance enhancement infrastructure is designed.
The designer determines the training needed for new monitoring functions, and determines the on-line help text, procedures, job aids, and other information to be used. Figure 7 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention. Tasks include employee assessment 2751, determining performance enhancement needs 2753, designing performance enhancement products 2755, defining a learning test approach 2757, and verifying and validating the performance enhancement infrastructure 2759.
Task 2751: Assess Employee Competency and Performance
This task is to refine the information about the current monitoring staff's competency, proficiency, and performance levels in specific areas, and to assess the gaps in competencies and performance levels that drive the design of the performance enhancement infrastructure. Preferably the task includes assessing the competency of the current monitoring staff based on the competency model previously developed.
Task 2753: Determine Performance Enhancement Needs
This task is to assess the performance support and training requirements necessary to close the competency and performance gaps in the workforce. Preferably, the task includes using the employee assessment to determine the type of performance enhancement required to close the gaps and reach the desired competency levels.
Plans for performance support tools need to be carefully considered. The team responsible for monitoring system performance may be a dedicated team. With training and on-the-job experience, they may quickly grasp the subtleties of the positions and have little use for performance enhancement tools. If the responsibilities are widespread in scope, then specific types of system faults may occur quite infrequently. In such cases, readily available support tools may be very valuable in recommending the proper course of action. When in doubt, one rule is to err on the side of more performance support rather than less. If monitoring software is stable and properly installed, then maintenance to keep the performance support tools up to date should be minimal.
Task 2755: Design Performance Enhancement Products
This task includes defining the number and structure of performance support and learning products. Preferably, the designer determines the delivery approaches for training and performance support, designs the learning and performance support products, and defines the support systems for delivering training and performance support.
Typical training and performance support design issues will revolve around the software tools to be used and the associated procedures for analysis, notification triggering, escalation and resolution. The most economical plan for software training will normally be vendor-supplied materials and instruction. The scope of procedural training will be dependent on the requirements and activities set up for the monitoring function in the prior analysis and design tasks.
Task 2757: Define Learning Test Approach
This task includes developing a comprehensive approach for testing the learning products with respect to achieving each product's learning objectives. With an idea of the needs and specific products in mind, the next step in a design may be to define a learning test approach 2757. The testing process will include identification of which learning objectives will be tested and identification of the data capture methods that will be used to test those objectives. One approach is to concentrate on learning objectives which focus on knowledge gain and relate directly to the Monitoring Performance Model and Employee Competency Model 2713.
Task 2759: Verify and Validate Performance Enhancement
Infrastructure
In this task, performance enhancement infrastructure is validated. The task includes verifying the performance enhancement infrastructure and the learning test deliverables to determine how well they fit together to support the new monitoring capability. Preferably the method simulates the processes and activities performed by the members of the monitoring team in order to identify performance enhancement weaknesses. The method identifies the problems and repeats the appropriate tasks necessary to address the problems. In a preferred embodiment, stakeholders and subject matter experts are included in this process.
Technical Aspects
While the above sections have dealt with organizational aspects of the invention, it may now be appropriate to consider certain technical aspects. This subject pertains to the method shown in the lower left portion of Figure 2: analyzing technology requirements 3510, selecting and designing the operations architecture 3550, and validating the choices for technology. When this portion is completed, the planning stages of the project will be complete.
Step 3510 - Analyze Technology Infrastructure Requirements:
The first functional block 3510 is analyzing technology infrastructure requirements, and is shown in more detail in Figure 5. The task here is to prepare for the selection and design of the technology infrastructure and to establish preliminary plans for technology infrastructure product testing. The project deliverables here will include operations architecture component requirements, a physical model of the operations architecture, and a product test approach and plan. Other functions shown in Figure 5 include tasks of analyzing technology infrastructure requirements 3511, analyzing component requirements 3515, and planning their tests 3517.
Task 3511: Prepare Technology Infrastructure Performance Model
The first task block is to prepare a technology infrastructure performance model 3511. The goal here is simple: analyze the functional, technical, and performance requirements for the Monitoring infrastructure. In this task, the project manager or planner seeks to identify key performance parameters for Monitoring, and to establish baseline project estimates, setting measurable targets for the performance indicators. This phase of the project should also include developing functional and physical models, and a performance model as well.
The focus here is on the technology, and the goal should be to resolve all open issues as soon as possible, whether in this step or the next (selection and design 3550). If the organization has already purchased a Monitoring package, this is a strong indicator for reuse. If the business capability requirements suggest a change to other software, a strong business case will be needed to support the recommendation.
Task 3513: Analyze Technology Infrastructure Component Requirements
The next task 3513 is to analyze technology infrastructure component requirements. This portion of the project begins to get into the hardware and software required, as the project manager analyzes and documents requirements for Monitoring components, and defines additional needs. Tasks to be accomplished include identifying any constraints imposed by the environment and refining the functional, physical, and performance requirements developed in the models previously built. In order to ensure a "fit" with other aspects of the enterprise, the manager or planner should also assess the interfaces to other system components to avoid redundancy and ensure consistency/compatibility.
The key component among monitoring components is the actual monitoring software itself. In cases where automated event monitoring and tracking is required, a packaged solution will most likely be used. There are many different monitoring packages available, some of which can handle cross-platform use. Depending on the scope of the monitoring requirements, one or more packaged tools may be considered.
Task 3515: Assess Technology Infrastructure Current Environment
Once the components have been selected, the next task should be to assess the ability of the current monitoring infrastructure to support the new component requirements 3515. In one sense, this task is simply a system analysis step, in which a project manager or planner will consider the components described above in 3513 and see whether they are consistent with the desired infrastructure. The steps should include identifying current standards for the technology infrastructure and noting any gaps in the analysis or the capability. Details desired may include documenting and analyzing the current Monitoring technology environment. It is important to identify the areas where gaps exist between the current infrastructure and the new requirements.
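The gap identification described above can be sketched as a comparison of capability inventories. The capability names below are hypothetical; the point is only the mechanics of deriving the gaps between current and required infrastructure.

```python
# Hypothetical capability inventories for the gap analysis.
current_capabilities = {"cpu monitoring", "disk monitoring", "log collection"}
required_capabilities = {"cpu monitoring", "disk monitoring",
                         "event correlation", "automated paging"}

# Gaps: required capabilities the current infrastructure does not provide.
gaps = required_capabilities - current_capabilities
# Surplus: existing capabilities not required, candidates for a reuse review.
surplus = current_capabilities - required_capabilities

print(sorted(gaps))      # capabilities the new design must add
print(sorted(surplus))
```

Documenting the two differences gives the planner the explicit gap list this task calls for.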
Existing monitoring capabilities should be documented and analyzed, defining the resource being monitored, the monitoring schedule, threshold levels, and any automated recovery/notification configuration. Managers and planners will ideally be aware of constraints and limitations, in order to avoid repeating or re-doing work, or using the wrong infrastructure or components in planning the monitoring function.
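The documentation items just listed — the resource being monitored, the monitoring schedule, threshold levels, and automated recovery/notification configuration — amount to a small record per resource. The field names and values in this sketch are illustrative assumptions, not a prescribed format.

```python
# Hypothetical per-resource monitoring configuration records.
monitored_resources = [
    {"resource": "disk /var", "schedule": "every 5 min",
     "warn_at": 80, "critical_at": 95, "on_critical": "page operator"},
    {"resource": "app server heap", "schedule": "every 1 min",
     "warn_at": 70, "critical_at": 90, "on_critical": "restart service"},
]

def evaluate(config, observed_pct):
    """Map an observed utilization percentage to a monitoring status."""
    if observed_pct >= config["critical_at"]:
        return "critical", config["on_critical"]    # automated notification fires
    if observed_pct >= config["warn_at"]:
        return "warning", None
    return "ok", None

status, action = evaluate(monitored_resources[0], 97)
print(status, action)
```

Capturing the existing configuration in this form makes it straightforward to compare threshold levels and recovery actions against the new requirements.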
Task 3517: Plan Technology Infrastructure Product Test
Once the components and system are planned, the next step may be to plan a product test for the technology infrastructure 3517. The results of this task will provide the basis on which the product test will be performed as well as the environment in which it is run. The task includes defining the test objectives, scope, environment, test conditions, and expected results. Sub-tasks may include defining a product test approach, designing a product test plan, and generating a deployment plan. It is important to remember that monitoring is not an island, and that all elements of monitoring need to be implemented for this test. The product test is a test of the infrastructure, not just the monitoring technology components. Therefore, the organizational and process elements are within the scope of such a test.
Step 3550 - Select and Design Operations Architecture:
After the infrastructure requirements have been analyzed, it is time for the task of selecting and designing an operations architecture 3550, Figure 8.
In this task, the manager will select and design the components 3551 required to support a high-level Monitoring architecture, including re-use 3552, packaged 3553, and custom components 3555. After selection and design, the architecture is validated 3557. This is the module where the manager designs monitoring and formulates component and assembly test approaches and plans 3558.
Task 3551: Identify Operations Architecture Component Options
A first task is to identify operations architecture component options 3551. It is important to identify specific component options that will be needed to support the production environment. Tools used in this task may include a Request for Proposal/Request for Quotation (RFP/RFQ) approach with vendors, and a monitoring component summary for internal use.
In this step the manager will be sure to identify all risks and gaps that exist in the current Monitoring environment, select components that will support the Monitoring architecture, and consider current software resources, packaged software and custom software alternatives during the selection process. If packaged software is part of the solution, the manager should submit RFPs to vendors for software products that meet basic requirements. Some packages can usually be eliminated quickly, based on such things as lack of fit with the operating system(s), server(s), or other operations architecture components already in place.
Comparative analysis should be performed by prioritizing and ranking the requirements based on what is most important to the organization. It may be helpful to assign a numerical rating for how well each component option supports the given requirement (e.g., 0 = does not support, 1 = partially supported, 2 = supported in full). A requirements matrix is helpful in completing this task. It is also helpful to create short lists which group component options into reuse, package, and custom categories as applicable. Generally, no more than three options in each category should be carried forward in the selection process.
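The comparative analysis above — a 0/1/2 rating per requirement, weighted by the requirement's priority, with a ranked shortlist — can be sketched directly. The packages, requirements, and weights below are hypothetical examples.

```python
# Requirements weighted by importance to the organization (hypothetical values).
requirements = {"cross-platform": 3, "auto-escalation": 2, "cost": 1}

# Ratings: 0 = does not support, 1 = partially supported, 2 = supported in full.
ratings = {
    "Package A": {"cross-platform": 2, "auto-escalation": 1, "cost": 1},
    "Package B": {"cross-platform": 1, "auto-escalation": 2, "cost": 2},
    "Custom":    {"cross-platform": 2, "auto-escalation": 2, "cost": 0},
}

def score(option):
    """Weighted sum of the option's ratings across all requirements."""
    return sum(requirements[r] * ratings[option][r] for r in requirements)

# Rank all options; carry no more than three forward per category.
ranked = sorted(ratings, key=score, reverse=True)
shortlist = ranked[:3]
print([(o, score(o)) for o in ranked])
```

The ranked scores make the requirements-matrix comparison explicit, though the final selection would still weigh vendor support, cost factors, and organizational considerations the numbers do not capture.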
Task 3552: Select Reuse Operations Architecture Components
A potentially useful task in costing and designing a system is to determine whether one can select reuse operations architecture components 3552. If existing architecture components can be reused without extensive hardware, or more importantly, software changes, it may be possible to save on purchase and installation expense. This step should finalize the component selection and may be done in tandem with the package and custom tasks. The manager should evaluate component reuse options, determine gaps where (typically) software will not satisfy requirements, and select any components for reuse.
In monitoring, this task will seldom be required. If the organization already has change request handling software in place, it would normally only undertake a monitoring project in order to replace the old system, indicating that reuse has been eliminated as a possibility. In one instance, an organization may wish to use an existing report writer for monitoring reporting, while installing new request-handling software. In this case, the issue would be the compatibility of the report writer with the monitoring database.
Task 3553: Select Packaged Operations Architecture Components
The same analysis applies to "packaged" operations architecture components, where the project manager may wish to select packaged operations architecture components 3553, or custom components 3555. In the same manner described for architecture components, a manager may evaluate packaged component options or custom options against the selection criteria in order to determine the best fit. In both cases, as the options are considered, vendor demonstrations and site visits may be conducted.
Packaged software will be the primary alternative for monitoring component requirements. The software should be selected based on how well the options fit the requirements, the level of vendor support and cooperation, and cost factors. Organizational biases for or against particular products or vendors may be issues to be addressed. As mentioned above, site visits to other organizations using the software may be valuable in verifying vendor claims of functionality. It may also be helpful to have independent opinions concerning vendor support and cooperation.
Task 3555: Design Custom Operations Architecture Components
If custom-designed components 3555 are considered, then any custom components may have to be designed, rather than merely purchased. On the other hand, it may be possible to customize a reuse or packaged component.
There is usually more risk associated with custom components, if only because of the time constraints. Before deciding on custom components, a manager should evaluate the time, cost, and risk associated with custom development. Areas in monitoring where custom design may be needed typically include three situations. The first is the design of custom reports. The second is the scripting or parameterization needed to install the software. The last is the design of interfaces to other components to facilitate automated transfers of data or other communications. These may include, but are not limited to, network software, asset management software, application databases, e-mail software, pagers, and the like. This portion of the task may be reiterated as necessary until the manager is satisfied with the choices made.
Task 3557: Design and Validate Operations Architecture
Having selected the components needed by the enterprise, the next task may be to develop a high-level design for the architecture, or to design and validate an operations architecture 3557. This portion of the design is primarily concerned with combining the reuse, packaged and custom components into an integrated design and ensuring that the selected architecture meets the requirements for monitoring of the enterprise. One portion of the task may be to define the standards and procedures for component building and testing. The manager may even consider prototyping if there are any complex interfaces to other components of the operations architecture. The end result of this task is to finish with a design for monitoring, complete with standards and procedures.
Task 3558: Develop Operations Architecture Component and Assembly Test Approach, and Plan
With components and a system designed, a component and assembly test approach and plan 3558 is needed. In this task, the manager defines the approach and test conditions for the Monitoring Architecture Assembly, Component, and Component Acceptance Test Approaches and Plans. The outputs may include separate plans for a test approach and plan for components, assemblies, and acceptance procedures. For each plan, there should be defined objectives, scope, metrics, a regression test approach, and risks associated with each test. Details may include component testing for the components selected above, whether new or reused. These tests are tests of the monitoring software components only, not the process and organization elements.
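Since each plan above carries the same elements — objectives, scope, metrics, regression approach, and risks — one record structure can serve the component, assembly, and acceptance plans alike. The fields and sample entries below are illustrative assumptions, not the disclosed plan format.

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    level: str                  # "component", "assembly", or "acceptance"
    objectives: list
    scope: str
    metrics: list
    regression_approach: str
    risks: list = field(default_factory=list)

plans = [
    TestPlan("component",
             ["each monitoring software component meets its specification"],
             "monitoring software components only",
             ["defects found per component"],
             "re-run component tests after every fix"),
    TestPlan("assembly",
             ["selected components interoperate as designed"],
             "assembled monitoring architecture",
             ["interface failures detected"],
             "re-run tests for affected assemblies"),
]
print([p.level for p in plans])
```

Keeping the three plan levels in one structure makes it easy to confirm that every plan defines all of the required elements before testing begins.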
Step 3590 - Validate Technology Infrastructure:
The next block in the technology portion of the method or cost planner is a step wherein the manager validates the chosen technology infrastructure 3590, as shown in Figure 9. An analysis is undertaken of the monitoring design 3591, the technology infrastructure is validated 3593, the infrastructure design is validated 3595, and the plans for deploying the system and its test approach are reviewed and revised as necessary 3597. The manager will verify that the Monitoring design is integrated, compatible, and consistent with the other components of the Technology Infrastructure Design, and meets the Business Performance Model and Business Capability Requirements.
Task 3591 : Review and Refine Technology Infrastructure Design
A first sub-task may be to review and refine the technology infrastructure design 3591. This task is undertaken to ensure that the Monitoring infrastructure design is compatible with other elements of the technology infrastructure. The manager may want to ensure that the monitoring function is integrated and consistent with the other components of the technology infrastructure. It may also be prudent to develop an issue list, or "punch list," for design items that conflict with the infrastructure or items that don't meet performance goals or requirements. This "punch list" may be subsequently used to refine the Monitoring infrastructure if needed.
Task 3593: Establish Technology Infrastructure Validation Environment
The next step in the design process may be to establish a technology infrastructure validation environment 3593. In this task, the manager designs, builds, and implements the validation environment for the technology infrastructure, and may deliver a validation schedule. Specific tasks may include establishing the environment, that is, the timing, and selecting and training participants. It may be valuable in the validation task to include in the evaluation the designers and architects of OM components that will interface with monitoring.
Task 3595: Validate Technology Infrastructure Design
Having established the environment, the next task is to validate the technology infrastructure design 3595. The manager at this point will desirably identify gaps between the design and the technology infrastructure requirements defined earlier. Projects will proceed smoothly if the manager will record issues as they arise during this phase for corrective action. The manager should also, during this phase, identify and resolve any remaining gaps between the design and the expectation or the required service.
Part of the process is to iterate through the validation until all critical issues have been resolved and to develop action plans for less critical issues.
If Monitoring is being installed as part of a larger business capability, this phase may serve as a checkpoint to verify that the most current requirements from the business capability release are being considered. Monitoring may be only one component of the infrastructure being tested at this point. Monitoring will typically be deployed in a single release. A manager may want to confirm that this is still appropriate by validating the monitoring interfaces to other elements of the technology infrastructure.
Task 3597: Analyze Impact and Revise Plans for Technology Infrastructure
The final task sub-block in the task of validating the technology infrastructure is to analyze the impact of the system and to revise plans 3597 as necessary. Tasks to be accomplished during this phase include analyzing the extent and scope of the work required for modifications and enhancements, analyzing the impact of validation outcomes on costs and benefits, and refining the plans for deployment testing. The result of this task should be a deployment plan, a test approach, a test plan and an infrastructure design.
The point of this task is to update the appropriate technology infrastructure delivery plans based on the outcome of the validation process. Since the point of the information technology group is to service an enterprise, monitoring itself may only be part of the validation scope. Confirm also that a single release is appropriate.
After designing the event/fault monitoring function and obtaining authorization for build and test 112, the project may proceed along three timelines in the build and test portion 106 of Figure 2. One timeline continues in the technical vein, that is, acquiring the technology infrastructure 5510 and building and testing the selected operations architecture 5550. At the same time, other groups or personnel may develop learning products 6260, and still others may develop policies, procedures and performance support 6220 for the new system. With these tasks completed, the project manager will proceed to prepare and execute a test of the new system, that is, a technology infrastructure product test 5590. With testing complete, all that remains is to deploy the new system 7170.
Step 5510 - Acquire Technology Infrastructure:
Acquiring the technology infrastructure 5510, Figure 10, is the first step in build and test 106. Tasks forming a part of this block include planning and executing the acquisition of components 5511, determining which suppliers will supply the components and services 5513, and how they will be supplied. This task package is primarily required if new packaged software is to be procured and installed as part of the project. The economic impact or implications are evaluated 5515, and the organization prepares and executes acceptance tests 5517 for the new components.
Task 5511: Initiate Acquisition of Technology Infrastructure Components
The first task may be to initiate acquisition of the technology infrastructure components, primarily packaged software 5511. A "normal" procurement plan will suffice, so long as it includes RFP/RFQ documentation, defined vendor selection criteria, selection from among the offering vendors, and so on. The process is smoothed if component capability and performance requirements are clearly defined in the documentation provided to vendors.
Task 5513: Select and Appoint Vendors
The next task may include selecting and appointing vendors 5513. The task may include evaluation of the several product offerings, negotiating contracts, and arranging for delivery and timing of delivery. It may be desirable if software training is negotiated as part of the contractual agreement. If multiple components and multiple vendors are involved, the project manager may find it advantageous to have delivery and installation of the components occur simultaneously so that the component interfaces can be tested with vendor representatives on site.
Task 5515: Evaluate Deployment Implications of Vendor Appointments
Having chosen vendors and arranged for delivery, the next task is to determine the impact and deployment implications of the software and vendor selection 5515 on the project economics and the enterprise served. The manager at this point may wish to compare procurement costs with project estimates, and assess the impact on the business situation. Revisions should be made and any approvals needed should be obtained. The manager should ensure that the economics of the transaction(s) are consistent with plan documentation, or changed as appropriate.
Task 5517: Prepare and Execute Acceptance Test of Technology Architecture Components.
With these tasks completed, the next task is to prepare and execute an acceptance test of the new components 5517. This step ensures that the Monitoring packaged components meet the technology infrastructure requirements. Personnel in this step build the test scripts, the test drivers, the input data, and the output data to complete the Technology Architecture Component Acceptance Test Model. They then execute the test and document any fixes or changes required of the component vendor(s). Software component training may be scheduled and conducted as soon as the new Monitoring components are installed.
Step 5550 - Build and Test Operations Architecture:
Having acquired the technology, the project now proceeds to a build and test stage 5550, depicted in Figure 11. In this stage, personnel design and program the Monitoring components. This is also the time to perform component and assembly testing. Major tasks may include detailed design of the operations architecture 5551, the assembly test plan 5552, building of the system 5553, component tests 5555, and assembly and test 5557.
Task 5551: Perform Operations Architecture Detailed Design
Detailed design should include the preparation of program specifications for custom and customized components. This task also desirably includes a design of the packaged software configuration, and detailed design reviews. Considerations should include custom components with interfaces to other OM components and any special reporting requirements for monitoring. Event correlation is one of the more sophisticated mechanisms for event management user interaction. While sophisticated, the correlation rule base can be complicated to code and difficult to maintain. Special attention should be paid to this phase of the design.
Task 5552: Revise Operations Architecture Component and Assembly, Test Approach, and Plan
If detailed design reveals the need for any revisions, they should be accomplished when personnel revise the operations architecture component assembly test approach and plan 5552. This task includes updating the monitoring test plans to reflect the components' detailed design, and defining revised considerations or changes to the requirements. Preferably, the task includes reviewing the test approaches and plans, and revising them as needed for new or updated requirements. If other OM components interface with monitoring software, these interfaces should be tested, either in this task or in the product test task.
Task 5553: Build Operations Architecture Components
The project may then proceed to building the components 5553. In this task, personnel will build (or program) all custom monitoring components and extensions to packaged or reuse components. Some packages may have unique or proprietary languages for customizing or configuring; if so, there may be a need for training. Preferably, the task includes building the custom components, building the customized extensions to packaged or reuse components, and configuring the packaged components.
Task 5555: Prepare and Execute Component Test of Custom Operations Components
Having built the system, the next task is to prepare and execute tests of the custom operations components 5555. This testing will ensure that each custom Monitoring component and each customized component meets its requirements. In this task, the manager verifies the component test model, sets up the test environment, executes the test, and makes component fixes and retests as required. Tests should confirm component performance as well as their functionality. System performance should not be compromised by the amount of customization. The tests are not limited to this stage, but may proceed in subsequent testing tasks.
Task 5557: Prepare and Execute Operations Assembly Test
Following component tests, the project engineer or manager then proceeds to prepare and execute an operations assembly test 5557. In this task, a full test is performed of all interactions between Monitoring components. Personnel verify the assembly test model, set up a test environment, execute the test, and make fixes and retest as needed, again in an iterative fashion. Shell programs or stub programs may be needed to perform the assembly test. If shell programs are used, it is important to test not only successful completion, but also to build in the error conditions that would cause abnormal endings or problems. Here, personnel verify that all interfaces to other components are tested and operate correctly for successful, predictable outcomes and error conditions. This completes the build and test stage.
Step 6220 - Develop Policies, Procedures and Performance Support:
Having completed the technical aspects, the project manager now considers some longer-term portions of the project: the policies, procedures and performance support detailed design 6220, as shown in Figure 12, needed for ongoing operation of the service. The purpose of this task is to produce a finalized, detailed set of new Monitoring policies, procedures, and reference materials. It is also desirable to conduct a usability test and review to verify ease of use with both monitoring personnel and personnel from the supported enterprise. Upon successful completion of this task, the operating personnel will have Monitoring Policies & Procedures and may also have any performance support products that may be necessary or useful. Subtasks include performing the policies, procedures and performance support detailed design 6221, developing business policies and procedures 6223, user procedures 6225, reference materials and job aids 6227, and validating and testing 6229.
Task 6221 : Perform Policies, Procedures and Performance Support Detailed Design
This task comprises the detailed design of the policies, procedures, and performance support 6221. It includes providing the product structure for all the new Monitoring policies, procedures, reference materials, and job aids. It may also be desirable to provide templates for each product, and to create prototype products with reference to the overall project.
Task 6223: Develop Business Policies and Procedures
It may also be necessary or desirable to develop a set of business policies and procedures 6223 for the operation. This is typically a rule set governing workflows and priorities. Business policies in this context describe the business rules governing workflows. Business procedures describe the sequential sets of tasks to follow based on the policies. Specific tasks within this task include collecting and reviewing content information, drafting policies and procedures, and planning for the production of the materials. Procedures should generally be organized into the three main elements of monitoring, that is, event management, fault management, and system performance management. In developing these materials, this three-way organization is most appropriate where different people or groups will have primary responsibility for each element.
Task 6225: Develop User Procedures
In this task, a detailed set of monitoring user procedures is delivered. User procedures provide the details necessary to enable smooth execution of new tasks within a given business procedure. Preferably, the provider collects and reviews content information, drafts the procedures, verifies consistency with business policies and procedures, and plans for the production of the materials. Outside personnel who interface with the monitoring process will generally do so on a very infrequent basis. They cannot be expected to review a procedure manual each time there is a need to interact.
Task 6227: Develop Reference Materials and Job Aids
Along with policies and procedures, it may be useful to develop reference materials and job aids for monitoring personnel 6227. In this task, the provider drafts any reference materials and job aids that make a task easier or more efficient. To accomplish this task, the provider should collect and review content information, draft the performance support products, verify consistency of the material with policies and procedures, and then plan for the production of materials. Performance support materials will be very desirable in environments where monitoring is a decentralized function performed by multiple groups across the organization. Such materials will help provide consistency in the handling of problem situations.
Task 6229: Validate and Test Policies, Procedures and Performance Support
Having accomplished these tasks for developing policies, procedures, and performance support materials, the project manager may now want to test and validate them 6229. This task will confirm that the products meet the requirements of the Monitoring capability and the needs of the personnel who will use them. It is also useful as a follow-up tool to resolve open issues.
Specific acts that may be useful in this vein include, but are not limited to, preparing validation scenarios, validating content and ease of use of materials, and testing on-line support products. All open issues should be resolved.
Step 6260 - Develop Learning Products:
Though not strictly a part of project hardware building, a successful project will typically include some thought to training its users. Thus, a desirable step may include development of learning products 6260, as shown in Figure 13. A first task may include defining the needs for learning products and the environment in which they are to be used 6261. Technical training in Monitoring software components may come from the package vendor or a third-party training organization. Procedural training for an organization's procedures is often custom built or tailored for the situation. After defining these requirements, the next tasks are to perform a learning program detailed design 6263 and to make prototypes 6265. Using the prototypes, the actual learning products may then be created and produced 6267. The products should be tested 6269. Testing may take place later in the cycle, as depicted in Figure 13, or earlier, using prototypes, to achieve feedback and ensure the effort is on track and useful to the students or trainees.
Task 6261 : Develop Learning Product Standards and Development Environment
In this task, the environment for developing the monitoring learning products is developed. Preferably, the provider selects authoring and development tools, defines standards, and designs templates and procedures for product development. Technical training in monitoring software components may come from the package vendor or a third party training organization. Procedural training is custom built.
Task 6263: Perform Learning Program Detailed Design
In this task, the provider specifies how each learning product identified in the learning product design is developed. Preferably, the task includes defining learning objectives and their context, designing the learning activities, and preparing the test plan. It may be helpful to modularize the products by separating the monitoring activities into separate learning products. In a preferred embodiment, the monitoring software is integrated into the learning program, following the completion of software technical training.
Task 6265: Prototype Learning Products
In this task, prototypes are completed and ease-of-use sessions on classroom-based learning components (i.e., activities, support system, instructor guide) are conducted. Preferably, the task includes creating the prototype components, and conducting and evaluating the prototype sessions.
Task 6267: Create Learning Products
In this task, the learning materials proposed and prototyped during the design activities are developed. Preferably, the provider develops activities, content, and evaluation and support materials required, develops a maintenance plan, trains instructors/facilitators, and arranges for production.
Task 6269: Test Learning Products
This task includes testing each product with the intended audience to ensure that the product meets the stated learning objectives, that the instructors are effective, and that the learning product meets the overall learning objectives for monitoring. Preferably, the task includes confirming the Test Plan, executing a learning test, and reviewing and making required modifications. If the target audience is small, this test serves as the formal training session for the group. Multiple sessions may be appropriate if responsibilities are split and all personnel are not responsible for knowing all activities.
Step 5590 - Prepare and Execute Technology Infrastructure Product Test:
At this point, much of the project work has been completed, and the product is ready for testing in a realistic environment 5590 to ensure it is ready for deployment. A series of tests is depicted in Figure 14. The test and its design or model are first prepared 5591, with expected results. The test is then performed 5593, by executing the tests prepared earlier. The tests should simulate actual working conditions, including any related manuals, policies and procedures produced earlier. An objective of the test should be to notice any deficiencies and make changes as required. Following these tests, a deployment test should be executed 5595, to ensure that the monitoring infrastructure can be gainfully deployed within the enterprise or organization. If this test is successful, the last stage of testing may then be executed, the technology infrastructure configuration test 5597. This test will ensure that the performance of the Technology Infrastructure, including Monitoring, will be consistent with the Technology Infrastructure Performance Model after the infrastructure has been deployed. The test should be made with an eye to risk assessment of the integration of the new system within the enterprise, and the risk assessment should be updated as needed.
Task 5591 : Prepare Technology Infrastructure Test Model
This task is to create the monitoring infrastructure test model. Preferably, the provider creates the test data and expected results, along with the testing scripts for production, deployment, and configuration tests. The provider also conducts any monitoring training not yet completed, and reviews and approves the test model. If a complete business capability is being deployed, this is a comprehensive test with monitoring being one piece. The product test should occur in a production-ready environment and should include the hardware and software to be used in production. If monitoring is being implemented independently, then all or a portion of the production environment can be used as the "test" application.
Task 5593: Execute Technology Infrastructure Product Test
This task is to verify that the technology infrastructure successfully supports the requirements outlined in the business capability design stage.
Preferably, the provider executes the test scripts, verifies the results, and makes changes as required. It is helpful if the actual monitoring working conditions are used or simulated, including related manuals and procedures.
Task 5595: Execute Technology Infrastructure Deployment Test
In this task, the provider ensures that the new monitoring infrastructure is correctly deployed within the organization. Preferably, the provider executes the test scripts, verifies the results, and makes changes as required.
Deployment testing and configuration testing are usually minimal for monitoring, since it is a "behind-the-scenes" application, with limited visibility to the rest of the enterprise supported by the information technology organization.
Task 5597: Execute Technology Infrastructure Configuration Test
This task is to ensure that the performance of the technology infrastructure, including monitoring, is consistent with the technology infrastructure performance model after the infrastructure has been deployed. Preferably, the provider executes the test scripts, verifies the results and makes changes as required, and updates the risk assessment.
Step 7170 - Deploy Technology Infrastructure:
Following successful testing, the monitoring infrastructure may be deployed online 7170, Figure 15. At this point, the tasks remaining include configuring the technology infrastructure 7171 to prepare for any new business capability components.
If the configuration is complete, the technology infrastructure may then be installed 7173. In addition to the Monitoring software, all documentation, performance support tools and training must be completed and in place prior to the deployment. A final task may be to verify the technology infrastructure 7179 and address any issues raised as a result of the testing or the deployment. Customers and monitoring members, as well as enterprise management, should be kept abreast of developments, successful and less successful, so all issues can be resolved quickly. This task should require minimal effort if monitoring is being installed independently.
Task 7171 : Configure Technology Infrastructure
In this task, the deployment unit's technology infrastructure is customized to prepare for the new business capability components.
Preferably, the task includes reviewing the customization requirements, performing the customization, and verifying the infrastructure configuration.
Customizing the infrastructure is normally completed in task package 5550, Build and Test Operations Architecture. This task will generally be required if the Monitoring capability is being deployed at more than one site (i.e., individual desktops or multiple servers). In these cases, variances in the existing configurations will determine any customization required.
Task 7173: Install Technology Infrastructure
In this task, the technology infrastructure for monitoring is installed. Preferably, the task includes preparing the installation environment, installing the monitoring infrastructure, and verifying the installation. In addition to the monitoring software, the documentation, performance support and training tools are completed and put in place prior to the deployment.
Task 7179: Verify Technology Infrastructure
In this task, the new technology infrastructure environment is verified and the issues raised as a result of the testing are addressed. Preferably, the task includes performing the infrastructure verification, making changes as required, and notifying stakeholders. A follow-up audit is recommended after some period of production operations to confirm the validity and accuracy of service reports. This task should require minimal effort if monitoring is being installed independently.
In addition to the method for providing the monitoring function, as described above, the present invention also includes a method and apparatus for providing an estimate for building a monitoring function in an information technology system. The method and apparatus generate a preliminary work estimate (time by task) and financial estimate (dollars by classification) based on input of a set of estimating factors that identify the scope and difficulty of key aspects to the system.
Previous estimators only gave bottom-line cost figures and were directed to business rather than OM functions. It would take days or weeks before the IT consultant could produce these figures for the client. If the project came in either above or below cost, there was no way of telling who or what was responsible. Therefore, a need exists for an improved estimator.
Fig. 16 is a flow chart of one embodiment of a method for providing an estimate of the time and cost to build a monitoring function in an information technology system. In Fig. 16, a provider of monitoring functions, such as an IT consultant (for example, Andersen Consulting), obtains estimating factors from the client 202. This is a combined effort, with the provider adding expertise and knowledge to help in determining the quantity and difficulty of each factor. Estimating factors represent key business drivers for a given operations management (OM) function. Table 1 lists and defines the factors to be considered along with examples of a quantity and difficulty rating for each factor.
Table 1
(Table 1, reproduced as an image in the original publication, lists and defines each estimating factor along with an example quantity and difficulty rating.)
For example, as an illustration of the method of the invention, the provider, with the help of the client, will determine an estimating factor for the number of service level agreements ("SLA") 202. Next comes the determination of the difficulty rating 204. Each of these determinations depends on the previous experience of the consultant. The provider or consultant with a high level of experience will have a greater opportunity to determine the correct number and difficulty. The number and difficulty rating are input into a computer program. In the preferred embodiment, the computer program is a spreadsheet, such as EXCEL, by Microsoft Corp. of Redmond, Washington, USA. The consultant and the client will continue determining the number and difficulty rating for each of the remaining estimating factors 206.
After the difficulty rating has been determined for all of the estimating factors, this information is transferred to an assumption sheet 208, and the assumptions for each factor are defined. The assumption sheet 208 allows the consultant to enter comments relating to each estimating factor, and to document the underlying reasoning for a specific estimating factor.
Next, an estimating worksheet is generated and reviewed 210 by the consultant, client, or both. An example of a worksheet is shown in FIGS. 17a and 17b. The default estimates of the time required for each task will populate the worksheet, with time estimates based on the quantity and difficulty rating previously assigned to the estimating factors that correspond to each task. The amount of time per task is based on a predetermined time per unit required for the estimating factor multiplied by a factor corresponding to the level of difficulty. Each task listed on the worksheet is described above in connection with details of the method for providing the monitoring function. The same numbers in the description of the method above correspond to the same steps, tasks, and task packages of activities shown on the worksheet of FIGS. 17a and 17b. The worksheet is reviewed 210 by the provider and the client for accuracy. Adjustments can be made to task-level estimates either by returning to the factors sheet and adjusting the units 212 or by entering an override estimate in the 'Used' column 214 on the worksheet. This override may be used when the estimating factor produces a task estimate that is not appropriate for the task, for example, when a task is not required on a particular project.
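The worksheet computation described above (a predetermined time per unit multiplied by a difficulty factor, with an optional 'Used' override) can be sketched as follows. The base rate and difficulty multipliers here are hypothetical illustrations; the patent does not disclose the actual values used in the spreadsheet.

```python
# Illustrative sketch of the task-estimate computation described above.
# The difficulty multipliers and base days-per-unit are hypothetical values;
# the actual spreadsheet figures are not given in the text.

DIFFICULTY_MULTIPLIER = {"low": 0.75, "medium": 1.0, "high": 1.5}

def task_estimate(units, base_days_per_unit, difficulty, override=None):
    """Days for one task: units x base rate x difficulty factor,
    unless an explicit 'Used' override is supplied on the worksheet."""
    if override is not None:
        return override
    return units * base_days_per_unit * DIFFICULTY_MULTIPLIER[difficulty]

# e.g. 12 service level agreements, 0.5 days each, rated "high"
print(task_estimate(12, 0.5, "high"))              # 9.0 days
print(task_estimate(12, 0.5, "high", override=5))  # override: 5 days
```

The override mirrors the 'Used' column: when the factor-driven estimate is inappropriate for a particular project, the consultant simply substitutes a value.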
Next, the provider and the client review and adjust, if necessary, the staffing factor allocations 216 for the seniority levels of personnel needed for the project. Referring to Figs. 17a and 17b, these columns are designated as Partner - "Ptnr", Manager - "Mgr", Consultant - "Cnslt", and Analyst - "Anlst", respectively. These allocations are adjusted to meet project requirements and are typically based on experience with delivering various stages of a project. It should be noted that the staffing factors should add up to 1.
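The staffing allocation described above can be illustrated with a short sketch. The seniority shares used below are hypothetical; the only constraint stated in the text is that the factors add up to 1.

```python
def split_by_seniority(task_days, staffing):
    """Allocate a task's total estimated days across seniority levels.
    The staffing factors must sum to 1, as the text notes."""
    if abs(sum(staffing.values()) - 1.0) > 1e-9:
        raise ValueError("staffing factors must add up to 1")
    return {level: task_days * share for level, share in staffing.items()}

# Hypothetical allocation of a 9-day task across the four worksheet columns
alloc = split_by_seniority(
    9.0, {"Ptnr": 0.05, "Mgr": 0.20, "Cnslt": 0.45, "Anlst": 0.30}
)
```

Validating that the shares sum to 1 before allocating catches the most common worksheet error early, before any costs are computed from the split.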
The consultant or provider and the client now review the workplan 218, and may optionally include labor to be provided by the client. In one embodiment, the workplan contains the total time required in days per stage and per task required to complete the project. Tasks may be aggregated into a "task package" of subtasks or activities for convenience. A worksheet, as shown in FIGS. 17a and 17b, may be used, also for convenience. This worksheet may be used to adjust tasks or times as desired, from the experience of the provider, the customer, or both.
Finally, a financial estimate is generated in which the provider and client enter the agreed upon billing rates for Ptnr, Mgr, Cnslt, and Anlst 220.
The total estimated payroll cost for the project will then be computed and displayed, generating final estimates. At this point, a determination of out-of-pocket expenses 222 may be applied to the final estimates to determine a final project cost 224. Preferably, the provider will then review the final estimates with an internal functional expert 226.
Other costs may also be added to the project, such as hardware and software purchase costs, project management costs, and the like. Typically, project management costs for managing the provider's work are included in the estimator. These are task dependent and usually run between 10 and 15% of the tasks being managed, depending on the level of difficulty. These management allocations may appear on the worksheet and work plan. The time allocations for planning and managing a project are typically broken down for each of a plurality of task packages, where the task packages are planning project execution 920, organizing project resources 940, controlling project work 960, and completing project 960, as shown in FIG. 17a.
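The financial computation described above (payroll from days and billing rates, a management allocation of roughly 10 to 15% of the managed work, and out-of-pocket expenses) might be sketched as below. All rates, day counts, and the 12% management fraction are illustrative assumptions, not figures from the patent.

```python
def project_cost(days_by_level, rates_by_level, mgmt_fraction=0.12, expenses=0.0):
    """Total estimated cost: payroll (days x billing rate per seniority level),
    plus a project-management allocation (10-15% of the managed work, per the
    text; 0.12 here is an assumed midpoint), plus out-of-pocket expenses."""
    payroll = sum(days * rates_by_level[level]
                  for level, days in days_by_level.items())
    return payroll * (1.0 + mgmt_fraction) + expenses

# Hypothetical time allocations (days) and daily billing rates
cost = project_cost(
    {"Ptnr": 5, "Mgr": 20, "Cnslt": 60, "Anlst": 40},
    {"Ptnr": 3000, "Mgr": 2000, "Cnslt": 1500, "Anlst": 1000},
    mgmt_fraction=0.12,
    expenses=10000,
)
```

With these assumed inputs the payroll is 185,000, management overhead brings it to 207,200, and expenses yield a final project cost of 217,200.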
It will be appreciated that a wide range of changes and modifications to the method as described are contemplated. Accordingly, while preferred embodiments have been shown and described in detail by way of examples, further modifications and embodiments are possible without departing from the scope of the invention as defined by the examples set forth. It is therefore intended that the invention be defined by the appended claims and all legal equivalents.
While this invention has been shown and described in connection with the embodiments described, it is apparent that certain changes and modifications, in addition to those mentioned above, may be made from the basic features of this invention. Many types of enterprises may benefit from the use of this invention, e.g., any enterprise wishing to use a monitoring function within an information technology organization. In addition, there are many different types of computer systems, and computer software and hardware, that may be utilized in practicing the invention, and the invention is not limited to the examples given above. Accordingly, it is the intention of the applicants to protect all variations and modifications within the valid scope of the present invention. It is intended that the invention be defined by the following claims, including all equivalents.

Claims

1. A method for providing an event/fault monitoring function in an IT organization comprising:
(a) planning for said monitoring function;
(b) designing said monitoring function;
(c) building said monitoring function;
(d) testing said monitoring function; and
(e) deploying said monitoring function.
2. The method of claim 1 wherein said planning act includes:
(f) developing a business performance model for said monitoring function.
3. The method of claim 2 wherein said developing act includes:
(g) confirming business architecture;
(h) analyzing a plurality of operating constraints;
(i) analyzing a current monitoring capability;
(j) identifying a plurality of best practices for said monitoring;
(k) defining a plurality of requirements for said monitoring; and
(l) developing said business performance model.
4. The method of claim 1 wherein said designing act includes:
(f) designing business processes, skills, and user interaction for said monitoring function.
5. The method of claim 4 wherein said step (f) includes:
(g) designing a plurality of workflows for processes, activities, and tasks for said monitoring;
(h) identifying physical environment interactions;
(i) identifying skill requirements for performing said monitoring;
(j) defining application interactions; (k) identifying performance support requirements; (l) developing a capability interaction model; and
(m) developing said business processes, skills, and user interaction.
6. The method of claim 1 wherein said designing act includes: (f) designing an organization infrastructure for said monitoring.
7. The method of claim 6 wherein said step (f) includes: (g) designing a plurality of roles, jobs, and teams; (h) designing a competency model; (i) designing a performance management infrastructure;
(j) determining an organization infrastructure mobilization approach; and
(k) developing said organization infrastructure.
8. The method of claim 1 wherein said designing act includes: (f) designing a performance enhancement infrastructure for said monitoring.
9. The method of claim 8 wherein said step (f) includes:
(g) assessing employee competency and performance for a monitoring organization; (h) determining performance enhancement needs;
(i) designing performance enhancement products; (j) defining a learning test approach; and (k) developing said performance enhancement infrastructure.
10. The method of claim 1 wherein said designing act includes: (f) designing a technology infrastructure for said monitoring.
11. The method of claim 10 wherein said step (f) includes:
(g) preparing a technology infrastructure performance model; (h) analyzing a plurality of technology infrastructure component requirements; (i) assessing a current technology infrastructure;
(j) developing a technology infrastructure design; and
(k) planning a technology infrastructure product test.
12. The method of claim 1 wherein said designing act includes: (f) designing operations architecture for said monitoring function.
13. The method of claim 12 wherein said step (f) includes:
(g) identifying operations architecture components;
(h) selecting reuse operations architecture components;
(i) selecting packaged operations architecture components;
(j) designing custom operations architecture components; and
(k) designing the operations architecture.
14. The method of claim 10 wherein said testing act includes: (g) validating said technology infrastructure for said monitoring function.
15. The method of claim 14 wherein said validating act includes:
(h) reviewing said technology infrastructure;
(i) establishing an environment for validating said technology infrastructure;
(j) validating said technology infrastructure; and
(k) analyzing an impact of said technology infrastructure.
16. The method of claim 14 wherein said building act includes: (h) acquiring a plurality of technology infrastructure components for said monitoring function.
17. The method of claim 16 wherein said acquiring act includes:
(i) defining acquisition criteria;
(j) selecting vendors for said technology infrastructure components;
(k) appointing said vendors; (l) evaluating deployment implications of said selecting and appointing; and
(m) testing said technology infrastructure components for acceptance.
18. The method of claim 13 wherein said building act includes: (l) building said operations architecture components.
19. The method of claim 18 wherein said testing act includes: (m) testing said operations architecture components; and (n) testing said operations architecture.
20. The method of claim 1 wherein said building act includes:
(f) developing policies, procedures, and performance support for said monitoring function.
21. The method of claim 20 wherein said developing act includes:
(g) developing business policies and procedures; (h) developing user procedures;
(i) developing reference materials and job aids; and (j) validating said policies, procedures, and reference materials.
22. The method of claim 1 wherein said building act includes:
(f) developing learning products for said monitoring function.
23. The method of claim 22 wherein said developing act includes: (g) developing learning products standards; (h) prototyping said learning products; (i) building said learning products; and (j) testing said learning.
24. The method of claim 16 wherein said testing act includes:
(i) testing said technology infrastructure for said monitoring function.
25. The method of claim 24 wherein said step (i) includes: (j) preparing a plurality of test models for said technology infrastructure;
(k) executing a technology infrastructure product test; (l) executing a technology infrastructure deployment test model; and
(m) executing a technology infrastructure configuration test model.
26. The method of claim 24 wherein said deploying act includes: (j) deploying said technology infrastructure for said monitoring function.
27. The method of claim 26 wherein said step (j) includes: (k) configuring said technology infrastructure;
(l) installing said technology infrastructure; and (m) verifying said technology infrastructure.
28. A method for providing an estimate for building a monitoring function in an information technology organization, the method comprising:
(a) obtaining a plurality of estimating factors;
(b) determining a difficulty rating for each of said estimating factors; (c) generating a time allocation for building said monitoring function based on said estimating factors and said difficulty ratings; and
(d) generating a cost for building said monitoring based on said time allocation.
29. The method as recited in claim 28, wherein obtaining said estimating factors further includes receiving said estimating factors from a client.
30. The method as recited in claim 28, wherein said estimating factors include the number of at least one of activities, processes, organizations, components, end nodes, locations, platforms, and service level agreements.
31. The method as recited in claim 28, wherein said difficulty rating is selected from the group of simple, moderate, or complex.
32. The method as recited in claim 28, wherein said time allocation includes time allocated for a plurality of individual team members where said individual team members include at least one of partner, manager, consultant, and analyst.
33. The method as recited in claim 28, wherein said cost depends on said time allocation and a billing rate for said individual team member.
34. The method as recited in claim 28, wherein said cost is broken down for each of a plurality of stages for building said monitoring where said stages include at least one of plan and manage, capability analysis, capability release design, capability release build and test, and deployment stages.
35. The method as recited in claim 28, wherein said time allocation is used to generate a project work plan.
36. The method as recited in claim 28, wherein a billing rate is used to generate a financial summary of said cost.
37. The method as recited in claim 35, wherein said work plan is broken down for each of a plurality of stages for building said monitoring function where said stages are plan and manage, capability analysis and design release, capability release build and test, and deployment.
38. The method as recited in claim 37, wherein said plan and manage stage is broken down for each of a plurality of task packages where said task packages are plan project execution, organize project resources, control project work, and project complete.
39. A computer system for allocating time and computing cost for building a monitoring function in an information technology organization, comprising:
(a) a processor; (b) a software program for receiving a plurality of estimating factors and a difficulty rating for each of said estimating factors and generating a time allocation and cost for building said monitoring function; and
(c) a memory that stores said time allocation and cost under control of said processor.
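Claims 28-39 describe the estimator's data flow: a plurality of estimating factors, each with a difficulty rating, yields a per-role time allocation, and a billing rate per team member turns that allocation into a cost. The sketch below is an illustration of that flow only, not code or figures from the disclosure; all base hours, difficulty multipliers, billing rates, factor names, and the helper `estimate` are hypothetical.

```python
# Hypothetical sketch of the estimator of claims 28-39. Every number here
# is a placeholder; the patent does not disclose concrete rates or hours.

# Claim 31: difficulty rating selected from simple, moderate, or complex.
DIFFICULTY_MULTIPLIER = {"simple": 1.0, "moderate": 1.5, "complex": 2.5}

# Assumed base hours contributed per unit of each estimating factor
# (claim 30), split across the team roles named in claim 32.
BASE_HOURS = {
    "processes": {"partner": 1, "manager": 4, "consultant": 10, "analyst": 12},
    "platforms": {"partner": 0, "manager": 2, "consultant": 6, "analyst": 8},
}

# Claim 33: cost depends on the time allocation and a per-role billing rate.
BILLING_RATE = {"partner": 400, "manager": 250, "consultant": 150, "analyst": 90}

def estimate(factors):
    """factors: {name: (count, difficulty)} -> (hours per role, total cost)."""
    hours = {role: 0.0 for role in BILLING_RATE}
    for name, (count, difficulty) in factors.items():
        mult = DIFFICULTY_MULTIPLIER[difficulty]
        for role, base in BASE_HOURS[name].items():
            hours[role] += count * base * mult  # claim 28(c): time allocation
    cost = sum(hours[r] * BILLING_RATE[r] for r in hours)  # claim 28(d)
    return hours, cost

hours, cost = estimate({"processes": (3, "moderate"), "platforms": (2, "simple")})
# cost == 22150.0 with these placeholder inputs
```

A fuller implementation would also break the allocation down across the stages of claim 34 (plan and manage, capability analysis, capability release design, capability release build and test, and deployment) to produce the project work plan of claim 35.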
PCT/US2000/027629 1999-10-06 2000-10-06 Method and estimator for event/fault monitoring WO2001026008A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU78666/00A AU7866600A (en) 1999-10-06 2000-10-06 Method and estimator for event/fault monitoring

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15825999P 1999-10-06 1999-10-06
US60/158,259 1999-10-06

Publications (1)

Publication Number Publication Date
WO2001026008A1 true WO2001026008A1 (en) 2001-04-12

Family

ID=22567316

Family Applications (12)

Application Number Title Priority Date Filing Date
PCT/US2000/027593 WO2001026028A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing change control
PCT/US2000/027803 WO2001026013A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing service level management
PCT/US2000/027856 WO2001025970A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing operations maturity model assessment
PCT/US2000/027795 WO2001025876A2 (en) 1999-10-06 2000-10-06 Method and estimator for providing capacity modeling and planning
PCT/US2000/027518 WO2001026005A1 (en) 1999-10-06 2000-10-06 Method for determining total cost of ownership
PCT/US2000/027801 WO2001026011A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing operation management strategic planning
PCT/US2000/027629 WO2001026008A1 (en) 1999-10-06 2000-10-06 Method and estimator for event/fault monitoring
PCT/US2000/027804 WO2001026014A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing service control
PCT/US2000/027802 WO2001026012A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing storage management
PCT/US2000/027857 WO2001025877A2 (en) 1999-10-06 2000-10-06 Organization of information technology functions
PCT/US2000/027796 WO2001026010A1 (en) 1999-10-06 2000-10-06 Method and estimator for production scheduling
PCT/US2000/027592 WO2001026007A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing business recovery planning

Family Applications Before (6)

Application Number Title Priority Date Filing Date
PCT/US2000/027593 WO2001026028A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing change control
PCT/US2000/027803 WO2001026013A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing service level management
PCT/US2000/027856 WO2001025970A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing operations maturity model assessment
PCT/US2000/027795 WO2001025876A2 (en) 1999-10-06 2000-10-06 Method and estimator for providing capacity modeling and planning
PCT/US2000/027518 WO2001026005A1 (en) 1999-10-06 2000-10-06 Method for determining total cost of ownership
PCT/US2000/027801 WO2001026011A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing operation management strategic planning

Family Applications After (5)

Application Number Title Priority Date Filing Date
PCT/US2000/027804 WO2001026014A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing service control
PCT/US2000/027802 WO2001026012A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing storage management
PCT/US2000/027857 WO2001025877A2 (en) 1999-10-06 2000-10-06 Organization of information technology functions
PCT/US2000/027796 WO2001026010A1 (en) 1999-10-06 2000-10-06 Method and estimator for production scheduling
PCT/US2000/027592 WO2001026007A1 (en) 1999-10-06 2000-10-06 Method and estimator for providing business recovery planning

Country Status (4)

Country Link
EP (2) EP1226523A4 (en)
AU (12) AU7996000A (en)
CA (1) CA2386788A1 (en)
WO (12) WO2001026028A1 (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002256550A1 (en) * 2000-12-11 2002-06-24 Skill Development Associates Ltd Integrated business management system
US7937281B2 (en) 2001-12-07 2011-05-03 Accenture Global Services Limited Accelerated process improvement framework
US7035809B2 (en) * 2001-12-07 2006-04-25 Accenture Global Services Gmbh Accelerated process improvement framework
WO2004040409A2 (en) 2002-10-25 2004-05-13 Science Applications International Corporation Determining performance level capabilities using predetermined model criteria
DE10331207A1 (en) 2003-07-10 2005-01-27 Daimlerchrysler Ag Method and apparatus for predicting failure frequency
US8572003B2 (en) * 2003-07-18 2013-10-29 Sap Ag Standardized computer system total cost of ownership assessments and benchmarking
US8566147B2 (en) * 2005-10-25 2013-10-22 International Business Machines Corporation Determining the progress of adoption and alignment of information technology capabilities and on-demand capabilities by an organization
EP1808803A1 (en) * 2005-12-15 2007-07-18 International Business Machines Corporation System and method for automatically selecting one or more metrics for performing a CMMI evaluation
US8457297B2 (en) 2005-12-30 2013-06-04 Aspect Software, Inc. Distributing transactions among transaction processing systems
US8355938B2 (en) 2006-01-05 2013-01-15 Wells Fargo Bank, N.A. Capacity management index system and method
US7523082B2 (en) * 2006-05-08 2009-04-21 Aspect Software Inc Escalating online expert help
WO2008105825A1 (en) * 2007-02-26 2008-09-04 Unisys Corporation A method for multi-sourcing technology based services
US20100312612A1 (en) * 2007-10-25 2010-12-09 Hugh Carr Modification of service delivery infrastructure in communication networks
US8326660B2 (en) 2008-01-07 2012-12-04 International Business Machines Corporation Automated derivation of response time service level objectives
US8320246B2 (en) * 2009-02-19 2012-11-27 Bridgewater Systems Corp. Adaptive window size for network fair usage controls
US8200188B2 (en) 2009-02-20 2012-06-12 Bridgewater Systems Corp. System and method for adaptive fair usage controls in wireless networks
US8577329B2 (en) 2009-05-04 2013-11-05 Bridgewater Systems Corp. System and methods for carrier-centric mobile device data communications cost monitoring and control
US9203629B2 (en) 2009-05-04 2015-12-01 Bridgewater Systems Corp. System and methods for user-centric mobile device-based data communications cost monitoring and control
US20110066476A1 (en) * 2009-09-15 2011-03-17 Joseph Fernard Lewis Business management assessment and consulting assistance system and associated method
US20110231229A1 (en) * 2010-03-22 2011-09-22 Computer Associates Think, Inc. Hybrid Software Component and Service Catalog
US20130219163A1 (en) * 2010-10-27 2013-08-22 Yaniv Sayers Systems and methods for scheduling changes
US11172022B2 (en) 2014-02-21 2021-11-09 Hewlett Packard Enterprise Development Lp Migrating cloud resources
US10148757B2 (en) 2014-02-21 2018-12-04 Hewlett Packard Enterprise Development Lp Migrating cloud resources
US20170032297A1 (en) * 2014-04-03 2017-02-02 Dale Chalfant Systems and Methods for Increasing Capability of Systems of Business Through Maturity Evolution
US9984044B2 (en) 2014-11-16 2018-05-29 International Business Machines Corporation Predicting performance regression of a computer system with a complex queuing network model
US10044786B2 (en) 2014-11-16 2018-08-07 International Business Machines Corporation Predicting performance by analytically solving a queueing network model
US10460272B2 (en) * 2016-02-25 2019-10-29 Accenture Global Solutions Limited Client services reporting
CN106682385B (en) * 2016-09-30 2020-02-11 广州英康唯尔互联网服务有限公司 Health information interaction system
EP3782021A4 (en) * 2018-04-16 2022-01-05 Ingram Micro, Inc. System and method for matching revenue streams in a cloud service broker platform
US11481711B2 (en) 2018-06-01 2022-10-25 Walmart Apollo, Llc System and method for modifying capacity for new facilities
US20190369590A1 (en) 2018-06-01 2019-12-05 Walmart Apollo, Llc Automated slot adjustment tool
US11483350B2 (en) 2019-03-29 2022-10-25 Amazon Technologies, Inc. Intent-based governance service
CN110096423A (en) * 2019-05-14 2019-08-06 深圳供电局有限公司 A kind of server memory capacity analyzing and predicting method based on big data analysis
US11119877B2 (en) 2019-09-16 2021-09-14 Dell Products L.P. Component life cycle test categorization and optimization
EP4058958A4 (en) * 2019-11-11 2023-11-29 Snapit Solutions LLC System for producing and delivering information technology products and services
US11288150B2 (en) 2019-11-18 2022-03-29 Sungard Availability Services, Lp Recovery maturity index (RMI)-based control of disaster recovery
US20210160143A1 (en) 2019-11-27 2021-05-27 Vmware, Inc. Information technology (it) toplogy solutions according to operational goals
US11501237B2 (en) 2020-08-04 2022-11-15 International Business Machines Corporation Optimized estimates for support characteristics for operational systems
US11329896B1 (en) 2021-02-11 2022-05-10 Kyndryl, Inc. Cognitive data protection and disaster recovery policy management

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5793632A (en) * 1996-03-26 1998-08-11 Lockheed Martin Corporation Cost estimating system using parametric estimating and providing a split of labor and material costs

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4827423A (en) * 1987-01-20 1989-05-02 R. J. Reynolds Tobacco Company Computer integrated manufacturing system
JPH03111969A (en) * 1989-09-27 1991-05-13 Hitachi Ltd Method for supporting plan formation
US5233513A (en) * 1989-12-28 1993-08-03 Doyle William P Business modeling, software engineering and prototyping method and apparatus
WO1993012488A1 (en) * 1991-12-13 1993-06-24 White Leonard R Measurement analysis software system and method
US5701419A (en) * 1992-03-06 1997-12-23 Bell Atlantic Network Services, Inc. Telecommunications service creation apparatus and method
US5586021A (en) * 1992-03-24 1996-12-17 Texas Instruments Incorporated Method and system for production planning
US5646049A (en) * 1992-03-27 1997-07-08 Abbott Laboratories Scheduling operation of an automated analytical system
US5978811A (en) * 1992-07-29 1999-11-02 Texas Instruments Incorporated Information repository system and method for modeling data
US5630069A (en) * 1993-01-15 1997-05-13 Action Technologies, Inc. Method and apparatus for creating workflow maps of business processes
US5819270A (en) * 1993-02-25 1998-10-06 Massachusetts Institute Of Technology Computer system for displaying representations of processes
CA2118885C (en) * 1993-04-29 2005-05-24 Conrad K. Teran Process control system
WO1994029804A1 (en) * 1993-06-16 1994-12-22 Electronic Data Systems Corporation Process management system
US5485574A (en) * 1993-11-04 1996-01-16 Microsoft Corporation Operating system based performance monitoring of programs
US5724262A (en) * 1994-05-31 1998-03-03 Paradyne Corporation Method for measuring the usability of a system and for task analysis and re-engineering
US5563951A (en) * 1994-07-25 1996-10-08 Interval Research Corporation Audio interface garment and communication system for use therewith
US5745880A (en) * 1994-10-03 1998-04-28 The Sabre Group, Inc. System to predict optimum computer platform
JP3315844B2 (en) * 1994-12-09 2002-08-19 株式会社東芝 Scheduling device and scheduling method
JPH08320855A (en) * 1995-05-24 1996-12-03 Hitachi Ltd Method and system for evaluating system introduction effect
EP0770967A3 (en) * 1995-10-26 1998-12-30 Koninklijke Philips Electronics N.V. Decision support system for the management of an agile supply chain
US5875431A (en) * 1996-03-15 1999-02-23 Heckman; Frank Legal strategic analysis planning and evaluation control system and method
US5960417A (en) * 1996-03-19 1999-09-28 Vanguard International Semiconductor Corporation IC manufacturing costing control system and process
US5960200A (en) * 1996-05-03 1999-09-28 I-Cube System to transition an enterprise to a distributed infrastructure
US5673382A (en) * 1996-05-30 1997-09-30 International Business Machines Corporation Automated management of off-site storage volumes for disaster recovery
US5864483A (en) * 1996-08-01 1999-01-26 Electronic Data Systems Corporation Monitoring of service delivery or product manufacturing
US5974395A (en) * 1996-08-21 1999-10-26 I2 Technologies, Inc. System and method for extended enterprise planning across a supply chain
US5930762A (en) * 1996-09-24 1999-07-27 Rco Software Limited Computer aided risk management in multiple-parameter physical systems
US6044354A (en) * 1996-12-19 2000-03-28 Sprint Communications Company, L.P. Computer-based product planning system
US5903478A (en) * 1997-03-10 1999-05-11 Ncr Corporation Method for displaying an IT (Information Technology) architecture visual model in a symbol-based decision rational table
US6028602A (en) * 1997-05-30 2000-02-22 Telefonaktiebolaget Lm Ericsson Method for managing contents of a hierarchical data model
US6106569A (en) * 1997-08-14 2000-08-22 International Business Machines Corporation Method of developing a software system using object oriented technology
US6092047A (en) * 1997-10-07 2000-07-18 Benefits Technologies, Inc. Apparatus and method of composing a plan of flexible benefits
US6131099A (en) * 1997-11-03 2000-10-10 Moore U.S.A. Inc. Print and mail business recovery configuration method and system
US6119097A (en) * 1997-11-26 2000-09-12 Executing The Numbers, Inc. System and method for quantification of human performance factors
US6157916A (en) * 1998-06-17 2000-12-05 The Hoffman Group Method and apparatus to control the operating speed of a papermaking facility

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5793632A (en) * 1996-03-26 1998-08-11 Lockheed Martin Corporation Cost estimating system using parametric estimating and providing a split of labor and material costs

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BRIMSON J.A.: "Activity accounting: An activity-based costing approach", 1991, JOHN WILEY & SONS, INC., XP002936962 *
DAVIS W.S. ET AL.: "The information system consultant's handbook: Systems, analysis and design", 1 December 1998, CRC PRESS, XP002936958 *
KERZNER H. PHD: "Project management: A systems approach to planning, scheduling and controlling", 1995, XP002936961 *
OSTERLE H. ET AL.: "Total information system management: A European approach", 1993, JOHN WILEY & SONS, LTD., XP002936959 *
WARD J. ET AL.: "Strategic planning for information systems", 1996, JOHN WILEY & SONS, LTD., XP002936960 *

Also Published As

Publication number Publication date
WO2001025877A3 (en) 2001-09-07
AU8001800A (en) 2001-05-10
AU7861800A (en) 2001-05-10
WO2001026007A1 (en) 2001-04-12
EP1222510A4 (en) 2007-10-31
EP1222510A2 (en) 2002-07-17
WO2001026010A1 (en) 2001-04-12
AU8001700A (en) 2001-05-10
WO2001026014A1 (en) 2001-04-12
WO2001026013A1 (en) 2001-04-12
AU7866600A (en) 2001-05-10
WO2001025877A2 (en) 2001-04-12
WO2001025876A2 (en) 2001-04-12
WO2001026011A1 (en) 2001-04-12
AU7996000A (en) 2001-05-10
AU1193601A (en) 2001-05-10
AU1431801A (en) 2001-05-10
WO2001025970A1 (en) 2001-04-12
AU1193801A (en) 2001-05-10
EP1226523A4 (en) 2003-02-19
EP1226523A1 (en) 2002-07-31
WO2001026012A1 (en) 2001-04-12
WO2001025876A3 (en) 2001-08-30
WO2001026005A1 (en) 2001-04-12
AU1431701A (en) 2001-05-10
AU1653901A (en) 2001-05-10
AU7996100A (en) 2001-05-10
WO2001025970A8 (en) 2001-09-27
WO2001026028A8 (en) 2001-07-26
WO2001026028A1 (en) 2001-04-12
CA2386788A1 (en) 2001-04-12
AU7756600A (en) 2001-05-10

Similar Documents

Publication Publication Date Title
WO2001026008A1 (en) Method and estimator for event/fault monitoring
US6738736B1 (en) Method and estimator for providing capacacity modeling and planning
US7035809B2 (en) Accelerated process improvement framework
US7937281B2 (en) Accelerated process improvement framework
US20050114829A1 (en) Facilitating the process of designing and developing a project
US20160321583A1 (en) Change navigation toolkit
Letavec et al. The PMOSIG's Program Management Office Handbook: Strategic and Tactical Insights for Improving Results
Pilorget Implementing IT processes
CISM Managing Software Deliverables: A Software Development Management Methodology
Nejmeh et al. The PERFECT approach to experience-based process evolution
Weed-Schertzer The Authentic Service Progression (TASP)
Pilorget et al. IT Portfolio and Project Management
Howard IT Release Management: A Hands-on Guide
Solin IT-documentation framework for an Engineering and Service Company
Ma Assessing capability maturity tools for process management improvement: A case study
Singh downloaded from the King's Research Portal at https://kclpure.kcl.ac.uk/portal
Clapp et al. A guide to conducting independent technical assessments
Macholz XP Project Management
von Holten Developing a Quality Management Framework for a Knowledge Intensive Company: Quality Management Framework to Support the Ongoing Product Development Relocations
Majchrzak et al. EFFECTIVE INTEGRATION PLANNING TO SUPPORT AGILE MANUFACTURING, REENGINEERING, AND CONCURRENT ENGINEERING
Enterprise et al. Request for Proposal (RFP) Proc Main
Engelbrecht Successfully Implementing a Manufacturing Execution Systems (MES) Solutions
Bernard et al. CMMI (Trademark) Acquisition Module (CMMI-AM) Version 1.0
Hui A two-tier adaptive approach to securing successful ERP implementation
Aygün Unification of it process models into a simple framework supplemented by Turkish web based application

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP