US20040153692A1 - Method for managing faults in a computer system environment - Google Patents

Method for managing faults in a computer system environment

Info

Publication number
US20040153692A1
US20040153692A1 US10/250,345 US25034504A US2004153692A1
Authority
US
United States
Prior art keywords
detector
policy
detectors
fault
faults
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/250,345
Inventor
Michael O'Brien
Peter Gravestock
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GoAhead Software Inc
Original Assignee
GoAhead Software Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GoAhead Software Inc filed Critical GoAhead Software Inc
Priority to US10/250,345 priority Critical patent/US20040153692A1/en
Priority claimed from PCT/US2001/049945 external-priority patent/WO2002054255A1/en
Assigned to GOAHEAD SOFTWARE, INC. reassignment GOAHEAD SOFTWARE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GRAVESTOCK, PETER, O'BRIEN, MICHAEL
Publication of US20040153692A1 publication Critical patent/US20040153692A1/en
Assigned to GOAHEAD SOFTWARE INC. reassignment GOAHEAD SOFTWARE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: O'BRIEN, MICHAEL, GRAVESTOCK, PETER
Priority to US11/489,032 priority patent/US7337373B2/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0709 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0793 Remedial or corrective actions

Abstract

Many computing system environments require continuous availability and high operational readiness. The ability to find, diagnose, and correct actual and potential faults in these systems is therefore critical. Combining a continually updated database of computing system performance with the ability to analyze that information to detect faults, and then communicating that fault information so that the fault can be corrected or an appropriate notification issued, achieves the goals of high availability and operational readiness. FIG. 1 shows how the data collectors (102, 104, 106 and 108), fault detectors (110), and policy actions (112) are combined to meet these goals.

Description

    BACKGROUND—FIELD
  • The present invention applies to the field of fault diagnostics in computing systems using detectors and policies. [0001]
  • BACKGROUND—DESCRIPTION OF RELATED ART
  • Comprehensive fault management plays an important role in keeping critical computing systems in a continuous, highly available mode of operation. These systems must incur minimum downtime, typically in the range of seconds or minutes per year. In order to meet this goal, every critical component (a critical component is one that, upon failing, fails the entire corresponding system) must be closely monitored for both occurring faults and potentially occurring faults. In addition, it is important that these faults be handled in real time and within the system, rather than remotely as is done in many monitoring systems today. An example of a remote monitoring system is a system that follows the Simple Network Management Protocol (SNMP). For the foregoing reasons there is a need for a fast, small-footprint, real-time system to detect and diagnose problems. In addition, it is preferred that this system also be cross-platform, extensible, and modular. [0002]
  • SUMMARY
  • The present invention uses a method for detecting faults in a computing environment and then taking action on those faults. If the fault detected meets predetermined criteria, the detection module sends an event signal to a policy module that, in turn, takes a programmed action based on predetermined criteria applied to the variables associated with the event signal. The resulting action may range from sending email to causing a switchover from a defective device to a correctly operating device. The detection modules are also capable of sending event signals to other detection modules. These other detection modules may react only if multiple signals are received from the primary detection modules. This aids in the diagnosis of the system fault. Data is continually collected from the computing system and kept in a readily accessible database that may be read by the detection modules. The computing system data is continually updated so the information is current and fresh. Each detection module continually scans the data appropriate to its interest. [0003]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows an overview of the fault detection system. [0004]
  • FIG. 2 shows an example of fault monitoring system hardware. [0005]
  • FIG. 3 shows how fault monitoring helps in system diagnostics.[0006]
  • DETAILED DESCRIPTION OF THE INVENTION
  • The preferred embodiment and best mode of this invention provides a framework for diagnosing faults in a computing system environment. It includes the capabilities of detecting and diagnosing computing system problems and individual device problems within that system. [0007]
  • The detection capability identifies an undesirable condition that may lead to the loss of service from the system or device. Detection also includes the discovery of a fault using error detection or inference. The detection may occur through direct observation, by correlating multiple events in time, or by inference, that is, by observing other types of behavior of the system. Some sets of circumstances may lead to the conclusion that an event is a fault whereas another set of circumstances may lead to a different conclusion (e.g. the event is normal system behavior). [0008]
  • Diagnosis occurs when one or more events and system parameters are used to determine the nature and location of a fault. This step can be performed by the fault detection system or invoked separately as a user diagnostic. The diagnosis may be acted upon automatically by the system or may be reported to the user for some user action to be taken. In some systems it is possible that a single fault may lead to multiple errors being detected. By performing a root cause analysis, the fault may be isolated and acted upon. Isolation actions contain the error or problem and keep it from spreading throughout the system. Isolation actions and recovery actions are often done in conjunction with each other. An example of an isolation action is one in which a memory usage limit is imposed upon an application when the fault management system recognizes that the application is continually consuming memory without releasing it when it is no longer needed. Another isolation example is one in which the power to a board is terminated when the board is recognized as having a failed component. Recovery occurs when a fault management system takes action to restore a system to service once a fault has occurred. The recovery actions cover a wide range of activities, from restarting a failed application to failing over to a standby hardware card. The recovery process often takes multiple steps, wherein those steps comprise several actions that must be taken in a specific order. In some cases, the recovery process is multitiered in that, if a specific action does not recover the system, then some other action must be taken. [0009]
  • Notifying or logging the diagnosis made and the resultant action taken, either to the system or to the user, is known as reporting. For example, if an application crashes, it might be both recovered (for example, by restarting) and reported via email or paging. Repair is defined as the replacement of hardware or software components as necessary. Hardware components, for example network interface cards, may be hot swapped (taken out and replaced while the system, still running, switches over to another component); alternatively, the system may be shut down and the failed part manually replaced and repaired. [0010]
  • Detectors and policies can be arranged in networks of arbitrary complexity to capture the dependencies between events, faults, and errors. The actions taken may be arbitrarily complex functions or even calls to other programs. [0011]
  • In the current embodiment the detectors and policies are encoded in multiple XML-based files, which helps achieve the cross-platform, extensible, and modular design goals. Table 1 shows a typical database table for a detector. The columns of the table specify the attributes of the detector component. Because detectors and policies are implemented in XML and embedded JavaScript, changes to policies and reconfiguration of detectors can be done easily and without recompilation. This run-time modification of behavior supports faster development. Detectors and policies can be developed independently and plugged into the fault management system. [0012]
    TABLE 1
    Detector
    Column Name  Description
    name         Name that identifies the detector. Must be globally unique
                 among all detectors defined for all SelfReliant extensions.
                 Used primarily when running one detector from another and
                 when specifying detector sets in schedules.
    description  Description of what the detector detects.
    type         The type of detector. The type specified here can be used
                 by other detectors as well. Policies are triggered by
                 detectors based on their type, so this field is the link
                 between a detector and its policy.
    url          URL to a Web page served by the SelfReliant WebServer™
                 that provides an HTML-based explanation of the detector's
                 output.
    events       Space-delimited list of events that the detector listens for.
                 Multiple detectors can listen to the same event.
    types        Space-delimited list of the types of detectors that cause the
                 detector to fire. The detector rule will run if a detector of a
                 type listed here fires with a non-zero output.
    enable       Boolean value that indicates whether the rule for this
                 detector should run. If 0, the detector rule will not be run,
                 no matter how the detector is invoked.
    rule         Embedded JavaScript rule that is run when the detector is
                 invoked. The rule has access to all global variables and
                 functions defined for use within the detector namespace.
  • Detectors "listen" for specified events and can also be made aware if other detectors have triggered. This approach is the opposite of function calling because it allows new detectors to be added to listen for new events without requiring an edit of the existing set of functions. This capability, along with run-time interpreting of detectors and policies, provides support for modularity and reusability. [0013]
  • The procedural part of detectors and policies is coded in "embedded JavaScript," which is a very small-footprint subset of the JavaScript language. Any function written in the C language can be nominated into, and called from, the JavaScript namespace. This embodiment of the invention makes extensive use of an in-memory database to store data and code. [0014]
  • Detectors gather data from various sources including collector databases, events, applications, and even other detectors. Based on this information, decisions are made about the condition of the system and how the system parameters compare to the predetermined parameters that judge whether the system is running optimally or not. If a detector decides that the information it has obtained represents a problem condition, then the detector fires (sends a message) and passes that information to a policy or another detector. Note that the detector does not decide what action needs to be taken to correct the situation; it just passes the condition to one or more policies for analysis and decision making. Detectors can be activated asynchronously by responding to fault management events originated from the system hardware, application software, or the operating system software. The detectors may also be executed in a synchronous or polled manner according to a predetermined schedule. Detectors can also run other detectors through an embedded JavaScript API, and detectors may be triggered by other detectors if the first detectors are configured to listen to other detector types. FIG. 1 shows a hierarchy of the detector and policy objects. A process 102, a memory load 104, the network traffic 106, and the time of day 108 data are collected and made available to the appropriate detectors 110. Note that detector 110 can trigger another detector 111. The detectors in turn trigger the appropriate policies 112. Note that some policies 113 can respond to more than one detector. The policies can, in turn, trigger various recovery actions such as paging 114, sending an email 116, rebooting the system 120, restarting the system 122, switching over resources 124, generating an SNMP trap 126, or some other custom action 128. To prevent a recursive event, the detectors are locked out from listening to themselves. When a detector is run, it invokes its rule to determine the status of the information it watches. This rule is implemented in embedded JavaScript and contained in an XML file. When a value watched by a detector violates the rule, the detector triggers one or more policies. When a detector triggers, its output can be set to a "fuzzy" value ranging from zero to a hundred as determined by the rule. The detector can also pass other information to a listening detector or policy to help analyze the information. FIG. 2 shows an interface between a system's hardware, detectors, events, and policies. A typical piece of hardware can be a fan 200 or a network interface card (NIC) 202. The detectors 203 can monitor the performance of the hardware devices through the operating system (for example, using heartbeats). A hardware anomaly is flagged by detector 203, which can be set to trigger another detector 206, which in turn triggers a policy 208. Note that it is possible to trigger detector 206 only if there is also an event occurrence triggered by an outside condition 210. An application 212 can also provide input into a detector 203; that input can either combine with data from elsewhere to cause the detector 203 to trigger or, conversely, prevent the detector 203 from triggering. [0015]
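  • As an illustration of the detector chaining shown in FIG. 1 (detector 110 triggering detector 111), the sketch below shows how a second-level detector might be declared so that it listens, through the "types" attribute of Table 1, to any detector of a given type. This is only a hedged sketch: the detector names and types are hypothetical, it assumes the embedded JavaScript subset supports the constructs used, and it assumes that getOutput() exposes the triggering detector's fuzzy value inside a detector rule just as it does inside a policy rule.
    <TBL name="detector">
    <ROW>
    <!-- Hypothetical first-level detector: evaluates collected data and, as in the
         memory example later in this description, calls setOutput() with a fuzzy
         value between zero and one hundred when it suspects a problem. -->
    <COL name="name">linkErrors</COL>
    <COL name="type">linkProblem</COL>
    <COL name="enable">1</COL>
    <COL name="rule"><SCRIPT>
      // ...gather data and call setOutput(value) if a problem is suspected...
    </SCRIPT></COL>
    </ROW>
    <ROW>
    <!-- Hypothetical second-level detector: the "types" column makes it listen to
         every detector of type "linkProblem", so this rule runs whenever such a
         detector fires with a non-zero output. -->
    <COL name="name">linkFailure</COL>
    <COL name="type">linkFailure</COL>
    <COL name="types">linkProblem</COL>
    <COL name="enable">1</COL>
    <COL name="rule"><SCRIPT>
      if (getOutput() >= 75) {    // assumed: the triggering detector's fuzzy value
        setOutput(100);           // fire; policies listening to type "linkFailure" will run
      }
    </SCRIPT></COL>
    </ROW>
    </TBL>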
  • Policies decide what action to take, based on information provided by detectors. Policies can be configured to listen to a set of detectors as specified by the detector type. If a policy listening to a detector sees the detector fire (that is, have an output value greater than zero), then the policy rule runs. Policies can react to multiple detectors and invoke multiple actions. Policies use the output and any passed information of the relevant detectors to determine the recovery and/or notification action to take. For example, if a fault is detected on a Monday afternoon during business hours, the policy may page a technician in real time; if the fault is detected after hours, the policy may send an email to the technician. Table 2 below shows the attributes of the policy component of the fault management system. [0016]
    TABLE 2
    Policy
    Column Name  Description
    name         Name that identifies the policy. Must be globally unique
                 among all policies defined.
    description  Description of what actions the policy takes given the
                 detector types that it is triggered by.
    types        Space-delimited list of the types of detectors that cause the
                 policy to fire. The policy rule will run if a detector of a
                 type listed here fires.
    enable       Boolean value that indicates whether the rule for this policy
                 should run. If 0, the rule will not be run.
    rule         Embedded JavaScript rule that is run when the policy is
                 triggered. The rule has access to all global variables and
                 functions defined for use within the policy namespace.
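  • As a hedged illustration of the after-hours example above, the sketch below shows a policy rule that chooses between paging and email based on the time of day. The detector type, the getHour() helper, and the pageTechnician()/emailTechnician() actions are hypothetical placeholders for recovery and notification functions nominated into the JavaScript namespace from C; only getOutput() is reused from the examples in this description.
    <TBL name="policy">
    <ROW>
    <COL name="name">faultNotifyPolicy</COL>
    <COL name="description">Page during business hours, email after hours</COL>
    <COL name="types">hardwareFault</COL>
    <COL name="enable">1</COL>
    <COL name="rule"><SCRIPT>
      // getHour(), pageTechnician(), and emailTechnician() are assumed to be
      // C functions nominated into the embedded JavaScript namespace.
      var hour = getHour();                 // assumed helper returning 0-23
      var msg = "Fault detected, severity " + getOutput() + "%";
      if (hour >= 8 && hour < 17) {
        pageTechnician(msg);                // business hours: page in real time
      } else {
        emailTechnician(msg);               // after hours: send email
      }
    </SCRIPT></COL>
    </ROW>
    </TBL>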
  • When a policy responds to a fault occurrence, it may call a recovery action. Recoveries can be either a corrective action or a notification. Recovery functions are usually implemented in the C programming language and are called by the embedded JavaScript rules in the policies. Actions can include failovers to standby components. Although detectors and policies both run embedded JavaScript rules in response to certain conditions, they serve different functions. The primary function of detectors is to detect certain conditions, evaluate the output of other detectors, and, if necessary, fire to signal that a specific condition or set of conditions has been found. Detector rules should be relatively short and fast. Networks of detectors help produce a more accurate and complete diagnosis by evaluating the results of multiple inputs. A policy rule, on the other hand, needs to take action given that a certain condition was detected. A policy is invoked when a detector of a specified type fires. This allows one policy to respond to several different detectors in the same way. A policy rule simply allows configuration of what actions will be taken in response to various conditions or faults detected. The detectors, the policies, and the schedules are defined in XML database tables. [0017]
  • This embodiment of a multimode fault management system allows a certain degree of multithreading. Each individual detector and policy that is running is locked. This prevents another thread from running the same detector or policy simultaneously. However, the other detectors remain unlocked and can run at the same time the first detector and policy are running. If one detector triggers or sends an event to another that is momentarily locked by another thread, the first thread will wait until it can acquire the lock. Each detector and policy has a local scope that is populated when data is passed from one to another. During this data transfer both objects are locked. After the transfer is complete, the object that fired is unlocked. [0018]
  • Scheduled Detector [0019]
  • In the following example, an XML detector description defines a scheduled detector that monitors memory use through a database collector. If the amount of memory used exceeds a certain threshold, the policy triggers and calls a logging action. See additional comments in the XML file below for more information. [0020]
  • <TBL name="detector">[0021]
  • <!-- [0022]
  • Low Memory Detector [0023]
  • This detector collects the base memory table, causing the table to be updated with current values relating to memory usage. [0024]
  • If more than ninety percent of the available memory is used, the detector will publish the name of the resource that is low to any listening policies and fire with a value equal to that of the percentage of used memory. [0025]
    -->
    <ROW>
    <COL name="name">lowMemory</COL>
    <COL name="description">Low Memory</COL>
    <COL name="type">lowResource</COL>
    <COL name="url">/fm/memorySmartExplanation.htm</COL>
    <COL name="enable">1</COL>
    <COL name="public">1</COL>
    <COL name="events"></COL>
    <COL name="rule"><SCRIPT>
      var total, free, usage;
      var memthresh = 90;
      dbCollectTable("base", "baseMem");
      total = dbRead("base", "baseMem", "physical", "0") / 1000;
      free = dbRead("base", "baseMem", "physFree", "0") / 1000;
      usage = ((total - free) * 100) / total;
      if (usage >= memthresh) {
        var resource = "Memory";
        publish("resource");
        setOutput(usage);
      }
      </SCRIPT></COL>
    </ROW>
    </TBL>
    <TBL name="policy">
    <!--
  • Low Resource Policy [0026]
  • This policy listens to detectors of type “lowResource”. Any number of detectors can detect low resources for various system components, and this policy will handle all of them. [0027]
  • This policy assumes that the output of the detectors is the amount of resource utilization. It also assumes that a variable named “resource” will be published to determine which resource is low. [0028]
  • Using this information, errors are written to the error log according to how severe the resource situation is. [0029]
    -->
    <ROW>
    <COL name="name">lowResourcePolicy</COL>
    <COL name="description">Low Resource</COL>
    <COL name="url"></COL>
    <COL name="enable">1</COL>
    <COL name="public">1</COL>
    <COL name="types">lowResource</COL>
    <COL name="rule"><SCRIPT>
      var pct = getOutput();
      if (pct > 95) {
        logError("Very low " + resource + " (" + pct + "%)");
      } else {
        logError("Low " + resource + " (" + pct + "%)");
      }
    </SCRIPT></COL>
    </ROW>
    </TBL>
    <!--
  • Resource check schedule [0030]
  • This schedule runs every five seconds, causing the lowMemory detector to run and fire the policy if the memory usage is high. [0031]
  • Additional resource detectors can be added to this schedule set to allow more resources to be monitored. [0032]
  • -->[0033]
  • <TBL name="schedule">[0034]
  • <ROW>[0035]
  • <COL name="name">resourceCheck</COL>[0036]
  • <COL name="description">Check system resources</COL>[0037]
  • <COL name="enable">1</COL>[0038]
  • <COL name="period">5000</COL>[0039]
  • <COL name="schedule"></COL>[0040]
  • <COL name="set">lowMemory</COL>[0041]
  • </ROW>[0042]
  • </TBL>[0043]
  • </DB>[0044]
  • </GOAHEAD>[0045]
  • Networks of detectors are useful in diagnosing intermittent problems that may not be directly testable because of interface limitations or the intermittence of the problem. In these cases, it is useful to correlate faults that have occurred in other related components, and make a diagnosis based on those faults. [0046]
  • FIG. 3 illustrates a scenario that assumes a hardware platform with five PCI slots and a PCI bridge chip. Assume the bridge chip is not able to notify the system of its failure. One symptom of the bridge chip failure is that the five cards bridged by the chip become unable to communicate with the host processor. The loss of a hardware heartbeat is detectable by the fault management process. An individual card can also stop responding to heartbeats because of electrical or physical disconnection, or other hardware and software faults. By determining the correct cause of a failure, the system is better equipped to ensure rapid failover between the correct components. [0047]
  • A lost heartbeat event from a card will cause the lost card heartbeat detector 314 to run. This detector populates a table that stores the name of the card that missed a heartbeat, the current time, and the number of times the heartbeat has failed. This information is important because it allows the second-level detectors to provide fault correlation. This detector 314 always fires. [0048]
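  • A hedged sketch of what the lost card heartbeat detector 314's rule might look like appears below. The correlation table and its columns are hypothetical, the dbWrite() and time() calls are assumed counterparts to the dbRead() call used in the memory example, and _EVENT is assumed to carry the name of the card that missed its heartbeat.
    <COL name="rule"><SCRIPT>
      // Record the missed heartbeat in a correlation table so that the second-level
      // detectors (bridge failure 310, card failure 306) can reason over it.
      var card = _EVENT;                                              // assumed: event names the card
      var misses = dbRead("fm", "cardHeartbeats", "failures", card);  // hypothetical table
      dbWrite("fm", "cardHeartbeats", "failures", card, misses + 1);  // assumed dbWrite() counterpart
      dbWrite("fm", "cardHeartbeats", "time", card, time());          // assumed time() helper
      publish("card");
      setOutput(100);     // this detector always fires, per the description above
    </SCRIPT></COL>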
  • Both the bridge failure detector 310 and the card failure detector 306 listen to the lost heartbeat detector 314. The detectors will run serially, in the order defined in the XML file, but in general, the rules for each are designed so that the order in which they run does not matter. For this example, we assume the bridge failure detector 310 runs first. [0049]
  • If the bridge supports diagnostics, they can be called from the bridge failure detector 310. The results of the tests can be used to determine that the bridge has failed and fire the detector immediately. The bridge detector, by firing, invokes the bridge failure policy 316. If the problem is intermittent, or the diagnostics cannot detect certain conditions, event correlation must be done by the bridge failure detector 310. The bridge failure detector 310 looks at the card database table to determine if all of the cards have had heartbeat failures within a given period of time. If they have, the bridge is assumed to be bad, and the bridge failure detector 310 fires. [0050]
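  • The correlation step performed by the bridge failure detector 310 might look roughly like the hedged sketch below: the rule scans the same hypothetical heartbeat table for each bridged card and fires only if every card has missed a heartbeat within a recent window. The card names, table names, window length, and time() helper are illustrative assumptions, as is support for arrays and loops in the embedded JavaScript subset.
    <COL name="rule"><SCRIPT>
      var cards = ["card1", "card2", "card3", "card4", "card5"];   // the five bridged PCI cards (assumed names)
      var window = 30;                     // assumed correlation window, in seconds
      var now = time();                    // assumed helper returning the current time in seconds
      var allFailed = 1;
      for (var i = 0; i < cards.length; i++) {
        var last = dbRead("fm", "cardHeartbeats", "time", cards[i]);   // hypothetical table
        if (last == 0 || (now - last) > window) {
          allFailed = 0;                   // this card has no recent missed heartbeat
        }
      }
      if (allFailed) {
        setOutput(100);                    // bridge assumed bad; bridge failure policy 316 will run
      }
    </SCRIPT></COL>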
  • The card failure detector 306 engages in a similar process. The card failure detector can invoke the card failure policy 312. If card diagnostics can show the card has failed, the detector can run those diagnostics to determine whether to fire based on that condition. Because the diagnostics may not run correctly in the case of a bridge failure or other intermittent problem, the correlation table once again comes into play. If the card that lost a heartbeat has repeatedly lost heartbeats recently, and at least one card in the correlation table has not lost any heartbeats, the bridge chip has not failed, but the card has. The bridge failure event and the card failure event show two additional methods by which a failure in these components can be detected. If driver code (the interface software between the operating system and the device) can internally detect a card or bridge failure, the event can be sent directly. In this case, if either second-level detector was triggered through an external event, no additional diagnosis or correlation would be required, and the detector would fire. Detectors can determine whether or not an event caused them to fire by looking at the local "_EVENT" embedded JavaScript variable. [0051]
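  • Finally, a minimal hedged sketch of how a second-level detector's rule might use the _EVENT variable just described: if a driver-generated failure event triggered the detector, it fires immediately; otherwise it falls back to the heartbeat correlation sketched above. The names are illustrative, not part of the original disclosure.
    <COL name="rule"><SCRIPT>
      // If an external (driver-generated) event triggered this detector, no
      // additional diagnosis or correlation is required.
      if (_EVENT) {
        setOutput(100);
      } else {
        // ...otherwise perform the heartbeat correlation sketched above...
      }
    </SCRIPT></COL>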
  • The abovementioned description of a method for fault management in a multinode networked computing environment according to the preferred embodiments of the present invention is merely exemplary in nature and is in no way intended to limit the invention or its application or uses. Further, in the abovementioned description, numerous specific details are set forth to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, characteristics and functions of well-known processes have not been described so as not to obscure the present invention. [0052]

Claims (11)

We claim:
1. A method for determining faults in a computing system environment comprising the acts of:
a) detecting the occurrence of a fault event;
b) comparing the fault event to a predetermined criteria to determine if said fault event corresponds to said predetermined criteria; and
c) if said correspondence occurs then communicating said fault event to a policy module.
2. The method of claim 1 wherein upon communicating said fault event triggers a policy action.
3. The method of claim 2 wherein said policy action generates a pager signal.
4. The method of claim 2 wherein said policy action generates an email.
5. The method of claim 2 wherein said policy action generates a system reboot.
6. The method of claim 2 wherein said policy action generates a system restart.
7. The method of claim 2 wherein said policy action causes a switchover to another device.
8. The method of claim 2 wherein said policy action generates a SNMP trap.
9. The method of claim 1 for determining faults in a computing environment further comprising the acts of:
a) collecting operating data from the computing system and populating a database with said operating data; and
b) making said operating data in said database available to a detection module.
10. A system for diagnosing faults in a computing system environment comprising:
a) means for collecting operating data from the computing system;
b) means for populating a database with said data;
c) means for detecting faults derived from said operating data;
d) means for communicating said faults to a policy module; and
e) means for said policy module to take appropriate action based on said communication.
11. A method for diagnosing faults in a computing system environment comprising the acts of:
a) collecting operating system data;
b) storing the operating system data in a database accessible to a detector;
c) detecting a fault by reading the data using a first detector and comparing said data to a predetermined criteria;
d) communicating the fault to a second detector, with said second detector capable of receiving multiple inputs from several said first detectors; and
e) causing a second communication to a policy module if the information received from the several first detectors meets a second predetermined criteria.
US10/250,345 2001-12-28 2001-12-28 Method for managing faults in a computer system environment Abandoned US20040153692A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/250,345 US20040153692A1 (en) 2001-12-28 2001-12-28 Method for managing faults in a computer system environment
US11/489,032 US7337373B2 (en) 2004-03-08 2006-07-18 Determining the source of failure in a peripheral bus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/250,345 US20040153692A1 (en) 2001-12-28 2001-12-28 Method for managing faults in a computer system environment
PCT/US2001/049945 WO2002054255A1 (en) 2000-12-29 2001-12-28 A method for managing faults in a computer system environment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/489,032 Continuation US7337373B2 (en) 2004-03-08 2006-07-18 Determining the source of failure in a peripheral bus

Publications (1)

Publication Number Publication Date
US20040153692A1 true US20040153692A1 (en) 2004-08-05

Family

ID=32770069

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/250,345 Abandoned US20040153692A1 (en) 2001-12-28 2001-12-28 Method for managing faults in a computer system environment
US11/489,032 Expired - Lifetime US7337373B2 (en) 2004-03-08 2006-07-18 Determining the source of failure in a peripheral bus

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/489,032 Expired - Lifetime US7337373B2 (en) 2004-03-08 2006-07-18 Determining the source of failure in a peripheral bus

Country Status (1)

Country Link
US (2) US20040153692A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7043660B1 (en) * 2001-10-08 2006-05-09 Agilent Technologies, Inc. System and method for providing distributed fault management policies in a network management system
US20090300428A1 (en) * 2008-05-27 2009-12-03 Hitachi, Ltd. Method of collecting information in system network
US20120272099A1 (en) * 2005-03-04 2012-10-25 Maxsp Corporation Computer hardware and software diagnostic and report system
US20130219225A1 (en) * 2009-07-16 2013-08-22 Hitachi, Ltd. Management system for outputting information denoting recovery method corresponding to root cause of failure
US20140075244A1 (en) * 2012-09-07 2014-03-13 Canon Kabushiki Kaisha Application management system, management apparatus, application execution terminal, application management method, application execution terminal control method, and storage medium
US10320897B2 (en) * 2015-12-15 2019-06-11 Microsoft Technology Licensing, Llc Automatic system response to external field-replaceable unit (FRU) process

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7210068B1 (en) 2002-03-06 2007-04-24 Network Appliance, Inc. System and method for multipath I/O support for fibre channel devices
US7237266B2 (en) * 2003-06-30 2007-06-26 At&T Intellectual Property, Inc. Electronic vulnerability and reliability assessment
US20050038697A1 (en) * 2003-06-30 2005-02-17 Aaron Jeffrey A. Automatically facilitated marketing and provision of electronic services
US7324986B2 (en) * 2003-06-30 2008-01-29 At&T Delaware Intellectual Property, Inc. Automatically facilitated support for complex electronic services
US7409593B2 (en) * 2003-06-30 2008-08-05 At&T Delaware Intellectual Property, Inc. Automated diagnosis for computer networks
US7415634B2 (en) * 2004-03-25 2008-08-19 International Business Machines Corporation Method for fast system recovery via degraded reboot
US20060242651A1 (en) * 2005-04-21 2006-10-26 Microsoft Corporation Activity-based PC adaptability
US20060282530A1 (en) * 2005-06-14 2006-12-14 Klein Stephen D Methods and apparatus for end-user based service monitoring
JP2008061168A (en) * 2006-09-04 2008-03-13 Ricoh Co Ltd Complex terminal device
US20080256400A1 (en) * 2007-04-16 2008-10-16 Chih-Cheng Yang System and Method for Information Handling System Error Handling
US8392760B2 (en) * 2009-10-14 2013-03-05 Microsoft Corporation Diagnosing abnormalities without application-specific knowledge
US20120221884A1 (en) * 2011-02-28 2012-08-30 Carter Nicholas P Error management across hardware and software layers
US8903893B2 (en) 2011-11-15 2014-12-02 International Business Machines Corporation Diagnostic heartbeating in a distributed data processing environment
US9244796B2 (en) 2011-11-15 2016-01-26 International Business Machines Corporation Diagnostic heartbeat throttling
US8769089B2 (en) * 2011-11-15 2014-07-01 International Business Machines Corporation Distributed application using diagnostic heartbeating
US8874974B2 (en) 2011-11-15 2014-10-28 International Business Machines Corporation Synchronizing a distributed communication system using diagnostic heartbeating
US8756453B2 (en) 2011-11-15 2014-06-17 International Business Machines Corporation Communication system with diagnostic capabilities
US9891864B2 (en) 2016-01-19 2018-02-13 Micron Technology, Inc. Non-volatile memory module architecture to support memory error correction

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5715496A (en) * 1995-01-19 1998-02-03 Ricoh Company, Ltd. Remote service system for image forming apparatuses
US5768501A (en) * 1996-05-28 1998-06-16 Cabletron Systems Method and apparatus for inter-domain alarm correlation
US5822512A (en) * 1995-05-19 1998-10-13 Compaq Computer Corporation Switching control in a fault tolerant system
US5872931A (en) * 1996-08-13 1999-02-16 Veritas Software, Corp. Management agent automatically executes corrective scripts in accordance with occurrences of specified events regardless of conditions of management interface and management engine
US5944782A (en) * 1996-10-16 1999-08-31 Veritas Software Corporation Event management system for distributed computing environment
US6112311A (en) * 1998-02-20 2000-08-29 International Business Machines Corporation Bridge failover system
US6182249B1 (en) * 1997-05-12 2001-01-30 Sun Microsystems, Inc. Remote alert monitoring and trend analysis
US6327677B1 (en) * 1998-04-27 2001-12-04 Proactive Networks Method and apparatus for monitoring a network environment
US20020097672A1 (en) * 2001-01-25 2002-07-25 Crescent Networks, Inc. Redundant control architecture for a network device
US6532552B1 (en) * 1999-09-09 2003-03-11 International Business Machines Corporation Method and system for performing problem determination procedures in hierarchically organized computer systems
US6553416B1 (en) * 1997-05-13 2003-04-22 Micron Technology, Inc. Managing computer system alerts
US6757850B1 (en) * 1998-12-30 2004-06-29 Ncr Corporation Remote services management fault escalation

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5129080A (en) * 1990-10-17 1992-07-07 International Business Machines Corporation Method and system increasing the operational availability of a system of computer programs operating in a distributed system of computers
US5390326A (en) * 1993-04-30 1995-02-14 The Foxboro Company Local area network with fault detection and recovery
US6370656B1 (en) * 1998-11-19 2002-04-09 Compaq Information Technologies, Group L. P. Computer system with adaptive heartbeat
US7020076B1 (en) * 1999-10-26 2006-03-28 California Institute Of Technology Fault-tolerant communication channel structures
US7222268B2 (en) * 2000-09-18 2007-05-22 Enterasys Networks, Inc. System resource availability manager
US6782489B2 (en) * 2001-04-13 2004-08-24 Hewlett-Packard Development Company, L.P. System and method for detecting process and network failures in a distributed system having multiple independent networks
US20030061340A1 (en) * 2001-09-25 2003-03-27 Mingqiu Sun Network health monitoring through real-time analysis of heartbeat patterns from distributed agents
US7281171B2 (en) * 2003-01-14 2007-10-09 Hewlett-Packard Development Company, L.P. System and method of checking a computer system for proper operation
US7395444B2 (en) * 2003-09-23 2008-07-01 American Power Conversion Corporation Power status notification
JP2005196467A (en) * 2004-01-07 2005-07-21 Hitachi Ltd Storage system, control method for storage system, and storage controller

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5715496A (en) * 1995-01-19 1998-02-03 Ricoh Company, Ltd. Remote service system for image forming apparatuses
US5822512A (en) * 1995-05-19 1998-10-13 Compaq Computer Corporation Switching control in a fault tolerant system
US5768501A (en) * 1996-05-28 1998-06-16 Cabletron Systems Method and apparatus for inter-domain alarm correlation
US5872931A (en) * 1996-08-13 1999-02-16 Veritas Software, Corp. Management agent automatically executes corrective scripts in accordance with occurrences of specified events regardless of conditions of management interface and management engine
US5944782A (en) * 1996-10-16 1999-08-31 Veritas Software Corporation Event management system for distributed computing environment
US6182249B1 (en) * 1997-05-12 2001-01-30 Sun Microsystems, Inc. Remote alert monitoring and trend analysis
US6553416B1 (en) * 1997-05-13 2003-04-22 Micron Technology, Inc. Managing computer system alerts
US6112311A (en) * 1998-02-20 2000-08-29 International Business Machines Corporation Bridge failover system
US6327677B1 (en) * 1998-04-27 2001-12-04 Proactive Networks Method and apparatus for monitoring a network environment
US6757850B1 (en) * 1998-12-30 2004-06-29 Ncr Corporation Remote services management fault escalation
US6532552B1 (en) * 1999-09-09 2003-03-11 International Business Machines Corporation Method and system for performing problem determination procedures in hierarchically organized computer systems
US20020097672A1 (en) * 2001-01-25 2002-07-25 Crescent Networks, Inc. Redundant control architecture for a network device

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7043660B1 (en) * 2001-10-08 2006-05-09 Agilent Technologies, Inc. System and method for providing distributed fault management policies in a network management system
US20120272099A1 (en) * 2005-03-04 2012-10-25 Maxsp Corporation Computer hardware and software diagnostic and report system
US20090300428A1 (en) * 2008-05-27 2009-12-03 Hitachi, Ltd. Method of collecting information in system network
US8086905B2 (en) * 2008-05-27 2011-12-27 Hitachi, Ltd. Method of collecting information in system network
US8356208B2 (en) 2008-05-27 2013-01-15 Hitachi, Ltd. Method of collecting information in system network
US20130219225A1 (en) * 2009-07-16 2013-08-22 Hitachi, Ltd. Management system for outputting information denoting recovery method corresponding to root cause of failure
US9189319B2 (en) * 2009-07-16 2015-11-17 Hitachi, Ltd. Management system for outputting information denoting recovery method corresponding to root cause of failure
US20140075244A1 (en) * 2012-09-07 2014-03-13 Canon Kabushiki Kaisha Application management system, management apparatus, application execution terminal, application management method, application execution terminal control method, and storage medium
US9753837B2 (en) * 2012-09-07 2017-09-05 Canon Kabushiki Kaisha Application management system, management apparatus, application execution terminal, application management method, application execution terminal control method, and storage medium
US10320897B2 (en) * 2015-12-15 2019-06-11 Microsoft Technology Licensing, Llc Automatic system response to external field-replaceable unit (FRU) process

Also Published As

Publication number Publication date
US7337373B2 (en) 2008-02-26
US20070038899A1 (en) 2007-02-15

Similar Documents

Publication Publication Date Title
US7337373B2 (en) Determining the source of failure in a peripheral bus
WO2002054255A9 (en) A method for managing faults in a computer system environment
KR100898339B1 (en) Autonomous fault processing system in home network environments and operation method thereof
US7281040B1 (en) Diagnostic/remote monitoring by email
US8041996B2 (en) Method and apparatus for time-based event correlation
US7426654B2 (en) Method and system for providing customer controlled notifications in a managed network services system
EP0570505B1 (en) Knowledge based machine initiated maintenance system and method
US8812649B2 (en) Method and system for processing fault alarms and trouble tickets in a managed network services system
US20030097610A1 (en) Functional fail-over apparatus and method of operation thereof
US20060233313A1 (en) Method and system for processing fault alarms and maintenance events in a managed network services system
CN101800675A (en) Failure monitoring method, monitoring equipment and communication system
CN105610648A (en) Operation and maintenance monitoring data collection method and server
CN109286529A (en) A kind of method and system for restoring RabbitMQ network partition
CN108710545A (en) A kind of remote monitoring fault self-recovery system
US20050234919A1 (en) Cluster system and an error recovery method thereof
Duarte Jr et al. A distributed system-level diagnosis model for the implementation of unreliable failure detectors
Gautam et al. A novel approach of fault management and restoration of network services in IoT cluster to ensure disaster readiness
CN115470061A (en) Distributed storage system I/O sub-health intelligent detection and recovery method
US20120215492A1 (en) Methods & apparatus for remotely diagnosing grid-based computing systems
JP4575020B2 (en) Failure analysis device
Zhang et al. A dependency matrix based framework for QoS diagnosis in SOA
US7467068B2 (en) Method and apparatus for detecting dependability vulnerabilities
JP5395951B2 (en) Network equipment
Xu et al. MAS and fault-management
JP2003186702A (en) Terminal operation monitoring system and terminal operation monitoring method

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOAHEAD SOFTWARE, INC., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:O'BRIEN, MICHAEL;GRAVESTOCK, PETER;REEL/FRAME:015047/0312;SIGNING DATES FROM 20031221 TO 20040223

AS Assignment

Owner name: GOAHEAD SOFTWARE INC., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:O'BRIEN, MICHAEL;GRAVESTOCK, PETER;REEL/FRAME:015390/0361;SIGNING DATES FROM 20041012 TO 20041109

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION