US7191364B2 - Automatic root cause analysis and diagnostics engine - Google Patents


Info

Publication number
US7191364B2
US7191364B2; application US10/713,867 (US71386703A)
Authority
US
United States
Prior art keywords
hang
file
attributes
data
bug
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US10/713,867
Other versions
US20050120273A1 (en)
Inventor
William Hunter Hudson
Reiner Fink
Geoff Pease
Gerald Maffeo
Yi Meng
Eric LeVine
Andrew L. Bliss
Andre Vachon
Kshitiz K. Sharma
Jing Shan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ServiceNow Inc
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US10/713,867
Assigned to MICROSOFT CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BLISS, ANDREW L., FINK, REINER, HUDSON, WILLIAM HUNTER, LEVINE, ERIC, MAFFEO, GERALD, MENG, Yi, PEASE, GEOFF, SHAN, Jing, SHARMA, KSHITIZ K., VACHON, ANDRE
Publication of US20050120273A1
Application granted
Publication of US7191364B2
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION
Assigned to SERVICENOW, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT TECHNOLOGY LICENSING, LLC
Assigned to SERVICENOW, INC.: CORRECTIVE ASSIGNMENT TO CORRECT THE RECORDAL TO REMOVE INADVERTENTLY RECORDED PROPERTIES SHOWN IN ATTACHED SHEET PREVIOUSLY RECORDED AT REEL: 047681 FRAME: 0916. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: MICROSOFT TECHNOLOGY LICENSING, LLC

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 - Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/079 - Root cause analysis, i.e. error or fault diagnosis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 - Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706 - Error or fault processing not based on redundancy, the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0748 - Error or fault processing not based on redundancy, the processing taking place on a specific hardware platform or in a specific software environment, in a remote unit communicating with a single-box computer node experiencing an error/fault
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703 - Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0793 - Remedial or corrective actions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/36 - Preventing errors by testing or debugging software
    • G06F11/362 - Software debugging
    • G06F11/366 - Software debugging using diagnostics
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 - Data processing: database and file management or data structures
    • Y10S707/99951 - File or database maintenance
    • Y10S707/99952 - Coherency, e.g. same view to multiple users
    • Y10S707/99953 - Recoverability

Definitions

  • FIG. 2 also illustrates a triage evaluator 205 , which provides a mechanism for initial triage on computing device 100 .
  • the triage evaluator 205 performs initial triage on computing device 100 to prevent a repeat of the bug and to speed up the solution process.
  • triage evaluator 205 also extracts attributes from diagnostic data files similar to datamining utility 405 described in conjunction with FIG. 4 .
  • the triage evaluator 205 further includes a database 207 of common bugs and issues related to software on computing device 100 .
  • triage evaluator 205 uses diagnostic data file 204 as initial input to determine objects, variables, addresses and modules loaded into system memory. Triage evaluator also maintains a history file 206 that describes ownership and reliability of functions and modules. In one implementation, triage evaluator 205 processes diagnostic data file 204 , looks at the call stack and uses the predetermined data in history file 206 to determine the reliability of certain modules and routines. To determine a culprit or faulty component, different weights are assigned to different data based on the information in history file 206 . Special values are assigned to candidate files, modules, and routines to calculate the likelihood that a particular module or routine is faulty. Files, modules, and routines become candidate culprits if they are part of the captured data.
  • the assigned values may be as simple as a counter or as complex as a mathematical or statistical algorithm. For example, a module that has recently been patched, is used frequently, and has no history of problems may be assigned the value “unlikely to be the culprit.” Another candidate piece of software may be assigned the value “may be the culprit” because it is used often and appears somewhat frequently in hang data. As a final example, an instruction that is always on the call stack when a particular hang appears may be identified as the “likely” culprit.
  • once triage evaluator 205 isolates the likely culprit (file, module, routine, or instruction), initial triage may be performed.
  • triage measures may include renaming the culprit file, installing an original version of a file, attempting to find a newer version of the file, or otherwise quarantining the faulty file, module, routine or instruction.
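To make the weighting scheme described above concrete, the following Python sketch scores candidate culprits against a history file. It is a minimal illustration only: the record fields, weights, and thresholds are invented for the example and are not taken from the patent.

```python
# Hypothetical sketch of the triage evaluator's culprit scoring. The
# weights and thresholds below are illustrative assumptions only.

def score_candidates(stack_modules, history):
    """Label every module seen in the hang data by culprit likelihood.

    stack_modules: module names captured from the hung program's call stack.
    history: per-module records like {"recently_patched": bool,
             "prior_hangs": int, "stack_appearances": int}.
    """
    labels = {}
    for module in stack_modules:
        record = history.get(module, {"recently_patched": False,
                                      "prior_hangs": 0,
                                      "stack_appearances": 0})
        score = 2 * record["prior_hangs"]     # history of problems
        score += record["stack_appearances"]  # often present when hangs occur
        if record["recently_patched"]:
            score -= 3                        # a fresh patch lowers suspicion
        if score <= 0:
            labels[module] = "unlikely to be the culprit"
        elif score < 5:
            labels[module] = "may be the culprit"
        else:
            labels[module] = "likely culprit"
    return labels

# bad.dll has a long hang history, so it is flagged; good.dll is not.
print(score_candidates(
    ["good.dll", "bad.dll"],
    {"bad.dll": {"recently_patched": False,
                 "prior_hangs": 4,
                 "stack_appearances": 7}}))
```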
  • to see how triage evaluator 205 works, consider the following example.
  • a user browses the Internet using Microsoft Internet Explorer.
  • Internet Explorer hangs, invoking Watson, which captures hang data.
  • Watson invokes triage evaluator 205 to perform initial triage on the machine.
  • Triage evaluator has maintained a history of crashes and hangs and notices that the file “bad.dll” is often associated with hangs like the one that just occurred.
  • triage evaluator marks bad.dll as the likely culprit and attempts to quarantine the file.
  • triage evaluator may try renaming bad.dll, but the file is required by Internet Explorer.
  • triage evaluator attempts to back-rev the file to an older, more stable version, but the current file is the original. Finally, triage evaluator attempts to update the file using Microsoft's Windows Update feature. If triage evaluator succeeds in finding a new file and updating bad.dll, then the bug may be fixed without further user intervention. If triage evaluator does not find a fix, then the diagnostic data files are packaged and sent to software provider 230 .
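The rename, back-rev, then update sequence from this example amounts to an ordered escalation that stops at the first step that succeeds. A hedged sketch follows; the three step callables are hypothetical stand-ins for the platform-specific operations, not real APIs.

```python
# Illustrative escalation of triage measures for a suspected culprit file.
# The step functions are hypothetical placeholders, not real APIs.

def quarantine(culprit, steps):
    """Try each triage measure in order; stop at the first that succeeds."""
    for name, step in steps:
        if step(culprit):
            return name          # culprit quarantined; bug may be fixed locally
    return None                  # nothing worked; package and send diagnostics

result = quarantine("bad.dll", [
    ("rename", lambda f: False),            # file is in use by the browser
    ("restore original", lambda f: False),  # current file already is the original
    ("fetch update", lambda f: True),       # e.g., a Windows Update-style check
])
print(result or "send packaged file to software provider")
```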
  • triage evaluator 205 prompts the user for permission to perform triage or prompts the user to perform the triage steps.
  • triage evaluator 205 maintains a known issues database 207 , which receives updates from software providers relating to fixes and solutions. For example, the issues database 207 may routinely be updated with new information as it becomes available in a fashion similar to Microsoft's Windows Update system. As part of the triage process, triage evaluator 205 compares diagnostic data to its database.
  • if the comparison finds a match, the issues database may display the solution, retrieve a solution or fix, prompt user 201 for permission to apply a solution, automatically apply the solution, or perform some other similar act.
  • if the bug is unknown, it will be sent to software provider 230 to be troubleshot and diagnosed.
  • triage evaluator 205 performs initial analysis and triage on a bug, thereby conserving software provider 230 resources, such as bandwidth, development costs, troubleshooting costs, disk space, and the like.
  • Sending packaged file 210 to software provider 230 may involve copying the packaged file from computing device 100 to software provider 230 .
  • the transfer of the file may occur by uploading packaged file 210 to a software provider server, sending an email message with the packaged file attached, connecting to a support page and attaching the file, or using some other electronic form of communication.
  • typically, packaged file 210 is transmitted over Internet 220 ; alternatively, software provider 230 may be on the same network (e.g., LAN or WAN) as computing device 100 .
  • packaged file 210 may be stored in a packaged file repository 231 until it may be evaluated by analysis engine 232 .
  • if the bug is unknown, it is transferred to software development 240 so the bug can be troubleshot and fixed.
  • once a solution is found, computing device 100 and/or user 201 are notified of the solution.
  • FIG. 3 illustrates an exemplary implementation of packaged file 210 .
  • the format of packaged file 210 may be a CAB file, a ZIP file, or any other type of packaged or compressed file.
  • packaged file 210 may be encrypted, password protected, or otherwise secured before being transferred to software provider 230 .
  • Packaged file 210 typically includes, among other components, a bucket ID 311 and at least one diagnostic data file 312 .
  • Bucket ID 311 provides a means for labeling the bug so it can be categorized into a bucket. Bucketing involves categorizing classes of similar or related problems, which helps to identify and troubleshoot bugs. Exemplary buckets are described in more detail in conjunction with FIG. 5 .
  • bucket ID 311 may incorporate information such as computer name, user name, MAC address, hardware serial number, client identifier, IP address, or other information uniquely identifying a computing device, user, or bug.
  • generating a bucket ID 311 for packaged file 210 involves walking call stack 305 and performing a hash on it.
  • call stack 305 may include multiple events, such as “create file” 306 , “open file” 307 , or “create thread” 308 .
  • a program calls a create file event, which creates a file with a file handler locked in critical section 309 .
  • a subsequent “create thread” event requires access to the “create file” handler.
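One plausible realization of “walking call stack 305 and performing a hash on it” is sketched below. The frame normalization rules and the choice of SHA-1 are assumptions; the patent does not name a hash function.

```python
import hashlib

# Hypothetical bucket-ID generation: walk the captured call stack,
# normalize each frame, and hash the sequence so that hangs with the
# same stack signature land in the same bucket.

def bucket_id(call_stack):
    digest = hashlib.sha1()
    for frame in call_stack:
        # Strip instruction offsets and case so small address differences
        # do not scatter one bug across many buckets.
        digest.update(frame.split("+")[0].lower().encode())
        digest.update(b"|")                # keep frame boundaries distinct
    return digest.hexdigest()[:16]

stack = ["kernel32!CreateFileW+0x2c",      # "create file" event
         "app!OpenDocument+0x11a",
         "app!CreateWorkerThread+0x8"]     # "create thread" event
print(bucket_id(stack))
```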
  • packaged file 210 includes at least one diagnostic data file 312 , containing hang data.
  • the diagnostic data file may be a CAB file or another type of packaged or compressed file.
  • diagnostic data file 312 may be encrypted, password protected, or otherwise secured.
  • Diagnostic data file 312 includes one or more attributes 316 .
  • Attributes are diagnostic values which are provided by a debugger infrastructure to help troubleshooters understand the environment and events associated with a hang.
  • attributes 316 that may be useful for diagnosing bugs include the name of the program, thread number, application version number, instructions on the stack, and any number of other captured values and events. Attributes 316 may be in a text format or numeric format, depending on the nature of the diagnostic data file 312 .
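As a small, hypothetical illustration, the attributes carried by one diagnostic data file might be represented like this (the field names and values are invented):

```python
# Hypothetical attribute set extracted from one diagnostic data file.
# Actual attributes depend on the debugger infrastructure that wrote
# the dump or log.
attributes = {
    "program_name": "iexplore.exe",
    "app_version": "6.0.2800.1106",
    "hung_thread": 4,
    "os_version": "5.1.2600 SP1",
    "stack_top": ["ntdll!NtWaitForSingleObject",
                  "kernel32!WaitForSingleObject",
                  "bad!LoadResource"],
}
```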
  • FIG. 4 illustrates the software provider's system for handling packaged file 210 .
  • the software provider system is typically a server with components generally similar to those in computing device 100 .
  • the software provider system would likely include a processor and computer storage media to perform analysis on received packaged file 210 .
  • packaged file 210 may be sent to diagnostic engine 401 .
  • the bucket ID and diagnostic data file are extracted and then stored.
  • packaged file 210 is stored as received and its contents are extracted prior to being analyzed by diagnostic engine 401 .
  • diagnostic engine 401 may have several components including a datamining utility 405 , an attribute structure 410 , and a bucket database 415 .
  • Datamining utility 405 loads a packaged file from packaged file repository 231 and extracts attributes from packaged file's diagnostic data files.
  • datamining utility 405 is an automated utility that extracts attributes based on a diagnostic data format. For example, in a Windows environment, mini-dumps may contain cookie crumbs that make attributes identifiable and thus extractable through automated tools.
  • alternatively, data in packaged file 210 is loaded into a debugger and analyzed by troubleshooters who manually extract key attributes from the file (in this case, the troubleshooters serve as utility 405 ).
  • datamining utility 405 is a text file, such as a batch file, with a list of commands that are fed into a debugger for extracting attributes from packaged file 210 .
  • datamining utility 405 may look for keywords, binary patterns, offsets, or other recognizable data.
  • Datamining utility 405 repeatedly iterates on the diagnostic data files until no more attributes can be found. Once extracted, attributes are put into an analyzable format, as illustrated by attribute structure 410 . Over a period of time, datamining utility will identify a number of problem classes (buckets) and extract a large number of attributes from packaged file(s) 210 . As new problems and classes of problems are identified, this information is added to bucket database 415 .
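The repeated extraction pass can be pictured as a fixed-point loop: scan the diagnostic data with every known pattern and repeat until a pass yields nothing new. A sketch under the simplifying assumption that the data is text and the patterns are regular expressions (real mini-dumps are binary):

```python
import re

# Illustrative fixed-point attribute extraction. Pattern names and the
# text-log format are assumptions made for the sketch.

def mine_attributes(text, patterns):
    found = {}
    while True:
        new = 0
        for name, pattern in patterns.items():
            match = re.search(pattern, text)
            if match and name not in found:
                found[name] = match.group(1)
                new += 1
        if new == 0:               # a full pass found nothing new: stop
            return found

patterns = {"os_version": r"OS Version: (\S+)",
            "hung_thread": r"Hung thread: (\d+)"}
log = "OS Version: 5.1.2600\nHung thread: 4\n"
print(mine_attributes(log, patterns))
```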
  • Attribute structure 410 may be a binary tree, an array, a linked list, a text file, an HTML file, a database entry, or other comparable data structure that stores and organizes data in an analyzable format.
  • FIG. 5 illustrates another exemplary implementation of attribute structure 410 .
  • diagnostic engine 401 queries its bucket database 415 to see if the mined data in attribute structure 410 belongs to a known bucket.
  • Bucket database 415 consists of entries (buckets) that contain groups of similar or related bugs categorized based on a given set of criteria. Buckets may contain bugs relating to a particular software application, a module name, an application version, the bucket ID, an attribute, a thread name, an error number, a computer address, a user name, a combination of these factors, or some other reasonable means of categorizing software bugs.
  • attribute structure 410 is compared to entries in bucket database 415 . If attribute structure 410 corresponds to a known bucket and a lookup of the bug indicates a solution 420 is available, the solution 420 is sent to computing device 100 . In another implementation, if attribute structure 410 corresponds to a known bucket but a fix is not available, then software development 430 is notified and values in the bucket database may be updated. In yet another implementation, attribute structure 410 is stored according to its associated bucket ID. In still another implementation, only attribute structure 410 is stored. In another implementation, a counter is updated to indicate that one more instance of a bucketed problem has been encountered. Other implementations may store username and computer device identifying data to notify a user when solution 420 becomes available.
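The branching just described (known bucket with a solution, known bucket without one, or an unknown bug) is a three-way dispatch. A minimal sketch follows; the database layout and the send_solution / notify_development callables are invented for the example:

```python
# Hypothetical dispatch on a bucket-database lookup. The database layout
# and the send_solution / notify_development callables are assumptions.

def handle_report(key, bucket_db, send_solution, notify_development):
    bucket = bucket_db.get(key)
    if bucket is None:
        notify_development(key)                   # unknown bug: escalate
        return "forwarded to software development"
    bucket["count"] = bucket.get("count", 0) + 1  # one more instance seen
    if bucket.get("solution"):
        send_solution(bucket["solution"])         # known bug with a fix
        return "solution sent to computing device"
    notify_development(key)                       # known, but not fixed yet
    return "development notified; bucket counters updated"

db = {"APPLICATION_HANG/bad.dll": {"solution": "update bad.dll", "count": 12}}
print(handle_report("APPLICATION_HANG/bad.dll", db, print, print))
```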
  • the stored data may be used to create a bug fix priority list 432 because certain bugs need to be fixed sooner than other bugs. Any number of factors may be used to determine the priority, such as the security risks posed by a bug, the likelihood of data loss, frequency of the error, and other similar factors. For example, if a bug creates an exploitable security flaw in an application, such as a heap overflow error, then that bug will be prioritized higher than other bugs. Similarly, if one bug occurs more frequently than other bugs, that bug will also be prioritized accordingly. In contrast, if a bug happens infrequently with few side effects and would require a rewrite of thousands of lines of code, that bug likely would be assigned a very low priority.
  • diagnostic engine 401 transfers the contents of packaged file 210 to software development 430 for further analysis.
  • feedback 436 on the bug is provided to diagnostic engine 401 .
  • Feedback 436 may include new attributes that datamining utility 405 should begin looking for.
  • bucket database 415 may be updated with new bucket information, so similar issues will be flagged and bucketed properly.
  • Other feedback could include documentation of the bug, workarounds for the problem, or a timeframe for finding a solution to the bug. Any of a number of similar items could be also included in feedback 436 .
  • since the bug has been diagnosed, it will be categorized as a “known” bug and added to bug fix priority list 432 .
  • FIG. 5 illustrates an exemplary attribute structure 520 generated by datamining utility 405 and stored in bucket database 415 .
  • Bucket 416 may have numerous relationships with packaged file 210 .
  • a given bucket may contain data from many packaged files; hence, the bucket to packaged file mapping may be 1 to many.
  • a given packaged file may contain one or more diagnostic data files, thus, the packaged file to diagnostic data files mapping may also be 1 to many.
  • a given diagnostic data file may map to one or more processes.
  • a given process may map to multiple threads.
  • Other implementations of bucket 416 may include variables, instructions, and other values and events.
  • because bucket 416 may have many potential attributes, an aspect of the system is to look for natural groupings of attributes within a bucket and see if sub-classifications exist that make sense to pull out, instead of classifying every new bug into a generic “application hang” bucket.
  • datamining utility 405 mines for attributes in order to identify similar issues and bucket them accordingly.
  • this process is not trivial. For example, a diagnostic data file containing what appears to be an “idle-related” bug may not actually be bucketed in the “idle” bucket, because the idle thread may have been caused by a locked attribute from an earlier crashed application. To solve this problem, it is necessary to determine who acquired the lock on the attribute and, if the lock was incorrectly acquired, to categorize the bug in a different bucket.
  • bucketing may be performed initially by technical support personnel, who will flesh out attributes datamining utility 405 should look for and add entries into the bucket database. However, as the database grows, more and more bugs should be handled by the system.
  • as more hang data is mined, attribute structure 520 expands. In one implementation, attribute structure 520 may form a decision tree for each bucket or issue.
  • the decision tree attribute structure 520 is a graphical depiction of the order in which relevant attributes can be utilized to identify entries of the corresponding bucket.
  • the attributes forming the tree include natural groupings of thread and process level attributes.
  • exemplary nodes 524 and 525 each contain a thread- or process-level attribute; when examined, the two appear to form a natural grouping, so nodes 524 and 525 are grouped together under node 523 .
  • node 523 may contain an attribute that is part of a grouping of attributes.
  • node 523 contains a name created to describe its child nodes. The grouping of nodes under node 523 forms a sub-class of node 522 .
  • node 522 is a sub-class of node 521 .
  • the attribute structure for the “APPLICATION_HANG” bucket looks at the natural groupings of attributes below to see if a sub-classification may be created out of the generic APPLICATION_HANG bucket. If there appears to be a natural grouping of attributes from the process and thread level attributes (such as instruction calls or module names) then that group of attributes is made into a sub-class. Alternatively, if the sub-class is large enough, it may become its own bucket.
  • the decision tree creates a logical representation of the data that is easy to search and provides a nice way for software developers to analyze the data.
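The search for natural groupings can be sketched as counting attribute combinations across the reports in a generic bucket and promoting any combination common enough to stand on its own. The 50% threshold and the report layout below are arbitrary assumptions:

```python
from collections import Counter

# Illustrative search for "natural groupings" of attributes inside a
# bucket. A combination seen in at least half the reports is promoted
# to a sub-class; the threshold is an assumption for the sketch.

def find_subclasses(reports, threshold=0.5):
    combos = Counter(frozenset(r["stack_modules"]) for r in reports)
    cutoff = max(1, int(threshold * len(reports)))
    return [set(combo) for combo, count in combos.items() if count >= cutoff]

reports = [{"stack_modules": ["user32", "bad"]},
           {"stack_modules": ["user32", "bad"]},
           {"stack_modules": ["user32", "bad"]},
           {"stack_modules": ["gdi32"]}]
print(find_subclasses(reports))   # [{'user32', 'bad'}] becomes a sub-class
```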
  • attribute structure 520 may also look at process and thread-level attributes from different buckets to find dependencies and correlations between them.
  • the attributes in a bucket or in a sub-class grouping receive a user-friendly name by way of a table that combines one or more attributes under that name.
  • the combination can be the result of aggregating attributes, extracting data from another database, or the result of joining other tables.
  • an index is created using bucket names.
  • Another attribute structure 520 that may be used to retrieve data from bucket database 415 is a Naïve Bayes model, which allows a troubleshooter to choose attributes of interest and presents a ranked list of attribute-value pairs, showing whether any buckets have the corresponding attribute/value pair.
  • These or other attribute structures create a robust architecture for querying bucket database 415 , so that when data from a new hang is received, it can efficiently be compared to previously extracted data.
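The ranking behavior attributed to the Naïve Bayes model above can be approximated with simple co-occurrence counts. The sketch below is a counting stand-in, not a full probabilistic implementation, and the sample data is invented:

```python
from collections import defaultdict

# Illustrative ranking of attribute/value pairs by how often each pair
# appears in each bucket (a stand-in for the Naive Bayes model).

def rank_pairs(observations):
    """observations: iterable of (bucket, attribute, value) triples."""
    counts = defaultdict(int)
    for bucket, attribute, value in observations:
        counts[(attribute, value, bucket)] += 1
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)

data = [("APPLICATION_HANG", "module", "bad.dll"),
        ("APPLICATION_HANG", "module", "bad.dll"),
        ("IDLE_HANG", "module", "net.dll")]
for (attribute, value, bucket), count in rank_pairs(data):
    print(f"{attribute}={value} -> {bucket} ({count} reports)")
```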
  • datamining utility 405 extracts an attribute structure 520 like the one illustrated in FIG. 5 .
  • one or more attributes are placed at the root or topmost level of the tree 521 and other attributes 522 – 525 are added, filling out attribute structure 520 as they are extracted and grouped.
  • software provider receives a packaged file, processes the packaged file, and extracts attributes from it to form an attribute structure.
  • once attribute structure 520 has been filled with values, a comparison can be made to existing data in bucket database 415 .
  • the entire attribute structure 520 may be compared for equivalence to entries in bucket database 415 . If attribute structure 520 matches an entry in bucket database 415 , then the bug is known and subsequent action may be based on this fact.
  • Comparing the entire attribute structure 520 to an entry in bucket database 415 may include comparison of nodes at each level of the tree. For example, the attributes in the topmost nodes are compared to see if they are reasonably equivalent. If so, then the next level of values is compared to see if the values are reasonably equivalent to similarly structured values in bucket database 415 . The process is continued until all the nodes of attribute structure 520 have been evaluated. If the nodes match up at every level, then it is likely safe to assume the two are equivalent. For example, one comparison may find that the topmost node 521 contains a value, “IsReportingFault,” which is equivalent to the topmost value of the entry in bucket database 415 . Since the attribute is present in both structures, a comparison between next-level nodes is made.
  • sub-class 522 may consider the situation where another thread in the process was busy packing a Watson report, while the user interface thread was still trying to display the hung user interface to the user. If the attribute is present in both structures, the comparison proceeds until the tree has been traversed, in which case the bug is known, or until the two trees diverge, at which point the bug is reported to software development.
  • if attribute structure 520 is reasonably equivalent to an entry from bucket database 415 based on predetermined criteria, then the issue may also be known and subsequent action will be based on this fact.
  • the predetermined criteria could be any of a number of factors, such as whether a statistically significant number of attributes are similar, whether the top three instructions on the call stack are the same, or some other relevant criteria. For example, if attribute structure 520 reveals that an application hung on one particular thread, that fact alone may be sufficient to classify the file as pertaining to a particular bucket. In another implementation, several instructions taken together may sufficiently identify a bug and merit classification in one bucket as opposed to another. If it is determined that a bug is unknown, the diagnostic data files may be forwarded to software development for further analysis. As the bug is diagnosed, bucket database 415 may be updated. Updated information may include adding a new bucket or adding new nodes to an existing bucket.
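The level-by-level comparison lends itself to a short recursive sketch. The node layout and the exact-match equivalence test below are simplifications; the text allows looser “reasonably equivalent” criteria.

```python
# Illustrative level-by-level comparison of two attribute trees. A node
# is (attribute_value, [children]); equivalence here is an exact match,
# whereas the text allows looser "reasonably equivalent" tests.

def trees_match(node_a, node_b):
    value_a, children_a = node_a
    value_b, children_b = node_b
    if value_a != value_b:
        return False                      # trees diverge: treat as unknown
    if len(children_a) != len(children_b):
        return False
    return all(trees_match(a, b)          # descend one level at a time
               for a, b in zip(children_a, children_b))

known = ("IsReportingFault", [("UIThreadBlocked", [])])
incoming = ("IsReportingFault", [("UIThreadBlocked", [])])
print("known bug" if trees_match(incoming, known)
      else "report to software development")
```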
  • attribute structure 520 could allow troubleshooters to reproduce the bug.
  • Packaged file 210 may contain enough data to automatically figure out common occurrences needed to reproduce the bug.
  • aspects of datamining utility 405 are off-loaded to computing device 100 . Much of the attribute extraction can be performed by a datamining utility local to computing device 100 . The resulting attribute structure 520 could then be packaged and sent to software provider.
  • all aspects of the system related to known bugs can be offloaded to the user's computing device.
  • the datamining utility 405 in one implementation, is extensible so that analysis on the diagnostic data files is done in a single interface.
  • the datamining utility 405 may be enhanced by adding attribute specific extensions 510 for different programs; thus, software providers can mine Watson-like attributes for their specific application.
  • extensions 510 can be added to the data capture program on computing device 100 to gather third-party software specific data.
  • FIG. 6 illustrates a method for finding a solution to a hang-inducing bug.
  • finding a solution to a bug involves first capturing data from a hung program on a computing device 605 . Once hang data has been captured, the data is packaged into a file to be sent to a software provider for analysis 610 . The packaged data is sent 615 and eventually received by software provider. Upon receipt of the packaged data, attributes are extracted from the captured data in order to determine relevant characteristics of the hang 620 . The extracted attributes are compared to entries in a database containing known bugs 625 . Comparing the captured data to the database entries will likely identify whether the hang-inducing bug is a known bug 630 .
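Read end to end, the steps of FIG. 6 form a straight pipeline. In the sketch below, every helper is an invented placeholder for one numbered step, wired up with trivial stand-ins so the flow can be run:

```python
# Hypothetical pipeline for the method of FIG. 6. Each helper stands in
# for a numbered step (605-630); none of these are real APIs.

def analyze_hang(hung_program, capture, package, send, extract, lookup):
    data = capture(hung_program)      # 605: capture data from the hung program
    report = package(data)            # 610: package data into a file
    received = send(report)           # 615: transmit to the software provider
    attributes = extract(received)    # 620: extract relevant attributes
    return lookup(attributes)         # 625/630: compare against known bugs

result = analyze_hang(
    "app.exe",
    capture=lambda p: {"program": p, "stack": ["f", "g"]},
    package=lambda d: ("bucket-42", d),
    send=lambda r: r,                 # pretend the upload succeeded
    extract=lambda r: r[1],
    lookup=lambda a: "known bug" if a["program"] == "app.exe" else "unknown bug",
)
print(result)
```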
  • FIG. 7 illustrates the method for finding solutions to unknown bugs. As described in conjunction with FIG. 6 , data from a hung program is captured and reported to a software provider.
  • if the software provider determines the reported bug is an unknown bug, it must be properly diagnosed 705 .
  • the captured hang data is sent to software engineering 710 .
  • once software engineering diagnoses the bug, several steps may occur in any order.
  • a database of known issues is updated to indicate that the bug is known, so subsequent files reporting the same bug will be classified appropriately 720 .
  • the mechanism for extracting attributes from the bug report may be updated to look for new attributes or to include more data about a particular attribute 715 .
  • the bug may be prioritized as to when it should be fixed 725 .
  • Priority may be determined by a variety of factors such as FIFO (first-in, first-out), LIFO (last-in, first-out), security concerns, convenience concerns, time concerns, and other similar factors.
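Those factors can be folded into a single priority score. The weights below are arbitrary illustrations chosen for the sketch, not values from the patent:

```python
# Illustrative bug-priority score combining the factors named above.
# All weights are assumptions; inputs are normalized to [0, 1].

def priority(security_risk, data_loss_risk, frequency, fix_cost):
    """Higher score means fix sooner."""
    return (5.0 * security_risk        # exploitable flaws jump the queue
            + 3.0 * data_loss_risk
            + 2.0 * frequency          # frequent bugs move up
            - 1.0 * fix_cost)          # huge rewrites sink low-impact bugs

heap_overflow = priority(1.0, 0.6, 0.3, 0.4)   # exploitable: top of the list
rare_cosmetic = priority(0.0, 0.0, 0.05, 0.9)  # infrequent, costly to fix
print(sorted([("heap_overflow", heap_overflow),
              ("rare_cosmetic", rare_cosmetic)],
             key=lambda item: item[1], reverse=True))
```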
  • FIG. 8 illustrates a method for generating an attribute structure to make a comparison between hang data and a database of known issues.
  • data is collected and captured on a computing device 805 after a hang.
  • attributes are extracted from the collected data 810 .
  • the extracted attributes are grouped into a logical structure, such as an array, binary tree, linked list, or other data structure, to represent the hang-inducing bug.
  • the resulting attribute structure is compared to previously determined structures in order to determine whether that particular bug has already been fixed. If a hang is known, steps are taken to find a solution and to notify a user when a solution becomes available. Otherwise, the captured data is sent to the program provider for further analysis 835 .

Abstract

A large portion of software bugs are related to hangs. Analyzing and diagnosing hang-related bugs involves capturing data from a hung program and extracting attributes from the captured data. Extracting attributes from the captured data provides a scheme to determine relevant characteristics of the hang. Moreover, the extracted attributes may be compared to known issues and, based on that comparison, a bug may be classified as known or unknown. Alternatively, triage may be performed on the client computing device in order to determine the potential cause of the hang event. Once the potential cause of the hang event has been determined, troubleshooting steps may be performed on the client computing device to quarantine it. Ultimately, if the hang-inducing bug is known, a user may be provided with a solution to the bug. Alternatively, if the bug is unknown, implementations of the invention send the captured data to be analyzed and fixed by the software's provider.

Description

TECHNICAL FIELD
The invention relates generally to analyzing defects in software. More specifically, the invention relates to analyzing and diagnosing software defects caused by hangs.
BACKGROUND
In a computer (e.g., personal computer (PC) or the like), the abnormal termination of a software process by either the operating system (OS) or an end user indicates the possibility of a defect (bug) in the software. Software typically contains a number of bugs classifiable into two general categories: crashes and hangs. Among the chief concerns for program developers has always been identifying software defects that cause computers to crash. Software crashes are fatal system errors, which usually result in the abnormal termination of a program by a kernel or system thread. Normally, when a crash-causing bug is discovered, the software provider obtains diagnostic data, attempts to reproduce the error, and, depending on the severity of the bug, creates and distributes a fix for the bug.
One way of diagnosing crash-induced bugs involves examining a log file containing diagnostic data including commands, events, instructions, program error number, computer processor type, and/or other pertinent diagnostic information. The log file typically is generated right after a crash has been detected. For example, a Microsoft® Windows operative PC loads Watson, a debugging tool which monitors running processes and logs useful diagnostic data when a crash is detected. After a crash, the Watson log file may be sent to the software provider for analysis. In some cases, a log file does not contain enough information to diagnose a problem; thus, a crash dump may be required to troubleshoot the problem. A crash dump is generated when the physical contents of memory are written to a predetermined file location. The resulting file is a binary file. Analyzing crash dumps is more complex than analyzing log files because the binary file usually needs to be loaded into a debugger and manually traversed by a troubleshooter.
In an effort to more effectively troubleshoot bugs, some software providers attempt to perform varying degrees of computerized analysis on log and crash files. For example, Microsoft has introduced its Online Crash Analysis (OCA) engine to automate the process of troubleshooting crashes. The OCA engine allows users to submit, through a web browser, a crash log or a crash mini-dump file to Microsoft. The analysis engine compares data from the uploaded file to a database of known issues. If the bug is known and a patch or workaround is available, the user is notified of the solution. Otherwise, the uploaded file is used by troubleshooters to diagnose the bug.
A problem with all of the above-mentioned troubleshooting techniques is that they attempt to diagnose crashes only, overlooking hangs, the second major class of bugs. Moreover, these approaches rely heavily on manual analysis of bugs and require the user to send in a report to the software provider, where most of the analysis is performed, wasting the software provider's resources.
In reality, many reported bugs are related to hangs. However, software providers typically expend their debugging efforts fixing crash-inducing bugs, even though, to end-users, crashes and hangs often appear to be the same thing. A software hang occurs when a piece of software appears to stop responding or when a software thread looks inactive. Hangs often result in the abnormal termination of a recoverable software process by the end-user. Abnormal termination of software by any means, including user-induced termination, may indicate the presence of a bug in the software. For example, a piece of software may normally take 10 or 15 seconds to paint a user interface, but under a given set of circumstances, the user interface thread may call an API that takes a long time to return or, alternatively, the user interface thread may make a network call that requires a response before painting the user interface. Thus, the time to paint the user interface in this instance may take an abnormally long 50 or 60 seconds to finish. Because of the abnormal delay, a user may become frustrated and manually terminate the application after 20 seconds. The fact that the user interface became unresponsive, in this instance, is a bug because it caused the user to abnormally terminate the software.
Another example of a hang involves a scenario where a software application crashes because of an error in a related dynamic link library (.DLL) file. In this scenario, at the time of the crash, the software application has acquired certain system resources, like file handlers and critical sections, which are not released after the crash. Other threads need access to those acquired resources, but cannot gain access to them because they are still marked as locked by the crashed thread. Because of the lock, other running threads hang. The fact that other threads hung indicates a bug that may need to be diagnosed and fixed.
One of the difficulties software providers encounter when troubleshooting hangs is that they are hard to identify, diagnose, and reproduce. For example, hangs are usually not as dramatic as crashes, e.g., there may not be an obvious “blue screen of death”-type response by a computer to indicate a bug, so users are less likely to report the error. Moreover, crashes are easier to diagnose since they tend to occur after a specific instruction or event has been issued. In contrast, identifying the offending instruction or block of code in a hang may be more difficult to do since the bug could be related to another piece of software, to a specific environment on a PC, to an impatient user, or to any number of other issues. Thus, software providers often do not emphasize hangs when fixing bugs.
Therefore, there exists a need for tools to troubleshoot hangs. More specifically, there exists a need for automating the process of diagnosing and troubleshooting software hangs. There also exists a need for client-side tools to aid in the diagnosis of bugs in order to free software provider resources.
SUMMARY
When a software program hangs, implementations of the invention capture data in order to troubleshoot bugs associated with the hangs. From the captured data, attributes may automatically be extracted and compared to known issues. If the hang-inducing bug is known, a user may be provided with a solution to the bug. Alternatively, if the bug is unknown, implementations of the invention send the captured data to be analyzed and fixed by the software's provider.
In additional implementations, if the bug is unknown, the captured data is packaged into a file to be sent to the software provider and assigned an identification value for tracking the hang.
In one implementation, comparing the extracted attributes to known issues is performed on the client computing device in order to determine the potential cause of the hang event. Once the potential cause of the hang event has been determined, troubleshooting steps are performed on the client computing device to quarantine the file, module, process, thread, block of code, instruction, or the like that is likely causing the hang.
Additional features and advantages of the invention will be made apparent from the following detailed description of implementations that proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a suitable computing environment for implementing aspects of the invention.
FIG. 2 is a schematic diagram of a hang analysis system, in accordance with an implementation of the invention.
FIG. 3 is a block diagram of a packaged file component, in accordance with an implementation of the invention.
FIG. 4 is a block diagram illustrating a system for identifying a solution to a hang-inducing bug, in accordance with an implementation of the invention.
FIG. 5 is a block diagram illustrating a method for extracting attributes from hang data, in accordance with an implementation of the invention.
FIG. 6 is a flowchart of a method of analyzing a hang, in accordance with an implementation of the invention.
FIG. 7 is a flowchart of a method of analyzing an unknown bug, in accordance with an implementation of the invention.
FIG. 8 is a flowchart illustrating a method of identifying a bug, in accordance with an implementation of the invention.
DETAILED DESCRIPTION
FIG. 1 illustrates an exemplary system for practicing the invention, according to one implementation. As seen in FIG. 1, the system includes computing device 100. In a very basic implementation, computing device 100 typically includes at least one processing unit 102 and system memory 104. Processing unit 102 includes existing and future processors, multiple processors acting together, virtual processors, and any other device or software program capable of interpreting binary executable instructions. Depending on the exact implementation and type of computing device, system memory 104 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. System memory 104 typically includes an operating system 105, one or more program modules with their associated data 106, and a hang analysis tool 114.
Computing device 100 may also have additional features or functionality. For example, computing device 100 may also include additional data storage devices (removable and/or non-removable) such as magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 1 by removable storage 107 and non-removable storage 108. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. System memory 104, removable storage 107, and non-removable storage 108 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 100. Computing device 100 may also have input device(s) 112 such as keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 110 such as a display, speakers, printer, etc. may also be included. All these devices are known in the art and need not be discussed at length here.
Computing device 100 may also include communications connection(s) 113 that allow the device to communicate with other computing devices 120, such as over a network. Communications connection(s) 113 is an example of communication media, which typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct wired connections, and wireless media such as acoustic, RF, infrared, and other wireless media. The term computer readable media as used herein includes both storage media and communication media.
FIG. 2 illustrates exemplary aspects of computing device 100 to capture hang data and to transmit the data to software providers so it can be troubleshot. As seen in FIG. 2, computing device 100 includes components of the invention stored in computer storage media as illustrated in FIG. 1. In one implementation, computing device 100 includes one or more of the following components: one or more program modules with their associated data 202, a data capture program 203, one or more diagnostic data files 204, and a triage evaluator 205. Triage evaluator may further be comprised of a database 207 and one or more history files 206.
The one or more program modules with their associated data (program) 202 may include pieces of software such as a software application, a driver file, an API, a text file, an executable file or any other computer readable instructions, data structures, or software modules.
In one implementation, user 201 accesses program 202, which subsequently hangs. Because of the hang, user 201 terminates the program. Generally to terminate a program, user 201 will issue program termination commands to computing device 100. For example, in a Windows environment, a user may press <Ctrl-Alt-Del> on their keyboard and, when presented with a menu of programs, user 201 may selectively end any running process or thread. Additionally, if the program happens to be an application running in user-mode, user 201 may click on the close command to terminate the process. Similarly, in a UNIX or Java environment, user 201 may issue a “kill” command at a command prompt to terminate hung program 202.
After computing device 100 registers the termination command, data capture program 203 is invoked, which captures data related to hung program 202. The amount of data captured typically depends on how sophisticated data capture program 203 is. Some data capture programs, such as Watson, will allow a user to track thousands of instructions. In any event, the captured data includes a wide range of information to diagnose the hang. For example, data capture program 203 may capture a mini-dump of the hang, or alternatively, it may generate a log file containing the running version of the operating system (including support pack numbers), the name of the hung program and its corresponding thread name, software version, names and versions of other software modules or processes loaded into memory, the call stack, or any other information that may help diagnose the cause of the hang. For example, in a Windows environment, after a hang, Watson technologies capture data associated with a hung program. Watson technologies allow a user to specify the amount of data to be captured (e.g., the last 10 events, last 1000 events, etc.) and then save the data to a log file or a mini-dump.
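By way of illustration only (this is a sketch, not the actual Watson implementation; the function name capture_hang_data and its fields are hypothetical), a capture step of this kind might be expressed as:

    import json
    import platform
    import time

    def capture_hang_data(program_name, thread_id, call_stack, max_events=1000):
        # Record the kinds of values described above: running OS version,
        # hung program name, thread, and the most recent call stack events.
        record = {
            "timestamp": time.time(),
            "os_version": platform.platform(),       # OS version and service pack level
            "program": program_name,                 # name of the hung program
            "thread": thread_id,                     # thread that stopped responding
            "call_stack": call_stack[-max_events:],  # keep only the last N events
        }
        return json.dumps(record)

    # Capture the last 1000 events for a hypothetical hung program.
    log_entry = capture_hang_data("sample.exe", 42,
                                  ["create file", "open file", "create thread"])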
Once hang data has been captured, in one implementation, it is generally stored in a diagnostic data file 204 by the data capture program 203. Diagnostic data file 204 may include a crash dump file, mini-dump, log file, ABEND log, text file, html file, binary file, or any other type of file stored locally or on a remote computer that contains data to help troubleshoot a bug. Additionally, diagnostic data file 204 may include data from one or more hangs. For example, captured hang data may simply be appended to the end of an existing diagnostic data file, or alternatively, diagnostic data file 204 may include a directory of files with diagnostic data. In another implementation, diagnostic data file 204 may include a searchable, relational database, where each hang is added to a database of prior hangs.
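Continuing the illustrative sketch above, appending each captured hang to an existing diagnostic data file might look like the following (the file name diagnostic_data.log is an assumption):

    def append_to_diagnostic_file(log_entry, path="diagnostic_data.log"):
        # Each new hang record is appended to the end of the existing file,
        # so diagnostic data file 204 accumulates data from multiple hangs.
        with open(path, "a", encoding="utf-8") as f:
            f.write(log_entry + "\n")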
As illustrated in FIG. 2, diagnostic data file 204 may be wrapped into a packaged file 210 and transmitted over Internet 220 to software provider 230. Components of packaged file 210 are discussed in more detail in conjunction with FIG. 3.
Packaged file 210 may be sent based on a certain set of criteria. For example, in one implementation, a user may be prompted to send a report to software provider 230 after a hang has been detected. In an alternate implementation, a user may initiate the transmittal of data. In yet another implementation, packaged file 210 may be sent automatically when computing device 100 detects a hang. In another implementation, packaged file 210 may be sent to software provider 230 only after certain conditions are met (e.g., after five occurrences of similar hangs, after a fixed number of days, or after a problem of a certain severity has been encountered).
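A minimal sketch of such a send policy follows; the thresholds are chosen purely for illustration and are not prescribed by the system:

    def should_send_report(similar_hang_count, days_since_first, severity,
                           count_threshold=5, day_threshold=30, severity_threshold=8):
        # Transmit the packaged file when any criterion is met: enough similar
        # hangs, enough elapsed days, or a sufficiently severe problem.
        return (similar_hang_count >= count_threshold
                or days_since_first >= day_threshold
                or severity >= severity_threshold)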
FIG. 2 also illustrates a triage evaluator 205, which performs initial triage on computing device 100 to prevent a repeat of the bug and to speed up the solution process. In one implementation, triage evaluator 205 also extracts attributes from diagnostic data files similar to datamining utility 405 described in conjunction with FIG. 4. In another implementation, the triage evaluator 205 further includes a database 207 of common bugs and issues related to software on computing device 100.
In one implementation, triage evaluator 205 uses diagnostic data file 204 as initial input to determine objects, variables, addresses and modules loaded into system memory. Triage evaluator 205 also maintains a history file 206 that describes ownership and reliability of functions and modules. In one implementation, triage evaluator 205 processes diagnostic data file 204, looks at the call stack and uses the predetermined data in history file 206 to determine the reliability of certain modules and routines. To determine a culprit or faulty component, different weights are assigned to different data based on the information in history file 206. Special values are assigned to candidate files, modules, and routines to calculate the likelihood that a particular module or routine is faulty. Files, modules, and routines become candidate culprits if they are part of the captured data. The assigned values may be as simple as a counter value or as complex as the output of a mathematical or statistical algorithm. For example, a module that has recently been patched, is used frequently, and has no history of problems may be assigned the value “unlikely to be the culprit.” Another candidate piece of software may be assigned a value “may be the culprit” because it is used often and appears somewhat frequently in hang data. As a final example, an instruction that is always on the call stack when a particular hang appears may be identified as the “likely” culprit. Once triage evaluator 205 isolates the likely culprit (file, module, routine, or instruction), initial triage may be performed. In one basic implementation, triage measures may include renaming the culprit file, installing an original version of a file, attempting to find a newer version of the file, or otherwise quarantining the faulty file, module, routine or instruction.
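One way to picture this weighting is a simple score per candidate, where history file 206 supplies per-module statistics; the field names and weights below are hypothetical:

    def culprit_likelihood(history_entry):
        # Score a candidate file/module/routine from history-file data.
        # Higher scores mean "more likely to be the culprit."
        score = 0.0
        score += 2.0 * history_entry.get("appearances_in_hang_data", 0)  # seen in captured data
        score -= 1.0 * history_entry.get("years_without_problems", 0)    # long clean history
        if history_entry.get("recently_patched"):
            score -= 0.5    # a fresh patch with no problem history lowers suspicion
        if history_entry.get("always_on_stack_for_this_hang"):
            score += 10.0   # always present on the call stack for this hang
        return score

    def label(score):
        # Map the numeric score onto the qualitative labels used above.
        if score >= 10.0:
            return "likely to be the culprit"
        if score >= 2.0:
            return "may be the culprit"
        return "unlikely to be the culprit"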
To further illustrate how triage evaluator 205 works, consider the following example. A user browses the Internet using Microsoft Internet Explorer. At some point, Internet Explorer hangs, invoking Watson, which captures hang data. Here, Watson invokes triage evaluator 205 to perform initial triage on the machine. Triage evaluator has maintained a history of crashes and hangs and notices that the file “bad.dll” is often associated with hangs like the one that just occurred. Thus, triage evaluator marks bad.dll as the likely culprit and attempts to quarantine the file. First, triage evaluator may try renaming bad.dll, but the file is required by Internet Explorer. Hence, triage evaluator attempts to back-rev the file to an older, more stable version, but the current file is the original. Finally, triage evaluator attempts to update the file using Microsoft's Windows Update feature. If triage evaluator succeeds in finding a new file and updating bad.dll, then the bug may be fixed without further user intervention. If triage evaluator does not find a fix, then the diagnostic data files are packaged and sent to software provider 230.
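The fallback sequence in this example can be pictured as a chain of triage steps tried in order; the helper names below are hypothetical stand-ins for the actual rename, back-rev, and update operations:

    def quarantine(culprit):
        # Try each triage step in order until one succeeds.
        for step in (try_rename, try_back_rev, try_update):
            if step(culprit):
                return step.__name__
        return "package_and_send"  # no local fix found: report to software provider

    # Hypothetical triage steps; each returns True on success.
    def try_rename(f):
        return False  # fails here: the file is required by the application
    def try_back_rev(f):
        return False  # fails here: the current file is the original
    def try_update(f):
        return True   # succeeds: a newer version is available

    print(quarantine("bad.dll"))  # -> "try_update"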
In other implementations, different triage steps may be performed, or they may be performed in a different order. Furthermore, in another implementation, triage evaluator 205 prompts the user for permission to perform triage or prompts the user to perform the triage steps. Other implementations are also available to one of ordinary skill in the art. In one implementation, triage evaluator 205 maintains a known issues database 207, which receives updates from software providers relating to fixes and solutions. For example, the issues database 207 may routinely be updated with new information as it becomes available in a fashion similar to Microsoft's Windows Update system. As part of the triage process, triage evaluator 205 compares diagnostic data to its database. If there is a known solution to a bug, the issues database may either display the solution, retrieve a solution or fix, prompt user 201 for permission to apply a solution, automatically apply the solution, or perform some other similar act. Again, if the bug is unknown, it will be sent to software provider 230 to be troubleshot and diagnosed. Thus, triage evaluator 205 performs initial analysis and triage on a bug, thereby conserving software provider 230 resources, such as bandwidth, development costs, troubleshooting costs, disk space, and the like.
Sending packaged file 210 to software provider 230 may involve copying the packaged file from computing device 100 to software provider 230. The transfer of the file may occur by uploading packaged file 210 to a software provider server, sending an email message with the packaged file attached, connecting to a support page and attaching the file, or using some other electronic form of communication. In one implementation, packaged file 210 is transmitted over Internet 220. In another implementation, software provider 230 is on the same network (e.g., LAN or WAN) as computing device 100.
Once packaged file 210 has been sent and received by software provider 230, packaged file 210 may be stored in a packaged file repository 231 until it can be evaluated by analysis engine 232. In one implementation, after packaged file 210 has been analyzed, the bug is transferred to software development 240 so the bug can be troubleshot and fixed. In another implementation, after software development has found a solution to the bug, computing device 100 and/or user 201 are notified of the solution.
FIG. 3 illustrates an exemplary implementation of packaged file 210. The format of packaged file 210 may be a CAB file, a ZIP file, or any other type of packaged or compressed file. Moreover, packaged file 210 may be encrypted, password protected, or otherwise secured before being transferred to software provider 230. Packaged file 210 typically includes, among other components, a bucket ID 311 and at least one diagnostic data file 312. Bucket ID 311 provides a means for labeling the bug so it can be categorized into a bucket. Bucketing involves categorizing classes of similar or related problems, which helps to identify and troubleshoot bugs. Exemplary buckets are described in more detail in conjunction with FIG. 5.
In one implementation, bucket ID 311 may incorporate information such as computer name, user name, MAC address, hardware serial number, client identifier, IP address, or other information uniquely identifying a computing device, user, or bug. In one instance, generating a bucket ID 311 for packaged file 210 involves walking call stack 305 and performing a hash on it. As shown in FIG. 3, call stack 305 may include multiple events, such as “create file” 306, “open file” 307, or “create thread” 308. In the illustrated example, a program calls a create file event, which creates a file with a file handler locked in critical section 309. A subsequent “create thread” event requires access to the “create file” handler. Thus, when the “create thread” event occurs, the newly created thread stalls and hangs because it cannot access the “create file” handler. When the hang is detected, diagnostic data is captured and packaged to send to software provider 230. Call stack 305 is hashed to generate bucket ID 311, which is then wrapped into packaged file 210. In this case, the hash of call stack 305 may uniquely identify this particular bug; thus, if other similar hangs have been reported to software provider 230, an evaluation of bucket ID 311 may be sufficient to identify the bug.
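By way of example, and not limitation, hashing a call stack into a bucket ID might be sketched as follows; the choice of SHA-1 is an assumption, since no particular hash algorithm is required:

    import hashlib

    def bucket_id_from_stack(call_stack):
        # Walk the call stack and hash the concatenated frames. Similar hangs
        # produce the same stack and therefore the same bucket ID.
        digest = hashlib.sha1()
        for frame in call_stack:
            digest.update(frame.encode("utf-8"))
        return digest.hexdigest()

    # The deadlock example above: create file -> open file -> create thread.
    print(bucket_id_from_stack(["create file", "open file", "create thread"]))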
As further illustrated in FIG. 3, packaged file 210 includes at least one diagnostic data file 312, containing hang data. In one implementation, diagnostic data file is a CAB file or another type of packaged or compressed file. In another implementation, diagnostic data file 312 may be encrypted, password protected, or otherwise secured.
Diagnostic data file 312 includes one or more attributes 316. Attributes are diagnostic values which are provided by a debugger infrastructure to help troubleshooters understand the environment and events associated with a hang. For example, attributes 316 that may be useful for diagnosing bugs include the name of the program, thread number, application version number, instructions on the stack, and any number of other captured values and events. Attributes 316 may be in a text format or numeric format, depending on the nature of the diagnostic data file 312. Once wrapped up, packaged file 210 is sent to software provider 230 where it is stored in a packaged file repository 231 until it can be analyzed.
FIG. 4 illustrates software provider's system for handling packaged file 210. The software provider system is typically a server with components generally similar to those in computing device 100. For example, the software provider system would likely include a processor and computer storage media to perform analysis on received packaged file 210. Once received and stored, packaged file 210 may be sent to diagnostic engine 401. In one implementation, when packaged file 210 is received, the bucket ID and diagnostic data file are extracted and stored before the file is placed in packaged file repository 231. In another implementation, packaged file 210 is stored as received and its contents are extracted prior to being analyzed by diagnostic engine 401.
As shown in FIG. 4, diagnostic engine 401 may have several components including a datamining utility 405, an attribute structure 410, and a bucket database 415. Datamining utility 405 loads a packaged file from packaged file repository 231 and extracts attributes from the packaged file's diagnostic data files. In one implementation, datamining utility 405 is an automated utility that extracts attributes based on a diagnostic data format. For example, in a Windows environment, mini-dumps may contain cookie crumbs that make attributes identifiable and thus extractable through automated tools. In another implementation, data in packaged file 210 is analyzed by troubleshooters working in a debugger (in this case, the troubleshooters act as utility 405) who manually extract key attributes from the file. In another implementation, datamining utility 405 is a text file, such as a batch file, with a list of commands that are fed into a debugger for extracting attributes from packaged file 210. In all of these cases, datamining utility 405 may look for keywords, binary patterns, offsets, or other recognizable data. Datamining utility 405 repeatedly iterates on the diagnostic data files until no more attributes can be found. Once extracted, attributes are put into an analyzable format, as illustrated by attribute structure 410. Over a period of time, datamining utility 405 will identify a number of problem classes (buckets) and extract a large number of attributes from packaged file(s) 210. As new problems and classes of problems are identified, this information is added to bucket database 415.
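An informal sketch of the iterate-until-exhausted extraction loop follows; the keyword patterns and record format are assumptions for illustration:

    import re

    KNOWN_PATTERNS = {
        "program": re.compile(r"program=(\S+)"),
        "thread":  re.compile(r"thread=(\d+)"),
        "version": re.compile(r"version=(\S+)"),
    }

    def mine_attributes(diagnostic_text):
        # Repeatedly scan the diagnostic data for recognizable markers,
        # stopping once a pass yields no new attributes.
        attributes = {}
        while True:
            found_new = False
            for name, pattern in KNOWN_PATTERNS.items():
                if name not in attributes:
                    match = pattern.search(diagnostic_text)
                    if match:
                        attributes[name] = match.group(1)
                        found_new = True
            if not found_new:
                break  # no more attributes can be found
        return attributes

    print(mine_attributes("program=sample.exe thread=42 version=1.0.3"))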
As illustrated, the extracted attributes are placed in attribute structure 410 in computer storage media. Attribute structure 410 may be a binary tree, an array, a linked list, a text file, HTML file, a database entry, or other comparable data structure that stores and organizes data in an analyzable format. FIG. 5 illustrates another exemplary implementation of attribute structure 410.
In one implementation, after attributes have been mined, diagnostic engine 401 queries its bucket database 415 to see if the mined data in attribute structure 410 belongs to a known bucket. Bucket database 415 consists of entries (buckets) that contain groups of similar or related bugs categorized based on a given set of criteria. Buckets may group bugs by a particular software application, a module name, an application version, the bucket ID, an attribute, a thread name, an error number, a computer address, a user name, a combination of these factors, or some other reasonable means of categorizing software bugs.
In one implementation, attribute structure 410 is compared to entries in bucket database 415. If attribute structure 410 corresponds to a known bucket and a lookup of the bug indicates a solution 420 is available, the solution 420 is sent to computing device 100. In another implementation, if attribute structure 410 corresponds to a known bucket but a fix is not available, then software development 430 is notified and values in the bucket database may be updated. In yet another implementation, attribute structure 410 is stored according to its associated bucket ID. In still another implementation, only attribute structure 410 is stored. In another implementation, a counter is updated to indicate that one more instance of a bucketed problem has been encountered. Other implementations may store username and computer device identifying data to notify a user when solution 420 becomes available.
As more packaged files 210 are evaluated and bucket database 415 grows, the stored data may be used to create a bug fix priority list 432, because certain bugs need to be fixed sooner than others. Any number of factors may be used to determine the priority, such as the security risks posed by a bug, the likelihood of data loss, the frequency of the error, and other similar factors. For example, if a bug creates an exploitable security flaw in an application, such as a heap overflow error, then that bug will be prioritized higher than other bugs. Similarly, if one bug occurs more frequently than other bugs, that bug will also be prioritized accordingly. In contrast, if a bug happens infrequently with few side effects and would require a rewrite of thousands of lines of code, that bug likely would be assigned a very low priority.
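A toy scoring function for such a priority list might look like this; the weights are invented for illustration only:

    def bug_priority(security_risk, data_loss_risk, frequency, fix_cost):
        # Higher score = fix sooner. An exploitable flaw (e.g., a heap overflow)
        # or a frequent bug raises priority; an expensive rewrite lowers it.
        return (5.0 * security_risk + 3.0 * data_loss_risk
                + 2.0 * frequency - 1.0 * fix_cost)

    # Exploitable heap overflow, moderate frequency, cheap fix: high priority.
    print(bug_priority(security_risk=1.0, data_loss_risk=0.5, frequency=0.4, fix_cost=0.1))
    # Rare, benign bug requiring a large rewrite: very low priority.
    print(bug_priority(security_risk=0.0, data_loss_risk=0.0, frequency=0.05, fix_cost=1.0))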
If, in a comparison to bucket database entries, attribute structure 410 appears to identify an undisclosed bug, then diagnostic engine 401 transfers the contents of packaged file 210 to software development 430 for further analysis. Once software development 430 has diagnosed and defined the bug, feedback 436 on the bug is provided to diagnostic engine 401. Feedback 436 may include new attributes that datamining utility 405 should begin looking for. Moreover, bucket database 415 may be updated with new bucket information, so similar issues will be flagged and bucketed properly. Other feedback could include documentation of the bug, workarounds for the problem, or a timeframe for finding a solution to the bug. Any of a number of similar items could also be included in feedback 436. Finally, since the bug has been diagnosed, it will be categorized as a “known” bug and added to bug fix priority list 432.
FIG. 5 illustrates an exemplary attribute structure 520 generated by datamining utility 405 and stored in bucket database 415. Bucket 416 may have numerous relationships with packaged file 210. For example, a given bucket may contain data from many packaged files; hence, the bucket to packaged file mapping may be 1 to many. A given packaged file may contain one or more diagnostic data files, thus, the packaged file to diagnostic data files mapping may also be 1 to many. Moreover, a given diagnostic data file may map to one or more processes. Finally, in one implementation, a given process may map to multiple threads. Other implementations of bucket 416 may include variables, instructions, and other values and events.
Since bucket 416 has many potential attributes, an aspect of the system is to look for natural groupings of attributes within a bucket and see if sub-classifications exist that make sense to pull out, instead of classifying every new bug into a generic “application hang” bucket. For example, datamining utility 405 mines for attributes in order to identify similar issues and bucket them accordingly. However, this process is not trivial. For example, a diagnostic data file containing what appears to be an “idle-related” bug may not actually be bucketed in the “idle” bucket, because the idle thread may have been caused by a locked attribute from an earlier crashed application. To solve this problem, it is necessary to see who acquired a lock on the attribute and, if the lock was incorrectly acquired, to categorize the bug in a different bucket. Thus, in one implementation, bucketing may be performed initially by technical support personnel, who will flesh out the attributes that datamining utility 405 should look for and add entries into the bucket database. However, as the database grows, more and more bugs should be handled by the system.
As attributes are extracted by datamining utility 405, attribute structure 520 expands. In one implementation, attribute structure 520 may form a decision tree for each bucket or issue. The decision tree attribute structure 520 is a graphical depiction of the order in which relevant attributes can be utilized to identify entries of the corresponding bucket. In the illustrated implementation, the attributes forming the tree include natural groupings of thread and process level attributes. Here, exemplary nodes 524 and 525 each contain a thread or process level attribute that when examined appear to form a natural grouping, so nodes 524 and 525 are grouped together under node 523. In one implementation, node 523 may contain an attribute that is part of a grouping of attributes. In another implementation, node 523 contains a name created to describe its children nodes. The grouping of nodes under node 523 forms a sub-class of node 522. Similarly, node 522 is a sub-class of node 521.
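The decision tree of FIG. 5 might be modeled with a node structure such as the following sketch; the class and the attribute labels are hypothetical:

    class AttributeNode:
        # A node in the decision-tree attribute structure: either a single
        # thread/process-level attribute or a named grouping of child nodes.
        def __init__(self, name, value=None, children=None):
            self.name = name          # attribute name, or a label for a grouping
            self.value = value        # attribute value, if this is a leaf
            self.children = children or []

    # Nodes 524 and 525 form a natural grouping under node 523, which is a
    # sub-class of node 522, which in turn is a sub-class of root node 521.
    tree = AttributeNode("root_attribute", children=[              # node 521
        AttributeNode("sub_class", children=[                      # node 522
            AttributeNode("natural_grouping", children=[           # node 523
                AttributeNode("thread_attribute", value="a"),      # node 524
                AttributeNode("process_attribute", value="b"),     # node 525
            ]),
        ]),
    ])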
To illustrate the decision tree concept, in one implementation, the attribute structure for the “APPLICATION_HANG” bucket looks at the natural groupings of attributes below it to see if a sub-classification may be created out of the generic APPLICATION_HANG bucket. If there appears to be a natural grouping of attributes from the process and thread level attributes (such as instruction calls or module names), then that group of attributes is made into a sub-class. Alternatively, if the sub-class is large enough, it may become its own bucket. The decision tree creates a logical representation of the data that is easy to search and provides a convenient way for software developers to analyze the data. In a variation on the decision tree model, attribute structure 520 may also look at process and thread-level attributes from different buckets to find dependencies and correlations between them.
Once groupings have been made, they may need to be named. In one implementation, the attributes in a bucket or in a sub-class grouping receive a user-friendly name via a table that combines one or more attributes into that name. The combination can be the result of aggregating attributes, extracting data from another database, or joining other tables. In one implementation, an index is created using bucket names.
Another attribute structure 520 that may be used to retrieve data from bucket database 415 is a Naïve Bayes model, which allows a troubleshooter to choose attributes of interest and presents a ranked list of attribute-value pairs, showing which buckets, if any, contain the corresponding attribute/value pair. These or other attribute structures create a robust architecture for querying bucket database 415, so that when data from a new hang is received, it can efficiently be compared to previously extracted data.
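As a loose sketch of such a query (a simple conditional-frequency ranking standing in for the full Naïve Bayes computation):

    from collections import Counter

    def rank_pairs(buckets, attributes_of_interest):
        # Given bucket -> list of (attribute, value) observations, return
        # attribute/value pairs ranked by frequency, tagged with the
        # buckets that contain them.
        counts = Counter()
        where = {}
        for bucket, pairs in buckets.items():
            for pair in pairs:
                if pair[0] in attributes_of_interest:
                    counts[pair] += 1
                    where.setdefault(pair, set()).add(bucket)
        return [(pair, n, sorted(where[pair])) for pair, n in counts.most_common()]

    buckets = {
        "APPLICATION_HANG/ui": [("module", "bad.dll"), ("thread", "ui")],
        "APPLICATION_HANG/io": [("module", "bad.dll"), ("thread", "worker")],
    }
    print(rank_pairs(buckets, {"module"}))  # bad.dll appears in both buckets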
In one implementation, datamining utility 405 extracts an attribute structure 520 like the one illustrated in FIG. 5. Here, an attribute(s) is placed at the root or topmost level of the tree 521 and other attributes 522–525 are added which fill out attribute structure 520 as they are extracted and grouped. For example, software provider receives a packaged file, processes the packaged file, and extracts attributes from it to form an attribute structure. Once attribute structure 520 has been filled with values, a comparison can be made to already existing data in bucket database 415. In one implementation, the entire attribute structure 520 may be compared for equivalence to entries in bucket database 415. If attribute structure 520 matches an entry in bucket database 415, then the bug is known and subsequent action may be based on this fact.
Comparing the entire attribute structure 520 to an entry in bucket database 415 may include comparison of nodes at each level of the tree. For example, the attributes in the topmost nodes are compared to see if they are reasonably equivalent. If so, then the next level of values is compared to see if the values are reasonably equivalent to similarly structured values in bucket database 415. The process is continued until all the nodes of attribute structure 520 have been evaluated. If the nodes match up at every level, then it is likely safe to assume the two are equivalent. For example, one comparison may find that the topmost node 521 contains a value, “IsReportingFault,” which is equivalent to the topmost value of the corresponding entry in bucket database 415. Since the attribute is present in both structures, a comparison between next-level nodes is made. Here, sub-class 522 may consider the situation where another thread in the process was busy packing a Watson report while the user interface thread was still trying to display the hung user interface to the user. If the attribute is present in both structures, the comparison proceeds until the tree has been traversed, in which case the bug is known, or until the two trees diverge, at which point the bug is reported to software development.
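Using the AttributeNode sketch above, the level-by-level comparison might be expressed as follows, with strict equality standing in for “reasonably equivalent”:

    def trees_match(a, b):
        # Compare two attribute trees node by node, level by level. A
        # divergence at any level means the bug is reported to software
        # development; full traversal without divergence means it is known.
        if a.name != b.name or a.value != b.value:
            return False  # the two trees diverge at this node
        if len(a.children) != len(b.children):
            return False
        return all(trees_match(x, y) for x, y in zip(a.children, b.children))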
In another implementation, if attribute structure 520 is reasonably equivalent to an entry from bucket database 415 based on predetermined criteria, then the issue may also be known and subsequent action will be based on this fact. The predetermined criteria could be any of a number of factors, such as whether a statistically significant number of attributes are similar, whether the three top instructions on the call stack are the same, or some other relevant criteria. For example, if attribute structure 520 reveals that an application hung on one particular thread, that fact alone may be sufficient to classify the file as pertaining to a particular bucket. In another implementation, several instructions taken together may sufficiently identify a bug and merit classification in one bucket as opposed to another. If it is determined that a bug is unknown, the diagnostic data files may be forwarded to software development for further analysis. As the bug is diagnosed, bucket database 415 may be updated. Updated information may include adding a new bucket or adding new nodes to an existing bucket.
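The “three top instructions on the call stack” criterion, for instance, reduces to a prefix comparison; this sketch is illustrative, not the actual matching rule:

    def reasonably_equivalent(stack_a, stack_b, depth=3):
        # Treat two hangs as the same bug when the topmost `depth`
        # call-stack instructions agree.
        return stack_a[:depth] == stack_b[:depth]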
In one implementation, attribute structure 520 could allow troubleshooters to reproduce the bug. Packaged file 210 may contain enough data to automatically figure out common occurrences needed to reproduce the bug. In another implementation, aspects of datamining utility 405 are off-loaded to computing device 100. Much of the attribute extraction can be performed by a datamining utility local to computing device 100. The resulting attribute structure 520 could then be packaged and sent to software provider. In an alternative implementation, all aspects of the system related to known bugs can be offloaded to the user's computing device.
The datamining utility 405, in one implementation, is extensible so that analysis on the diagnostic data files is done in a single interface. The datamining utility 405 may be enhanced by adding attribute specific extensions 510 for different programs; thus, software providers can mine Watson-like attributes for their specific application. Moreover, extensions 510 can be added to the data capture program on computing device 100 to gather third-party software specific data.
FIG. 6 illustrates a method for finding a solution to a hang-inducing bug. In one implementation, finding a solution to a bug involves first capturing data from a hung program on a computing device 605. Once hang data has been captured, the data is packaged into a file to be sent to a software provider for analysis 610. The packaged data is sent 615 and eventually received by the software provider. Upon receipt of the packaged data, attributes are extracted from the captured data in order to determine relevant characteristics of the hang 620. The extracted attributes are compared to entries in a database containing known bugs 625. Comparing the captured data to the database entries will likely identify whether the hang-inducing bug is a known bug 630.
If the bug is not known, then additional analysis as illustrated in FIG. 7 will likely be performed 635. If the bug is known, then a check is made to see if there is a solution to the bug 640. If there is an available solution, it is sent back to the computing device 645. Alternatively, if a fix is not available, then the captured data is sent to software engineering for further analysis 650. Once software engineering has analyzed and diagnosed the bug, it is prioritized according to a set of predetermined criteria 655 and later fixed 660. After a fix becomes available, computing devices that reported the bug are notified 665.

FIG. 7 illustrates the method for finding solutions to unknown bugs. As described in conjunction with FIG. 6, data from a hung program is captured and reported to a software provider. When the software provider determines the reported bug is an unknown bug, it must be properly diagnosed 705. To diagnose the bug, the captured hang data is sent to software engineering 710. After software engineering diagnoses the bug, several steps may occur in any order. A database of known issues is updated to indicate that the bug is known, so subsequent files reporting the same bug will be classified appropriately 720. The mechanism for extracting attributes from the bug report may be updated to look for new attributes or to include more data about a particular attribute 715. Finally, the bug may be prioritized as to when it should be fixed 725. Priority may be determined by a variety of factors such as FIFO (first-in, first-out), LIFO (last-in, first-out), security concerns, convenience concerns, time concerns, and other similar factors. Once the bug has been identified, software engineering may fix the bug 730, at which point the computing device where the bug originated is notified of the available fix 735. Alternatively, the fix is sent directly to the user.
FIG. 8 illustrates a method for generating an attribute structure to make a comparison between hang data and a database of known issues. Initially, data is collected and captured on a computing device 805 after a hang. Either locally on the computing device or on a remote system, attributes are extracted from the collected data 810. The extracted attributes are grouped into a logical structure, such as an array, binary tree, linked list, or other data structure, to represent the hang-inducing bug. The resulting attribute structure is compared to previously determined structures in order to determine whether that particular bug has already been fixed. If a hang is known, steps are taken to find a solution and to notify a user when a solution becomes available. Otherwise, the captured data is sent to the software provider for further analysis 835.
The methods and systems illustrated herein describe the functionality of several system components such as the triage evaluator, attribute structure, datamining utility, and bucket database. It should be understood that the functionality ascribed to any one of these and other components described above can also be performed by any of the other related components if they are programmed to do so.
In view of the many possible implementations to which the principles of our invention may be applied, we claim as our invention all such implementations as may come within the scope and spirit of the following claims and equivalents thereto.

Claims (19)

1. A method of troubleshooting software hangs on a computing device, the method comprising:
capturing data associated with a hang;
extracting attributes associated with the hang;
comparing the extracted attributes to a database of issues to troubleshoot the hang;
performing on the computing device the comparison of extracted attributes to the database of issues;
assigning the extracted attributes a value based on a history of hang events;
determining a potential culprit for the hang event based on the assigned values; and
performing troubleshooting steps to quarantine the potential culprit;
wherein performing troubleshooting steps to quarantine the potential culprit comprises renaming a file.
2. The method of claim 1 further comprising:
packaging the captured data into a file; and
assigning the packaged file an identification value for tracking the hang.
3. The method of claim 2 wherein the identification value comprises a hash value associated with a call stack.
4. The method of claim 1, wherein comparing the extracted attributes further comprises:
identifying the hang; and
providing a user with a solution to the hang, if the solution is available.
5. The method of claim 1, wherein capturing data associated with a hang further comprises extending a schema by using a data capture program extension.
6. The method of claim 1, wherein extracting attributes to diagnose the hang further comprises extending an attribute extraction schema through the use of an attribute plugin.
7. The method of claim 1, wherein the database of issues comprises data to represent at least one hang event.
8. The method of claim 1, wherein the potential culprit comprises one of a file, module, process, thread, block of code, or instruction.
9. The method of claim 1, further comprising updating the history of hang events.
10. A computer readable storage medium comprising executable instructions for performing a method of troubleshooting software hangs on a computing device, the method comprising:
capturing data associated with a hang;
extracting attributes associated with the hang;
comparing the extracted attributes to a database of issues to troubleshoot the hang;
performing on the computing device the comparison of extracted attributes to the database of issues;
assigning the extracted attributes a value based on a history of hang events;
determining a potential culprit for the hang event based on the assigned values; and
performing troubleshooting steps to quarantine the potential culprit;
wherein performing troubleshooting steps to quarantine the potential culprit comprises renaming a file.
11. The computer readable storage medium of claim 10, the method further comprising:
packaging the captured data into a file; and
assigning the packaged file an identification value for tracking the hang.
12. The method of claim 10, wherein the identification value comprises a hash value associated with a call stack.
13. The computer readable storage medium of claim 10, the method further comprising:
identifying the hang; and
providing a user with a solution to the hang, if the solution is available.
14. The method of claim 10, wherein capturing data associated with a hang further comprises extending a schema by using a data capture program extension.
15. The method of claim 10, wherein extracting attributes to diagnose the hang further comprises extending an attribute extraction schema through the use of an attribute plugin.
16. The method of claim 10, wherein the database of issues comprises data to represent at least one hang event.
17. The method of claim 10, wherein the potential culprit comprises one of a file, module, process, thread, block of code, or instruction.
18. The method of claim 10, further comprising updating the history of hang events.
19. A computer-enabled system comprising:
means for capturing data associated with a hang;
means for extracting attributes associated with the hang;
means for comparing the extracted attributes to a database of issues to troubleshoot the hang;
means for performing on the computing device the comparison of extracted attributes to the database of issues;
means for assigning the extracted attributes a value based on a history of hang events;
means for determining a potential culprit for the hang event based on the assigned values; and
means for performing troubleshooting steps to quarantine the potential culprit;
wherein performing troubleshooting steps to quarantine the potential culprit comprises renaming a file.
US10/713,867 2003-11-14 2003-11-14 Automatic root cause analysis and diagnostics engine Active 2025-04-06 US7191364B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/713,867 US7191364B2 (en) 2003-11-14 2003-11-14 Automatic root cause analysis and diagnostics engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/713,867 US7191364B2 (en) 2003-11-14 2003-11-14 Automatic root cause analysis and diagnostics engine

Publications (2)

Publication Number Publication Date
US20050120273A1 US20050120273A1 (en) 2005-06-02
US7191364B2 true US7191364B2 (en) 2007-03-13

Family

ID=34619874

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/713,867 Active 2025-04-06 US7191364B2 (en) 2003-11-14 2003-11-14 Automatic root cause analysis and diagnostics engine

Country Status (1)

Country Link
US (1) US7191364B2 (en)

Cited By (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040250156A1 (en) * 2003-05-19 2004-12-09 Siemens Aktiengesellschaft Aspect based recovery system and method
US20050204199A1 (en) * 2004-02-28 2005-09-15 Ibm Corporation Automatic crash recovery in computer operating systems
US20070033281A1 (en) * 2005-08-02 2007-02-08 Hwang Min J Error management system and method of using the same
US20070105607A1 (en) * 2005-11-08 2007-05-10 Microsoft Corporation Dynamic debugging dump for game console
US20070174710A1 (en) * 2006-01-11 2007-07-26 International Business Machines Corporation Apparatus and method for collecting and displaying data for remote diagnostics
US20070220518A1 (en) * 2006-02-28 2007-09-20 Microsoft Corporation Thread Interception and Analysis
US20070220348A1 (en) * 2006-02-28 2007-09-20 Mendoza Alfredo V Method of isolating erroneous software program components
US20070268300A1 (en) * 2006-05-22 2007-11-22 Honeywell International Inc. Information map system
US20080082973A1 (en) * 2006-09-29 2008-04-03 Brenda Lynne Belkin Method and Apparatus for Determining Software Interoperability
US20080104455A1 (en) * 2006-10-31 2008-05-01 Hewlett-Packard Development Company, L.P. Software failure analysis method and system
US20090006883A1 (en) * 2007-06-27 2009-01-01 Microsoft Corporation Software error report analysis
US7500142B1 (en) * 2005-12-20 2009-03-03 International Business Machines Corporation Preliminary classification of events to facilitate cause-based analysis
US20090106589A1 (en) * 2007-10-19 2009-04-23 Oracle International Corporation Gathering context information used for activation of contextual dumping
US20090113248A1 (en) * 2007-10-26 2009-04-30 Megan Elena Bock Collaborative troubleshooting computer systems using fault tree analysis
US7627785B1 (en) * 2004-07-12 2009-12-01 Sun Microsystems, Inc. Capturing machine state of unstable Java program
US20090320136A1 (en) * 2008-06-24 2009-12-24 Microsoft Corporation Identifying exploitation of vulnerabilities using error report
US20090327809A1 (en) * 2008-06-26 2009-12-31 Microsoft Corporation Domain-specific guidance service for software development
US7681182B1 (en) 2008-11-06 2010-03-16 International Business Machines Corporation Including function call graphs (FCG) generated from trace analysis data within a searchable problem determination knowledge base
US20100083048A1 (en) * 2008-09-26 2010-04-01 Microsoft Corporation Evaluating effectiveness of memory management techniques selectively using mitigations to reduce errors
US20100083036A1 (en) * 2008-09-26 2010-04-01 Microsoft Corporation Configuration of memory management techniques selectively using mitigations to reduce errors
US20100083047A1 (en) * 2008-09-26 2010-04-01 Microsoft Corporation Memory management techniques selectively using mitigations to reduce errors
US20100115348A1 (en) * 2008-07-29 2010-05-06 Frank Van Gilluwe Alternate procedures assisting computer users in solving problems related to error and informational messages
US20100112997A1 (en) * 2006-08-16 2010-05-06 Nuance Communications, Inc. Local triggering methods, such as applications for device-initiated diagnostic or configuration management
US7793229B1 (en) * 2003-12-19 2010-09-07 Unisys Corporation Recording relevant information in a GUI window of a panel dump browser tool
US20100235738A1 (en) * 2009-03-16 2010-09-16 Ibm Corporation Product limitations advisory system
US20100318853A1 (en) * 2009-06-16 2010-12-16 Oracle International Corporation Techniques for gathering evidence for performing diagnostics
US20110107137A1 (en) * 2009-11-05 2011-05-05 Sony Corporation System and method for providing automated support to electronic devices
US20110264960A1 (en) * 2010-04-22 2011-10-27 Samsung Electronics Co. Ltd. Apparatus and method for analyzing error generation in mobile terminal
US20110271257A1 (en) * 2010-04-29 2011-11-03 International Business Machines Corporation Defect management in integrated development environments
US8255747B1 (en) * 2004-11-30 2012-08-28 Centurylink Intellectual Property Llc System and method for providing resolutions for abnormally ended jobs on a mainframe computer system
US8548955B1 (en) 2004-11-30 2013-10-01 Centurylink Intellectual Property Llc System and method for automating disaster recovery of a mainframe computing system
US8645543B2 (en) 2010-10-13 2014-02-04 International Business Machines Corporation Managing and reconciling information technology assets in a configuration database
US8688866B1 (en) * 2012-09-25 2014-04-01 International Business Machines Corporation Generating recommendations for peripheral devices compatible with a processor and operating system of a computer
US8997086B2 (en) 2011-12-20 2015-03-31 International Business Machines Corporation Fix delivery system
US9250993B2 (en) 2013-04-30 2016-02-02 Globalfoundries Inc Automatic generation of actionable recommendations from problem reports
US9251013B1 (en) 2014-09-30 2016-02-02 Bertram Capital Management, Llc Social log file collaboration and annotation
US9274872B1 (en) * 2013-09-27 2016-03-01 Emc Corporation Set-based bugs discovery system via SQL query
US9292311B2 (en) 2012-12-20 2016-03-22 International Business Machines Corporation Method and apparatus for providing software problem solutions
US9354962B1 (en) * 2013-09-10 2016-05-31 Emc Corporation Memory dump file collection and analysis using analysis server and cloud knowledge base
US9424115B2 (en) 2013-06-07 2016-08-23 Successfactors, Inc. Analysis engine for automatically analyzing and linking error logs
US9436455B2 (en) 2014-01-06 2016-09-06 Apple Inc. Logging operating system updates of a secure element of an electronic device
US9465685B2 (en) * 2015-02-02 2016-10-11 International Business Machines Corporation Identifying solutions to application execution problems in distributed computing environments
US9483249B2 (en) 2014-01-06 2016-11-01 Apple Inc. On-board applet migration
US9519564B1 (en) 2012-09-28 2016-12-13 EMC IP Holding Company LLC Trace saving intervals
US9852172B2 (en) 2014-09-17 2017-12-26 Oracle International Corporation Facilitating handling of crashes in concurrent execution environments of server systems while processing user queries for data retrieval
US9934014B2 (en) 2014-08-22 2018-04-03 Apple Inc. Automatic purposed-application creation
US9940187B2 (en) 2015-04-17 2018-04-10 Microsoft Technology Licensing, Llc Nexus determination in a computing device
US9979607B2 (en) 2015-06-22 2018-05-22 Ca, Inc. Diagnosing anomalies based on deviation analysis
US10067812B2 (en) 2015-03-30 2018-09-04 Ca, Inc. Presenting diagnostic headlines using simple linguistic terms
US10192177B2 (en) 2016-06-29 2019-01-29 Microsoft Technology Licensing, Llc Automated assignment of errors in deployed code
US10303538B2 (en) 2015-03-16 2019-05-28 Microsoft Technology Licensing, Llc Computing system issue detection and resolution
US10445212B2 (en) 2017-05-12 2019-10-15 Microsoft Technology Licensing, Llc Correlation of failures that shift for different versions of an analysis engine
US10489463B2 (en) 2015-02-12 2019-11-26 Microsoft Technology Licensing, Llc Finding documents describing solutions to computing issues
US10545811B2 (en) 2017-01-11 2020-01-28 International Business Machines Corporation Automatic root cause analysis for web applications
US10565045B2 (en) 2017-06-28 2020-02-18 Microsoft Technology Licensing, Llc Modularized collaborative performance issue diagnostic system
US10585742B2 (en) 2016-08-26 2020-03-10 International Business Machines Corporation Root cause analysis
US10592821B2 (en) 2015-06-19 2020-03-17 Trane International Inc. Self-learning fault detection for HVAC systems
US10684035B2 (en) 2018-01-08 2020-06-16 Trane International Inc. HVAC system that collects customer feedback in connection with failure triage
US10740169B1 (en) * 2017-05-25 2020-08-11 CSC Holdings, LLC Self-learning troubleshooter
US11205092B2 (en) 2019-04-11 2021-12-21 International Business Machines Corporation Clustering simulation failures for triage and debugging
US11526391B2 (en) 2019-09-09 2022-12-13 Kyndryl, Inc. Real-time cognitive root cause analysis (CRCA) computing
US11595245B1 (en) 2022-03-27 2023-02-28 Bank Of America Corporation Computer network troubleshooting and diagnostics using metadata
US20230063814A1 (en) * 2021-09-02 2023-03-02 Charter Communications Operating, Llc Scalable real-time anomaly detection
US11658889B1 (en) 2022-03-27 2023-05-23 Bank Of America Corporation Computer network architecture mapping using metadata

Families Citing this family (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7050311B2 (en) * 2003-11-25 2006-05-23 Electric Power Research Institute, Inc. Multilevel converter based intelligent universal transformer
US7685575B1 (en) * 2004-06-08 2010-03-23 Sun Microsystems, Inc. Method and apparatus for analyzing an application
US7360125B2 (en) * 2004-06-18 2008-04-15 Sap Aktiengesellschaft Method and system for resolving error messages in applications
US7437612B1 (en) * 2004-09-21 2008-10-14 Sun Microsystems, Inc. Postmortem detection of owned mutual exclusion locks
US8195693B2 (en) * 2004-12-16 2012-06-05 International Business Machines Corporation Automatic composition of services through semantic attribute matching
JP2006285453A (en) * 2005-03-31 2006-10-19 Oki Electric Ind Co Ltd Information processor, information processing method, and information processing program
US20070101338A1 (en) * 2005-10-31 2007-05-03 Microsoft Corporation Detection, diagnosis and resolution of deadlocks and hangs
US7958512B2 (en) * 2005-10-31 2011-06-07 Microsoft Corporation Instrumentation to find the thread or process responsible for an application failure
US7660412B1 (en) * 2005-12-09 2010-02-09 Trend Micro Incorporated Generation of debug information for debugging a network security appliance
US7987450B2 (en) * 2005-12-19 2011-07-26 International Business Machines Corporation Stack-based problem identification for a software component
US8510596B1 (en) * 2006-02-09 2013-08-13 Virsec Systems, Inc. System and methods for run time detection and correction of memory corruption
US8516444B2 (en) 2006-02-23 2013-08-20 International Business Machines Corporation Debugging a high performance computing program
US8806476B2 (en) * 2006-03-14 2014-08-12 International Business Machines Corporation Implementing a software installation process
US20070220513A1 (en) * 2006-03-15 2007-09-20 International Business Machines Corporation Automatic detection of hang, bottleneck and deadlock
US7796527B2 (en) * 2006-04-13 2010-09-14 International Business Machines Corporation Computer hardware fault administration
US7646294B2 (en) 2006-05-22 2010-01-12 Honeywell International Inc. Alarm maps to facilitate root cause analysis through spatial and pattern recognition
US20080126325A1 (en) * 2006-06-26 2008-05-29 William Pugh Process for making software diagnostics more efficient by leveraging existing content, human filtering and automated diagnostic tools
US7805630B2 (en) * 2006-07-27 2010-09-28 Microsoft Corporation Detection and mitigation of disk failures
US20080098109A1 (en) * 2006-10-20 2008-04-24 Yassine Faihe Incident resolution
US20080240675A1 (en) * 2007-03-27 2008-10-02 Adam Berger Coordinating Audio/Video Items Stored On Devices
US9330230B2 (en) * 2007-04-19 2016-05-03 International Business Machines Corporation Validating a cabling topology in a distributed computing system
US7757126B2 (en) * 2007-04-20 2010-07-13 Sap Ag System and method for supporting software
US8205215B2 (en) * 2007-05-04 2012-06-19 Microsoft Corporation Automated event correlation
US7831866B2 (en) * 2007-08-02 2010-11-09 International Business Machines Corporation Link failure detection in a parallel computer
JP5119935B2 (en) * 2008-01-15 2013-01-16 富士通株式会社 Management program, management apparatus, and management method
US8949671B2 (en) * 2008-01-30 2015-02-03 International Business Machines Corporation Fault detection, diagnosis, and prevention for complex computing systems
US8972794B2 (en) * 2008-02-26 2015-03-03 International Business Machines Corporation Method and apparatus for diagnostic recording using transactional memory
US8806037B1 (en) 2008-02-29 2014-08-12 Netapp, Inc. Remote support automation for a storage server
US7530072B1 (en) * 2008-05-07 2009-05-05 International Business Machines Corporation Method to segregate suspicious threads in a hosted environment to prevent CPU resource exhaustion from hung threads
US8805995B1 (en) * 2008-05-23 2014-08-12 Symantec Corporation Capturing data relating to a threat
US7509539B1 (en) * 2008-05-28 2009-03-24 International Business Machines Corporation Method for determining correlation of synchronized event logs corresponding to abnormal program termination
US8578393B1 (en) * 2008-06-18 2013-11-05 Alert Logic, Inc. Log message collection employing on-demand loading of message translation libraries
US20090328005A1 (en) * 2008-06-27 2009-12-31 Microsoft Corporation Debugger call stack caching
EP2333668A4 (en) * 2008-08-12 2012-07-04 Fujitsu Ltd Information processor and hang-up cause investigation information acquiring method
US8086909B1 (en) * 2008-11-05 2011-12-27 Network Appliance, Inc. Automatic core file upload
US20100125541A1 (en) * 2008-11-14 2010-05-20 Andreas Werner Wendel Popup window for error correction
US7996722B2 (en) * 2009-01-02 2011-08-09 International Business Machines Corporation Method for debugging a hang condition in a process without affecting the process state
US9871811B2 (en) 2009-05-26 2018-01-16 Microsoft Technology Licensing, Llc Identifying security properties of systems from application crash traffic
US8171343B2 (en) 2009-06-16 2012-05-01 Oracle International Corporation Techniques for determining models for performing diagnostics
US8417656B2 (en) * 2009-06-16 2013-04-09 Oracle International Corporation Techniques for building an aggregate model for performing diagnostics
US8612377B2 (en) * 2009-12-17 2013-12-17 Oracle International Corporation Techniques for generating diagnostic results
DE102010004385A1 (en) * 2010-01-12 2011-07-14 Siemens Aktiengesellschaft, 80333 Method and device for automatically identifying further faulty components in a device
US8490055B2 (en) 2010-09-17 2013-07-16 Ca, Inc. Generating dependency maps from dependency data
US8949675B2 (en) 2010-11-30 2015-02-03 Microsoft Corporation Error report processing using call stack similarity
US8688606B2 (en) * 2011-01-24 2014-04-01 International Business Machines Corporation Smarter business intelligence systems
EP2666088B1 (en) 2011-04-07 2019-10-30 Siemens Healthcare Diagnostics Inc. Methods for hierarchically identifying root cause errors
US9202185B2 (en) 2011-04-08 2015-12-01 Ca, Inc. Transaction model with structural and behavioral description of complex transactions
US8438427B2 (en) 2011-04-08 2013-05-07 Ca, Inc. Visualizing relationships between a transaction trace graph and a map of logical subsystems
US8516301B2 (en) * 2011-04-08 2013-08-20 Ca, Inc. Visualizing transaction traces as flows through a map of logical subsystems
US8782614B2 (en) 2011-04-08 2014-07-15 Ca, Inc. Visualization of JVM and cross-JVM call stacks
US9460136B1 (en) * 2011-06-30 2016-10-04 Emc Corporation Managing databases in data storage systems
US9389936B2 (en) * 2011-09-23 2016-07-12 Microsoft Technology Licensing, Llc. Monitoring the responsiveness of a user interface
US9934229B2 (en) * 2011-10-23 2018-04-03 Microsoft Technology Licensing, Llc Telemetry file hash and conflict detection
US9703680B1 (en) 2011-12-12 2017-07-11 Google Inc. System and method for automatic software development kit configuration and distribution
US9262250B2 (en) * 2011-12-12 2016-02-16 Crashlytics, Inc. System and method for data collection and analysis of information relating to mobile applications
US9087154B1 (en) * 2011-12-12 2015-07-21 Crashlytics, Inc. System and method for providing additional functionality to developer side application in an integrated development environment
US8726092B1 (en) * 2011-12-29 2014-05-13 Google Inc. Identifying causes of application crashes
US9020463B2 (en) * 2011-12-29 2015-04-28 The Nielsen Company (Us), Llc Systems, methods, apparatus, and articles of manufacture to measure mobile device usage
CN102622510A (en) * 2012-01-31 2012-08-01 龚波 System and method for quantitative management of software defects
US10310594B2 (en) * 2012-12-04 2019-06-04 Aetherpal Inc. Knowledge base in virtual mobile management
US20150032681A1 (en) * 2013-07-23 2015-01-29 International Business Machines Corporation Guiding uses in optimization-based planning under uncertainty
US9384114B2 (en) * 2013-09-04 2016-07-05 AppDynamics, Inc. Group server performance correction via actions to server subset
US9092563B1 (en) * 2013-09-27 2015-07-28 Emc Corporation System for discovering bugs using interval algebra query language
CN104077210B (en) * 2014-06-06 2017-06-06 百度在线网络技术(北京)有限公司 The localization method and system of a kind of client collapse
US10127142B2 (en) * 2014-10-09 2018-11-13 Hcl Technologies Ltd. Defect classification and association in a software development environment
US9606884B2 (en) * 2014-10-15 2017-03-28 Dell Products L.P. Method and system for remote diagnostics of a display device
US10198304B2 (en) * 2014-11-04 2019-02-05 Oath Inc. Targeted crash fixing on a client device
WO2016085443A1 (en) * 2014-11-24 2016-06-02 Hewlett Packard Enterprise Development Lp Application management based on data correlations
CN105808435A (en) * 2016-03-08 2016-07-27 北京理工大学 Construction method of software defect evaluation model on the basis of complex network
US10037239B2 (en) * 2016-03-28 2018-07-31 Wlpro Limited System and method for classifying defects occurring in a software environment
US10365959B2 (en) 2016-06-23 2019-07-30 Vmware, Inc. Graphical user interface for software crash analysis data
US10268563B2 (en) 2016-06-23 2019-04-23 Vmware, Inc. Monitoring of an automated end-to-end crash analysis system
US10338990B2 (en) 2016-06-23 2019-07-02 Vmware, Inc. Culprit module detection and signature back trace generation
US10331508B2 (en) 2016-06-23 2019-06-25 Vmware, Inc. Computer crash risk assessment
US10191837B2 (en) 2016-06-23 2019-01-29 Vmware, Inc. Automated end-to-end analysis of customer service requests
US10338991B2 (en) 2017-02-21 2019-07-02 Microsoft Technology Licensing, Llc Cloud-based recovery system
US10585788B2 (en) 2017-02-21 2020-03-10 Microsoft Technology Licensing, Llc State-based remedial action generation
US10437663B2 (en) 2017-04-14 2019-10-08 Microsoft Technology Licensing, Llc Administrative user communication and error recovery
US10482000B2 (en) * 2017-04-24 2019-11-19 Microsoft Technology Licensing, Llc Machine learned decision guidance for alerts originating from monitoring systems
US10733079B2 (en) * 2017-05-31 2020-08-04 Oracle International Corporation Systems and methods for end-to-end testing of applications using dynamically simulated data
CN109815152A (en) * 2019-01-31 2019-05-28 科大讯飞股份有限公司 Program crashing type prediction method and system
US11436074B2 (en) 2019-04-17 2022-09-06 Microsoft Technology Licensing, Llc Pruning and prioritizing event data for analysis
US11714699B2 (en) * 2021-06-22 2023-08-01 Microsoft Technology Licensing, Llc In-app failure intelligent data collection and analysis
US20230058452A1 (en) * 2021-08-17 2023-02-23 Sap Se Efficient error reproduction scenarios through data transformation

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6742141B1 (en) * 1999-05-10 2004-05-25 Handsfree Networks, Inc. System for automated problem detection, diagnosis, and resolution in a software driven system
US6665758B1 (en) * 1999-10-04 2003-12-16 Ncr Corporation Software sanity monitor
US20020112200A1 (en) * 2001-02-12 2002-08-15 Hines George W. Automated analysis of kernel and user core files including searching, ranking, and recommending patch files
US6763517B2 (en) * 2001-02-12 2004-07-13 Sun Microsystems, Inc. Automated analysis of kernel and user core files including searching, ranking, and recommending patch files
US20040098640A1 (en) * 2001-05-24 2004-05-20 Smith Walter R. Method and system for recording program information in the event of a failure
US6859893B2 (en) * 2001-08-01 2005-02-22 Sun Microsystems, Inc. Service guru system and method for automated proactive and reactive computer system analysis
US20030084376A1 (en) * 2001-10-25 2003-05-01 Nash James W. Software crash event analysis method and system
US20040078686A1 (en) * 2002-03-06 2004-04-22 Mitsubishi Denki Kabushiki Kaisha Computer system, failure handling method, and computer program
US7000150B1 (en) * 2002-06-12 2006-02-14 Microsoft Corporation Platform for computer process monitoring
US20040153823A1 (en) * 2003-01-17 2004-08-05 Zubair Ansari System and method for active diagnosis and self healing of software systems
US20040194063A1 (en) * 2003-03-28 2004-09-30 Joel Pereira System and method for automated testing of a software module

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"An Annotated Dr. Watson Log File", 4 pp. (C) 2005 Microsoft Corporation [Downloaded from the World Wide Web on May 26, 2005].
"Description of the Dr. Watson (Drwatson.exe) Tool", 1 p. (C) 2005 Microsoft Corporation [Downloaded from the World Wide Web on May 26, 2005].
"Dr. Watson and Windows 3.1", 2 pp. (C) 2005 Microsoft Corporation [Downloaded from the World Wide Web on May 26, 2005].
"How to Troubleshoot Program Faults with Dr. Watson", 3 pp. (C) 2005 Microsoft Corporation [Downloaded from the World Wide Web on May 26, 2005].
"Microsoft Online Crash Analysis", 2 pp. (C) 2003 Microsoft Corporation [Downloaded from the World Wide Web on May 26, 2005].

Cited By (104)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040250156A1 (en) * 2003-05-19 2004-12-09 Siemens Aktiengesellschaft Aspect based recovery system and method
US7571352B2 (en) * 2003-05-19 2009-08-04 Siemens Aktiengesellschaft Aspect based recovery system and method
US7793229B1 (en) * 2003-12-19 2010-09-07 Unisys Corporation Recording relevant information in a GUI window of a panel dump browser tool
US20050204199A1 (en) * 2004-02-28 2005-09-15 Ibm Corporation Automatic crash recovery in computer operating systems
US7627785B1 (en) * 2004-07-12 2009-12-01 Sun Microsystems, Inc. Capturing machine state of unstable Java program
US20100042875A1 (en) * 2004-07-12 2010-02-18 Sun Microsystems, Inc. Capturing machine state of unstable java program
US7941703B2 (en) * 2004-07-12 2011-05-10 Oracle America, Inc. Capturing machine state of unstable java program
US8548955B1 (en) 2004-11-30 2013-10-01 Centurylink Intellectual Property Llc System and method for automating disaster recovery of a mainframe computing system
US8255747B1 (en) * 2004-11-30 2012-08-28 Centurylink Intellectual Property Llc System and method for providing resolutions for abnormally ended jobs on a mainframe computer system
US7702959B2 (en) * 2005-08-02 2010-04-20 Nhn Corporation Error management system and method of using the same
US20070033281A1 (en) * 2005-08-02 2007-02-08 Hwang Min J Error management system and method of using the same
US8088011B2 (en) * 2005-11-08 2012-01-03 Microsoft Corporation Dynamic debugging dump for game console
US20070105607A1 (en) * 2005-11-08 2007-05-10 Microsoft Corporation Dynamic debugging dump for game console
US7500142B1 (en) * 2005-12-20 2009-03-03 International Business Machines Corporation Preliminary classification of events to facilitate cause-based analysis
US20090063902A1 (en) * 2005-12-20 2009-03-05 International Business Machines Corporation Preliminary Classification of Events to Facilitate Cause-Based Analysis
US20090070463A1 (en) * 2005-12-20 2009-03-12 International Business Machines Corporation Preliminary Classification of Events to Facilitate Cause-Based Analysis
US20070174710A1 (en) * 2006-01-11 2007-07-26 International Business Machines Corporation Apparatus and method for collecting and displaying data for remote diagnostics
US7647527B2 (en) * 2006-01-11 2010-01-12 International Business Machines Corporation Apparatus and method for collecting and displaying data for remote diagnostics
US8151142B2 (en) 2006-02-28 2012-04-03 Microsoft Corporation Thread interception and analysis
US7716530B2 (en) * 2006-02-28 2010-05-11 Microsoft Corporation Thread interception and analysis
US20080059973A1 (en) * 2006-02-28 2008-03-06 Microsoft Corporation Thread Interception and Analysis
US7698597B2 (en) * 2006-02-28 2010-04-13 International Business Machines Corporation Method of isolating erroneous software program components
US20070220518A1 (en) * 2006-02-28 2007-09-20 Microsoft Corporation Thread Interception and Analysis
US7865777B2 (en) 2006-02-28 2011-01-04 Microsoft Corporation Thread interception and analysis
US20070220348A1 (en) * 2006-02-28 2007-09-20 Mendoza Alfredo V Method of isolating erroneous software program components
US20080066069A1 (en) * 2006-02-28 2008-03-13 Microsoft Corporation Thread Interception and Analysis
US20070268300A1 (en) * 2006-05-22 2007-11-22 Honeywell International Inc. Information map system
US20100112997A1 (en) * 2006-08-16 2010-05-06 Nuance Communications, Inc. Local triggering methods, such as applications for device-initiated diagnostic or configuration management
US20080082973A1 (en) * 2006-09-29 2008-04-03 Brenda Lynne Belkin Method and Apparatus for Determining Software Interoperability
US20080104455A1 (en) * 2006-10-31 2008-05-01 Hewlett-Packard Development Company, L.P. Software failure analysis method and system
US20090006883A1 (en) * 2007-06-27 2009-01-01 Microsoft Corporation Software error report analysis
US7890814B2 (en) 2007-06-27 2011-02-15 Microsoft Corporation Software error report analysis
US20090106589A1 (en) * 2007-10-19 2009-04-23 Oracle International Corporation Gathering context information used for activation of contextual dumping
US20090105991A1 (en) * 2007-10-19 2009-04-23 Oracle International Corporation Rule-based engine for gathering diagnostic data
US7941707B2 (en) 2007-10-19 2011-05-10 Oracle International Corporation Gathering information for use in diagnostic data dumping upon failure occurrence
US7937623B2 (en) 2007-10-19 2011-05-03 Oracle International Corporation Diagnosability system
US20090105982A1 (en) * 2007-10-19 2009-04-23 Oracle International Corporation Diagnosability system: flood control
US8255182B2 (en) 2007-10-19 2012-08-28 Oracle International Corporation Diagnosability system: flood control
US20090106262A1 (en) * 2007-10-19 2009-04-23 Oracle International Corporation Scrubbing and editing of diagnostic data
US7856575B2 (en) 2007-10-26 2010-12-21 International Business Machines Corporation Collaborative troubleshooting computer systems using fault tree analysis
US20090113248A1 (en) * 2007-10-26 2009-04-30 Megan Elena Bock Collaborative troubleshooting computer systems using fault tree analysis
US20090320136A1 (en) * 2008-06-24 2009-12-24 Microsoft Corporation Identifying exploitation of vulnerabilities using error report
US8745703B2 (en) 2008-06-24 2014-06-03 Microsoft Corporation Identifying exploitation of vulnerabilities using error report
US20090327809A1 (en) * 2008-06-26 2009-12-31 Microsoft Corporation Domain-specific guidance service for software development
US20100115348A1 (en) * 2008-07-29 2010-05-06 Frank Van Gilluwe Alternate procedures assisting computer users in solving problems related to error and informational messages
US8645760B2 (en) * 2008-07-29 2014-02-04 FAQware Alternate procedures assisting computer users in solving problems related to error and informational messages
US20100083036A1 (en) * 2008-09-26 2010-04-01 Microsoft Corporation Configuration of memory management techniques selectively using mitigations to reduce errors
US8417999B2 (en) * 2008-09-26 2013-04-09 Microsoft Corporation Memory management techniques selectively using mitigations to reduce errors
US20110173501A1 (en) * 2008-09-26 2011-07-14 Microsoft Corporation Memory management techniques selectively using mitigations to reduce errors
US20100083048A1 (en) * 2008-09-26 2010-04-01 Microsoft Corporation Evaluating effectiveness of memory management techniques selectively using mitigations to reduce errors
US20100083047A1 (en) * 2008-09-26 2010-04-01 Microsoft Corporation Memory management techniques selectively using mitigations to reduce errors
US7949903B2 (en) * 2008-09-26 2011-05-24 Microsoft Corporation Memory management techniques selectively using mitigations to reduce errors
US8140892B2 (en) 2008-09-26 2012-03-20 Microsoft Corporation Configuration of memory management techniques selectively using mitigations to reduce errors
US7937625B2 (en) * 2008-09-26 2011-05-03 Microsoft Corporation Evaluating effectiveness of memory management techniques selectively using mitigations to reduce errors
US7681182B1 (en) 2008-11-06 2010-03-16 International Business Machines Corporation Including function call graphs (FCG) generated from trace analysis data within a searchable problem determination knowledge base
US20100235738A1 (en) * 2009-03-16 2010-09-16 Ibm Corporation Product limitations advisory system
US8589739B2 (en) * 2009-03-16 2013-11-19 International Business Machines Corporation Product limitations advisory system
US20100318853A1 (en) * 2009-06-16 2010-12-16 Oracle International Corporation Techniques for gathering evidence for performing diagnostics
US20110107137A1 (en) * 2009-11-05 2011-05-05 Sony Corporation System and method for providing automated support to electronic devices
US20110264960A1 (en) * 2010-04-22 2011-10-27 Samsung Electronics Co. Ltd. Apparatus and method for analyzing error generation in mobile terminal
US8972793B2 (en) * 2010-04-22 2015-03-03 Samsung Electronics Co., Ltd. Apparatus and method for analyzing error generation in mobile terminal
US20110271257A1 (en) * 2010-04-29 2011-11-03 International Business Machines Corporation Defect management in integrated development environments
US8645543B2 (en) 2010-10-13 2014-02-04 International Business Machines Corporation Managing and reconciling information technology assets in a configuration database
US9009324B2 (en) 2010-10-13 2015-04-14 International Business Machines Corporation Managing and reconciling information technology assets in a configuration database
US8997086B2 (en) 2011-12-20 2015-03-31 International Business Machines Corporation Fix delivery system
US8688866B1 (en) * 2012-09-25 2014-04-01 International Business Machines Corporation Generating recommendations for peripheral devices compatible with a processor and operating system of a computer
US9519564B1 (en) 2012-09-28 2016-12-13 EMC IP Holding Company LLC Trace saving intervals
US9292311B2 (en) 2012-12-20 2016-03-22 International Business Machines Corporation Method and apparatus for providing software problem solutions
US9250993B2 (en) 2013-04-30 2016-02-02 Globalfoundries Inc Automatic generation of actionable recommendations from problem reports
US9424115B2 (en) 2013-06-07 2016-08-23 Successfactors, Inc. Analysis engine for automatically analyzing and linking error logs
US9354962B1 (en) * 2013-09-10 2016-05-31 Emc Corporation Memory dump file collection and analysis using analysis server and cloud knowledge base
US20160314035A1 (en) * 2013-09-27 2016-10-27 Emc Corporation Set-based bugs discovery system via SQL query
US10146605B2 (en) * 2013-09-27 2018-12-04 EMC IP Holding Company LLC Set-based bugs discovery system via SQL query
US9274872B1 (en) * 2013-09-27 2016-03-01 Emc Corporation Set-based bugs discovery system via SQL query
US9483249B2 (en) 2014-01-06 2016-11-01 Apple Inc. On-board applet migration
US10223096B2 (en) 2014-01-06 2019-03-05 Apple Inc. Logging operating system updates of a secure element of an electronic device
US9436455B2 (en) 2014-01-06 2016-09-06 Apple Inc. Logging operating system updates of a secure element of an electronic device
US9880830B2 (en) 2014-01-06 2018-01-30 Apple Inc. On-board applet migration
US9934014B2 (en) 2014-08-22 2018-04-03 Apple Inc. Automatic purposed-application creation
US9852172B2 (en) 2014-09-17 2017-12-26 Oracle International Corporation Facilitating handling of crashes in concurrent execution environments of server systems while processing user queries for data retrieval
US9251013B1 (en) 2014-09-30 2016-02-02 Bertram Capital Management, Llc Social log file collaboration and annotation
US10089169B2 (en) 2015-02-02 2018-10-02 International Business Machines Corporation Identifying solutions to application execution problems in distributed computing environments
US9465685B2 (en) * 2015-02-02 2016-10-11 International Business Machines Corporation Identifying solutions to application execution problems in distributed computing environments
US10489463B2 (en) 2015-02-12 2019-11-26 Microsoft Technology Licensing, Llc Finding documents describing solutions to computing issues
US10303538B2 (en) 2015-03-16 2019-05-28 Microsoft Technology Licensing, Llc Computing system issue detection and resolution
US10067812B2 (en) 2015-03-30 2018-09-04 Ca, Inc. Presenting diagnostic headlines using simple linguistic terms
US9940187B2 (en) 2015-04-17 2018-04-10 Microsoft Technology Licensing, Llc Nexus determination in a computing device
US10592821B2 (en) 2015-06-19 2020-03-17 Trane International Inc. Self-learning fault detection for HVAC systems
US9979607B2 (en) 2015-06-22 2018-05-22 Ca, Inc. Diagnosing anomalies based on deviation analysis
US10192177B2 (en) 2016-06-29 2019-01-29 Microsoft Technology Licensing, Llc Automated assignment of errors in deployed code
US10585742B2 (en) 2016-08-26 2020-03-10 International Business Machines Corporation Root cause analysis
US10545811B2 (en) 2017-01-11 2020-01-28 International Business Machines Corporation Automatic root cause analysis for web applications
US11074119B2 (en) 2017-01-11 2021-07-27 International Business Machines Corporation Automatic root cause analysis for web applications
US10445212B2 (en) 2017-05-12 2019-10-15 Microsoft Technology Licensing, Llc Correlation of failures that shift for different versions of an analysis engine
US10740169B1 (en) * 2017-05-25 2020-08-11 CSC Holdings, LLC Self-learning troubleshooter
US10565045B2 (en) 2017-06-28 2020-02-18 Microsoft Technology Licensing, Llc Modularized collaborative performance issue diagnostic system
US10684035B2 (en) 2018-01-08 2020-06-16 Trane International Inc. HVAC system that collects customer feedback in connection with failure triage
US11205092B2 (en) 2019-04-11 2021-12-21 International Business Machines Corporation Clustering simulation failures for triage and debugging
US11526391B2 (en) 2019-09-09 2022-12-13 Kyndryl, Inc. Real-time cognitive root cause analysis (CRCA) computing
US20230063814A1 (en) * 2021-09-02 2023-03-02 Charter Communications Operating, Llc Scalable real-time anomaly detection
US11595245B1 (en) 2022-03-27 2023-02-28 Bank Of America Corporation Computer network troubleshooting and diagnostics using metadata
US11658889B1 (en) 2022-03-27 2023-05-23 Bank Of America Corporation Computer network architecture mapping using metadata
US11792095B1 (en) 2022-03-27 2023-10-17 Bank Of America Corporation Computer network architecture mapping using metadata
US11824704B2 (en) 2022-03-27 2023-11-21 Bank Of America Corporation Computer network troubleshooting and diagnostics using metadata

Also Published As

Publication number Publication date
US20050120273A1 (en) 2005-06-02

Similar Documents

Publication Publication Date Title
US7191364B2 (en) Automatic root cause analysis and diagnostics engine
Glerum et al. Debugging in the (very) large: ten years of implementation and experience
US10810074B2 (en) Unified error monitoring, alerting, and debugging of distributed systems
US7937623B2 (en) Diagnosability system
US8037195B2 (en) Method and apparatus for managing components in an IT system
US7877642B2 (en) Automatic software fault diagnosis by exploiting application signatures
US7552447B2 (en) System and method for using root cause analysis to generate a representation of resource dependencies
US7100085B2 (en) System for automated problem detection, diagnosis, and resolution in a software driven system
US7698691B2 (en) Server application state
US6785848B1 (en) Method and system for categorizing failures of a program module
US7171337B2 (en) Event-based automated diagnosis of known problems
KR101099152B1 (en) Automatic task generator method and system
US20150220421A1 (en) System and Method for Providing Runtime Diagnostics of Executing Applications
CN116107846B (en) Linux system event monitoring method and device based on EBPF
US9256509B1 (en) Computing environment analyzer
WO2000068793A1 (en) System for automated problem detection, diagnosis, and resolution in a software driven system
US11151020B1 (en) Method and system for managing deployment of software application components in a continuous development pipeline
US20090313266A1 (en) Model Based Distributed Application Management
US7305583B2 (en) Command initiated logical dumping facility
Chen A pilot study of cross-system failures
Joukov et al. Discovery of Hard-coded External Dependencies in Enterprise Production Environments
Isci et al. Delivering software with agility and quality in a cloud environment
Aguiar et al. Sandiff: Semantic File Comparator for Continuous Testing of Android Builds

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUDSON, WILLIAM HUNTER;FINK, REINER;PEASE, GEOFF;AND OTHERS;REEL/FRAME:014710/0266

Effective date: 20031114

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034541/0477

Effective date: 20141014

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12

AS Assignment

Owner name: SERVICENOW, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT TECHNOLOGY LICENSING, LLC;REEL/FRAME:047681/0916

Effective date: 20181115

AS Assignment

Owner name: SERVICENOW, INC., CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE RECORDAL TO REMOVE INADVERTENTLY RECORDED PROPERTIES SHOWN IN ATTACHED SHEET PREVIOUSLY RECORDED AT REEL: 047681 FRAME: 0916. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:MICROSOFT TECHNOLOGY LICENSING, LLC;REEL/FRAME:049797/0119

Effective date: 20181115