WO2012069094A1 - Mitigation system - Google Patents

Mitigation system

Info

Publication number
WO2012069094A1
Authority
WO
WIPO (PCT)
Prior art keywords
event
action
symptoms
measurements
mitigating
Prior art date
Application number
PCT/EP2010/068344
Other languages
French (fr)
Inventor
Keith Harrison
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P.
Priority to PCT/EP2010/068344
Publication of WO2012069094A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/14 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 - Countermeasures against malicious traffic


Abstract

A mitigation system for deploying an action to mitigate an event comprises multiple sensors to provide measurements for the system, an event correlator engine to derive multiple symptoms from the measurements and to use the symptoms to determine a likelihood value for the existence of the event, an action engine to provide multiple actions for mitigating the event in response to the likelihood value, and a user interface to provide a selection mechanism for a user of the system to select a mitigating action to be deployed by the action engine.

Description

MITIGATION SYSTEM
BACKGROUND
[0001] In general, a set of symptoms may be indicative of a particular event, complaint, disorder or situation. Symptoms can be varied and diverse, and events, disorders and so on can vary from those experienced by a person, to a more general event or situation in a broader environment. For example, a symptom or indication of a reduction in a particular exchange rate can give rise to a situation in which certain stocks or goods are purchased at a higher velocity than otherwise. Alternatively, a measurement of temperature can provide an indication of higher temperature, or temperature above a certain threshold, which can be a symptom of a fever.
[0002] Accordingly, the use of symptoms in environments can help to determine an issue and provide a solution. For example, in large scale and generally complex systems and organizations such as corporate Information Technology (IT) and cloud computing infrastructures, there exist potential impacts to the security of the system in the form of security risks. Existing command and control systems attempt to detect symptoms of such security risks and automatically make decisions to mitigate them. However, such a model does not always work - for example, the symptoms of a particular risk may correspond to multiple possible causes, each with its own specific mitigations. Accordingly, the issue of ambiguity is not well handled in such systems.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Various features and advantages of the present disclosure will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate, by way of example only, features of the present disclosure, and wherein:
[0004] Figure 1 is a schematic block diagram of a relationship between strategic and tactical actions in an environment;
[0005] Figure 2 is a schematic block diagram of a mitigation system according to an example;
[0006] Figure 3 is an expanded block diagram of part of a mitigation engine according to an example;
[0007] Figure 4 is a schematic block diagram of a process for selecting an action according to an example;
[0008] Figure 5 is a block diagram of a virtualized environment according to an example;
[0009] Figure 6 is a block diagram of a virtualized environment according to an example;
[0010] Figure 7 is a block diagram of a storage and processing system that can implement any of the examples described herein; and
[0011] Figure 8 is a schematic block diagram of a user interface module according to an example.
DETAILED DESCRIPTION
[0012] According to an example, there is provided a system and method for executing appropriate mitigating actions in response to a potential security risk or event in an environment, following the identification of various options. Accordingly, figure 1 is a schematic block diagram of a relationship between strategic and tactical actions in an environment 100. Environment 100 can be a corporate environment, a cloud based computing environment, or, more generally speaking, a person or group of people for example. That is to say, an environment 100 can be any setting, situation, person or populace, for example, in which both external 101 and internal 103 factors can influence the state of the environment. In general, in response to external 101 and/or internal 103 factors, strategic 105 and/or tactical 107 actions can be performed. Higher level strategic actions 105 can be longer term actions which may play out over a longer time period than tactical lower level actions 107, in which there is more immediacy associated with decisions and actions. For example, in a corporate IT environment, a strategic action 105 in response to an external 101 or internal 103 factor (such as a security threat for example) can be an action which causes the patching of multiple machines on a network of the environment 100, which may take considerable planning to implement in the least disruptive way. A tactical action 107 in response to the security threat can be an action which removes a particular machine from the network, perhaps because it has been determined that the machine is infected with some malware which could compromise the environment 100. This type of action can be performed quickly, and may be less disruptive.
[0013] Accordingly, an environment 100 can be subject to various influences or factors, both internal in some way to the environment and external to the environment. In some instances, internal factors 103 may be easier to detect and control than external factors 101. In response to the factors or influences, an action or actions can be taken. The actions can be broadly classified as strategic 105 or tactical 107 actions. Strategic actions 105 can be higher level in the sense that they can operate over a longer time frame or can be related to higher level aspects of the infrastructure of an environment for example. Tactical actions 107 can be lower level in the sense that they operate over a shorter timeframe or can be related to more specific or lower level components of an infrastructure of an environment for example. According to an example, a system and method for providing a mitigating action elicits user interaction and decision making in order to provide a response to a potential threat.
[0014] Figure 2 is a schematic block diagram of a system according to an example. An environment 100 comprises multiple sensors 201 arranged to provide measurements 203. The sensors can be any sensors capable of providing measures of some aspect of environment 100, such as an internal or external factor. For example, if the environment 100 is a corporate IT infrastructure, sensors 201 can include hardware detection apparatus to monitor network traffic across the infrastructure. Sensors 201 can also include modules embedded in applications, such as embedded antivirus modules, to provide measurements representing the activity of malware within the environment 100.
[0015] In general, sensors 201 generate multiple measurements 203 based on observations of the environment 100. The measurements can be based on observations taken from hardware of the environment 100, or from machine readable instructions of applications in the environment, or a combination. That is to say, sensors 201 can be hardware based sensors, or sensors using machine readable instructions such as those operating in an application of a system. According to an example, the sensors 201 may also include "social sensors" to generate data representing intelligence about the moods, desires and so on of employees for example. Such social sensors can measure, amongst other things, noise profiles such as a noise profile in a particular area, and aspects of a human physiological condition such as heart rate, perspiration, pupil dilation etc. For example, an application can provide noise level readings from mobile phones, as well as information provided by users.
[0016] Other forms of social sensing can determine a mood of a group of people. For example, by determining the presence or frequency of certain words in a set of emails of a social networking system, blogging or micro-blogging service, a general measure of the mood of a group of users can be calculated. Other measurements detected by sensors 201 can include measurements associated with the current state of weather in a location or multiple locations, the current rate of exchange of a currency or currencies, or more general measurements representing business intelligence such as a particular strategy or action of a company for example. Measurements can therefore be diverse - accordingly, sensors should be construed as anything capable of providing such diverse measurements.
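As an illustration of the social sensing just described, the following is a minimal sketch, under assumed word lists and scoring, of deriving a crude mood measurement from message text; none of the names or weights come from the patent itself.

```python
# Hypothetical sketch: derive a crude "mood" measurement from message text.
# Word lists and scoring are illustrative assumptions, not from the patent.
import re

POSITIVE = {"happy", "great", "good", "excited"}
NEGATIVE = {"angry", "worried", "bad", "frustrated"}

def mood_measurement(messages):
    """Return a score in [-1, +1]: negative = bad mood, positive = good mood."""
    pos = neg = 0
    for text in messages:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(mood_measurement(["Great quarter!", "I'm worried about the deadline"]))
```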
[0017] Measurements 203 are used by a mitigation engine 204 to form symptoms 207. Mitigation engine 204 includes an event correlator engine 205 and an action engine 206. Symptoms 207 can be indicative of a particular threat present in the environment 100, and can be determined in response to a measurement in a particular range or above/below a predetermined value for example. Accordingly, if a measurement is no longer in the range that created the symptom, the symptom can be deemed to be no longer present. For example, given a set of measurements 203 relating to a temperature reading from sensors 201, symptoms 207 can include those which result from particular temperatures, such as a high or low temperature for example. As an alternative example, given a set of measurements 203 from sensors 201 relating to network traffic on an IT infrastructure, a set of symptoms 207 can include those which result from particularly high numbers of attempted TCP/IP connections to or from a particular IP address. Accordingly, sensors 201 can be hardware devices to monitor network traffic, or could be daemons configured to monitor network access from within multiple applications for example.
[0018] According to an example, symptoms can be derived using the absence of certain measurements. For example, the absence of an indication that a network login password has been changed can give rise to an associated symptom. A collection of such absent measurements can be combined to provide an overall indication for a symptom - e.g. a cluster of machines which have not had their network access passwords changed can provide an indication of an associated symptom for the cluster.
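To make the measurement-to-symptom mapping concrete, here is a minimal sketch under assumed data structures: a symptom holds while its defining measurement stays inside a configured range, and a symptom can also be derived from the absence of a measurement, as described above.

```python
# Hypothetical sketch: symptoms derived from measurements via ranges,
# and from the *absence* of expected measurements.
from dataclasses import dataclass

@dataclass
class RangeSymptom:
    name: str
    low: float   # symptom present while low <= measurement <= high
    high: float

    def present(self, measurement: float) -> bool:
        return self.low <= measurement <= self.high

high_temp = RangeSymptom("high-temperature", low=38.0, high=45.0)
print(high_temp.present(39.2))  # True  -> symptom present
print(high_temp.present(36.8))  # False -> symptom no longer present

# Absence example: hosts with no recorded password-change measurement.
password_changes = {"host-a"}            # hosts with a recorded change
cluster = {"host-a", "host-b", "host-c"}
stale = cluster - password_changes
print(len(stale) / len(cluster) > 0.5)   # True -> symptom for the cluster
```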
[0019] The event correlator engine 205 determines and matches a set of symptoms 207 to events. For example, an event E1 may have multiple symptoms [SE1] = {S1, S2, …, Sn} associated with it. That is to say, the likelihood of an event E1 occurring changes in response to the presence of more or fewer of the symptoms in SE1. In the presence of the event, action engine 206 includes a set of actions which can be used to mitigate the effects of the event. Accordingly, in block 208, and in response to a positive determination of the potential presence of an event, a set of actions 209 is provided by the action engine 206. The actions 209 can be presented to a user who can give a response 210. The provided actions form a schema for mitigating the effects of the identified event, and user input is used to confirm deployment of actions in the schema according to an example. Alternatively, actions can play out to mitigate an event without user input, or with limited user input (such as user input occurring infrequently for example). Accordingly, a selected action from those presented to a user is a mitigating action 211 used by the system. In response to the mitigating action 211, the environment 100 may change. Accordingly, sensors 201 may provide measurements 203 which give symptoms 207 indicative of whether or not mitigating action 211 was successful in mitigating the event after having been deployed in the environment 100.
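A minimal sketch of the event-to-symptom bookkeeping implied by paragraph [0019], using the illustrative weights that appear later in the description; the structure and function names are assumptions, not the patent's implementation.

```python
# Hypothetical sketch: events keyed to weighted symptoms for matching.
# Weights follow the [-1, +1] measure convention described below.
EVENTS = {
    "E1": {"S1": 0.3, "S2": 0.5, "S3": -0.7},
    "E2": {"S1": 0.3, "S2": 0.5},
}

def candidate_events(observed_symptoms):
    """Return events sharing at least one symptom with the observations."""
    return {event: weights for event, weights in EVENTS.items()
            if set(weights) & set(observed_symptoms)}

print(candidate_events({"S1", "S3"}))  # both E1 and E2 match on S1
```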
[0020] A set of multiple actions 209 can be related to an event or multiple events. For example, it may be possible for an action or a set of actions 209 to mitigate the effects of several distinct events. Alternatively, a set of actions 209 may be limited to mitigating the effects of one event. For example, it is unlikely that the actions 209 relating to an event in an environment 100 such as a forest fire would be applicable to the actions which could be used to mitigate the effects of malware in an IT infrastructure. However, there are likely to be common actions for dealing with events such as known malware detected in a system and possible 0-day malware being detected.
[0021] In determining the potential presence of an event (or threat), event correlator engine 205 can use certainty factor values associated with symptoms to calculate the likelihood of an event or threat being present. For example, with reference to figure 3, which is an expanded block diagram of some parts of mitigation engine 204, symptoms 207 form a set {S1, S2, S3, …, Sn} in event correlator engine 205. Symptoms can be received singly, sequentially or in combination with other symptoms. Each event stored in the event correlator engine 205 of mitigation engine 204 includes an associated set of symptoms. The set of symptoms for each event are symptoms which can be indicative that the event is occurring or present. Each of the symptoms has a measure associated with it. According to an example, a measure for a symptom can vary between -1 and +1, and can take any value in between (including the end points of the range). Other alternative measures and ranges are possible. A positive value indicates an increased likelihood that the symptom in question is indicative of the event. A negative value indicates strength of contra-indication of an event in view of the symptom. The further into the positive or negative a measure extends, the greater the certainty of the likelihood. So for example, a measure of +1 for a symptom indicates that there is a 100% likelihood that an event dependent on that symptom is present if that symptom occurs, whereas a measure of -1 for a symptom indicates that there is a 100% likelihood that an event is not present if that symptom occurs. A value in a range between -0.2 and +0.2, for example, can represent an indeterminate likelihood. Such a range, or the individual end points of the range, can be varied as desired.
[0022] Mapped to each event is an overall certainty factor {CF1, CF2, CF3, …, CFm} which provides a likelihood value that the event is present. The certainty factor CFj for an event Ej is a value representing a measure of the probability that the event is occurring or is present. For example, a symptom S1 can be related to a measurement 203 representing temperature. An event E1 can represent the presence of influenza in a person (in which case the 'environment' 100 here is the person, and a sensor 201 can be a temperature sensor for example). Accordingly, for an event E1, if the measurement of temperature is relatively high, then the certainty factor CF1 for E1 can be a higher value in order to reflect the fact that the measured temperature is an indicative symptom of E1. According to an example, certainty factors range from -1 to +1, and can take any value in between. A value of -1 indicates that an event is not occurring or is not present. A value of +1 indicates that an event is occurring or is present. A value for a certainty factor below 0.2, for example, can represent an indeterminate likelihood, or provide a level of certainty suggesting that the event is not present. This can be varied as desired to provide any minimum threshold value below which an event is considered to be absent.
[0023] According to an example, events can be defined by multiple symptoms. Accordingly, in the presence of multiple symptoms, individual certainty factors or measures for symptoms are combined to provide an overall likelihood value for an event in the presence of the symptoms. According to an example, symptoms within an event are independent of one another. For example, pairs of values a, b for symptoms of an event can be combined to provide an overall certainty factor CFab according to the following relationships:
CFab = a + b - ab, for a, b >= 0
CFab = a + b + ab, for a, b < 0
CFab = (a + b) / (1 - min(|a|, |b|)), otherwise (a and b of opposite sign)
If more than two symptoms for an event exist, they can be combined in pairs (in any order) in order to arrive at a final likelihood value. Other alternatives for combining values associated with symptoms can be used.
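The relationships above can be written directly as code. The following is a minimal sketch, not code from the patent, that also reproduces the worked numbers used in paragraphs [0024] and [0025] below:

```python
# Hypothetical sketch of the certainty-factor combination rules given above.
from functools import reduce

def combine(a: float, b: float) -> float:
    """Combine two certainty measures in [-1, +1] into one."""
    if a >= 0 and b >= 0:
        return a + b - a * b
    if a < 0 and b < 0:
        return a + b + a * b
    return (a + b) / (1 - min(abs(a), abs(b)))  # opposite signs

def combine_all(values):
    """Fold any number of symptom measures pairwise."""
    return reduce(combine, values)

# Worked numbers from the description (paragraphs [0024]-[0025]):
print(round(combine(0.3, 0.5), 2))              # 0.65  -> S1 with S2
print(round(combine(0.65, -0.7), 2))            # -0.14 -> (S1+S2) with S3
print(round(combine_all([0.3, 0.5, -0.7]), 2))  # -0.14
```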
[0024] Consider the provision of three symptoms, S1, S2 and S3. Event correlator engine 205 receives the symptoms and compares them against symptoms for multiple events. An event E1 can be considered to be present if symptoms S1 and S2 are present, whilst an event E2 can be considered to be present if symptoms S1, S2 and S3 are present. However, the presence of S3 renders E1 less likely. Accordingly, event correlator engine 205 determines that either event E1 or E2 (or both) can be occurring as a result of the received symptoms. A likelihood value for events E1 and E2 varies as symptoms are received. For example, when symptom S1 is received, corresponding certainty factors for E1 and E2 are calculated.
[0025] Consider the above with the addition of numerical values. For event E1, event correlator engine 205 can store likelihood values of 0.3, 0.5 and -0.7 for symptoms S1, S2 and S3 - the values may differ for other events which include those symptoms. For event E2, event correlator engine 205 can store values of 0.3 and 0.5 for symptoms S1 and S2. According to an example, such values can be determined heuristically. Accordingly, on receipt of symptoms S1 and S2, event correlator engine 205 calculates that an overall likelihood value for E1 or E2 is 0.65 using the relationships given above. On receipt of S3, the value for E1 is revised to -0.14. Accordingly, the overall likelihood of event E2 is higher than that for E1. This is because:
E2[S1 + S2] = 0.3 + 0.5 - (0.3 × 0.5) = 0.65
E1[(S1 + S2) + S3] = (0.65 - 0.7) / (1 - 0.65) ≈ -0.14
[0026] A calculated likelihood value is used by action engine 206 to provide a set of appropriate actions 209 to mitigate, or otherwise obviate, the effects of the event or events which may be occurring or be present in an environment 100. According to an example, a user can be presented with an indication of events, actions and an associated certainty factor. In the example given above, a user can be presented with an indication that, as a result of symptoms S1, S2 and S3, event E2 may be present with a certainty factor of 0.65 or 65%, and event E1 may be present with a certainty factor of -0.14 or -14%. Appropriate actions for each event can be presented. The user is then equipped to make a decision, on the basis of the information, as to a mitigating action to take. According to an example, an action can be executed automatically. For example, in certain circumstances a user's reaction may not be fast enough to determine and deploy a desired action in such a way as to mitigate the effects of the event. For example, if there is a high likelihood that a particular piece of malware is in a system, the damage which may be caused (such as by the infection of various other machines operating on the system) may be greater if a mitigating action is not deployed promptly. In such cases, a suitable mitigating action can be automatically deployed. Accordingly, if the probability of the existence of a particular event is above a certain threshold value, or if a particular type of event is determined to be present (such as a particularly infectious piece of malware for example), an action can be automatically deployed. Selection of actions can then revert to a user.
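A minimal sketch of the automatic-deployment policy just described; the threshold value and the set of fast-spreading event types are assumptions for illustration:

```python
# Hypothetical sketch of the auto-deployment policy described above.
AUTO_DEPLOY_THRESHOLD = 0.9                      # assumed threshold value
FAST_SPREADING = {"worm", "infectious-malware"}  # assumed event classes

def should_auto_deploy(certainty: float, event_type: str) -> bool:
    """Deploy without user input when likelihood or event type demands it."""
    return certainty >= AUTO_DEPLOY_THRESHOLD or event_type in FAST_SPREADING

if should_auto_deploy(0.95, "worm"):
    print("deploying mitigating action automatically")
else:
    print("presenting actions to user for selection")
```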
[0027] As described, a mitigating action 211 can cause a set of measurements, and thus a set of symptoms, to change, thereby altering a value for a likelihood of an event. For example, if S1 relates to increased network traffic from a particular machine on a network and is related to an event which requires some mitigation, and according to action engine 206 an appropriate mitigating action is to remove that machine from the network, removal of the machine will reduce or eliminate the symptom, thereby altering the value for the event. More specifically, after removal, a measurement will result in an indication of a reduction in network traffic from the machine.
[0028] According to an example, a process for defining and combining symptoms can be conditional such that, for example, the certainty factor for one symptom can be dependent on the certainty factor for another symptom. For example, conditional probabilities can be used for symptoms, which can be combined to form a measure for an overall certainty factor (or belief) in an event. A suitable combinatorial scheme which can be used is the Dempster-Shafer rule of combination for example.
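For reference, here is a minimal sketch of the Dempster-Shafer rule of combination mentioned above, applied to toy mass functions over a two-event frame; the mass values are illustrative assumptions, not data from the patent:

```python
# Hypothetical sketch: Dempster's rule of combination for two mass functions.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset hypotheses."""
    combined, conflict = {}, 0.0
    for (h1, v1), (h2, v2) in product(m1.items(), m2.items()):
        inter = h1 & h2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2          # mass assigned to conflicting pairs
    norm = 1.0 - conflict                # renormalize away the conflict
    return {h: v / norm for h, v in combined.items()}

E1, E2 = frozenset({"E1"}), frozenset({"E2"})
both = E1 | E2                           # "either event" hypothesis
m_a = {E1: 0.6, both: 0.4}               # belief from one symptom
m_b = {E2: 0.5, both: 0.5}               # belief from another symptom
print(dempster_combine(m_a, m_b))
```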
[0029] Figure 4 is a schematic block diagram of a process for selecting an action according to an example. Symptoms 207 received by event correlator engine 205 determine an event or multiple events for which multiple actions are available in action engine 206. That is to say, in response to a set of symptoms 207, engine 205 determines an event or multiple events which may be present in environment 100 by matching symptoms to possible events. A match for at least one symptom causes the corresponding event to be displayed on a graphical user interface (GUI) 400.
[0030] Associated with each displayed event is an action tree or graph structure 403 which embodies a set of actions which can be taken in response to user input for the event, and which are designed to mitigate the effects of the event, or to remove the event. The graph structure is an action flow which represents a set of actions to be taken to mitigate an event, and an order for the actions. Accordingly, GUI 400 presents an event and an associated certainty factor value for the event to a user. Other events and related certainty factor values can also be presented. GUI 400 presents an action for a user from action engine 206 associated with the event or events. The presented action is determined by the action engine 206 using the action structure 403, which selects a node in a structure for the presented event relating to a mitigating action 211 which can be taken. A user response 210 in the form of a selection for a mitigating action 211 causes the next action in structure 403 to be selected. The selected mitigating action 211 has an effect 405 on environment 100, which may cause the symptoms 207 to change. The change may mean that the event is no longer present (e.g. the mitigating action 211 is to remove a machine from environment 100, thereby removing the symptom which caused the event to be presented). Accordingly, GUI 400 can be updated so that the event is no longer presented.
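A minimal sketch of the action tree or graph structure 403 and the select-next-action behaviour described in paragraph [0030]; the node fields and traversal convention are assumptions for illustration:

```python
# Hypothetical sketch: an action structure whose nodes are mitigating actions.
from dataclasses import dataclass, field

@dataclass
class ActionNode:
    description: str
    children: list = field(default_factory=list)  # next actions in the flow

root = ActionNode("Quarantine affected machine", children=[
    ActionNode("Run malware scan", children=[
        ActionNode("Re-image machine"),
    ]),
    ActionNode("Patch and return to network"),
])

# A user selection advances to the next node in the structure.
current = root
print("present:", current.description)
choice = 0                      # index returned by the GUI selection
current = current.children[choice]
print("present:", current.description)
```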
[0031] According to an example, action engine 206 can provide measurements 203 from which symptoms 207 can be derived. More specifically, action engine 206 can provide measurements to indicate that an event has been dealt with - this can be desirable in, for example, the case that a mitigating action 211 will take some time to cause an effect 405 which would then alter measurements, and thus symptoms 207, to the extent that the event is considered to be mitigated. For example, if a mitigating action is such that it may take anywhere from the order of several hours to several weeks to play out, event correlator engine 205 is likely to continue receiving the symptoms 207 which caused the provision of the mitigating action 211 in the first place. Accordingly, a mechanism to 'close the loop' can be provided in which the action engine can provide a measurement or multiple measurements indicating that a selected mitigating action has been deployed.
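A minimal sketch of the 'close the loop' mechanism in paragraph [0031]; the measurement record format and names are hypothetical choices:

```python
# Hypothetical sketch: action engine emits a synthetic "deployed" measurement
# so the correlator can discount symptoms while a slow mitigation plays out.
measurements = []   # stream consumed by the event correlator engine

def deploy(action_id: str):
    # ... trigger the real mitigating action here ...
    measurements.append({"type": "action-deployed", "action": action_id})

deploy("remove-machine-17")
print(measurements)  # correlator treats lingering symptoms as in-progress
```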
[0032] According to an example, a symptom can provide an indication that an event is more likely than another event. For example, consider two possible events relating to the possible presence of i) a known piece of prolific malware in an environment and ii) a less prolific piece of malware (or 0-day). The known threat can be favored - for example, by including a symptom for the less well known threat (event) in which there is a positive likelihood value for the known threat. Accordingly, possible presence of the known threat will cause the relative likelihood of the less prolific threat to be reduced. Alternatively, the known threat can include a symptom with a negative value for the less prolific threat.
[0033] According to an example, multiple events defined by a set of symptoms 207 therefore have associated actions which can be taken to mitigate the effects of the events in an environment. The event(s), certainty factors associated with the event(s) and corresponding action(s) are presented to a user using a GUI 400. A user can select a mitigating action 211 to be taken, which can cause symptoms to change. In response to the mitigating action 211, events may no longer be present, or may have their certainty factor adjusted. The user selection can cause the next node in the action structure 403 to be presented using GUI 400 in the case that the associated event does not get deleted (as a result of action 211 for example). In the case that an event is not present (perhaps because the overall certainty value for the event is below a predetermined threshold as described above, or because a mitigating action has removed the event), GUI 400 can be updated to remove any indication that the event was present.
[0034] According to an example, an environment 100 can be a virtualized environment, such as a cloud computing environment, or a hardware device arranged to use multiple VMs for example. A mitigation engine 204 can be provided in such a virtual machine so that it is isolated from the rest of the environment. Figure 5 is a block diagram of a virtualized environment according to an example. A virtual machine monitor (VMM) 501 lies above a physical hardware infrastructure 500. Infrastructure 500 typically includes a number of processors 507, which can be multi-core processors, as well as volatile memory 508 such as RAM for example, network interface hardware 509, storage 510 such as hard disk storage for example, graphics processing hardware 511 such as multiple graphics processors and so on, all of which can communicate using a bus 530. VMs 502, 503 can be instantiated using VMM 501 and are allocated hardware from infrastructure 500. For example, VMs 502, 503 can be allocated multiple cores from processors 507 depending on the tasks they are destined to perform. A number of smaller VMs 504 (in terms of resources allocated, and/or capability) can be instantiated by VMM 501. VMs 504, 506 are virtual appliances which are used to monitor the VMs 502, 503 using memory introspection according to an example. An environment with multiple VMs such as that shown in figure 5 can be provided as a cell in a cloud computing environment for example. Alternatively, in a smaller scale environment, the system of figure 5 can be provided on a hardware platform such as a laptop or desktop computer, or other suitable hardware.
[0035] VM 502 can operate as a mitigation engine 204 by receiving measurements 203 via VMM 501, and presenting actions using GUI 400 to a user over a network using hardware 509 or via a display (not shown) using hardware 511 for example.
[0036] According to an example, a privileged (Dom0) VM can include device drivers which enable the use of physical resources by any of the VMs. Accordingly, a network monitor, in which network activity (and other activity such as disk and memory access activity) is observed, can be provided by the Dom0 VM. For example, as data packets pass through the Dom0 to the physical hardware, they can be inspected to determine whether they are legitimate or malicious. The network monitor can be used to provide measurements 203. For example, if a threat tries to set up a TCP connection with an IP address which is outside of the range known to be permitted (such as a range of IP addresses in a company network, such as those in the form 16.xx.xxx.x for example), this may constitute suspicious behavior which may form a symptom of an event for which certain mitigating actions should be performed.
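As a concrete sketch of such a check - only the 16.xx.xxx.x form comes from the example above; the 16.0.0.0/8 interpretation of that form, the function name and the symptom label are assumptions - the standard Python ipaddress module could be used to derive the symptom:

    # Sketch of the outbound-connection check described in paragraph [0036].
    # The /8 prefix is an assumed reading of the 16.xx.xxx.x example.
    import ipaddress

    PERMITTED = [ipaddress.ip_network("16.0.0.0/8")]

    def connection_symptom(dst_ip):
        """Return a symptom label if a TCP connection target is suspicious."""
        addr = ipaddress.ip_address(dst_ip)
        if not any(addr in net for net in PERMITTED):
            return "tcp-connection-outside-permitted-range"
        return None

    print(connection_symptom("16.12.34.5"))   # None: inside the company range
    print(connection_symptom("203.0.113.7"))  # flagged, contributes a symptom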
[0037] Figure 6 is a block diagram of a virtualized environment according to an example. In the environment of figure 6, VM 502 is replaced by a standard VM 600 (i.e. mitigation engine 204 does not reside in VM 502). Instead, mitigation engine 204 resides in hardware 500. More specifically, mitigation engine 204 can be provided as hardware, or as a machine-readable instruction module, in physical infrastructure 500.
[0038] Figure 7 is a block diagram of a mitigation system 700 according to an example. The system 700 includes a processing unit 701 (CPU), a system memory 703, and a system bus 705 that couples processing unit 701 to the various components of the system 700. The processing unit 701 typically includes one or more processors, each of which may be in the form of any one of various commercially available processors. The system memory 703 typically includes a read only memory (ROM) that stores a basic input/output system (BIOS) containing start-up routines for the system 700, and a random access memory (RAM). The system bus 705 may be a memory bus, a peripheral bus or a local bus, and may be compatible with any of a variety of bus protocols, including PCI(e), VESA, MicroChannel, ISA, and EISA. The system 700 also includes a persistent storage memory 707 (e.g., a hard drive (HDD), a CD-ROM drive, magnetic tape drives, flash memory devices, and digital video disks) that is connected to the system bus 705 and provides nonvolatile or persistent storage for data, data structures and computer-executable instructions.
[0039] A user may interact (e.g., enter commands or data) with system 700 using input devices 709 (e.g., a keyboard, a computer mouse, a microphone, a joystick, and a touch pad or touch sensitive display screen). Information in the form of GUI 400 may be presented to a user on the display 711 (implemented by, e.g., a display monitor which can be touch sensitive, including a capacitive, resistive or inductive touch sensitive surface for example), which is controlled by a display controller 713 (implemented by, e.g., a video graphics card). The system 700 also typically includes peripheral output devices, such as speakers and a printer. Multiple remote computers may be connected to the system 700 through a network interface card (NIC) 715. Alternatively, system 700 can upload retrieved data, or a pointer thereto, to a remote storage service such as a cloud based service for example. Sensors 201 communicate with other parts of system 700 via bus 705.
[0040] As shown in figure 7, the system memory 703 also stores mitigation engine 204, and processing information 717 that includes measurements 203, symptoms 207 and actions 209. According to an example, the mitigation engine 204 interfaces with a graphics driver to present a user interface on the display 711 for managing and controlling the operation of the system 700, such as for providing a response 210.
[0041] Figure 8 is a schematic block diagram of a user interface module according to an example. The user interface module 800 includes a main body 801 which can be presented to a user using the display 711 as described above with reference to figure 7, and which can be part of GUI 400. The main body 801 includes multiple display child windows 803-810, each of which includes information for a user in order to allow the user to make an informed choice about whether or not to deploy a mitigating action. Accordingly, window 803 can provide an indication of the event or threat, such as by naming a piece of malware determined to be present for example (e.g. "botnet-1"). Windows 804 and 805 can respectively provide data representing an owner of an affected system, and an identification of the affected system or device. Window 806 within the main body 801 can provide an overall certainty value or probability for the event of window 803, and window 807 can provide an indication that the event - in view of the certainty factor in 806 for example - is high or low risk. For example, if the certainty factor in 806 is greater than a predetermined value, a high risk indication can be provided in 807 (such as by providing a graphic and/or a textual indication for example). Child window 808 can provide data representing an action to be taken, with windows 809, 810 providing a user with convenient yes/no selection buttons to enable an action presented in window 808 to be deployed or rejected.
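A minimal sketch of how the data behind child windows 803-810 might be assembled follows; the dataclass, its field names and the 0.8 value used for the predetermined high-risk threshold of window 807 are assumptions made for illustration only:

    # Hypothetical record backing child windows 803-810. The 0.8 high-risk
    # threshold is an assumed stand-in for the predetermined value of 807.
    from dataclasses import dataclass

    HIGH_RISK_THRESHOLD = 0.8

    @dataclass
    class EventPanel:
        event: str        # window 803, e.g. "botnet-1"
        owner: str        # window 804, owner of the affected system
        system_id: str    # window 805, the affected system or device
        certainty: float  # window 806, overall certainty value

        @property
        def risk(self) -> str:  # window 807
            return "HIGH" if self.certainty >= HIGH_RISK_THRESHOLD else "LOW"

    panel = EventPanel("botnet-1", "j.smith", "host-42", 0.91)
    print(panel.event, panel.risk)  # the action of 808 then awaits yes/no via 809/810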
[0042] Although figure 8 represents a specific layout for a GUI module according to an example, it should be noted that alternative layouts can be provided, including layouts in which less or more information is provided. Accordingly, although the module of figure 8 is considered to be a suitable layout for a GUI module, it is not intended to be limiting in terms of design or content.

Claims

What is claimed is:
1. A mitigation system for deploying an action to mitigate an event, comprising:
multiple sensors to provide measurements for the system;
an event correlator engine to derive multiple symptoms from the measurements and to use the symptoms to determine a likelihood value for the existence of the event;
an action engine to provide multiple actions for mitigating the event in response to the likelihood value; and
a user interface to provide a selection mechanism for a user of the system to select a mitigating action to be deployed by the action engine.
2. A mitigation system as claimed in claim 1, wherein the event correlator engine is further operable to assign a certainty value to respective ones of the symptoms and to use the certainty values to determine the likelihood value.
3. A mitigation system as claimed in claim 1, the user interface to provide an indication of the presence of multiple other events and their corresponding likelihood values along with multiple actions for mitigating the multiple other events.
4. A mitigation system as claimed in claim 1, wherein the action engine provides multiple actions using an action structure for the event including a set of actions which can be selected by the user and which are designed to mitigate the effects of the event.
5. A mitigation system as claimed in claim 1, comprising a mitigation engine to receive the measurements and to derive symptoms for the event correlator engine therefrom according to a range of values of the measurements or end points of a range of values for the measurements.
6. A method for mitigating a threat by observing multiple symptoms in a computer system in program execution from outside of a host virtual machine, including:
receiving multiple measurements from the virtual machine;
using the measurements to derive symptoms of a threat;
calculating a likelihood value of a threat using the symptoms by combining multiple values for symptoms associated with the threat;
using the likelihood value to determine a set of multiple actions for mitigating the threat; and
deploying a mitigating action in the system in response to user input.
7. A method as claimed in claim 6, wherein deploying a mitigating action includes presenting a user with multiple options for selection derived using the symptoms.
8. A method as claimed in claim 6, wherein a mitigating action is deployed automatically in response to a likelihood value for a threat above a predetermined threshold value.
9. A method as claimed in claim 6, wherein measurements are used to derive symptoms of a threat using ranges or thresholds for measurements each of which is indicative of a symptom.
10. A method as claimed in claim 6, wherein multiple measurements are received from the virtual machine using a second virtual machine of the system to generate measurements using memory introspection.
11. A machine-readable medium storing machine-readable instructions arranged to be executed on a machine to cause a mitigation system to:
receive multiple measurements from a set of sensors in an environment;
derive a set of symptoms for multiple events using the measurements;
match the symptoms to an event and calculate a measure for the likelihood of the presence of the event in the environment;
provide an action for selection by a user for mitigating the event; and
deploy the action in the environment in response to selection.
12. The machine-readable medium as claimed in claim 11, further including instructions to cause a mitigation system to:
present the action for selection to the user using a graphical user interface, along with an indication of the likelihood of the event.
13. The machine-readable medium as claimed in claim 11, further including instructions to cause a mitigation system to:
deploy the action in the environment automatically in response to a measure for the likelihood greater than a predetermined threshold.
14. The machine-readable medium as claimed in claim 11, further including instructions to cause a mitigation system to:
determine a set of actions suitable for mitigating the multiple events using a repository of actions arranged in the form of respective graphs representing action flows for events.
15. The machine-readable medium as claimed in claim 11, further including instructions to cause a mitigation system to:
provide a measurement indicative that an action has been deployed in the environment.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/068344 WO2012069094A1 (en) 2010-11-26 2010-11-26 Mitigation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/068344 WO2012069094A1 (en) 2010-11-26 2010-11-26 Mitigation system

Publications (1)

Publication Number Publication Date
WO2012069094A1 (en)

Family

ID: 44343923

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/068344 WO2012069094A1 (en) 2010-11-26 2010-11-26 Mitigation system

Country Status (1)

Country Link
WO (1) WO2012069094A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6563300B1 (en) * 2001-04-11 2003-05-13 Advanced Micro Devices, Inc. Method and apparatus for fault detection using multiple tool error signals
US20040093513A1 (en) * 2002-11-07 2004-05-13 Tippingpoint Technologies, Inc. Active network defense system and method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Information Assurance in Computer Networks", vol. 2052, 1 January 2001, SPRINGER BERLIN HEIDELBERG, Berlin, Heidelberg, ISBN: 978-3-54-042103-0, article DIPANKAR DASGUPTA ET AL: "An Intelligent Decision Support System for Intrusion Detection and Response", pages: 1 - 14, XP055004414 *
F CUPPENS, A MIÈGE: "CRIM: An Approach to Correlate Alerts andRecognize Malicious Intentions", RTO IST SYMPOSIUM ON REAL TIME INTRUSION DETECTION, 27 May 2002 (2002-05-27), Estoril, Portugal, XP002656594, Retrieved from the Internet <URL:http://citeseerx.ist.psu.edu> [retrieved on 20110803] *
H.A.M. LUIIJF, R. COOLEN: "Intrusion Detection Introduction and Generics", RTO IST SYMPOSIUM ON REAL TIME INTRUSION DETECTION, 27 May 2002 (2002-05-27), Estoril, Portugal, XP002656595, Retrieved from the Internet <URL:http://citeseerx.ist.psu.edu> [retrieved on 20110803] *
KEMMERER R A ET AL: "A Comprehensive Approach to Intrusion Detection Alert Correlation", IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 1, no. 3, 1 March 2004 (2004-03-01), pages 146 - 169, XP011123180, ISSN: 1545-5971, DOI: 10.1109/TDSC.2004.21 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10659485B2 (en) 2017-12-06 2020-05-19 Ribbon Communications Operating Company, Inc. Communications methods and apparatus for dynamic detection and/or mitigation of anomalies
US10666798B2 (en) 2017-12-06 2020-05-26 Ribbon Communications Operating Company, Inc. Methods and apparatus for detection and mitigation of robocalls
WO2020014687A1 (en) * 2018-07-13 2020-01-16 Ribbon Communications Operating Company, Inc. Communications methods and apparatus for dynamic detection and/or mitigation of threats and/or anomalies
US10931696B2 (en) 2018-07-13 2021-02-23 Ribbon Communications Operating Company, Inc. Communications methods and apparatus for dynamic detection and/or mitigation of threats and/or anomalies
US11902311B2 (en) 2018-07-13 2024-02-13 Ribbon Communications Operating Company, Inc. Communications methods and apparatus for dynamic detection and/or mitigation of threats and/or anomalies

Similar Documents

Publication Publication Date Title
US11126716B2 (en) System security method and apparatus
JP6364547B2 (en) System and method for classifying security events as targeted attacks
US10311235B2 (en) Systems and methods for malware evasion management
US10936717B1 (en) Monitoring containers running on container host devices for detection of anomalies in current container behavior
TWI564713B (en) Signature-independent, system behavior-based malware detection
CN110383278A (en) The system and method for calculating event for detecting malice
CN107810504B (en) System and method for determining malicious download risk based on user behavior
US9774614B2 (en) Methods and systems for side channel analysis detection and protection
US9838405B1 (en) Systems and methods for determining types of malware infections on computing devices
US9479357B1 (en) Detecting malware on mobile devices based on mobile behavior analysis
JP6690646B2 (en) Information processing apparatus, information processing system, information processing method, and program
US10003606B2 (en) Systems and methods for detecting security threats
US20170185785A1 (en) System, method and apparatus for detecting vulnerabilities in electronic devices
US10165002B2 (en) Identifying an imposter account in a social network
US20160330217A1 (en) Security breach prediction based on emotional analysis
CN105027135A (en) Distributed traffic pattern analysis and entropy prediction for detecting malware in a network environment
JP6405055B2 (en) SPECIFIC SYSTEM, SPECIFIC DEVICE, AND SPECIFIC METHOD
US11669779B2 (en) Prudent ensemble models in machine learning with high precision for use in network security
JP2017211978A (en) Business processing system monitoring device and monitoring method
CN114329374A (en) Data protection system based on user input mode on device
US10291644B1 (en) System and method for prioritizing endpoints and detecting potential routes to high value assets
JP2018077607A (en) Security rule evaluation device and security rule evaluation system
WO2019101087A1 (en) Slow-disk detection method, and storage array
JP6383445B2 (en) System and method for blocking access to protected applications
WO2012069094A1 (en) Mitigation system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10785053

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10785053

Country of ref document: EP

Kind code of ref document: A1