US9794279B2 - Threat indicator analytics system - Google Patents

Threat indicator analytics system

Info

Publication number
US9794279B2
US9794279B2 (application US14/473,910)
Authority
US
United States
Prior art keywords
compromise
potential
threat
indicator
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/473,910
Other versions
US20160269434A1 (en)
Inventor
Louis William DiValentin
Matthew Carver
Michael L. Lefebvre
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Accenture Global Services Ltd
Original Assignee
Accenture Global Services Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Accenture Global Services Ltd filed Critical Accenture Global Services Ltd
Priority to US14/473,910 priority Critical patent/US9794279B2/en
Priority to AU2015203086A priority patent/AU2015203086B2/en
Assigned to ACCENTURE GLOBAL SERVICES LIMITED reassignment ACCENTURE GLOBAL SERVICES LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CARVER, MATTHEW, DiValentin, Louis William, Lefebvre, Michael L.
Priority to EP15171735.2A priority patent/EP2955895B1/en
Publication of US20160269434A1 publication Critical patent/US20160269434A1/en
Priority to US15/782,498 priority patent/US10021127B2/en
Application granted granted Critical
Publication of US9794279B2 publication Critical patent/US9794279B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/14 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1433 - Vulnerability analysis
    • H04L63/1441 - Countermeasures against malicious traffic

Definitions

  • the present disclosure relates to security and network operations.
  • one innovative aspect of the subject matter described in this specification can be embodied in methods for analyzing threat intelligence information, including: receiving, by a threat information server, threat intelligence information from one or more intelligence feeds and generating one or more identified security threats; identifying a compromise by a management process orchestration server, retrieving information from the threat information server, and identifying one or more actions to be performed; determining, by an indicator analytics processor, a composite credibility based on the actions, determining one or more components for profiling, and determining indicators of compromise for each component; and communicating the indicators of compromise to the management process orchestration server.
  • the management process orchestration server can analyze the indicators of compromise and generate a response process.
  • the management process orchestration server can execute the response process by communicating instructions to a network switching controller.
  • identifying a compromise to a system; performing a snapshot of the system and, based at least in part on the snapshot, identifying one or more potential indicators of compromise; determining that one or more potential indicators of compromise are potential threat indicators; and, for each potential indicator of compromise that is a potential threat indicator: identifying one or more corresponding actions performed by the system, determining a credibility of each action performed by the system, determining a composite credibility of the potential indicator of compromise based on the credibility of each action, and determining that the potential indicator of compromise is an actual threat indicator based on the composite credibility.
  • Each of the potential indicators of compromise can be associated with a system process or a presence of a file on the system. Determining that the potential indicators of compromise are potential threat indicators can be based on matching the potential indicators of compromise with stored security threat information.
  • identifying one or more potential indicators of compromise can include analyzing the snapshot to identify one or more of currently running processes, recently ended processes, or recently modified objects.
  • Identifying one or more actions performed by the system can include identifying actions related to one or more of process spawning, file access or modification, or registry access or modification.
  • Determining the credibility of each action performed by the system can include determining a credibility score for each action in regard to the system process.
  • Determining the composite credibility of the potential indicator of compromise can include determining a composite credibility score for the potential indicator of compromise by accessing a model that combines the credibility scores for the actions.
  • the model can include interaction terms between the actions, to multiple degrees.
  • the model can include a time decay function between actions. Determining that the potential indicator of compromise is an actual threat indicator can include determining that the composite credibility score for the potential indicator of compromise meets a predetermined threshold. An actual security threat indicator can be prioritized, based at least in part on its potential effectiveness in preventing or mitigating a security threat.
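A minimal sketch of the time-decay function and the threshold test described above follows; the half-life and the 0.7 threshold are illustrative assumptions, not values from the specification.

```python
import math

def decayed_weight(seconds_between_actions, half_life=300.0):
    # Exponential time decay between actions: actions far apart in time
    # reinforce each other less than actions in quick succession.
    # half_life (in seconds) is an assumed tuning parameter.
    return 0.5 ** (seconds_between_actions / half_life)

def is_actual_threat_indicator(composite_score, threshold=0.7):
    # A potential indicator of compromise is promoted to an actual threat
    # indicator once its composite credibility meets a preset threshold.
    return composite_score >= threshold
```

With these assumptions, two actions observed one half-life apart contribute half the weight of simultaneous actions.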
  • FIG. 1 is a block diagram illustrating an exemplary computing environment in accordance with the present disclosure.
  • Computer networks can defend against evolving cyber-attacks. Indicators of compromise can be operationalized to prevent the spread of a threat internally. Threat actors can be profiled and possible motivations against an organization can be determined. Responses to threats can be automated, and systems and processes for providing mitigations can be coordinated. Organizations can share information related to potential threats.
  • FIGS. 1 & 2 depict example systems that can execute implementations of the present disclosure.
  • FIGS. 3-5 depict example processes that can be executed in accordance with implementations of the present disclosure.
  • FIG. 6 is a block diagram of a computing system that can be used in connection with computer-implemented methods described in this document.
  • analyzing threat indicators may include aggregating threat activity identified within a computer network and applying analytics to gain actionable insights into security threats. Predictive analytics may be used to determine threat indicators which can be used to improve an organization's security posture.
  • deception networks can combine network agility, deep packet inspection, and honeypots to identify indicators of network compromise, to contextualize internal threat intelligence, and to automatically apply mitigations using infrastructure orchestration.
  • incident responses may include incident discovery and incident remediation processes.
  • Active defense techniques for network security may include orchestrating incident response and threat management systems, incorporating deception network capabilities, and leveraging software-defined networking (SDN). Predetermined courses of action may be established, based on the processes within a security organization. Process changes may be implemented automatically or semi-automatically through orchestration.
  • deception networks, decoy resources, anti-reconnaissance, and resource shifting may be used.
  • an adversary can be deceived by generating false network topologies and baiting the adversary into a honeypot (e.g., a computer, data, and/or network site that appears to be part of a network and to include information of value, but is sandboxed and monitored).
  • a sandboxed environment can provide a contained set of resources that permits actions by a security threat, with minimal or no permanent effect to the environment and/or to an organization's network.
  • Information gathered from observation of the adversary's behavior may be used to proactively mitigate threats to an organization.
  • Security response and mitigation capabilities can be embedded into an agile and adaptive infrastructure. By leveraging application and network virtualization capabilities, for example, an organization can meet the adversary's tactics.
  • FIG. 1 depicts an example system 100 that can execute implementations of the present disclosure.
  • the system 100 includes multiple hardware and/or software components (e.g., modules, objects, services) including a threat intelligence component 102 , a security information and analytics component 104 , and a defense component 106 .
  • Two or more of the components 102 , 104 , and 106 may be implemented on the same device (e.g., same computing device), or on different devices, such as devices included in a computer network or a peer-to-peer network. Operations performed by each of the components 102 , 104 , and 106 may be performed by a single computing device, or may be distributed to multiple devices.
  • the threat intelligence component 102 can receive information from one or more intelligence feeds 110 .
  • the intelligence feeds 110 may include feeds from commercial, government, and/or peer sources.
  • a peer source may be associated with a similar type of organization (e.g., a similar type of business, such as a business in a similar sector or industry) as an organization that hosts, maintains, or operates the system 100 .
  • a peer source may be associated with a system that includes one or more components (e.g., operating systems, databases, applications, services, and/or servers) that are similar to components of the organization's network.
  • a peer source may also be another organization that hosts, maintains, or operates a system similar in structure to system 100 (e.g., one with one or more similar components, such as, for instance, network components of the same or similar make or model, or running a same or similar version of operating software, or operating a same or similar version of application software).
  • a service may receive security threat information from multiple peer sources, and provide aggregated intelligence feeds in return to each of the sources.
  • the intelligence feeds 110 can be provided by a peer exchange service 112 which receives threat information from multiple peers (e.g., including the system 100 ), and which aggregates and provides the information in return to each of the peers, thus facilitating the sharing of security threat information among peer organizations.
  • Each of the intelligence feeds 110 , and internal intelligence 114 from the system 100 may include information associated with one or more security threats.
  • the threat intelligence component 102 can identify key indicators and observables associated with each of the threats. Indicators and observables may include, for example, names, identifiers, and/or hashes of processes, objects, files, applications, or services, Internet Protocol (IP) addresses of devices, registry keys to be accessed or modified, user accounts, or other suitable indicators and observables of a security threat.
  • security threat information from multiple feeds may be consolidated and/or normalized. For example, a consolidated view of multiple feeds may include a list of indicators and observables upon which action may be taken.
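A consolidated, normalized indicator list of the kind described above might be assembled as in this sketch; the record fields (`type`, `value`, `source`) are assumed for illustration and do not come from the patent.

```python
def consolidate_feeds(feeds):
    """Merge indicator records from several intelligence feeds into a
    single de-duplicated, normalized list keyed by (type, value)."""
    merged = {}
    for feed in feeds:
        for record in feed:
            # Normalize the observable value so duplicates across feeds collapse.
            key = (record["type"], record["value"].strip().lower())
            entry = merged.setdefault(
                key, {"type": key[0], "value": key[1], "sources": set()}
            )
            entry["sources"].add(record["source"])
    return sorted(merged.values(), key=lambda e: (e["type"], e["value"]))
```

Tracking the contributing sources per indicator also supports the peer-exchange aggregation described earlier.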
  • Insights 116 , 118 based on the security threat information can be provided by the threat intelligence component 102 to the security information and analytics component 104 and/or to the defense component 106 .
  • insights 116 can include information associated with key indicators and observables for previously occurring security threats, such as information that indicates that suspicious activity has been observed in association with a particular executable that has been used by particular instances of malware and by particular threat actors.
  • insights 118 provided to the defense component 106 may include information associated with an incident response plan and one or more mitigating controls. For example, based on one or more indicators of compromise identified by the intelligence component 102 , an appropriate plan of action may be generated, selected, and implemented to respond to a corresponding security threat.
  • the security information and analytics component 104 may be supported by security information and event management (SIEM), analytics, and visualization capabilities.
  • the security information and analytics component 104 can monitor one or more internal data sources, and can map threat indicators to predefined courses of action to take in response to various security threats.
  • the security information and analytics component 104 can receive information from internal network data sources 120 (e.g., including information technology data sources 122 , operational technology data sources 124 , and/or physical data sources 126 ), and can provide monitoring of patterns and anomalies indicative of threat activity within an organization, as informed by the insights 116 provided by the threat intelligence component 102 .
  • the security information and analytics component 104 can modify its threat monitoring process.
  • when the security information and analytics component 104 detects a known pattern of events (e.g., related to the insights 116 , such as the existence of a file, an activity on an IP address, or another indicator or observable action), it can record an incident, trigger one or more requests for responses, and provide the response requests to the defense component 106 .
  • Response requests may include incident activity information 130 , for example, which may include information related to appropriate handling of a security threat or security breach, such as removal of a file, reporting of an IP address, upgrading of software, or another appropriate course of action.
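The pattern-match-then-request flow could be sketched as follows, assuming a simple event schema and an indicator-to-action lookup; none of the field names come from the patent.

```python
def build_response_requests(events, known_indicators):
    """Match observed events against known threat indicators and emit
    incident-activity records for the defense component.
    known_indicators: {(indicator_type, indicator_value): suggested_action}."""
    requests = []
    for event in events:
        key = (event["indicator_type"], event["indicator_value"])
        if key in known_indicators:
            requests.append({
                "incident": event["id"],
                "indicator": event["indicator_value"],
                # Suggested handling, e.g. file removal or IP reporting.
                "action": known_indicators[key],
            })
    return requests
```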
  • the defense component 106 may be supported by orchestration services 140 (e.g., including security orchestration services 142 and/or infrastructure orchestration services 144 ), which can set policies and can automate threat management workflows.
  • the security orchestration services 142 can maintain an ontology regarding actions to be performed in response to particular security threats or breaches, whereas the infrastructure orchestration services 144 can maintain information related to mitigations (e.g., infrastructure changes by a software-defined networking controller) to be performed.
  • the defense component 106 can provide automated or semi-automated infrastructure changes and service management ticketing to mitigate the impact of identified threats or breaches.
  • the defense component 106 can perform particular actions in response to particular indicators, such as blocking an IP address, blocking a process executed by an endpoint, reporting to data loss/leak prevention (DLP) when a particular document is opened, redirecting traffic, or another appropriate action.
  • the defense component 106 can cause a predefined course of action to be executed, including using the orchestration services 140 to determine whether a uniform resource locator (URL) included in an e-mail is malicious, and if so, to block access to the URL and to generate a workflow request to remove the malicious e-mail from a recipient's mailbox.
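The e-mail URL course of action above can be sketched as a small playbook step; the three injected callables stand in for whatever reputation, blocking, and ticketing services the orchestration services 140 actually expose, and are assumptions for illustration.

```python
def handle_email_url(url, check_url_reputation, block_url, open_removal_ticket):
    """If the URL found in an e-mail is malicious, block access to it and
    open a workflow request to purge the message from mailboxes.
    The callables are injected so the sketch stays service-agnostic."""
    if check_url_reputation(url) == "malicious":
        block_url(url)
        open_removal_ticket(url)
        return "blocked"
    return "allowed"
```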
  • the defense component 106 can use the orchestration services 140 to modify software-defined networking (SDN) settings to reroute network traffic associated with the attack.
  • one or more automated incident response components 108 may be distributed among one or more of the security information and analytics component 104 , the defense component 106 , and/or the orchestration services 140 .
  • the automated incident response components 108 include a response selector 132 , a notification provider 134 , and a response implementer 136 .
  • the response selector 132 can select an appropriate strategy for responding to an identified incident (e.g., a security threat or breach), based on comparing the incident to a predefined ontology.
  • the notification provider 134 , for example, can optionally provide information associated with the identified incident to an operator to facilitate a semi-automated response.
  • the response implementer 136 can implement the selected response strategy by implementing one or more steps indicated by the predefined ontology. Operations performed by each of the components 132 , 134 , and 136 may be performed by a single computing device, or may be distributed to multiple devices.
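In miniature, the selector's ontology lookup might look like this; the incident categories and step names are invented for illustration, with a plain dictionary standing in for whatever knowledge representation the orchestration services maintain.

```python
RESPONSE_ONTOLOGY = {
    # incident category -> ordered mitigation steps (illustrative)
    "malicious_ip": ["block_ip", "report_ip"],
    "malicious_process": ["kill_process", "quarantine_host"],
}

def select_response(incident_category, ontology=RESPONSE_ONTOLOGY):
    """Pick the predefined response steps for an identified incident,
    falling back to operator notification for unknown categories."""
    return ontology.get(incident_category, ["notify_operator"])
```

The response implementer would then execute the returned steps in order, while the notification provider surfaces them to an operator in the semi-automated case.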
  • the defense component 106 may use a deception network (e.g., discussed in further detail in association with FIG. 2 ) to mitigate a security threat and/or to gather additional intelligence related to the threat.
  • information associated with one or more threat indicators 150 can be provided to the threat intelligence component 102 and/or information associated with one or more targets 152 (e.g., suspicious processes, files, traffic sources, or other aspects to be monitored) may be provided to the security information and analytics component 104 .
  • the security information and analytics component 104 may receive response strategy information 154 from the security orchestration services 142 .
  • Insight related to internal threat intelligence 114 (e.g., indicators and observables determined from the data sources 120 and/or deception networks).
  • FIG. 2 depicts an example system 200 that can execute implementations of the present disclosure, including the implementation of a deception network.
  • FIG. 2 also illustrates an example flow of data within the system 200 during stages (A) to (K), where the stages (A) to (K) may occur in the illustrated sequence, or they may occur in a sequence that is different than in the illustrated sequence. For example, two or more of the stages (A) to (K) may occur concurrently.
  • the example system 200 includes multiple computing devices (e.g., personal computing devices, servers, server clusters) in communication over a wired and/or wireless network.
  • the system 200 includes a threat intelligence server 202 , a management and process orchestration server 204 , a software-defined networking controller 206 , and an indicator analytics server 208 .
  • Each of the devices 202 , 204 , 206 , and 208 can include one or more processors configured to execute instructions stored by computer-readable media for performing various device operations, such as input/output, communication, data processing and/or data maintenance.
  • An example computer device is described below with reference to FIG. 6 .
  • the devices 202 , 204 , 206 , and 208 can communicate over a local area network, a wireless network, a wide area network, a mobile telecommunications network, the Internet, or any other suitable network or any suitable combination thereof.
  • threat intelligence information is received by the threat intelligence server 202 .
  • a peer organization can share (e.g., via the peer exchange 112 , shown in FIG. 1 ) information associated with an IP block of addresses targeting a particular type of resource (e.g., a database server).
  • internal threat intelligence information can be provided by monitoring capabilities of a security information and event management system (e.g., included in the security information and analytics component 104 , shown in FIG. 1 ).
  • threat intelligence information is contextualized and stored.
  • the threat intelligence server 202 can contextualize and store information associated with external and/or internal security threats to provide an understanding of a threat environment.
  • contextualizing and storing information may include matching threat information identified from internal security threats to threat information identified from external security threats to supplement the information from each source.
  • applicable threat intelligence information is provided to the management and process orchestration server 204 .
  • the information can be provided as a list of key indicators and observables.
  • the management and process orchestration server 204 can receive threat intelligence information through an application programming interface (API).
  • the management and process orchestration server 204 identifies applicable actions for identified security threats, and executes courses of action. For example, the management and process orchestration server 204 can maintain information associated with predefined courses of action (e.g., a playbook) for various types of security threats. When the management and process orchestration server 204 identifies an occurrence of a security threat (e.g., via one or more data sources 120 , shown in FIG. 1 ) as matching a known threat indicator, it can execute a course of action to mitigate the particular threat.
  • the management and process orchestration server 204 can receive information indicating that a production environment 210 (e.g., a network endpoint running a database server) is in communication with a device associated with an IP address that is a known threat indicator, and can automatically execute an appropriate course of action to mitigate the threat.
  • the management and process orchestration server 204 can change the infrastructure of the system 200 automatically, and/or can manipulate security controls to interfere with an attacker.
  • process mitigation controls are provided to protect one or more endpoints.
  • the management and process orchestration server 204 may determine that the production environment 210 is at risk, and may provide instructions to the production environment to perform one or more actions, such as removing files, terminating processes, blocking communications, or other appropriate actions.
  • a snapshot may be taken for use in threat analysis and/or for use in rebuilding a session. For example, a snapshot of a current session of the production environment 210 can be taken and can be used to recreate the session in a honeypot environment 212 .
  • During stage (F), flow change information is provided to direct network topology changes.
  • the management and process orchestration server 204 can provide instructions for the software-defined networking (SDN) controller 206 to redirect network traffic intended for the production environment 210 (e.g., traffic from an attacker's IP address) to the honeypot environment 212 .
  • the software-defined networking controller 206 can facilitate on-the-fly changing of network topology from a centralized point, such that an attacker may be unaware of the change, and may perceive that communication with the production environment 210 persists while traffic is actually being diverted to the honeypot environment 212 .
  • the software-defined networking controller 206 may implement a white noise generator to report all IP addresses as being open, thus potentially confusing an attacker and causing the attacker to question reconnaissance efforts.
  • the controller 206 may implement an IP black hole, silencing the system.
  • fake topologies may be generated, to reduce the effectiveness of reconnaissance efforts.
  • targeted deep packet inspection, data manipulation, network intrusion prevention, and/or breach containment techniques may be implemented.
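In OpenFlow-style terms, the redirection described above amounts to installing a higher-priority flow rule that rewrites the output port while leaving addresses untouched, so the attacker perceives an unbroken session. The rule structure below is a generic sketch, not any specific controller's API.

```python
def make_redirect_rule(attacker_ip, production_ip, honeypot_port, priority=100):
    """Build a flow rule matching traffic from the attacker to the
    production endpoint and steering it out the honeypot-facing port."""
    return {
        "priority": priority,  # must outrank the normal forwarding rule
        "match": {"src_ip": attacker_ip, "dst_ip": production_ip},
        # The destination address is left untouched so the attacker still
        # believes the production session persists.
        "actions": [{"type": "output", "port": honeypot_port}],
    }
```

A real SDN controller would push this rule to the switch 214 and remove it once the engagement ends.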
  • the software-defined networking (SDN) controller 206 provides flow changes to a software-defined networking (SDN) switch 214 .
  • the software-defined networking switch 214 can implement policy changes related to network traffic flow.
  • the software-defined networking switch 214 redirects flow to the honeypot environment 212 .
  • the honeypot environment 212 can use process tracing techniques to identify and provide information associated with an attack, as the attack is being performed.
  • events may be generated for actions performed by the system. For example, if an attacker logs in and installs a package, the honeypot environment can identify where the user logs in from, associated process identifiers, commands performed by the system, files manipulated by the installation, registry keys that are modified, and other relevant information.
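Traced actions like the package-install example might be grouped per process for analysis; the event schema used here is an assumption, not the honeypot's actual format.

```python
from collections import defaultdict

def summarize_trace(events):
    """Group traced attacker actions by process id so analysts can see
    what each attacker-controlled process touched (files, registry keys,
    spawned children, and so on)."""
    by_process = defaultdict(list)
    for event in events:
        by_process[event["process_id"]].append((event["action"], event["target"]))
    return dict(by_process)
```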
  • During stage (I), information is provided by the honeypot environment 212 to the indicator analytics server 208 .
  • information related to an attacker's tactics, techniques, and procedures can be harvested, and sent to the indicator analytics server 208 for analysis.
  • During stage (J), the indicator analytics server 208 generates threat intelligence. For example, based on observable threat indicators identified by the honeypot environment 212 , the indicator analytics server 208 can identify one or more indicators that are potentially actionable, and that can be used to determine insights for defense against threats to the system 200 .
  • During stage (K), generated threat intelligence is provided to the threat intelligence server 202 .
  • the threat intelligence server 202 can determine whether any internally identified indicators are actionable, and can use external threat intelligence information to contextualize the information. For example, internal and external threat intelligence information can be used to map threat actors to malware to processes.
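Mapping threat actors to malware to processes can be pictured as chained lookups over two tables built from combined internal and external intelligence; the actor, malware, and process names below are made up for illustration.

```python
def map_actor_to_processes(actor, actor_to_malware, malware_to_processes):
    """Follow the actor -> malware -> process chain and return every
    process observable attributed (transitively) to the actor."""
    processes = set()
    for malware in actor_to_malware.get(actor, ()):
        processes.update(malware_to_processes.get(malware, ()))
    return processes
```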
  • the cycle shown in stages (A) to (K) may continue iteratively, for example, thus improving an organization's security controls in response to ongoing security threats.
  • Updated threat information for example, can be provided to the management and process orchestration server 204 , where it can be used to generate another predetermined course of action and/or to block future attacks.
  • the threat information can be used to direct network topology changes (e.g., further stages E, F, and G), based on the observed honeypot activity.
  • FIG. 3 is a flowchart of an example process 300 that can be executed in accordance with implementations of the present disclosure.
  • the process 300 may be performed by the system 100 (shown in FIG. 1 ) and/or 200 (shown in FIG. 2 ), and will be described as such for the purpose of clarity.
  • the example process 300 may be used for identifying indicators of security threats and/or system compromises.
  • the process 300 includes identifying a compromise, retrieving data from relevant sources, identifying the status of a compromised environment, identifying indicator matches, identifying one or more performed actions, determining the credibility of each process action, determining a composite credibility based on the actions, determining one or more components for profiling, determining indicators of compromise for each component, and providing the indicators of compromise for orchestration.
  • a compromise can be identified ( 302 ).
  • the management and process orchestration server 204 can identify a compromise to the system 200 via network traffic analysis.
  • external threat intelligence can provide information that, when validated against other network services, indicates a compromise.
  • the compromise may be, for example, a process compromise (e.g., a malicious running process on a system), an object compromise (e.g., an executable or other file), a suspicious network connection, or another sort of compromise to the system 200 .
  • Data can be retrieved ( 304 ) from one or more relevant sources.
  • relevant data can be retrieved by the management and process orchestration server 204 from endpoint management systems, security information and event management (SIEM) systems, packet capture (PCAP) monitoring systems, or other suitable sources.
  • Relevant data can be structured or unstructured, for example.
  • the data can be analyzed by the indicator analytics server 208 , for example, to generate one or more indicators of compromise.
  • the data can be persisted (e.g., in a Hadoop cluster) for future reference.
  • the status of a compromised environment can be identified ( 306 ).
  • endpoint management software can be used to take a snapshot of a system (e.g., the honeypot environment 212 and/or the production environment 210 ) under attack.
  • the snapshot may provide one or more potential indicators of compromise, based on a list of currently running processes, recently (e.g., within a predetermined timeframe, such as a minute, ten seconds, a second, or another suitable timeframe) ended processes, and/or recently modified objects in a similar timeframe.
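The snapshot-based identification described above can be sketched as a simple diff between two points in time. The snapshot shape, names, and the ten-second window below are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch: diff two point-in-time snapshots to surface potential
# indicators of compromise (recently ended processes, recently modified files).
# The snapshot format {pid: name} / {path: mtime} is an invented simplification.

RECENT_WINDOW = 10.0  # seconds; the "recently" timeframe is configurable

def diff_snapshots(prev, curr, now):
    """Compare two snapshots and flag candidate indicators of compromise."""
    ended = {pid: name for pid, name in prev["processes"].items()
             if pid not in curr["processes"]}
    modified = [path for path, mtime in curr["files"].items()
                if now - mtime <= RECENT_WINDOW]
    return {"running": curr["processes"],
            "recently_ended": ended,
            "recently_modified": modified}
```

Each entry in the returned dictionary is a candidate for the indicator-matching step that follows.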
  • Indicator matches can be identified ( 308 ).
  • security threat information provided by the threat intelligence server 202 can be accessed by the management and process orchestration server 204 and can be used for identifying matches from the list of running and/or recently ended processes and/or modified objects.
  • Processes and/or modified objects that match may be identified as threat indicators, and may be initially assigned low credibility scores.
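The matching-and-scoring step above can be sketched as follows; the feed format and the 0.1 starting score are assumptions made for illustration:

```python
# Illustrative sketch: match snapshot candidates against threat-intelligence
# entries and tag matches as threat indicators with a low initial credibility.

KNOWN_BAD = {"dropper.exe", "payload.bin"}  # invented names standing in for a feed

def match_indicators(candidates, intel=KNOWN_BAD, initial_score=0.1):
    """Return candidates found in the intel feed, each with a low starting score."""
    return [{"name": c, "credibility": initial_score}
            for c in candidates if c in intel]
```

The low initial score reflects that a bare match is weak evidence; the credibility is refined by the action analysis in the following steps.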
  • One or more performed actions can be identified ( 310 ).
  • Actions may include process spawning, file access or modification, registry value access or modification, analysis of installed files, or other sorts of actions.
  • To identify the actions, for example, data from an endpoint management system may be filtered, and the actions initiated by each process may be sorted into a dataset.
  • identifying performed actions may be an iterative process with halting conditions to identify the scope of the compromise.
  • the credibility of each process action can be determined ( 312 ).
  • each process can initiate a finite number of actions, and each action may be associated with a particular credibility score in regard to the process. For example, for a particular process, an action of modifying a registry value may have a low credibility value, whereas modifying a file may have a high credibility value.
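A per-process action credibility table of the kind described can be sketched as a nested lookup. The process names, action names, and scores below are invented examples:

```python
# Hypothetical per-process action credibility table: the same action can carry
# a different credibility depending on which process performed it.

ACTION_CREDIBILITY = {
    "svchost.exe": {"modify_registry": 0.1, "modify_file": 0.7, "spawn_process": 0.4},
}
DEFAULT_CREDIBILITY = 0.2  # unknown process/action pairs get a neutral-low score

def action_credibility(process, action):
    """Look up the credibility score for an action in regard to a process."""
    return ACTION_CREDIBILITY.get(process, {}).get(action, DEFAULT_CREDIBILITY)
```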
  • a composite credibility can be determined ( 314 ) for each process, based on the actions.
  • the indicator analytics server 208 can access a model that combines the credibility scores for the process actions to generate a composite credibility score (e.g., ranging from zero to one) for each process.
  • the model may include interaction terms between the actions, to a second or third interaction degree. For example, if a process performs two or more actions in conjunction (e.g., concurrently or in series), the process may receive a score adjustment (e.g., an increase or decrease).
  • the model may include a time decay function between actions to deemphasize unrelated actions.
  • a composite credibility score may be determined by a machine learning algorithm (e.g., a general linear model), a cumulative sum algorithm, or another suitable algorithm.
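One way the model elements above (time decay between actions, pairwise interaction terms, a score in [0, 1]) might combine is sketched below. The half-life, the interaction bonus, and the exponential squash are all assumptions, not the patent's model:

```python
import math

# Sketch: combine per-action credibility scores into a composite score in
# [0, 1]. Exponential time decay de-emphasizes actions far apart in time, and
# a second-degree interaction bonus rewards actions performed in conjunction.

def composite_credibility(actions, half_life=30.0, interaction_bonus=0.2):
    """actions: time-ordered list of (timestamp, credibility_score) for one process."""
    if not actions:
        return 0.0
    latest = max(t for t, _ in actions)
    total = 0.0
    for t, score in actions:
        # time decay relative to the most recent action
        total += score * math.exp(-math.log(2) * (latest - t) / half_life)
    # interaction terms: adjacent actions performed within one half-life
    for (t1, _), (t2, _) in zip(actions, actions[1:]):
        if abs(t2 - t1) <= half_life:
            total += interaction_bonus
    # squash the accumulated evidence into a [0, 1] composite score
    return 1.0 - math.exp(-total)
```

A general linear model or cumulative sum algorithm, as the text notes, could replace this hand-built combination.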
  • One or more components for profiling can be determined ( 316 ). For example, based on the composite credibility scores, a determination can be made of which processes and/or objects to profile for generating indicators of compromise (or threat indicators). In some implementations, the determination can be based on a threshold value for a composite credibility score. When a particular composite credibility score meets the threshold, for example, the indicator analytics server 208 can automatically determine which indicators to provide to the threat intelligence server 202 . As another example, indicator selection may be a semi-automated process.
  • Indicators of compromise can be determined ( 318 ) for each component. Indicators of compromise (or actual security threat indicators) may be prioritized, for example, based at least in part on potential effectiveness in preventing or mitigating an associated threat. For example, a process associated with a security threat may communicate with a particular IP address, and may edit a particular registry key. In the present example, indicators of compromise associated with the threat may be the process name and the IP address, since these attributes may be used to prevent the threat from occurring, whereas the registry key may not. Determining indicators of compromise may be an automated or semi-automated process. In general, some low impact indicators may be generated automatically, while indicators for critical systems may require human oversight.
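The threshold-then-prioritize flow described in the two steps above can be sketched as follows. The effectiveness weights and the 0.5 threshold are invented for illustration; they encode the intuition that a process name or IP address is more actionable for prevention than a registry key:

```python
# Illustrative sketch: keep indicators whose composite credibility meets a
# threshold, then rank them by an assumed per-type "effectiveness" weight.

EFFECTIVENESS = {"process_name": 0.9, "ip_address": 0.8, "registry_key": 0.2}

def prioritize(indicators, threshold=0.5):
    """Filter by composite credibility, then sort by potential effectiveness."""
    selected = [i for i in indicators if i["credibility"] >= threshold]
    return sorted(selected,
                  key=lambda i: EFFECTIVENESS.get(i["type"], 0.0),
                  reverse=True)
```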
  • Indicators of compromise can be provided ( 320 ) for orchestration.
  • the indicators of compromise can be provided by the threat intelligence server 202 to the management and process orchestration server 204 .
  • the management and process orchestration server 204 may coordinate human response and/or automate response to implement mitigations against future security threats.
  • FIG. 4 is a flowchart of an example process 400 that can be executed in accordance with implementations of the present disclosure.
  • the process 400 may be performed by the system 100 (shown in FIG. 1 ) and/or 200 (shown in FIG. 2 ), and will be described as such for the purpose of clarity.
  • the example process 400 may be used for providing automated responses to security threats.
  • the process 400 includes identifying a security incident (e.g., a security threat or security breach), comparing the security incident with a predefined ontology, selecting a response strategy, optionally sending one or more notifications, and implementing a response strategy.
  • an ontology may include the design and representation of a structure that can be used to control system behavior.
  • a runbook ontology may specify details about configuring, operating, and supporting a computing network (e.g., systems, devices, and/or software applications) and can map security incidents to courses of action.
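A runbook ontology's mapping from security incidents to courses of action can be sketched minimally as a lookup. A production ontology would be far richer (conditions, roles, escalation paths); the incident types and steps below are invented examples:

```python
# Minimal runbook-style mapping from incident types to ordered courses of action.

RUNBOOK = {
    "ddos": ["reroute_traffic", "rate_limit_source_block", "notify_operator"],
    "malicious_process": ["snapshot_host", "kill_process", "restore_host"],
}

def select_response(incident_type):
    """Return the course of action for an incident, or a safe default."""
    return RUNBOOK.get(incident_type, ["notify_operator"])
```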
  • a security incident can be identified ( 402 ).
  • the security information and analytics component 104 can identify a security incident (e.g., a security threat or security breach) to an organization's network, based on information from one or more of the data sources 120 .
  • a distributed denial-of-service (DDoS) attack on one or more network servers may be identified by an endpoint management system, based on detecting a known pattern of events associated with the type of attack.
  • the security incident can be compared ( 404 ) with a predefined ontology.
  • the security information and analytics component 104 and/or the security orchestration services 142 can maintain an incident response ontology (e.g., a runbook that maps security threats to courses of action), and the system 100 can use one or more of the automated incident response components 108 to compare an identified incident to the ontology.
  • Information related to mitigations (e.g., changes to a software-defined networking topology) to be performed in response to a security threat or breach during an incident response process, for example, can be maintained by the infrastructure orchestration services 144 .
  • the incident can be identified and an appropriate response strategy (e.g., rerouting network traffic) can be selected ( 406 ), e.g., by the response selector 132 .
  • Response strategies may be based on strategy information 154 received from the security orchestration services 142 , and/or may be influenced by insight information 116 received from the threat intelligence component 102 . For example, if a particular pattern is observed within a security incident, a response strategy may be influenced by the insight information 116 to determine an appropriate course of action (e.g., repairing damage caused by a security breach).
  • the security information and analytics component 104 can provide information (e.g., incident activity information 130 ) related to the identified security incident (e.g., a security threat or breach) to the defense component 106 .
  • the incident activity information 130 may include information for mitigating a security incident, including workflow steps to perform and/or infrastructure changes to implement in response to the incident.
  • the incident activity information 130 may include instructions for rerouting some network traffic (e.g., traffic originating from a particular IP block) or all traffic intended for the server under attack to a honeypot or to an IP black hole.
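The routing decision described here (redirecting traffic from a suspicious IP block to a honeypot) can be sketched as follows. The network ranges and destination labels are illustrative assumptions, using a documentation address block:

```python
import ipaddress

# Sketch: traffic from a suspicious IP block is redirected to a honeypot,
# everything else passes through to the production destination.

SUSPICIOUS_BLOCK = ipaddress.ip_network("203.0.113.0/24")  # TEST-NET-3, for example

def route(source_ip, dest="production", honeypot="honeypot"):
    """Decide where to send a flow based on its source address."""
    if ipaddress.ip_address(source_ip) in SUSPICIOUS_BLOCK:
        return honeypot
    return dest
```

Sending all traffic for a server under attack to an IP black hole would amount to the same decision with a "blackhole" destination in place of the honeypot.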
  • the incident activity information 130 may include information related to incident activity (e.g., a type of threat or breach and affected system components and processes), and response handling may be determined by the defense component 106 and the orchestration services 140 .
  • one or more notifications may optionally be sent ( 408 ).
  • the defense component 106 and/or the security information and analytics component 104 can use the notification provider 134 to provide information associated with an identified incident (e.g., a security threat or security breach) and appropriate responses to the incident to an operator to facilitate a semi-automated response that may include automated and human workflow processes (e.g., discussed in further detail in association with FIG. 5 ).
  • An incident response, for example, may include instructions for mitigating a security threat or security breach while performing a system recovery.
  • a semi-automated process for implementing the incident response may include various checkpoints for which a human operator may be prompted to make a decision, and further automated processes (e.g., scripts) may be launched in response to receiving an indication of the decision.
  • the defense component 106 can log the actions, and notification of the actions performed can be provided to the operator through a reporting interface.
  • the response strategy can be implemented ( 410 ).
  • the defense component 106 can use the response implementer 136 to implement a selected response strategy via the orchestration services 140 (e.g., including security orchestration services 142 and/or infrastructure orchestration services 144 ).
  • implementing a response strategy may include implementing steps of an incident response ontology, refining an understanding of a scope of a security incident (e.g., by changing an infrastructure to observe and gather information about a security incident, such as by routing network traffic to perform packet capture), restricting networking and/or communications capabilities of computing devices/systems under threat (e.g., to prevent a spread of an identified threat), eradicating threats, and/or restoring affected devices/systems.
  • the defense component 106 may automate incident responses.
  • the defense component 106 can use the infrastructure orchestration services 144 to coordinate operations of various other services (e.g., third-party services) and to automatically implement a response strategy maintained by the security orchestration services 142 .
  • a software-defined networking (SDN) controller (e.g., the controller 206 , shown in FIG. 2 ) can be used, for example, to modify a software-defined networking topology to limit (e.g., by restricting particular ports) the computing devices and/or systems with which a compromised computing device may communicate.
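The port-restriction idea can be sketched as a quarantine allow-list: a compromised host keeps a minimal set of permitted ports so it can still function without spreading a threat. The flow-rule shape and the specific ports below are simplifying assumptions about what an SDN controller would push:

```python
# Sketch of quarantine-style port restriction for a compromised host.

ALLOWED_PORTS = {53, 443}  # invented allow-list: DNS and HTTPS only

def allow_flow(host_quarantined, dst_port):
    """Return True if a flow from the host should be permitted."""
    if not host_quarantined:
        return True  # unquarantined hosts are unrestricted
    return dst_port in ALLOWED_PORTS
```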
  • the compromised computing device may continue to function, but not operate in such a way that it can spread an identified threat.
  • an incident response process includes investigating an incident, and includes determining and following a response process, based on the incident.
  • Automating a host-based forensics step in an incident response ontology may include, for example, implementing a step in an incident response ontology, gathering data by a host agent and sending the data to a forensics repository, and proceeding to a next step in the incident response ontology.
  • the security orchestration services 142 can implement a step in an incident response ontology by making a request to the infrastructure orchestration services 144 to perform a particular operation (e.g., a request to gather data from a computing device on a network), and the infrastructure orchestration services 144 can make a corresponding request to a host agent.
  • the data can be provided by the host agent to a forensics repository for further analysis/processing.
  • the host agent can send an acknowledgement to the infrastructure orchestration services 144 , for example, which can in turn send an acknowledgement to the security orchestration services 142 , which can then proceed to the next step in the ontology.
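The request/acknowledgement chain described above for automating a host-based forensics step can be sketched as three cooperating components. All class names and the artifact format are illustrative stand-ins for the services in the text:

```python
# Sketch: security orchestration asks infrastructure orchestration, which asks
# the host agent; gathered data lands in a forensics repository and
# acknowledgements flow back up so the ontology can advance a step.

class HostAgent:
    def gather(self, repository):
        repository.append({"host": "node-1", "artifact": "memory_dump"})
        return "ack"

class InfrastructureOrchestration:
    def __init__(self, agent):
        self.agent = agent
    def request_gather(self, repository):
        return self.agent.gather(repository)  # forward the agent's ack upstream

class SecurityOrchestration:
    def __init__(self, infra):
        self.infra = infra
        self.step = 0
    def run_forensics_step(self, repository):
        ack = self.infra.request_gather(repository)
        if ack == "ack":
            self.step += 1  # proceed to the next step in the ontology
        return ack
```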
  • FIG. 5 is a flow diagram of an example process 500 that can be executed in accordance with implementations of the present disclosure.
  • the process 500 may be performed by the system 100 (shown in FIG. 1 ) and/or 200 (shown in FIG. 2 ), and will be described as such for the purpose of clarity.
  • the process 500 illustrates an example set of interactions between an operator 502 and a computing system 504 , in which a semi-automated process for implementing an incident response may occur.
  • the example flow of events shown in FIG. 5 generally flows from the bottom of the diagram to the top of the diagram.
  • a component (e.g., the security information and analytics component 104 ) of the computing system 504 can continually monitor for indicators of compromise (e.g., indicators of security threats or security breaches), such as suspicious processes, files, traffic sources, or other aspects, based on information provided by the data sources 120 .
  • when an indicator of compromise associated with a security incident (e.g., a threat or breach) is detected, a corresponding notification 514 can be provided (e.g., by the notification provider 134 ) to the operator 502 .
  • the indicator of compromise may indicate that a particular computing device of the computing system 504 (e.g., a node on a network) is running a process that has been identified as malicious.
  • the notification 514 may include one or more possible actions that may be performed (e.g., by the response implementer 136 ) to mitigate the security incident.
  • Each possible action, for example, can correspond to an incident response (e.g., a script) for mitigating the security incident.
  • the notification 514 can include a description of the security incident, and a list of possible actions which may be performed by the response implementer 136 , including an action of removing the malicious process from the affected computing device and restoring the device, and an action of blocking outgoing network traffic from the affected device.
  • the operator 502 can select one or more of the possible actions 518 , and information associated with the actions can be provided to the computing system 504 .
  • instructions for implementing the incident response (e.g., scripts) may be provided to the computing system 504 by a computing device of the operator 502 .
  • instructions for implementing the incident response may be hosted by the computing system 504 , and the action(s) 518 may include information that identifies the appropriate instructions.
  • the operator 502 provides instructions for removing the malicious process and restoring the affected device.
  • the computing system 504 can perform the incident response (e.g., execute a script corresponding to the selected action(s) 518 ), and can provide a further notification 522 that pertains to the security incident, which can include results of the performed actions (e.g., the response scripts), a status of the affected computing device, and a list of possible actions that may be performed based on the device's current status.
  • the further notification 522 indicates that a script for removing the malicious process was executed, but additional suspicious files were detected on the affected device.
  • the further notification 522 in the present example may also indicate that an action of isolating the suspicious files, and an action of blocking outgoing network traffic from the affected device may be performed.
  • the operator 502 can select (at stage 524 ) the option to perform the action of blocking outgoing network traffic from the device, and information associated with the action(s) 526 can be provided to the computing system 504 .
  • the computing system 504 can perform the corresponding incident response action (at stage 528 ), and can provide a further notification 530 pertaining to the security incident (e.g., a notification that traffic was successfully blocked).
  • the semi-automated example process 500 may be iterative, with performed actions potentially triggering further possible actions based on a changing state of an affected device or network.
  • the operator 502 , for example, can direct an incident response process at a high level, while the computing system 504 performs low-level repeatable tasks.
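The iterative notify/select/act loop of process 500 can be sketched as a small state machine, with the operator's decision modeled as a callback. The device states, actions, and transitions below are invented examples patterned on the scenario in the text:

```python
# Sketch of the semi-automated loop: the system proposes actions for the
# current device state, an operator (a callback here) picks one, and the
# state changes until no further actions are needed.

NEXT_ACTIONS = {
    "malicious_process": ["remove_process"],
    "suspicious_files": ["isolate_files", "block_outgoing_traffic"],
    "clean": [],
}
TRANSITIONS = {
    ("malicious_process", "remove_process"): "suspicious_files",
    ("suspicious_files", "block_outgoing_traffic"): "clean",
    ("suspicious_files", "isolate_files"): "clean",
}

def respond(state, choose):
    """Run the loop until the device state offers no further actions."""
    performed = []
    while NEXT_ACTIONS[state]:
        action = choose(NEXT_ACTIONS[state])  # operator decision point
        performed.append(action)
        state = TRANSITIONS[(state, action)]
    return state, performed
```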
  • Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus.
  • the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • the computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • data processing apparatus refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can also be or further include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • the apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code.
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • Computers suitable for the execution of a computer program include, by way of example, general or special purpose microprocessors or both, or any other kind of central processing unit.
  • a central processing unit will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
  • Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • to provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the user device, which acts as a client.
  • Data generated at the user device e.g., a result of the user interaction, can be received from the user device at the server.
  • FIG. 6 shows a schematic diagram of a generic computer system 600 .
  • the system 600 can be used for the operations described in association with any of the computer-implemented methods described previously, according to one implementation.
  • the system 600 includes a processor 610 , a memory 620 , a storage device 630 , and an input/output device 640 .
  • Each of the components 610 , 620 , 630 , and 640 is interconnected using a system bus 650 .
  • the processor 610 is capable of processing instructions for execution within the system 600 .
  • the processor 610 is a single-threaded processor.
  • the processor 610 is a multi-threaded processor.
  • the processor 610 is capable of processing instructions stored in the memory 620 or on the storage device 630 to display graphical information for a user interface on the input/output device 640 .
  • the memory 620 stores information within the system 600 .
  • the memory 620 is a computer-readable medium.
  • the memory 620 is a volatile memory unit.
  • the memory 620 is a non-volatile memory unit.
  • the storage device 630 is capable of providing mass storage for the system 600 .
  • the storage device 630 is a computer-readable medium.
  • the storage device 630 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
  • the input/output device 640 provides input/output operations for the system 600 .
  • the input/output device 640 includes a keyboard and/or pointing device.
  • the input/output device 640 includes a display unit for displaying graphical user interfaces.

Abstract

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for analyzing threat intelligence information. One of the methods includes receiving by a threat information server, threat intelligence information from one or more intelligence feeds and generating one or more identified security threats, identifying a compromise by a management process orchestration server and retrieving information from the threat information server and identifying one or more actions to be performed, determining by an indicator analytics processor, a composite credibility based on the actions, and determining one or more components for profiling and determining indicators of compromise for each component, and communicating the indicators of compromise to the management process orchestration server.

Description

BACKGROUND
The present disclosure relates to security and network operations.
SUMMARY
In general, one innovative aspect of the subject matter described in this specification can be embodied in methods for analyzing threat intelligence information including receiving by a threat information server, threat intelligence information from one or more intelligence feeds and generating one or more identified security threats, identifying a compromise by a management process orchestration server and retrieving information from the threat information server and identifying one or more actions to be performed, determining by an indicator analytics processor, a composite credibility based on the actions, and determining one or more components for profiling and determining indicators of compromise for each component, and communicating the indicators of compromise to the management process orchestration server.
These and other embodiments may each optionally include one or more of the following features. For instance, the management process orchestration server can analyze the indicators of compromise and generate a response process. The management process orchestration server can execute the response process by communicating instructions to a network switching controller. The network switching controller can implement a network topology change to redirect network traffic. Determining the composite credibility based on the actions can include determining a composite credibility score for the indicator of compromise by accessing a model that combines credibility scores for the actions.
In general, another innovative aspect of the subject matter described in this specification can be embodied in methods including identifying a compromise to a system, performing a snapshot of the system and, based at least in part on the snapshot, identifying one or more potential indicators of compromise, determining that one or more potential indicators of compromise are potential threat indicators, and, for each potential indicator of compromise that is a potential threat indicator, identifying one or more corresponding actions performed by the system, determining a credibility of each action performed by the system, determining a composite credibility of the potential indicator of compromise, based on the credibility of each action, and determining that the potential indicator of compromise is an actual threat indicator, based on the composite credibility. Each of the potential indicators of compromise can be associated with a system process or a presence of a file on the system. Determining that the potential indicators of compromise are potential threat indicators can be based on matching the potential indicators of compromise with stored security threat information.
This and other embodiments may each optionally include one or more of the following features. For instance, identifying one or more potential indicators of compromise can include analyzing the snapshot to identify one or more of currently running processes, recently ended processes, or recently modified objects. Identifying one or more actions performed by the system can include identifying actions related to one or more of process spawning, file access or modification, or registry access or modification. Determining the credibility of each action performed by the system can include determining a credibility score for each action in regard to the system process. Determining the composite credibility of the potential indicator of compromise can include determining a composite credibility score for the potential indicator of compromise by accessing a model that combines the credibility scores for the actions. The model can include interaction terms between the actions, to a multiple degree. The model can include a time decay function between actions. Determining that the potential indicator of compromise is an actual threat indicator can include determining that the composite credibility score for the potential indicator of compromise meets a predetermined threshold. An actual security threat indicator can be prioritized, based at least in part on its potential effectiveness in preventing or mitigating a security threat.
Other embodiments of these aspects include corresponding computer systems, and include corresponding apparatus and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
Particular embodiments of the subject matter described in this specification may be implemented so as to realize one or more of the following advantages. Computer networks can defend against evolving cyber-attacks. Indicators of compromise can be operationalized to prevent the spread of a threat internally. Threat actors can be profiled and possible motivations against an organization can be determined. Responses to threats can be automated, and systems and processes for providing mitigations can be coordinated. Organizations can share information related to potential threats.
The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other potential features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
DESCRIPTION OF DRAWINGS
FIGS. 1 & 2 depict example systems that can execute implementations of the present disclosure.
FIGS. 3-5 depict example processes that can be executed in accordance with implementations of the present disclosure.
FIG. 6 is a block diagram of a computing system that can be used in connection with computer-implemented methods described in this document.
Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
This specification describes systems, methods, and computer programs for analyzing threat indicators, implementing deception networks, and automating incident responses in a computer network security environment. In general, analyzing threat indicators may include aggregating threat activity identified within a computer network and applying analytics to gain actionable insights into security threats. Predictive analytics may be used to determine threat indicators which can be used to improve an organization's security posture. In general, deception networks can combine network agility, deep packet inspection, and honeypots to identify indicators of network compromise, to contextualize internal threat intelligence, and to automatically apply mitigations using infrastructure orchestration. In general, incident responses may include incident discovery and incident remediation processes.
Active defense techniques for network security may include orchestrating incident response and threat management systems, incorporating deception network capabilities, and leveraging software-defined networking (SDN). Predetermined courses of action may be established, based on the processes within a security organization. Process changes may be implemented automatically or semi-automatically through orchestration. To investigate threat activity, and/or to engage an adversary, deception networks, decoy resources, anti-reconnaissance, and resource shifting may be used. For example, an adversary can be deceived by generating false network topologies and baiting the adversary into a honeypot (e.g., a computer, data, and/or network site that appears as part of a network and appears to include information of value, but is sandboxed and monitored). A sandboxed environment, for example, can provide a contained set of resources that permits actions by a security threat, with minimal or no permanent effect to the environment and/or to an organization's network. Information gathered from observation of the adversary's behavior, for example, may be used to proactively mitigate threats to an organization. Security response and mitigation capabilities can be embedded into an agile and adaptive infrastructure. By leveraging application and network virtualization capabilities, for example, an organization can counter the adversary's tactics.
FIG. 1 depicts an example system 100 that can execute implementations of the present disclosure. In the present example, the system 100 includes multiple hardware and/or software components (e.g., modules, objects, services) including a threat intelligence component 102, a security information and analytics component 104, and a defense component 106. Two or more of the components 102, 104, and 106 may be implemented on the same device (e.g., same computing device), or on different devices, such as devices included in a computer network or a peer-to-peer network. Operations performed by each of the components 102, 104, and 106 may be performed by a single computing device, or may be distributed to multiple devices.
The threat intelligence component 102 can receive information from one or more intelligence feeds 110. For example, the intelligence feeds 110 may include feeds from commercial, government, and/or peer sources. A peer source, for example, may be associated with a similar type of organization (e.g., a similar type of business, such as a business in a similar sector or industry) as an organization that hosts, maintains, or operates the system 100. As another example, a peer source may be associated with a system that includes one or more components (e.g., operating systems, databases, applications, services, and/or servers) that are similar to components of the organization's network. A peer source may also be another organization that hosts, maintains, or operates a system similar in structure to system 100 (e.g., one with one or more similar components, such as, for instance, network components of the same or similar make or model, or running a same or similar version of operating software, or operating a same or similar version of application software). In some implementations, a service may receive security threat information from multiple peer sources, and provide aggregated intelligence feeds in return to each of the sources. For example, one or more of the intelligence feeds 110 can be provided by a peer exchange service 112 which receives threat information from multiple peers (e.g., including the system 100), and which aggregates and provides the information in return to each of the peers, thus facilitating the sharing of security threat information among peer organizations.
Each of the intelligence feeds 110, and internal intelligence 114 from the system 100, may include information associated with one or more security threats. Upon receiving feed information, for example, the threat intelligence component 102 can identify key indicators and observables associated with each of the threats. Indicators and observables may include, for example, names, identifiers, and/or hashes of processes, objects, files, applications, or services, Internet Protocol (IP) addresses of devices, registry keys to be accessed or modified, user accounts, or other suitable indicators and observables of a security threat. In some implementations, security threat information from multiple feeds may be consolidated and/or normalized. For example, a consolidated view of multiple feeds may include a list of indicators and observables upon which action may be taken.
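For illustration, the consolidation of indicators from multiple feeds might be sketched as follows. The patent does not define a feed schema, so the record fields (`type`, `value`, `source`) and the deduplication key are assumptions made for this example:

```python
# Illustrative sketch only: the field names and the dedup key are
# assumptions, not the patent's implementation.

def consolidate_feeds(feeds):
    """Merge indicator records from multiple intelligence feeds into a
    single deduplicated list, tracking which sources reported each one."""
    merged = {}
    for feed in feeds:
        for record in feed:
            # Normalize values so the same indicator from different feeds
            # collapses into one entry.
            key = (record["type"], record["value"].lower())
            entry = merged.setdefault(key, {"type": record["type"],
                                            "value": record["value"].lower(),
                                            "sources": set()})
            entry["sources"].add(record["source"])
    return list(merged.values())

commercial = [{"type": "ip", "value": "203.0.113.7", "source": "commercial"}]
peer = [{"type": "ip", "value": "203.0.113.7", "source": "peer-exchange"},
        {"type": "file-hash", "value": "D41D8CD9", "source": "peer-exchange"}]

indicators = consolidate_feeds([commercial, peer])
```

The resulting list is the kind of consolidated view described above: a set of actionable indicators, each annotated with the feeds that reported it.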
Insights 116, 118 based on the security threat information can be provided by the threat intelligence component 102 to the security information and analytics component 104 and/or to the defense component 106. For example, insights 116 can include information associated with key indicators and observables for previously occurring security threats, such as information that indicates that suspicious activity has been observed in association with a particular executable that has been used by particular instances of malware and by particular threat actors. In some implementations, insights 118 provided to the defense component 106 may include information associated with an incident response plan and one or more mitigating controls. For example, based on one or more indicators of compromise identified by the intelligence component 102, an appropriate plan of action may be generated, selected, and implemented to respond to a corresponding security threat.
In general, the security information and analytics component 104 may be supported by security information and event management (SIEM), analytics, and visualization capabilities. The security information and analytics component 104 can monitor one or more internal data sources, and can map threat indicators to predefined courses of action to take in response to various security threats. For example, the security information and analytics component 104 can receive information from internal network data sources 120 (e.g., including information technology data sources 122, operational technology data sources 124, and/or physical data sources 126), and can provide monitoring of patterns and anomalies indicative of threat activity within an organization, as informed by the insights 116 provided by the threat intelligence component 102. Based on the insights 116, for example, the security information and analytics component 104 can modify its threat monitoring process. For example, when the security information and analytics component 104 detects a known pattern of events (e.g., related to the insights 116, such as the existence of a file, an activity on an IP address, or another indicator or observable action), it can record an incident, can trigger one or more requests for responses, and can provide the response requests to the defense component 106. Response requests may include incident activity information 130, for example, which may include information related to appropriate handling of a security threat or security breach, such as removal of a file, reporting of an IP address, upgrading of software, or another appropriate course of action.
In general, the defense component 106 may be supported by orchestration services 140 (e.g., including security orchestration services 142 and/or infrastructure orchestration services 144), which can set policies and can automate threat management workflows. The security orchestration services 142, for example, can maintain an ontology regarding actions to be performed in response to particular security threats or breaches, whereas the infrastructure orchestration services 144 can maintain information related to mitigations (e.g., infrastructure changes by a software-defined networking controller) to be performed. Based on insights 118 from the threat intelligence component 102 and/or incident activity information 130 from the security information and analytics component 104, for example, the defense component 106 can provide automated or semi-automated infrastructure changes and service management ticketing to mitigate the impact of identified threats or breaches. The defense component 106, for example, can perform particular actions in response to particular indicators, such as blocking an IP address, blocking a process executed by an endpoint, reporting to data loss/leak prevention (DLP) when a particular document is opened, redirecting traffic, or another appropriate action. To mitigate a phishing attack, for example, the defense component 106 can cause a predefined course of action to be executed, including using the orchestration services 140 to determine whether a uniform resource locator (URL) included in an e-mail is malicious, and if so, to block access to the URL and to generate a workflow request to remove the malicious e-mail from a recipient's mailbox. As another example, to mitigate a distributed denial-of-service (DDoS) attack, the defense component 106 can use the orchestration services 140 to modify software-defined networking (SDN) settings to reroute network traffic associated with the attack.
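The mapping from indicator types to mitigating actions described above can be sketched as a simple dispatch table. The indicator types, action names, and the escalation fallback below are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical playbook sketch: each indicator type maps to a predefined
# mitigation; unknown types fall through to escalation.

def block_ip(indicator):
    return f"firewall: block {indicator}"

def block_process(indicator):
    return f"endpoint: terminate {indicator}"

def reroute_traffic(indicator):
    return f"sdn: reroute flows from {indicator} to honeypot"

PLAYBOOK = {
    "malicious-ip": block_ip,
    "malicious-process": block_process,
    "ddos-source": reroute_traffic,
}

def respond(indicator_type, indicator):
    """Select and execute the predefined course of action, if any."""
    action = PLAYBOOK.get(indicator_type)
    if action is None:
        return f"escalate: no predefined course of action for {indicator_type}"
    return action(indicator)
```

In a real deployment the actions would call out to orchestration services (e.g., firewall, endpoint, and SDN controllers) rather than return strings; the table-driven shape is the point of the sketch.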
In some implementations, one or more automated incident response components 108 (e.g., discussed in further detail in association with FIG. 4) may be distributed among one or more of the security information and analytics component 104, the defense component 106, and/or the orchestration services 140. In the present example, the automated incident response components 108 include a response selector 132, a notification provider 134, and a response implementer 136. The response selector 132, for example, can select an appropriate strategy for responding to an identified incident (e.g., a security threat or breach), based on comparing the incident to a predefined ontology. The notification provider 134, for example, can optionally provide information associated with the identified incident to an operator to facilitate a semi-automated response. The response implementer 136, for example, can implement the selected response strategy by implementing one or more steps indicated by the predefined ontology. Operations performed by each of the components 132, 134, and 136 may be performed by a single computing device, or may be distributed to multiple devices.
In some implementations, the defense component 106 may use a deception network (e.g., discussed in further detail in association with FIG. 2) to mitigate a security threat and/or to gather additional intelligence related to the threat. Based on information gathered by the deception network, for example, information associated with one or more threat indicators 150 can be provided to the threat intelligence component 102 and/or information associated with one or more targets 152 (e.g., suspicious processes, files, traffic sources, or other aspects to be monitored) may be provided to the security information and analytics component 104. As another example, the security information and analytics component 104 may receive response strategy information 154 from the security orchestration services 142. Insight related to internal threat intelligence 114 (e.g., indicators and observables determined from the data sources 120 and/or deception networks) can be provided to the threat intelligence component 102 for external communication (e.g., via the peer exchange 112).
FIG. 2 depicts an example system 200 that can execute implementations of the present disclosure, including the implementation of a deception network. FIG. 2 also illustrates an example flow of data within the system 200 during stages (A) to (K), where the stages (A) to (K) may occur in the illustrated sequence, or in a sequence different from the illustrated sequence. For example, two or more of the stages (A) to (K) may occur concurrently.
The example system 200 includes multiple computing devices (e.g., personal computing devices, servers, server clusters) in communication over a wired and/or wireless network. In the present example, the system 200 includes a threat intelligence server 202, a management and process orchestration server 204, a software-defined networking controller 206, and an indicator analytics server 208. Each of the devices 202, 204, 206, and 208 can include one or more processors configured to execute instructions stored by computer-readable media for performing various device operations, such as input/output, communication, data processing, and/or data maintenance. An example computing device is described below with reference to FIG. 6. The devices 202, 204, 206, and 208, for example, can communicate over a local area network, a wireless network, a wide area network, a mobile telecommunications network, the Internet, or any other suitable network or any suitable combination thereof.
Referring to the example flow of data, during stage (A), threat intelligence information is received by the threat intelligence server 202. For example, a peer organization can share (e.g., via the peer exchange 112, shown in FIG. 1), information associated with an IP block of addresses targeting a particular type of resource (e.g., a database server). As another example, internal threat intelligence information can be provided by monitoring capabilities of a security information and event management system (e.g., included in the security information and analytics component 104, shown in FIG. 1).
During stage (B), threat intelligence information is contextualized and stored. For example, the threat intelligence server 202 can contextualize and store information associated with external and/or internal security threats to provide an understanding of a threat environment. For example, contextualizing and storing information may include matching threat information identified from internal security threats to threat information identified from external security threats to supplement the information from each source. In the present example, one or more threat indicators (e.g., an IP block of addresses) may be associated with a particular security threat (e.g., a secure shell (SSH) brute force attack).
During stage (C), applicable threat intelligence information is provided to the management and process orchestration server 204. For example, the information can be provided as a list of key indicators and observables. As another example, the management and process orchestration server 204 can receive threat intelligence information through an application programming interface (API).
During stage (D), the management and process orchestration server 204 identifies applicable actions for identified security threats, and executes courses of action. For example, the management and process orchestration server 204 can maintain information associated with predefined courses of action (e.g., a playbook) for various types of security threats. When the management and process orchestration server 204 identifies an occurrence of a security threat (e.g., via one or more data sources 120, shown in FIG. 1) as matching a known threat indicator, it can execute a course of action to mitigate the particular threat. In the present example, the management and process orchestration server 204 can receive information indicating that a production environment 210 (e.g., a network endpoint running a database server) is in communication with a device associated with an IP address that is a known threat indicator, and can automatically execute an appropriate course of action to mitigate the threat. For example, the management and process orchestration server 204 can change the infrastructure of the system 200 automatically, and/or can manipulate security controls to interfere with an attacker.
During stage (E), process mitigation controls are provided to protect one or more endpoints. For example, the management and process orchestration server 204 may determine that the production environment 210 is at risk, and may provide instructions to the production environment to perform one or more actions, such as removing files, terminating processes, blocking communications, or other appropriate actions. In some implementations, a snapshot may be taken for use in threat analysis and/or for use in rebuilding a session. For example, a snapshot of a current session of the production environment 210 can be taken and can be used to recreate the session in a honeypot environment 212.
During stage (F), flow change information is provided to direct network topology changes. For example, the management and process orchestration server 204 can provide instructions for the software-defined networking (SDN) controller 206 to redirect network traffic intended for the production environment 210 (e.g., traffic from an attacker's IP address) to the honeypot environment 212. The software-defined networking controller 206, for example, can facilitate on-the-fly changing of network topology from a centralized point, such that an attacker may be unaware of the change, and may perceive that communication with the production environment 210 persists while traffic is actually being diverted to the honeypot environment 212.
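The redirect in stage (F) can be illustrated with a deliberately simplified flow table keyed on source IP. Real SDN rules (e.g., OpenFlow) match on many more fields and carry richer actions; the port numbers and table shape here are assumptions made for the sketch:

```python
# Simplified sketch of stage (F): the controller swaps the production port
# for the honeypot port on the attacker's flows only, so legitimate traffic
# is untouched and the attacker perceives no change.

PRODUCTION_PORT = 1
HONEYPOT_PORT = 9

flow_table = {"198.51.100.23": PRODUCTION_PORT,   # attacker's IP
              "192.0.2.10": PRODUCTION_PORT}      # legitimate client

def redirect_to_honeypot(flow_table, attacker_ip):
    """Return an updated flow table diverting the attacker to the honeypot."""
    updated = dict(flow_table)
    if attacker_ip in updated:
        updated[attacker_ip] = HONEYPOT_PORT
    return updated

updated = redirect_to_honeypot(flow_table, "198.51.100.23")
```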
In some implementations, other software-defined networking (SDN) techniques may be used to passively and/or actively engage an adversary. For example, the software-defined networking controller 206 may implement a white noise generator to report all IP addresses as being open, thus potentially confusing an attacker and causing the attacker to question reconnaissance efforts. As another example, the controller 206 may implement an IP black hole, silencing the system. As another example, fake topologies may be generated, to reduce the effectiveness of reconnaissance efforts. As another example, targeted deep packet inspection, data manipulation, network intrusion prevention, and/or breach containment techniques may be implemented.
During stage (G), the software-defined networking (SDN) controller 206 provides flow changes to a software-defined networking (SDN) switch 214. The software-defined networking switch 214, for example, can implement policy changes related to network traffic flow.
During stage (H), the software-defined networking switch 214 redirects flow to the honeypot environment 212. The honeypot environment 212, for example, can use process tracing techniques to identify and provide information associated with an attack, as the attack is being performed. In some implementations, events may be generated for actions performed by the system. For example, if an attacker logs in and installs a package, the honeypot environment can identify where the user logs in from, associated process identifiers, commands performed by the system, files manipulated by the installation, registry keys that are modified, and other relevant information.
During stage (I), information is provided by the honeypot environment 212 to the indicator analytics server 208. For example, information related to an attacker's tactics, techniques, and procedures (TTP) can be harvested, and sent to the indicator analytics server 208 for analysis.
During stage (J), the indicator analytics server 208 generates threat intelligence. For example, based on observable threat indicators identified by the honeypot environment 212, the indicator analytics server 208 can identify one or more indicators that are potentially actionable, and that can be used to determine insights for defense against threats to the system 200.
During stage (K), generated threat intelligence is provided to the threat intelligence server 202. The threat intelligence server 202, for example, can determine whether any internally identified indicators are actionable, and can use external threat intelligence information to contextualize the information. For example, internal and external threat intelligence information can be used to map threat actors to malware to processes.
The cycle shown in stages (A) to (K) may continue iteratively, for example, thus improving an organization's security controls in response to ongoing security threats. Updated threat information, for example, can be provided to the management and process orchestration server 204, where it can be used to generate another predetermined course of action and/or to block future attacks. For example, the threat information can be used to direct network topology changes (e.g., further stages E, F, and G), based on the observed honeypot activity.
FIG. 3 is a flowchart of an example process 300 that can be executed in accordance with implementations of the present disclosure. In some implementations, the process 300 may be performed by the system 100 (shown in FIG. 1) and/or 200 (shown in FIG. 2), and will be described as such for the purpose of clarity. The example process 300, for example, may be used for identifying indicators of security threats and/or system compromises. Briefly, the process 300 includes identifying a compromise, retrieving data from relevant sources, identifying the status of a compromised environment, identifying indicator matches, identifying one or more performed actions, determining the credibility of each process action, determining a composite credibility based on the actions, determining one or more components for profiling, determining indicators of compromise for each component, and providing the indicators of compromise for orchestration.
A compromise can be identified (302). For example, the management and process orchestration server 204 can identify a compromise to the system 200 via network traffic analysis. Alternatively, external threat intelligence can provide information that, when validated against other network services, indicates a compromise. The compromise may be, for example, a process compromise (e.g., a malicious running process on a system), an object compromise (e.g., an executable or other file), a suspicious network connection, or another sort of compromise to the system 200.
Data can be retrieved (304) from one or more relevant sources. For example, relevant data can be retrieved by the management and process orchestration server 204 from endpoint management systems, security information and event management (SIEM) systems, packet capture (PCAP) monitoring systems, or other suitable sources. Relevant data can be structured or unstructured, for example. The data can be analyzed by the indicator analytics server 208, for example, to generate one or more indicators of compromise. In some implementations, the data can be persisted (e.g., in a Hadoop cluster) for future reference.
The status of a compromised environment can be identified (306). For example, endpoint management software can be used to take a snapshot of a system (e.g., the honeypot environment 212 and/or the production environment 210) under attack. The snapshot, for example, may provide one or more potential indicators of compromise, based on a list of currently running processes, recently (e.g., within a predetermined timeframe, such as a minute, ten seconds, a second, or another suitable timeframe) ended processes, and/or objects modified within a similar timeframe.
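Deriving candidate indicators from such a snapshot might look like the following sketch. The snapshot record layout and the ten-second window are assumptions chosen for illustration:

```python
# Illustrative sketch: partition a snapshot into currently running processes,
# recently ended processes, and recently modified objects. Timestamps are in
# seconds; the record fields are assumptions, not the patent's schema.

RECENT_WINDOW = 10  # seconds; one of the example timeframes in the text

def potential_indicators(snapshot, now):
    """Return (running, recently_ended, recently_modified) candidates."""
    running = [p["name"] for p in snapshot["processes"]
               if p["ended_at"] is None]
    recently_ended = [p["name"] for p in snapshot["processes"]
                      if p["ended_at"] is not None
                      and now - p["ended_at"] <= RECENT_WINDOW]
    recently_modified = [o["path"] for o in snapshot["objects"]
                         if now - o["modified_at"] <= RECENT_WINDOW]
    return running, recently_ended, recently_modified

snapshot = {
    "processes": [{"name": "svchost.exe", "ended_at": None},
                  {"name": "dropper.exe", "ended_at": 995.0},
                  {"name": "old.exe", "ended_at": 100.0}],
    "objects": [{"path": r"C:\Windows\evil.dll", "modified_at": 998.0},
                {"path": r"C:\Windows\notepad.exe", "modified_at": 1.0}],
}
running, ended, modified = potential_indicators(snapshot, now=1000.0)
```

Each returned name or path then becomes a potential indicator of compromise to be matched against known threat intelligence in the next step.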
Indicator matches can be identified (308). For example, security threat information provided by the threat intelligence server 202 can be accessed by the management and process orchestration server 204 and can be used for identifying matches from the list of running and/or recently ended processes and/or modified objects. Processes and/or modified objects that match, for example, may be identified as threat indicators, and may be initially assigned low credibility scores.
One or more performed actions can be identified (310). Actions, for example, may include process spawning, file access or modification, registry value access or modification, analysis of installed files, or other sorts of actions. To identify the actions, for example, data from an endpoint management system may be filtered, and the actions initiated by each process may be sorted into a dataset. In some implementations, identifying performed actions may be an iterative process with halting conditions to identify the scope of the compromise.
The credibility of each process action can be determined (312). In general, each process can initiate a finite number of actions, and each action may be associated with a particular credibility score in regard to the process. For example, for a particular process, an action of modifying a registry value may have a low credibility value, whereas modifying a file may have a high credibility value.
A composite credibility can be determined (314) for each process, based on the actions. For example, the indicator analytics server 208 can access a model that combines the credibility scores for the process actions to generate a composite credibility score (e.g., ranging from zero to one) for each process. In some implementations, the model may include interaction terms between the actions, to a second or third interaction degree. For example, if a process performs two or more actions in conjunction (e.g., concurrently or in series), the process may receive a score adjustment (e.g., an increase or decrease). In some implementations, the model may include a time decay function between actions to deemphasize unrelated actions. In some implementations, a composite credibility score may be determined by a machine learning algorithm (e.g., a general linear model), a cumulative sum algorithm, or another suitable algorithm.
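The combination of per-action scores, second-degree interaction terms, and time decay can be sketched as follows. The base scores, interaction bonus, and decay constant are invented for this example; the patent leaves the model's exact form open (e.g., a general linear model or a cumulative-sum algorithm):

```python
import math

# Illustrative composite-credibility model only: all constants are
# assumptions made for the sketch.

ACTION_CREDIBILITY = {          # credibility of each action for this process
    "modify_registry": 0.10,    # low credibility on its own
    "modify_file": 0.40,        # high credibility
    "spawn_process": 0.30,
}

INTERACTION_BONUS = 0.15        # weight of a second-degree interaction term
DECAY_RATE = 0.1                # per-second time decay between actions

def composite_credibility(actions):
    """actions: list of (action_name, timestamp) sorted by timestamp.
    Sums base scores, adds a bonus for each consecutive pair of actions
    (a second-degree interaction term), and de-emphasizes pairs that are
    far apart in time via exponential decay. Clamped to [0, 1]."""
    score = sum(ACTION_CREDIBILITY.get(name, 0.0) for name, _ in actions)
    for (_, t1), (_, t2) in zip(actions, actions[1:]):
        score += INTERACTION_BONUS * math.exp(-DECAY_RATE * (t2 - t1))
    return min(score, 1.0)

# Same three actions: performed in a tight burst vs. spread over minutes.
burst = [("modify_registry", 0.0), ("modify_file", 1.0), ("spawn_process", 2.0)]
spread = [("modify_registry", 0.0), ("modify_file", 60.0), ("spawn_process", 120.0)]
```

Actions performed in conjunction score higher than the same actions spread far apart, which is the effect the interaction terms and time decay are meant to capture.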
One or more components for profiling can be determined (316). For example, based on the composite credibility scores, a determination can be made of which processes and/or objects to profile for generating indicators of compromise (or threat indicators). In some implementations, the determination can be based on a threshold value for a composite credibility score. When a particular composite credibility score meets the threshold, for example, the indicator analytics server 208 can automatically determine which indicators to provide to the threat intelligence server 202. As another example, indicator selection may be a semi-automated process.
Indicators of compromise can be determined (318) for each component. Indicators of compromise (or actual security threat indicators) may be prioritized, for example, based at least in part on potential effectiveness in preventing or mitigating an associated threat. For example, a process associated with a security threat may communicate with a particular IP address, and may edit a particular registry key. In the present example, indicators of compromise associated with the threat may be the process name and the IP address, since these attributes may be used to prevent the threat from occurring, whereas the registry key may not. Determining indicators of compromise may be an automated or semi-automated process. In general, some low impact indicators may be generated automatically, while indicators for critical systems may require human oversight.
Indicators of compromise can be provided (320) for orchestration. For example, the indicators of compromise can be provided by the threat intelligence server 202 to the management and process orchestration server 204. Based on the indicators of compromise, for example, the management and process orchestration server 204 may coordinate human response and/or automate response to implement mitigations against future security threats.
FIG. 4 is a flowchart of an example process 400 that can be executed in accordance with implementations of the present disclosure. In some implementations, the process 400 may be performed by the system 100 (shown in FIG. 1) and/or 200 (shown in FIG. 2), and will be described as such for the purpose of clarity. The example process 400, for example, may be used for providing automated responses to security threats. Briefly, the process 400 includes identifying a security incident (e.g., a security threat or security breach), comparing the security incident with a predefined ontology, selecting a response strategy, optionally sending one or more notifications, and implementing a response strategy. In general, an ontology may include the design and representation of a structure that can be used to control system behavior. For example, a runbook ontology may specify details about configuring, operating, and supporting a computing network (e.g., systems, devices, and/or software applications) and can map security incidents to courses of action.
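A runbook ontology of this kind can be sketched as a table mapping incident types to ordered courses of action, with a flag distinguishing fully automated from semi-automated (operator-approved) responses. The incident names, steps, and approval flag are illustrative assumptions:

```python
# Hypothetical runbook sketch: incident type -> ordered steps plus a flag
# indicating whether a human must approve before execution.

RUNBOOK = {
    "ddos": {"steps": ["identify source IP block",
                       "reroute matching traffic to black hole",
                       "notify operations"],
             "requires_approval": False},
    "phishing": {"steps": ["check URL reputation",
                           "block URL at proxy",
                           "remove e-mail from recipient mailboxes"],
                 "requires_approval": True},
}

def course_of_action(incident_type):
    """Look up the predefined course of action; unknown incidents escalate."""
    entry = RUNBOOK.get(incident_type)
    if entry is None:
        return {"steps": ["escalate to analyst"], "requires_approval": True}
    return entry
```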
A security incident can be identified (402). Referring to FIG. 1, for example, the security information and analytics component 104 can identify a security incident (e.g., a security threat or security breach) to an organization's network, based on information from one or more of the data sources 120. For example, a distributed denial-of-service (DDoS) attack on one or more network servers may be identified by an endpoint management system, based on detecting a known pattern of events associated with the type of attack.
The security incident can be compared (404) with a predefined ontology. For example, the security information and analytics component 104 and/or the security orchestration services 142 can maintain an incident response ontology (e.g., a runbook that maps security threats to courses of action), and the system 100 can use one or more of the automated incident response components 108 to compare an identified incident to the ontology. Information related to mitigations (e.g., changes to a software defined networking topology) to be performed in response to a security threat or breach during an incident response process, for example, can be maintained by the infrastructure orchestration services 144.
Based on one or more indicators of compromise associated with a particular security incident (e.g., a DDoS attack), for example, the incident can be identified and an appropriate response strategy (e.g., rerouting network traffic) can be selected (406), e.g., by the response selector 132. Response strategies, for example, may be based on strategy information 154 received from the security orchestration services 142, and/or may be influenced by insight information 116 received from the threat intelligence component 102. For example, if a particular pattern is observed within a security incident, a response strategy may be influenced by the insight information 116 to determine an appropriate course of action (e.g., repairing damage caused by a security breach). Upon selecting the response strategy, for example, the security information and analytics component 104 can provide information (e.g., incident activity information 130) related to the identified security incident (e.g., a security threat or breach) to the defense component 106. In some implementations, the incident activity information 130 may include information for mitigating a security incident, including workflow steps to perform and/or infrastructure changes to implement in response to the incident. In the present example, the incident activity information 130 may include instructions for rerouting some network traffic (e.g., traffic originating from a particular IP block) or all traffic intended for the server under attack to a honeypot or to an IP black hole. As another example, the incident activity information 130 may include information related to incident activity (e.g., a type of threat or breach and affected system components and processes), and response handling may be determined by the defense component 106 and the orchestration services 140.
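Selecting a response strategy from observed indicators of compromise (step 406) might be sketched as matching indicator sets against a strategy table. The indicator names and strategy labels below are illustrative assumptions, not drawn from the disclosure:

```python
# Hypothetical strategy table: each entry pairs the set of indicators of
# compromise that must all be observed with the strategy to select.
STRATEGY_TABLE = [
    ({"syn_flood", "traffic_spike"}, "reroute_to_honeypot"),
    ({"traffic_spike"}, "rate_limit_source_block"),
]

def select_response(observed_indicators, strategy_table=STRATEGY_TABLE):
    """Return the first strategy whose required indicators are all present
    in the observed set; unmatched incidents go to manual review."""
    for required, strategy in strategy_table:
        if required <= observed_indicators:  # subset test
            return strategy
    return "manual_review"
```

Ordering the table from most to least specific indicator sets ensures that a richer match (e.g., a full DDoS signature) takes precedence over a partial one.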
In some implementations, one or more notifications may optionally be sent (408). For example, the defense component 106 and/or the security information and analytics component 104 can use the notification provider 134 to provide information associated with an identified incident (e.g., a security threat or security breach) and appropriate responses to the incident to an operator to facilitate a semi-automated response that may include automated and human workflow processes (e.g., discussed in further detail in association with FIG. 5). An incident response, for example, may include instructions for mitigating a security threat or security breach while performing a system recovery. A semi-automated process for implementing the incident response, for example, may include various checkpoints for which a human operator may be prompted to make a decision, and further automated processes (e.g., scripts) may be launched in response to receiving an indication of the decision. As another example, upon automatically performing actions for mitigating a security threat or security breach, the defense component 106 can log the actions, and notification of the actions performed can be provided to the operator through a reporting interface.
The response strategy can be implemented (410). For example, the defense component 106 can use the response implementer 136 to implement a selected response strategy via the orchestration services 140 (e.g., including security orchestration services 142 and/or infrastructure orchestration services 144). In general, implementing a response strategy may include implementing steps of an incident response ontology, refining an understanding of a scope of a security incident (e.g., by changing an infrastructure to observe and gather information about a security incident, such as by routing network traffic to perform packet capture), restricting networking and/or communications capabilities of computing devices/systems under threat (e.g., to prevent a spread of an identified threat), eradicating threats, and/or restoring affected devices/systems. In some implementations, the defense component 106 may automate incident responses. For example, the defense component 106 can use the infrastructure orchestration services 144 to coordinate operations of various other services (e.g., third-party services) and to automatically implement a response strategy maintained by the security orchestration services 142. In the present example, a software-defined networking (SDN) controller (e.g., the controller 206, shown in FIG. 2) can be used to redirect network traffic in response to the DDoS attack. As another example, a software-defined networking topology can be modified to limit (e.g., by restricting particular ports) the computing devices and/or systems with which a compromised computing device may communicate. Thus, the compromised computing device may continue to function, but not operate in such a way that it can spread an identified threat.
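The topology modification described above, restricting the ports on which a compromised device may communicate while letting it continue to function, can be sketched as generating flow rules for an SDN controller. The rule dictionaries below are a generic stand-in for controller-specific flow entries, not a real controller API:

```python
def quarantine_rules(host_ip, allowed_ports):
    """Build an ordered list of flow rules that permit a compromised host
    to communicate only on explicitly allowed ports, dropping the rest."""
    rules = [
        {"src": host_ip, "dst_port": port, "action": "allow"}
        for port in sorted(allowed_ports)
    ]
    # Default-deny rule evaluated last: traffic not matched above is dropped,
    # preventing the host from spreading an identified threat.
    rules.append({"src": host_ip, "dst_port": "*", "action": "drop"})
    return rules
```

A controller would install the allow rules at higher priority than the final drop rule, so the device keeps its permitted services while all other traffic is blocked.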
In some implementations, capture of host-based forensics may be automated. When a system breach is detected or has occurred, for example, priority may be placed on restoring the system, and performing host-based forensics may be deferred. By automating a forensics process, for example, capture of forensics information can be performed in the background and in parallel with restoring the system. In general, an incident response process includes investigating an incident and determining and following a response process based on the incident. Automating a host-based forensics step in an incident response ontology may include, for example, implementing a step in an incident response ontology, gathering data by a host agent and sending the data to a forensics repository, and proceeding to a next step in the incident response ontology. For example, the security orchestration services 142 can implement a step in an incident response ontology by making a request to the infrastructure orchestration services 144 to perform a particular operation (e.g., a request to gather data from a computing device on a network), and the infrastructure orchestration services 144 can make a corresponding request to a host agent. Upon gathering the data (e.g., by logging the hostname of the device, by taking a snapshot of the device, and by capturing a memory dump and log files), for example, the data can be provided by the host agent to a forensics repository for further analysis/processing. Next, the host agent can send an acknowledgement to the infrastructure orchestration services 144, for example, which can in turn send an acknowledgement to the security orchestration services 142, which can then proceed to the next step in the ontology.
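The automated forensics step above (gather artifacts on the host agent, deposit them in a forensics repository, and acknowledge back up the orchestration chain) might be sketched as follows; the artifact names and dictionary shapes are assumptions for illustration:

```python
def capture_forensics(host, repository):
    """Gather forensic artifacts from a host (hostname, snapshot, memory
    dump, log files), append them to a forensics repository, and return
    the acknowledgement passed back to the orchestration services."""
    artifacts = {
        "hostname": host["name"],
        "snapshot": f"snapshot-{host['name']}",
        "memory_dump": f"memdump-{host['name']}",
        "log_files": list(host.get("logs", [])),
    }
    repository.append(artifacts)  # hand off for further analysis/processing
    return {"host": host["name"], "status": "ack"}
```

Because the capture runs independently of restoration, the orchestration layer can invoke it in the background and proceed to the next ontology step as soon as the acknowledgement arrives.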
FIG. 5 is a flow diagram of an example process 500 that can be executed in accordance with implementations of the present disclosure. In some implementations, the process 500 may be performed by the system 100 (shown in FIG. 1) and/or 200 (shown in FIG. 2), and will be described as such for the purpose of clarity. The process 500, for example, illustrates an example set of interactions between an operator 502 and a computing system 504, in which a semi-automated process for implementing an incident response may occur. For the sake of clarity, the example flow of events shown in FIG. 5 generally flows from the bottom of the diagram to the top of the diagram.
During stage 510, for example, a component (e.g., the security information and analytics component 104) of the computing system 504 can continually monitor for indicators of compromise (e.g., indicators of security threats or security breaches), such as suspicious processes, files, traffic sources, or other aspects, based on information provided by the data sources 120. During stage 512, for example, an indicator of compromise associated with a security incident (e.g., a threat or breach) can be detected as having occurred within the computing system 504, and a corresponding notification 514 can be provided (e.g., by the notification provider 134) to the operator 502. In the present example, the indicator of compromise may indicate that a particular computing device of the computing system 504 (e.g., a node on a network) is running a process that has been identified as malicious.
In some implementations, the notification 514 may include one or more possible actions that may be performed (e.g., by the response implementer 136) to mitigate the security incident. Each possible action, for example, can correspond to an incident response (e.g., a script) for mitigating the security incident. In the present example, the notification 514 can include a description of the security incident, and a list of possible actions which may be performed by the response implementer 136, including an action of removing the malicious process from the affected computing device and restoring the device, and an action of blocking outgoing network traffic from the affected device.
During stage 516, for example, the operator 502 can select one or more of the possible actions 518, and information associated with the actions can be provided to the computing system 504. In some implementations, instructions for implementing the incident response (e.g., scripts) can be provided to the computing system 504 by a computing device of the operator 502. In some implementations, instructions for implementing the incident response may be hosted by the computing system 504, and the action(s) 518 may include information that identifies the appropriate instructions. In the present example, the operator 502 provides instructions for removing the malicious process and restoring the affected device.
During stage 520, the computing system 504 can perform the incident response (e.g., execute a script corresponding to the selected action(s) 518), and can provide a further notification 522 that pertains to the security incident, which can include results of the performed actions (e.g., the response scripts), a status of the affected computing device, and a list of possible actions that may be performed based on the device's current status. In the present example, the further notification 522 indicates that a script for removing the malicious process was executed, but additional suspicious files were detected on the affected device. The further notification 522 in the present example may also indicate that an action of isolating the suspicious files and an action of blocking outgoing network traffic from the affected device may be performed. Based on the notification 522, for example, the operator 502 can select (at stage 524) the option to perform the action of blocking outgoing network traffic from the device, and information associated with the action(s) 526 can be provided to the computing system 504. In the present example, the computing system 504 can perform the corresponding incident response action (at stage 528), and can provide a further notification 530 pertaining to the security incident (e.g., a notification that traffic was successfully blocked). Thus, the semi-automated example process 500 may be iterative, with performed actions potentially triggering further possible actions based on a changing state of an affected device or network. The operator 502, for example, can direct an incident response process at a high level, while the computing system 504 performs low-level repeatable tasks.
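The iterative notify, decide, and act cycle of process 500 can be sketched as a loop in which a chooser function stands in for the human operator; the playbook contents below are hypothetical and not part of the disclosure:

```python
def respond(initial_actions, playbook, choose):
    """Drive a semi-automated incident response: offer actions, apply the
    operator's choice, and continue with any follow-up actions the result
    triggers. `playbook` maps an action name to (result, next_actions);
    `choose` stands in for the human operator's decision at each checkpoint."""
    actions, log = list(initial_actions), []
    while actions:
        picked = choose(actions)          # operator checkpoint
        result, actions = playbook[picked]  # execute script; get follow-ups
        log.append((picked, result))
    return log
```

With a playbook in which removing a malicious process surfaces suspicious files that then warrant blocking outgoing traffic, the loop reproduces the operator exchange described above.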
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be or further include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program, which may also be referred to or described as a program, software, a software application, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for the execution of a computer program include, by way of example, general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the user device, which acts as a client. Data generated at the user device, e.g., a result of the user interaction, can be received from the user device at the server.
An example of one such type of computer is shown in FIG. 6, which shows a schematic diagram of a generic computer system 600. The system 600 can be used for the operations described in association with any of the computer-implemented methods described previously, according to one implementation. The system 600 includes a processor 610, a memory 620, a storage device 630, and an input/output device 640. Each of the components 610, 620, 630, and 640 is interconnected using a system bus 650. The processor 610 is capable of processing instructions for execution within the system 600. In one implementation, the processor 610 is a single-threaded processor. In another implementation, the processor 610 is a multi-threaded processor. The processor 610 is capable of processing instructions stored in the memory 620 or on the storage device 630 to display graphical information for a user interface on the input/output device 640.
The memory 620 stores information within the system 600. In one implementation, the memory 620 is a computer-readable medium. In one implementation, the memory 620 is a volatile memory unit. In another implementation, the memory 620 is a non-volatile memory unit.
The storage device 630 is capable of providing mass storage for the system 600. In one implementation, the storage device 630 is a computer-readable medium. In various different implementations, the storage device 630 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
The input/output device 640 provides input/output operations for the system 600. In one implementation, the input/output device 640 includes a keyboard and/or pointing device. In another implementation, the input/output device 640 includes a display unit for displaying graphical user interfaces.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.

Claims (14)

What is claimed is:
1. A computer-implemented method using one or more hardware processors, the method comprising:
identifying a compromise to a system;
performing a snapshot of the system and, based at least in part on the snapshot, identifying one or more potential indicators of compromise, wherein each of the potential indicators of compromise is based on one or more currently running or recently ended processes on the system;
determining that one or more potential indicators of compromise are potential threat indicators, wherein the determining is based on matching the potential indicators of compromise with stored security threat information;
for each potential indicator of compromise that is a potential threat indicator:
identifying one or more corresponding actions performed by the system that have been initiated by the one or more currently running or recently ended processes on the system on which identifying the potential indicator of compromise was based;
determining, by the one or more hardware processors, a credibility score of each action of the one or more corresponding actions performed by the system, wherein each action is associated with a particular credibility score in regard to the currently running or recently ended process which initiated the action;
determining a composite credibility score of the potential indicator of compromise, by combining the determined credibility scores of each action; and
determining that the potential indicator of compromise is an actual threat indicator, based on the composite credibility score.
2. The method of claim 1, wherein identifying one or more potential indicators of compromise includes analyzing the snapshot to identify one or more of currently running processes, recently ended processes, or recently modified objects.
3. The method of claim 1, wherein identifying one or more corresponding actions performed by the system includes identifying actions related to one or more of process spawning, file access or modification, or registry access or modification.
4. The method of claim 1, wherein determining the composite credibility score of the potential indicator of compromise includes accessing a model that combines the credibility scores for the actions.
5. The method of claim 4, wherein the model includes interaction terms between the actions, to a multiple degree.
6. The method of claim 4, wherein the model includes a time decay function between actions.
7. The method of claim 4, wherein determining that the potential indicator of compromise is an actual threat indicator includes determining that the composite credibility score for the potential indicator of compromise meets a predetermined threshold.
8. The method of claim 1, further comprising prioritizing an actual security threat indicator, based at least in part on its potential effectiveness in preventing or mitigating a security threat.
9. A system comprising:
one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
identifying a compromise to a system;
performing a snapshot of the system and, based at least in part on the snapshot, identifying one or more potential indicators of compromise, wherein each of the potential indicators of compromise is based on one or more currently running or recently ended processes on the system;
determining that one or more potential indicators of compromise are potential threat indicators, wherein the determining is based on matching the potential indicators of compromise with stored security threat information;
for each potential indicator of compromise that is a potential threat indicator:
identifying one or more corresponding actions performed by the system that have been initiated by the one or more currently running or recently ended processes on the system on which identifying the potential indicator of compromise was based;
determining a credibility score of each action of the one or more corresponding actions performed by the system, wherein each action is associated with a particular credibility score in regard to the currently running or recently ended process which initiated the action;
determining a composite credibility score of the potential indicator of compromise, by combining the determined credibility scores of each action; and
determining that the potential indicator of compromise is an actual threat indicator, based on the composite credibility score.
10. The system of claim 9, wherein identifying one or more potential indicators of compromise includes analyzing the snapshot to identify one or more of currently running processes, recently ended processes, or recently modified objects.
11. The system of claim 9, wherein identifying one or more corresponding actions performed by the system includes identifying actions related to one or more of process spawning, file access or modification, or registry access or modification.
12. The system of claim 9, wherein determining the composite credibility score of the potential indicator of compromise includes accessing a model that combines the credibility scores for the actions.
13. The system of claim 12, wherein the model includes at least one of interaction terms between the actions, to a multiple degree, and a time decay function between actions.
14. The system of claim 12, wherein determining that the potential indicator of compromise is an actual threat indicator includes determining that the composite credibility score for the potential indicator of compromise meets a predetermined threshold.
US14/473,910 2014-06-11 2014-08-29 Threat indicator analytics system Active 2035-07-22 US9794279B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/473,910 US9794279B2 (en) 2014-06-11 2014-08-29 Threat indicator analytics system
AU2015203086A AU2015203086B2 (en) 2014-06-11 2015-06-10 Threat indicator analytics system
EP15171735.2A EP2955895B1 (en) 2014-06-11 2015-06-11 Threat indicator analytics system
US15/782,498 US10021127B2 (en) 2014-06-11 2017-10-12 Threat indicator analytics system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462010816P 2014-06-11 2014-06-11
US14/473,910 US9794279B2 (en) 2014-06-11 2014-08-29 Threat indicator analytics system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/782,498 Division US10021127B2 (en) 2014-06-11 2017-10-12 Threat indicator analytics system

Publications (2)

Publication Number Publication Date
US20160269434A1 US20160269434A1 (en) 2016-09-15
US9794279B2 true US9794279B2 (en) 2017-10-17

Family

ID=53476683

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/473,910 Active 2035-07-22 US9794279B2 (en) 2014-06-11 2014-08-29 Threat indicator analytics system
US15/782,498 Active US10021127B2 (en) 2014-06-11 2017-10-12 Threat indicator analytics system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/782,498 Active US10021127B2 (en) 2014-06-11 2017-10-12 Threat indicator analytics system

Country Status (3)

Country Link
US (2) US9794279B2 (en)
EP (1) EP2955895B1 (en)
AU (1) AU2015203086B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11005887B2 (en) * 2017-11-02 2021-05-11 Korea Advanced Institute Of Science And Technology Honeynet method, system and computer program for mitigating link flooding attacks of software defined network
US11012448B2 (en) 2018-05-30 2021-05-18 Bank Of America Corporation Dynamic cyber event analysis and control
US11163889B2 (en) 2019-06-14 2021-11-02 Bank Of America Corporation System and method for analyzing and remediating computer application vulnerabilities via multidimensional correlation and prioritization

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8903973B1 (en) 2008-11-10 2014-12-02 Tanium Inc. Parallel distributed network management
US10740692B2 (en) 2017-10-17 2020-08-11 Servicenow, Inc. Machine-learning and deep-learning techniques for predictive ticketing in information technology systems
US10600002B2 (en) 2016-08-04 2020-03-24 Loom Systems LTD. Machine learning techniques for providing enriched root causes based on machine-generated data
US11172470B1 (en) 2012-12-21 2021-11-09 Tanium Inc. System, security and network management using self-organizing communication orbits in distributed networks
US9246977B2 (en) 2012-12-21 2016-01-26 Tanium Inc. System, security and network management using self-organizing communication orbits in distributed networks
US9886581B2 (en) * 2014-02-25 2018-02-06 Accenture Global Solutions Limited Automated intelligence graph construction and countermeasure deployment
US10873645B2 (en) 2014-03-24 2020-12-22 Tanium Inc. Software application updating in a local network
US9769275B2 (en) 2014-03-24 2017-09-19 Tanium Inc. Data caching and distribution in a local network
US9690928B2 (en) * 2014-10-25 2017-06-27 Mcafee, Inc. Computing platform security methods and apparatus
US9888031B2 (en) 2014-11-19 2018-02-06 Cyber Secdo Ltd. System and method thereof for identifying and responding to security incidents based on preemptive forensics
US10230742B2 (en) 2015-01-30 2019-03-12 Anomali Incorporated Space and time efficient threat detection
US20160294871A1 (en) * 2015-03-31 2016-10-06 Arbor Networks, Inc. System and method for mitigating against denial of service attacks
US11303662B2 (en) * 2015-04-20 2022-04-12 Micro Focus Llc Security indicator scores
US11461208B1 (en) 2015-04-24 2022-10-04 Tanium Inc. Reliable map-reduce communications in a decentralized, self-organizing communication orbit of a distributed network
US9910752B2 (en) 2015-04-24 2018-03-06 Tanium Inc. Reliable map-reduce communications in a decentralized, self-organizing communication orbit of a distributed network
US10902114B1 (en) * 2015-09-09 2021-01-26 ThreatQuotient, Inc. Automated cybersecurity threat detection with aggregation and analysis
US10742664B2 (en) * 2015-10-20 2020-08-11 International Business Machines Corporation Probabilistically detecting low-intensity, multi-modal threats using synthetic events
US11372938B1 (en) 2016-03-08 2022-06-28 Tanium Inc. System and method for performing search requests in a network
US10498744B2 (en) * 2016-03-08 2019-12-03 Tanium Inc. Integrity monitoring in a local network
US11886229B1 (en) 2016-03-08 2024-01-30 Tanium Inc. System and method for generating a global dictionary and performing similarity search queries in a network
US10929345B2 (en) 2016-03-08 2021-02-23 Tanium Inc. System and method of performing similarity search queries in a network
US11609835B1 (en) 2016-03-08 2023-03-21 Tanium Inc. Evaluating machine and process performance in distributed system
US11153383B2 (en) 2016-03-08 2021-10-19 Tanium Inc. Distributed data analysis for streaming data sources
US10372904B2 (en) * 2016-03-08 2019-08-06 Tanium Inc. Cost prioritized evaluations of indicators of compromise
US20170345112A1 (en) * 2016-05-25 2017-11-30 Tyco Fire & Security Gmbh Dynamic Threat Analysis Engine for Mobile Users
US10963634B2 (en) * 2016-08-04 2021-03-30 Servicenow, Inc. Cross-platform classification of machine-generated textual data
US10789119B2 (en) 2016-08-04 2020-09-29 Servicenow, Inc. Determining root-cause of failures based on machine-generated textual data
WO2018027226A1 (en) * 2016-08-05 2018-02-08 Fractal Industries, Inc. Detection mitigation and remediation of cyberattacks employing an advanced cyber-decision platform
US10681062B2 (en) * 2016-11-02 2020-06-09 Accenture Global Solutions Limited Incident triage scoring engine
LU93398B1 (en) 2016-12-23 2018-07-24 Luxembourg Inst Science & Tech List Method for orchestrating reactions to complex attacks on computing systems
US10469509B2 (en) * 2016-12-29 2019-11-05 Chronicle Llc Gathering indicators of compromise for security threat detection
US11049026B2 (en) 2017-03-20 2021-06-29 Micro Focus Llc Updating ground truth data in a security management platform
US10999296B2 (en) * 2017-05-15 2021-05-04 Forcepoint, LLC Generating adaptive trust profiles using information derived from similarly situated organizations
US10992698B2 (en) * 2017-06-05 2021-04-27 Meditechsafe, Inc. Device vulnerability management
US10824729B2 (en) 2017-07-14 2020-11-03 Tanium Inc. Compliance management in a local network
KR101850351B1 (en) * 2017-12-08 2018-04-19 (주) 세인트 시큐리티 Method for Inquiring IoC Information by Use of P2P Protocol
US10841365B2 (en) * 2018-07-18 2020-11-17 Tanium Inc. Mapping application dependencies in a computer network
US11343355B1 (en) * 2018-07-18 2022-05-24 Tanium Inc. Automated mapping of multi-tier applications in a distributed system
US10924481B2 (en) 2018-11-06 2021-02-16 Bank Of America Corporation Processing system for providing console access to a cyber range virtual environment
US10958670B2 (en) 2018-11-06 2021-03-23 Bank Of America Corporation Processing system for providing console access to a cyber range virtual environment
US11201893B2 (en) * 2019-10-08 2021-12-14 The Boeing Company Systems and methods for performing cybersecurity risk assessments
US11831670B1 (en) 2019-11-18 2023-11-28 Tanium Inc. System and method for prioritizing distributed system risk remediations
US11836247B2 (en) * 2020-03-30 2023-12-05 Fortinet, Inc. Detecting malicious behavior in a network using security analytics by analyzing process interaction ratios
CN111782967A (en) * 2020-07-02 2020-10-16 奇安信科技集团股份有限公司 Information processing method, information processing device, electronic equipment and computer readable storage medium
US11563764B1 (en) 2020-08-24 2023-01-24 Tanium Inc. Risk scoring based on compliance verification test results in a local network
CN112383411B (en) * 2020-10-22 2022-11-15 杭州安恒信息安全技术有限公司 Network security early warning notification method, electronic device and storage medium
CN114157498B (en) * 2021-12-07 2022-08-16 上海交通大学 WEB high-interaction honeypot system based on artificial intelligence and attack prevention method
US20230224275A1 (en) * 2022-01-12 2023-07-13 Bank Of America Corporation Preemptive threat detection for an information system

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2208210A (en) 1938-03-23 1940-07-16 Eugene M Coppola Door check
US20060212925A1 (en) * 2005-03-02 2006-09-21 Markmonitor, Inc. Implementing trust policies
US20070150957A1 (en) * 2005-12-28 2007-06-28 Microsoft Corporation Malicious code infection cause-and-effect analysis
WO2007090224A1 (en) 2006-02-08 2007-08-16 Pc Tools Technology Pty Limited Automated threat analysis
US20080244748A1 (en) * 2007-04-02 2008-10-02 Microsoft Corporation Detecting compromised computers by correlating reputation data with web access logs
US7450005B2 (en) * 2006-01-18 2008-11-11 International Business Machines Corporation System and method of dynamically weighted analysis for intrusion decision-making
US20100024037A1 (en) * 2006-11-09 2010-01-28 Grzymala-Busse Witold J System and method for providing identity theft security
US20100235879A1 (en) * 2007-06-08 2010-09-16 Matthew Burnside Systems, methods, and media for enforcing a security policy in a network including a plurality of components
US20110040983A1 (en) * 2006-11-09 2011-02-17 Grzymala-Busse Withold J System and method for providing identity theft security
US20110093916A1 (en) 2008-06-10 2011-04-21 Ulrich Lang Method and system for rapid accreditation/re-accreditation of agile it environments, for example service oriented architecture (soa)
US20110314557A1 (en) * 2010-06-16 2011-12-22 Adknowledge, Inc. Click Fraud Control Method and System
US8209759B2 (en) * 2005-07-18 2012-06-26 Q1 Labs, Inc. Security incident manager
US20130191919A1 (en) * 2012-01-19 2013-07-25 Mcafee, Inc. Calculating quantitative asset risk
US20140020072A1 (en) 2012-07-13 2014-01-16 Andrew J. Thomas Security access protection for user data stored in a cloud computing facility
US20140214610A1 (en) * 2013-01-31 2014-07-31 Sean Moshir Method and System to Intelligently Assess and Mitigate Security Risks on a Mobile Device

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2208210A (en) 1938-03-23 1940-07-16 Eugene M Coppola Door check
US20060212925A1 (en) * 2005-03-02 2006-09-21 Markmonitor, Inc. Implementing trust policies
US20060212930A1 (en) * 2005-03-02 2006-09-21 Markmonitor, Inc. Distribution of trust data
US8209759B2 (en) * 2005-07-18 2012-06-26 Q1 Labs, Inc. Security incident manager
US20070150957A1 (en) * 2005-12-28 2007-06-28 Microsoft Corporation Malicious code infection cause-and-effect analysis
US7450005B2 (en) * 2006-01-18 2008-11-11 International Business Machines Corporation System and method of dynamically weighted analysis for intrusion decision-making
WO2007090224A1 (en) 2006-02-08 2007-08-16 Pc Tools Technology Pty Limited Automated threat analysis
US20100024037A1 (en) * 2006-11-09 2010-01-28 Grzymala-Busse Witold J System and method for providing identity theft security
US20110040983A1 (en) * 2006-11-09 2011-02-17 Grzymala-Busse Withold J System and method for providing identity theft security
US20080244748A1 (en) * 2007-04-02 2008-10-02 Microsoft Corporation Detecting compromised computers by correlating reputation data with web access logs
US20100235879A1 (en) * 2007-06-08 2010-09-16 Matthew Burnside Systems, methods, and media for enforcing a security policy in a network including a plurality of components
US20110093916A1 (en) 2008-06-10 2011-04-21 Ulrich Lang Method and system for rapid accreditation/re-accreditation of agile it environments, for example service oriented architecture (soa)
US20110314557A1 (en) * 2010-06-16 2011-12-22 Adknowledge, Inc. Click Fraud Control Method and System
US20130191919A1 (en) * 2012-01-19 2013-07-25 Mcafee, Inc. Calculating quantitative asset risk
US20140020072A1 (en) 2012-07-13 2014-01-16 Andrew J. Thomas Security access protection for user data stored in a cloud computing facility
US20140214610A1 (en) * 2013-01-31 2014-07-31 Sean Moshir Method and System to Intelligently Assess and Mitigate Security Risks on a Mobile Device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Australia Office Action in Appl. No. 2015203088, dated Mar. 7, 2016, 2 pages.
Extended European Search Report in Application No. 15171735.2, dated Nov. 11, 2015, 8 pages.
Penta et al., "The life and death of statically detected vulnerabilities: An empirical study," Information Software Technol., 51(10):1469-1484 (Oct. 1, 2009).
U.S. Non-Final Office Action for U.S. Appl. No. 14/473,866 dated Aug. 12, 2015, 10 pages.
U.S. Non-Final Office Action for U.S. Appl. No. 15/196,651, dated Nov. 10, 2016, 14 pages.
U.S. Notice of Allowance for U.S. Appl. No. 14/473,866 dated Apr. 4, 2016, 8 pages.

Also Published As

Publication number Publication date
US10021127B2 (en) 2018-07-10
AU2015203086A1 (en) 2016-01-07
US20160269434A1 (en) 2016-09-15
EP2955895B1 (en) 2019-10-16
US20180041538A1 (en) 2018-02-08
EP2955895A1 (en) 2015-12-16
AU2015203086B2 (en) 2017-01-12

Similar Documents

Publication Publication Date Title
US10021127B2 (en) Threat indicator analytics system
US10051010B2 (en) Method and system for automated incident response
US10447733B2 (en) Deception network system
US11750659B2 (en) Cybersecurity profiling and rating using active and passive external reconnaissance
US11277432B2 (en) Generating attack graphs in agile security platforms
US20220014560A1 (en) Correlating network event anomalies using active and passive external reconnaissance to identify attack information
US11316891B2 (en) Automated real-time multi-dimensional cybersecurity threat modeling
JP6916300B2 (en) Collecting compromise indicators for security threat detection
EP3179696B1 (en) Connected security system
EP3343867B1 (en) Methods and apparatus for processing threat metrics to determine a risk of loss due to the compromise of an organization asset
US9258321B2 (en) Automated internet threat detection and mitigation system and associated methods
US20220201042A1 (en) Ai-driven defensive penetration test analysis and recommendation system
US20210360032A1 (en) Cybersecurity risk analysis and anomaly detection using active and passive external reconnaissance
WO2015134008A1 (en) Automated internet threat detection and mitigation system and associated methods
US20220014561A1 (en) System and methods for automated internet-scale web application vulnerability scanning and enhanced security profiling
US20230164158A1 (en) Interactive artificial intelligence-based response loop to a cyberattack
US20240031380A1 (en) Unifying of the network device entity and the user entity for better cyber security modeling along with ingesting firewall rules to determine pathways through a network
US20230403294A1 (en) Cyber security restoration engine
WO2024035746A1 (en) A cyber security restoration engine
WO2021154460A1 (en) Cybersecurity profiling and rating using active and passive external reconnaissance

Legal Events

Date Code Title Description
AS Assignment

Owner name: ACCENTURE GLOBAL SERVICES LIMITED, IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DIVALENTIN, LOUIS WILLIAM;CARVER, MATTHEW;LEFEBVRE, MICHAEL L.;SIGNING DATES FROM 20140912 TO 20150609;REEL/FRAME:035815/0070

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4