WO2014147618A1 - Accelerating a clock system to identify malware - Google Patents


Info

Publication number
WO2014147618A1
Authority
WO
WIPO (PCT)
Prior art keywords
computer
malware
sandbox
security
clock
Application number
PCT/IL2014/050298
Other languages
French (fr)
Inventor
Tavi Salomon
David Herman
Original Assignee
Israel Aerospace Industries Ltd.
Application filed by Israel Aerospace Industries Ltd.
Publication of WO2014147618A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/56Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/566Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities

Definitions

  • the present invention relates generally to computer security and more particularly to sandboxes for computer security.
  • Sandbox systems are known in the art, e.g. as shown in prior art Fig. 1a.
  • TOD (time-of-day) jumps are known, e.g. as described in the following references:
  • US2013160120A entitled Protecting End Users From Malware Using Advertising Virtual Machine
  • US2012151586A, US2008016339A entitled Application Sandbox To Detect, Remove, And Prevent Malware
  • US2006021029A entitled Method Of Improving Computer Security Through Sandboxing
  • US8099596B entitled System And Method For Malware Protection Using Virtualization
  • US2012278892A entitled System For Malware Normalization And Detection
  • US2006161982A entitled Intrusion Detection System.
  • Certain embodiments of the present invention seek to provide improved computer security apparatus.
  • A DOS or other low-level internal system computes the time of day to correspond to the normal pace of time governing human endeavor.
  • Certain embodiments of the present invention seek to accelerate system time, as opposed to changing or skipping the system time to a different value, which is known in the art. At least one of the following may be provided to effect the acceleration: a. an external clock, which may be a particularly cost-effective solution;
  • a processor computes elapsed time by counting ticks; for example, if there is one tick per microsecond, then the processor may count 1000 ticks and then zero the counter to indicate that a millisecond has elapsed.
  • an interrupt may determine that 10,000 ticks are required, or conversely that fewer than 1,000 ticks are required.
  • the system clock is not changed; only the time of day is. So, the acceleration may be applied only to the time-of-day, not to elements such as the processor, RAM, or DRAM.
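  • The tick-counting scheme above can be sketched in software as follows (an illustrative Python simulation, not taken from the patent itself; the class and variable names are hypothetical). A counter accumulates ticks and zeroes itself once a threshold is reached; lowering the threshold accelerates the apparent time of day without touching the tick source or the system clock:

```python
class TickClock:
    """Simulate a time-of-day counter driven by hardware ticks.

    ticks_per_unit is the number of ticks that must accumulate before
    one time-of-day unit (e.g. 1 msec) is registered.  Lowering it
    below the nominal value accelerates the apparent time of day
    without touching the underlying tick source or system clock.
    """

    def __init__(self, ticks_per_unit=1000):
        self.ticks_per_unit = ticks_per_unit
        self.tick_count = 0
        self.elapsed_units = 0  # the (possibly accelerated) time of day

    def tick(self):
        self.tick_count += 1
        if self.tick_count >= self.ticks_per_unit:
            self.tick_count = 0      # zero the counter
            self.elapsed_units += 1  # one time-of-day unit has "elapsed"

normal = TickClock(ticks_per_unit=1000)  # nominal pace
fast = TickClock(ticks_per_unit=100)     # 10x accelerated time of day
for _ in range(10_000):                  # one shared tick source drives both
    normal.tick()
    fast.tick()
```

  After the same 10,000 ticks, the nominal clock reports 10 elapsed units while the accelerated clock reports 100; the tick source itself is untouched.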
  • Embodiment a Computer-security providing sandbox apparatus for fast identification of malware, the apparatus comprising:
  • a sandbox with an accelerated clock system instead of a conventional "time of day" clock system, e.g. an oscillator-based PC clock.
  • Embodiment b Computer-security providing sandbox apparatus according to Embodiment a wherein the apparatus detects within minutes, or even in real time, malware which has a latency period of days, weeks, months or years before launching its zero-day cyber attack. More generally, the apparatus detects the malware within a time period which is significantly less than the malware's actual latency period, e.g. by selecting the acceleration of the sandbox's accelerated clock system and the incubation period in the sandbox such that the incubation period, having been accelerated, is longer than the estimated or known value of the malware's latency period.
  • Embodiment c Computer-security providing sandbox apparatus according to Embodiment a wherein the accelerated "time of day" clock system is provided by controllable clock acceleration functionality, also termed herein a "cyber metronome".
  • Embodiment 1 Computer-security apparatus comprising:
  • an accelerated clock system for identifying malware by artificially hastening activation of malware so that said activation occurs during the malware's period of residence in the sandbox, even if the malware's latency period is longer than the malware's period of residence in the sandbox.
  • Embodiment 2 Computer-security apparatus according to any of the previous embodiments wherein the apparatus detects, in real time, malware which has a latency period of at least months before launching its zero-day cyber attack.
  • Embodiment 3 Computer-security apparatus according to any of the previous embodiments wherein the accelerated "time of day" clock system is provided by controllable clock acceleration functionality.
  • Embodiment 4 Computer-security apparatus according to any of the previous embodiments and installed within a firewall.
  • Embodiment 5 Computer-security apparatus according to any of the previous embodiments and residing in a stand-alone device located at an entry-point to secure facilities.
  • Embodiment 6 A computer-security method comprising:
  • Embodiment 7 Computer-security apparatus according to any of the previous embodiments wherein acceleration is achieved by provision of a hardware breakpoint.
  • Embodiment 8 Computer-security apparatus according to any of the previous embodiments wherein acceleration is achieved by provision of a separate clock.
  • Embodiment 9 Computer-security apparatus according to any of the previous embodiments wherein acceleration is achieved by provision of a separate processor running a separate system time.
  • Embodiment 10 A method according to any of the previous embodiments and also comprising:
  • Embodiment 11 A method according to any of the previous embodiments wherein acceleration is effected by an external clock.
  • Embodiment 12 A method according to any of the previous embodiments wherein acceleration is effected by an external interrupt.
  • Embodiment 13 A method according to any of the previous embodiments wherein acceleration is effected by an external CPU operative to compute the time of day without changing the system clock and without affecting elements such as processors, RAMs, or DRAMs.
  • Embodiment 14 A method according to any of the previous embodiments wherein, instead of a RAM determining that a predetermined number of accumulated ticks triggers counter-zeroing, an interrupt determines that a different number of ticks is required to trigger counter-zeroing.
  • Embodiment 15 Computer-security apparatus according to any of the previous embodiments wherein providing the hardware breakpoint comprises programming a watchpoint unit to monitor core busses for an instruction fetch from a specific memory location.
  • Embodiment 16 Computer-security apparatus according to any of the previous embodiments wherein a hardware breakpoint is established in RAM.
  • Embodiment 18 Computer-security apparatus according to any of the previous embodiments wherein system time is changed by addressing a breakpoint in a counting program so as to provide faster count parameters using division by a parameter.
  • Embodiment 19 A computer program product, comprising a non-transitory tangible computer readable medium having computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a computer-security method operative in conjunction with a sandbox, in which malware resides for a pre-designated period, the sandbox having a clock; the method comprising accelerating said clock for artificially hastening activation of at least one malevolent effect by malware whose latency period is longer than said pre-designated period.
  • a computer program comprising computer program code means for performing any of the methods shown and described herein when said program is run on a computer; and a computer program product, comprising a typically non-transitory computer-usable or -readable medium, e.g. a non-transitory computer-usable or -readable storage medium, typically tangible, having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement any or all of the methods shown and described herein. It is appreciated that any or all of the computational steps shown and described herein may be computer-implemented.
  • non-transitory is used herein to exclude transitory, propagating signals or waves, but to otherwise include any volatile or nonvolatile computer memory technology suitable to the application.
  • Any suitable processor, display and input means may be used to process, display e.g. on a computer screen or other computer output device, store, and accept information such as information used by or generated by any of the methods and apparatus shown and described herein; the above processor, display and input means including computer programs, in accordance with some or all of the embodiments of the present invention.
  • any or all functionalities of the invention shown and described herein, such as but not limited to steps of flowcharts, may be performed by a conventional personal computer processor, workstation or other programmable device or computer or electronic computing device or processor, either general-purpose or specifically constructed, used for processing; a computer display screen and/or printer and/or speaker for displaying; machine-readable memory such as optical disks, CDROMs, DVDs, BluRays, magnetic-optical discs or other discs; RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing, and keyboard or mouse for accepting.
  • the term "process” as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g. electronic, phenomena which may occur or reside e.g. within registers and /or memories of a computer or processor.
  • the term processor includes a single processing unit or a plurality of distributed or remote such units.
  • the above devices may communicate via any conventional wired or wireless digital communication means, e.g. via a wired or cellular telephone network or a computer network such as the Internet.
  • the apparatus of the present invention may include, according to certain embodiments of the invention, machine readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements some or all of the apparatus, methods, features and functionalities of the invention shown and described herein.
  • the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above which may be written in any conventional programming language, and optionally a machine for executing the program such as but not limited to a general purpose computer which may optionally be configured or activated in accordance with the teachings of the present invention. Any of the teachings incorporated herein may, wherever suitable, operate on signals representative of physical objects or substances.
  • the term "computer” should be broadly construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, personal computers, servers, computing system, communication devices, processors (e.g. digital signal processor (DSP), microcontrollers, field programmable gate array (FPGA), application specific integrated circuit (ASIC), etc.) and other electronic computing devices.
  • Any suitable input device such as but not limited to a sensor, may be used to generate or otherwise provide information received by the apparatus and methods shown and described herein.
  • Any suitable output device or display may be used to display or output information generated by the apparatus and methods shown and described herein.
  • Any suitable processor may be employed to compute or generate information as described herein e.g. by providing one or more modules in the processor to perform functionalities described herein.
  • Any suitable computerized data storage e.g. computer memory may be used to store information received by or generated by the systems shown and described herein.
  • Functionalities shown and described herein may be divided between a server computer and a plurality of client computers. These or any other computerized components shown and described herein may communicate between themselves via a suitable computer network.
  • Fig. 1a is a diagram illustrating sandbox systems known in the art.
  • Fig. 1b is a simplified flowchart illustration of a method which might not be suitable for protecting against malware with a very long latency period.
  • Fig. 1c is a simplified flowchart illustration of a method for accelerating time inside a sandbox in accordance with certain embodiments of the present invention, e.g. as an example implementation of the method of Fig. 4.
  • Fig. 2 is a simplified flowchart illustration of a method in which interrupt-driven transfers use an external clock instead of the system clock, to count ticks.
  • Fig. 3 illustrates a separate processor which may run, separately, any linear or other combination of system time, in accordance with certain embodiments of the present invention.
  • the separate CPU may compute the ticking system time and may provide 1 msec (say) increments to the main program.
  • Fig. 4 is a more general flowchart illustration of a method for accelerating activation of malware inside a sandbox in accordance with certain embodiments of the present invention.
  • the methods of the flowcharts each typically comprise some or all of the illustrated steps, suitably ordered e.g. as shown.
  • Computational components described and illustrated herein can be implemented in various forms, for example, as hardware circuits such as but not limited to custom VLSI circuits or gate arrays or programmable hardware devices such as but not limited to FPGAs, or as software program code stored on at least one tangible or intangible computer readable medium and executable by at least one processor, or any suitable combination thereof.
  • a specific functional component may be formed by one particular sequence of software code, or by a plurality of such, which collectively act or behave or act as described herein with reference to the functional component in question.
  • the component may be distributed over several code sequences such as but not limited to objects, procedures, functions, routines and programs and may originate from several computer files which typically operate synergistically.
  • Data can be stored on one or more tangible or intangible computer readable media stored at one or more different locations, different network nodes or different storage devices at a single node or location.
  • Suitable computer data storage or information retention apparatus may include apparatus which is primary, secondary, tertiary or off-line; which is of any type or level or amount or category of volatility, differentiation, mutability, accessibility, addressability, capacity, performance and energy use; and which is based on any suitable technologies such as semiconductor, magnetic, optical, paper and others.
  • Anti-virus software relies on pre-known signatures of malware's software code, e.g. viruses, to detect cyber-attacks.
  • Fig. 1a is a diagram illustrating sandbox systems known in the art. The flowchart of Fig. 1b might not be suitable for protecting against malware with a very long latency period.
  • Fig. 1c is a simplified flowchart illustration of a method for accelerating time inside a sandbox in accordance with certain embodiments of the present invention.
  • computer-security providing sandbox apparatus for fast identification of malware is provided, e.g. as shown generally in Fig. 4, the apparatus comprising:
  • a sandbox e.g. robust concept sandbox
  • an accelerated virtual system time instead of a conventional "system time" or "time of day" clock system, e.g. an oscillator-based PC clock.
  • a robust concept sandbox may perform a complete scan of dates.
  • system time represents a computer system's notion of the passing of time.
  • System time is measured by a system clock, which is typically implemented as a simple count of the number of ticks that have occurred starting from an arbitrary instant in time termed the epoch which serves as an origin of a particular era and/or a reference point from which time is measured.
  • the epoch may be constantly saved for next upload.
  • the system clock is typically implemented as a programmable interval timer that periodically interrupts the CPU.
  • the CPU may then start executing a timer interrupt service routine which typically adds one tick, say, to the system clock.
  • a simple counter may, for example, count the number of 100-nanosecond ticks in a Microsoft OS to reach a 1 msec resolution, and handle other periodic housekeeping tasks, e.g. preemption, before returning to whatever the CPU was doing before the interruption. It is a particular feature of certain embodiments that the acceleration has no impact on the system oscillation clock on which the PC CPU and internal busses are running.
  • the apparatus detects within minutes, or even in real-time, malware which has a latency period of at least months before launching its zero-day cyber attack.
  • controllable clock acceleration functionality also termed herein a “cyber metronome”. It is appreciated that an oscillator may for example be divided in software or in hardware, to achieve a desired level of acceleration.
  • In the HW breakpoint method, shown e.g. in Fig. 2, there is a deliberate stopping or pausing location in a tick-counting program that provides a different number of ticks, e.g. by multiplication, and thereby actually accelerates the system time.
  • a HW (Hardware) breakpoint may, e.g. as shown in Fig. 2, be set by programming a watchpoint unit to monitor the core busses for an instruction fetch from a specific memory location.
  • HW breakpoints may be established at any suitable location in RAM or ROM.
  • System time which may be implemented as a count of the number of ticks that have occurred since an arbitrary starting date, may be changed accordingly, e.g. by addressing a breakpoint in the counting program so as to provide faster count parameters e.g. by diminishing the counter, e.g. by dividing by a parameter.
  • a separate clock (Clk) may provide an unbiased clock that drives the system time instead of the programmable tick counting.
  • a unique or added or separate processor may run, separately, any linear or other combination of system time.
  • Unique is used in the sense of being separate and often dedicated to acceleration; the unique processor typically comprises a separate, typically dedicated, HW CPU e.g. on the same PC motherboard.
  • any suitable acceleration factor can be used, e.g. so as to exceed the predicted worst-case ratio between malware latency time and sandbox quarantine time.
  • a separate CPU may compute the ticking system time and may provide 1 msec (say) increments to the main program. Then the BOT, when looking at the PC clock (for example) to time its zero-day attack, is led to believe that months or years (say) have elapsed whereas in fact the PC clock has been accelerated and only, say, 5 or 10 minutes have actually elapsed.
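  • The relationship between malware latency, sandbox quarantine time and the required acceleration follows from simple arithmetic (an illustrative sketch; the helper name and the example figures are hypothetical, not taken from the patent):

```python
def min_acceleration_factor(latency_seconds, quarantine_seconds):
    """Smallest clock-acceleration factor at which a malware latency
    period fully elapses, in virtual time, within the real-time
    sandbox quarantine period."""
    return latency_seconds / quarantine_seconds

latency = 180 * 24 * 3600  # malware that waits 180 days before attacking
quarantine = 10 * 60       # 10 minutes of real residence in the sandbox
factor = min_acceleration_factor(latency, quarantine)
# factor = 25920: each real second must advance the virtual
# time of day by 25920 virtual seconds (7.2 hours)
```

  In practice the factor would be chosen with headroom above the predicted worst-case latency, per the preceding paragraph.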
  • the botnet may be triggered by the PC Time of Day (System Time). More generally, the apparatus may operate or invoke some or all of the following, or other, automatic triggers whether external or internal:
  • a change of user, e.g. level of authorization such as administrator vs. non-administrator.
  • the sandbox apparatus may have 2 or more conditions and may jump from one to the other.
  • This is particularly useful for detecting malware triggered as above or triggered by the above in combination with a long elapsed time -period.
  • the apparatus may be implemented in software, hardware or a combination of both.
  • the apparatus may be installed within an organization's firewall or may be a stand-alone device located at the entry-point to secure facilities which may, for example, accept and conduct an accelerated sandbox-based security check, as above, on disk-on-keys from visitors.
  • any suitable sandbox may be employed or adapted for the purposes of the particular embodiments shown and described herein, for example any sandbox having any combination of the following sandbox characteristics and functionalities which are known in the art, may be combined with any of the features of the present invention shown and described herein:
  • Any security mechanism for separating running programs which may for example be used to execute untested code, or untrusted programs from unverified third-parties, suppliers, untrusted users and untrusted websites.
  • Network access, the ability to inspect the host system, or the ability to read from input devices may be disallowed or heavily restricted.
  • Sandboxes may include some or all of:
  • Applets or other self-contained programs that run in a virtual machine or scripting language interpreter that does the sandboxing.
  • the applet may be downloaded onto a remote client and may begin executing before it arrives in its entirety.
  • Applets in web browsers may safely execute untrusted code embedded in web pages.
  • Applet implementations such as Adobe Flash, Java applets and Silverlight provide a rectangular window via which to interact with the user and may provide persistent storage.
  • a jail or set of resource limits imposed on programs by the operating system kernel may include some or all of: I/O bandwidth caps, disk quotas, network-access restrictions and a restricted filesystem namespace. Jails may be used in virtual hosting.
  • Rule-based execution, which gives users control over which processes are started, spawned (by other applications), or allowed to inject code into other apps and/or have access to the net. May control file/registry security (e.g. which programs can read and write to the file system/registry). Examples: the SELinux and AppArmor security frameworks for Linux.
  • Virtual machines which emulate a complete host computer, on which a conventional operating system may boot and run as on actual hardware.
  • the guest operating system does not function natively on the host and can access host resources only through the emulator.
  • In sandboxing on native hosts, an environment that mimics or replicates the targeted desktops is created to evaluate how malware infects and compromises a target host.
  • Capability systems in which programs are given opaque tokens when spawned and can do specific things depending on what tokens they hold. Implementations may work at levels from kernel to user-space. Example: HTML rendering in a Web browser.
  • New-generation pastebins which allow users to execute pasted code snippets.
  • seccomp (Secure Computing Mode).
  • Sandboxed applications for Apple's mobile operating system iOS which are only able to access files inside their own respective storage areas, and are not able to change system settings.
  • Any secure environment to contain untrusted helper applications which may be adversarial, may serve as a sandbox e.g. by restricting a program's access to the underlying operating system.
  • a sandbox may intercept and filter dangerous system calls e.g. via a Solaris process tracing facility.
  • the sandbox may allow or deny individual system calls flexibly, perhaps depending on the arguments to the call. For example, the open system call could be allowed or denied, depending on which file the application was trying to open, and whether the file was intended for reading or for writing.
  • the program of the sandbox may include a framework, and dynamic modules, used to implement various aspects of a configurable security policy by filtering relevant system calls.
  • the framework typically reads a configuration file, which can be site-, user-, or application-dependent. This file lists which of the modules should be loaded, and may supply parameters to them. For example, the configuration line "path allow read, write /tmp/*" may load the path module, passing it the parameters "allow read, write /tmp/*" at initialization time. This syntax allows files under /tmp to be opened for reading or writing.
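  • The configuration syntax described above could be parsed along the following lines (an illustrative Python sketch; the function name is hypothetical and the real framework is not specified at this level of detail):

```python
def parse_config(text):
    """Parse configuration lines of the form '<module> <parameters>',
    e.g. 'path allow read, write /tmp/*' names the 'path' module and
    hands it the parameter string 'allow read, write /tmp/*'."""
    modules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, params = line.partition(" ")
        modules.append((name, params.strip()))
    return modules

config = """\
path allow read, write /tmp/*
tcpconnect allow localhost:6000
"""
loaded = parse_config(config)
```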
  • Each module filters out dangerous system call invocations, according to an area of specialization.
  • the framework dispatches that information to relevant policy modules.
  • Each module reports its opinion on whether the system call may be permitted or quashed, and any necessary action is taken by the framework.
  • the operating system may execute a system call only if some module explicitly allows it; the default is for system calls to be denied. This causes the system to err on the side of security in case of an under-specified security policy.
  • Each module contains a list of system calls to examine and filter. Some system calls may appear in several modules' lists.
  • a module may assign to each system call a function which validates the arguments of the call before the call is executed by the operating system. The function may then use this information to optionally update local state, and then suggest allowing the system call, suggest denying it, or make no comment on the attempted system call.
  • Modules are typically listed in the configuration file e.g. from most general to most specific, so that the last relevant module for any system call dictates whether the call is to be allowed or denied. For example, a suggestion to allow countermands an earlier denial. Note that a "no comment" response has no effect: in particular, it does not override an earlier "deny” or “allow” response.
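  • The combination rule just described, in which the last relevant module countermands earlier ones and "no comment" has no effect, can be sketched as follows (illustrative Python; the module and function names are hypothetical):

```python
def evaluate(modules, syscall, args):
    """Combine module opinions: the default is deny; each relevant
    module may answer 'allow', 'deny', or None ('no comment'); the
    last explicit answer countermands all earlier ones."""
    verdict = "deny"  # err on the side of security by default
    for module in modules:
        opinion = module(syscall, args)
        if opinion in ("allow", "deny"):
            verdict = opinion  # last relevant module wins
        # None ('no comment') has no effect on the verdict
    return verdict

def deny_writes(syscall, args):
    # a general module: deny any open for writing
    if syscall == "open" and args[1] == "w":
        return "deny"
    return None

def allow_tmp(syscall, args):
    # a more specific module, listed later: allow anything under /tmp
    if syscall == "open" and args[0].startswith("/tmp/"):
        return "allow"
    return None

policy = [deny_writes, allow_tmp]  # most general first, most specific last
```

  With this ordering, opening /tmp/x for writing is allowed (the specific module countermands the general denial), while opening a file elsewhere for writing stays denied, and a call no module comments on falls through to the default denial.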
  • Write access to .rhosts could be super-denied near the top of the configuration file, for example, to provide a safety net in case of accidentally miswriting of a subsequent file access rule.
  • An operating system to support the sandbox may be one which allows one user-level process to watch the system calls executed by another.
  • a module can assign to a system call a similar function which gets called after the system call has executed, just before control is returned to the helper application. This function can examine the arguments to the system call, as well as the return value, update the module's local state, and control the traced process in various ways, e.g. causing selected system calls to fail.
  • Operating systems may have a process-tracing facility, intended for debugging. Most operating systems offer a program which can observe the system calls performed by another process, as well as their return values. This is often implemented with a special system call which allows the tracer to register a callback that is executed whenever the traced process issues a system call.
  • Operating systems such as Solaris 2.4 and OSF/1 offer a process-tracing facility through the /proc virtual filesystem, an interface which allows direct control of the traced process's memory. Furthermore, one can request callbacks on a per-system-call basis.
  • the policy modules are used to select and implement security policy decisions. They are dynamically loaded at runtime, so that different security policies can be configured for different sites, users, or applications.
  • a set of modules can be used to set up the traced application's environment, and to restrict its ability to read or write files, execute programs, and establish TCP connections. In addition, the traced application is prevented from performing certain system calls, as described below.
  • the provided modules offer considerable flexibility themselves. Configuration may be achieved by editing their parameters in the configuration file.
  • Policy modules make a decision as to which system calls to allow, which to deny, and for which a function must be called to determine what to do.
  • system calls that are always allowed (in certain modules) are close, exit, fork, and read.
  • system calls that are always denied (in certain modules) are ones that would not succeed for an unprivileged process anyway, like setuid and mount, along with chdir, which one may disallow as part of security policy.
  • System calls for which a function must, in general, be called to determine whether the system call should be allowed or denied typically include system calls such as open, rename, stat, and kill whose arguments must be checked against the configurable security policy specified in the parameters given to the module at load time.
  • Helper applications may be allowed to fork children, which may be recursively traced. Traced processes can only send signals to themselves or to their children, and never to an untraced application. Environment variables are initially sanitized, and resource usage is carefully limited. In an example policy, access to the filesystem is severely limited. A helper application is placed in a particular directory; it cannot chdir out of this directory. It is given full access to files in or below this directory. The untrusted application is allowed read access to certain carefully controlled files referenced by absolute pathnames, such as shared libraries and global configuration files. One may concentrate all access control in the open system call, and always allow read and write calls, because write is only useful when used on a file descriptor obtained from a system call like open.
  • Helper applications may require access to network resources, e.g. may need to open a window on the X11 display to present document contents.
  • One may allow network connections only to the X display, and this access is allowed only through a safe X proxy.
  • X access control is all-or-nothing.
  • a rogue X client has full access to all other clients on the same server, so an otherwise confined helper application could compromise other applications if it were allowed uncontrolled access to X.
  • a basic module typically supplies defaults for the system calls which are easiest to analyze, and takes no configuration parameters.
  • the putenv module allows one to specify environment variable settings for the traced application via its parameters; those which are not explicitly mentioned are unset.
  • the special parameter display causes the helper application to inherit the parent's DISPLAY.
  • the tcpconnect module allows us to restrict TCP connections by host and/or port; the default is to disallow all connections.
  • the path module, the most complicated one, lets one allow or deny file accesses according to one or more patterns.
  • the framework starts by reading the configuration file, the location of which can be specified on the command line.
  • the first word is the name of the module to load, and the rest of the first line acts as a parameter to the module.
  • dlopen(3x) is used to dynamically load the module into the framework's address space.
  • the module's init() function is called, if present, with the parameters for the module as its argument.
  • the list of system calls and associated values and functions in the module is then merged into the framework's dispatch table.
  • the dispatch table is an array, indexed by system call number, of linked lists. Each value and function in the module is appended to the list in the dispatch table that is indexed by the corresponding system call number.
  • the dispatch table provides a linked list that can be traversed to decide whether to allow or deny a system call.
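The dispatch-table traversal described in the preceding bullets can be sketched as follows. This is a hypothetical illustration, not the original implementation: the syscall number, the allow/deny/no-comment values, and the precedence rule (later modules override earlier ones, with deny as the default) are illustrative assumptions.

```python
# Hypothetical sketch of the dispatch table described above: an array,
# indexed by system call number, of lists of module-supplied hooks that
# are traversed to decide whether to allow or deny a system call.
ALLOW, DENY, NO_COMMENT = "allow", "deny", "no_comment"
NUM_SYSCALLS = 256
dispatch_table = [[] for _ in range(NUM_SYSCALLS)]

def register_hook(syscall_no, hook):
    # merge a module's hook into the framework's dispatch table
    dispatch_table[syscall_no].append(hook)

def decide(syscall_no, args):
    verdict = DENY  # default: deny anything no module vouches for
    for hook in dispatch_table[syscall_no]:
        opinion = hook(args)
        if opinion != NO_COMMENT:
            verdict = opinion  # assumed rule: last opinion wins
    return verdict

# A path-style module that only allows open() below the sandbox directory:
SYS_OPEN = 5  # illustrative syscall number
register_hook(SYS_OPEN, lambda args:
              ALLOW if args["path"].startswith("/tmp/sandbox/") else DENY)

print(decide(SYS_OPEN, {"path": "/tmp/sandbox/doc.ps"}))  # allow
print(decide(SYS_OPEN, {"path": "/etc/passwd"}))          # deny
```

This also illustrates why access control can be concentrated in `open`: once a call is allowed and a descriptor obtained, subsequent `read`/`write` calls need no per-call policy.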
  • a child process is fork()ed, and the child's state is cleaned up. This includes setting a umask of 077, setting limits on virtual memory use, disabling core dumps, switching to a sandbox directory, and closing unnecessary file descriptors. Modules get a chance to further initialize the child's state; for instance, the putenv module sanitizes the environment variables.
  • the parent process waits for the child to complete this cleanup, and begins to debug the child via the /proc interface. It sets the child process to stop whenever it begins or finishes a system call (only a subset of the system calls are marked in this manner, typically).
  • the child waits until it is being traced, and executes the desired application.
  • the application is typically confined to a sandbox directory.
  • this directory is created in /tmp with a random name, but the SANDBOX DIR environment variable can be used to override this choice.
  • the application runs until it performs a system call. At this point, it is put to sleep, and the tracing process wakes up. The tracing process determines which system call was attempted, along with the arguments to the call. It then traverses the appropriate linked list in the dispatch table, in order to determine whether to allow or to deny this system call.
  • If the call is to be allowed, the tracing process wakes up the application, which proceeds to complete the system call. If, however, the system call is to be denied, the tracing process wakes up the application with notification that a signal was received. This causes the system call to abort immediately, returning a value indicating that the system call failed and setting errno to EINTR. In either case, the tracing process goes back to sleep.
  • Some applications are coded in such a way that, if they receive an EINTR error from a system call, they will retry the system call. Thus, if such an application tries to execute a system call which is denied by the security policy, it will get stuck in a retry loop. If this occurs, one may assume the traced application is not going to make any further progress, and kill the application entirely, giving an explanatory message to the user.
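The retry-loop guard just described can be sketched as follows. The retry limit, the policy, and the call names are assumptions for illustration; the mechanism (deny returns EINTR, repeated identical denials are treated as a stuck loop and the application is killed) follows the text.

```python
# Hypothetical sketch of the EINTR retry-loop guard described above.
RETRY_LIMIT = 5  # assumed threshold, not a value from the original

def trace(attempts, policy):
    """attempts: sequence of syscall names; returns the tracer's log."""
    consecutive_denials = 0
    last_denied = None
    log = []
    for call in attempts:
        if policy(call):
            log.append(("allow", call))
            consecutive_denials, last_denied = 0, None
        else:
            log.append(("deny", call))  # the application sees EINTR
            if call == last_denied:
                consecutive_denials += 1
            else:
                consecutive_denials, last_denied = 1, call
            if consecutive_denials >= RETRY_LIMIT:
                log.append(("kill", call))  # no further progress assumed
                break
    return log

policy = lambda call: call != "connect"  # illustrative policy: deny connect
log = trace(["open"] + ["connect"] * 10, policy)
print(log[-1][0])  # kill
```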
  • When a system call completes, the tracing process has the ability to examine the return value if it so wishes. If any module had assigned a function to be executed when this system call completes, as described above, it is executed at this time. This facility is not widely used, except in one special case.
  • the tracing process checks the return value and then fork()s itself. The child of the tracing process then detaches from the application, and begins tracing the application's child. This method safely allows the traced application to spawn a child (as ghostview spawns gs, for example) by ensuring that all children of untrusted applications are traced as well.
  • protection of the invocation of any helper application with such an environment is provided, e.g. by specifying the janus program in a mailcap file.
  • a system administrator could set up the in-house security policy by listing janus in the default global mailcap file; then the secure environment would be transparent to all the users on the system. Users could protect themselves by doing the same to their personal .mailcap file.
  • Sandboxing was introduced by Wahbe et al. in the context of software fault isolation. They achieved safety for trusted modules running in the same address space as untrusted modules. They also use binary-rewriting technology.
  • Java provides architecture-independence, while Janus only applies to native code and provides no help with portability.
  • OmniWare takes advantage of software fault isolation techniques and compiler support to safely execute untrusted code.
  • securelib is a shared library that replaces the C accept, recvfrom, and recvmsg library calls by a version that performs address-based authentication; it is intended to protect security-critical Unix system daemons.
  • Replacement of dangerous C library calls with a safe wrapper may be insufficient in an extended context of untrusted and possibly hostile applications; a hostile application could bypass this access control by issuing the dangerous system call directly without invoking any library calls.
  • DTE Domain and Type Enforcement
  • IE 10 supports allowing popups for valid cases via the ms-allow-popups token.
  • the server is also able to sandbox content.
  • the sandbox attribute restricts content only when within an iframe. If the sandboxed content is able to convince the user to browse to it directly, then the untrusted content would no longer be within the sandboxed iframe and none of the security restrictions would apply.
  • the server could send the untrusted content with a text/html-sandboxed MIME type. An example header one can send to sandbox the content but allow form submission and script execution:
  • X-Content-Security-Policy: sandbox allow-forms allow-scripts.
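The header above can be assembled as in the following sketch. The helper function is hypothetical; note that the prefixed X-Content-Security-Policy form quoted in the text is the historical experimental header, and modern servers would send Content-Security-Policy instead.

```python
# Illustrative sketch of building the sandbox response headers described
# above: the sandbox directive plus opt-in tokens for forms and scripts.
def sandbox_headers(allow_forms=False, allow_scripts=False):
    tokens = ["sandbox"]
    if allow_forms:
        tokens.append("allow-forms")
    if allow_scripts:
        tokens.append("allow-scripts")
    return {
        "Content-Type": "text/html",
        "X-Content-Security-Policy": " ".join(tokens),
    }

print(sandbox_headers(allow_forms=True, allow_scripts=True)
      ["X-Content-Security-Policy"])
# sandbox allow-forms allow-scripts
```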
  • a sandbox may allow processes to execute only within a very restrictive environment.
  • the only resources sandboxed processes can freely use may for example be CPU cycles and memory.
  • sandboxed processes might not be able to write to the filesystem e.g. to the disk or display their own windows.
  • What exactly sandboxes can and cannot do may be controlled by any suitable explicit policy.
  • code cannot perform any form of I/O (e.g. disk, keyboard, or screen) without making a system call.
  • Windows performs security checks and these, according to the sandbox policy, fail for actions that the sandboxed process is not permitted to perform.
  • the sandbox is such that all access checks fail.
  • certain communication channels are explicitly open to sandboxed processes, which can write to and read from these channels; a more privileged process (e.g., in Chromium, the browser process) uses these channels to perform permitted actions on behalf of the sandboxed process.
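The channel-plus-broker arrangement described in the preceding bullets can be sketched in a deliberately simplified, single-process form as follows. The Broker policy, the file names, and the API are illustrative assumptions; in a real design (e.g. Chromium's) the two sides are separate OS processes connected by IPC.

```python
# Hypothetical single-process sketch of the broker pattern: a sandboxed
# worker has no direct resource access and must ask a more privileged
# process, over an explicitly opened channel, to act on its behalf.
class Broker:
    """Privileged side: services requests according to an explicit policy."""
    ALLOWED = {"/srv/public/readme.txt": "hello"}  # invented policy

    def handle(self, op, path):
        if op == "read" and path in self.ALLOWED:
            return ("ok", self.ALLOWED[path])
        return ("denied", None)  # every other access check fails

class SandboxedProcess:
    """Sandboxed side: free to use CPU and memory, but can only reach
    other resources through the one explicitly opened channel."""
    def __init__(self, channel):
        self.channel = channel

    def read_file(self, path):
        return self.channel.handle("read", path)

worker = SandboxedProcess(Broker())
print(worker.read_file("/srv/public/readme.txt"))  # ('ok', 'hello')
print(worker.read_file("/etc/shadow"))             # ('denied', None)
```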
  • the apparatus of the present invention may be customized, using methods described above, to combat any sort of malware having any combination of characteristics, such as but not limited to any combination of the following characteristics of Flame and/or of Stuxnet:
  • Flame can spread to other systems over a LAN (local area network) or via USB stick. It can record audio, screenshots, keyboard activity and network traffic.
  • the program also records Skype conversations and can turn infected computers into Bluetooth beacons which attempt to download contact information from nearby Bluetooth-enabled devices. This data, along with locally stored documents, is sent on to one of several command and control servers that are scattered around the world. The program then awaits further instructions from these servers.
  • the program allows other attack modules to be loaded after initial infection.
  • the malware uses five different encryption methods and an SQLite database to store structured information.
  • the method used to inject code into various processes is stealthy, in that the malware modules do not appear in a listing of the modules loaded into a process and malware memory pages are protected with READ, WRITE and EXECUTE permissions that make them inaccessible by user-mode applications.
  • the malware determines what antivirus software is installed, then customises its own behaviour (for example, by changing the filename extensions it uses) to reduce the probability of detection by that software. Additional indicators of compromise include mutex and registry activity, such as installation of a fake audio driver which the malware uses to maintain persistence on the compromised system.
  • Flame is not designed to deactivate automatically, but supports a "kill" function that makes it eliminate all traces of its files and operation from a system on receipt of a module from its controllers.
  • PLC programmable logic controller
  • the worm initially spreads indiscriminately, but includes a highly specialized malware payload that is designed to target only Siemens SCADA systems that are configured to control and monitor specific industrial processes. Stuxnet infects PLCs by subverting the Step-7 software application that is used to reprogram these devices.
  • the worm is promiscuous, makes itself inert if the targeted software is not found on infected computers, and contains safeguards to prevent each infected computer from spreading the worm to more than three others, and to erase itself on a pre-specified date.
  • Stuxnet contains, among other things, code for a man-in-the- middle attack that fakes industrial process control sensor signals so an infected system does not shut down due to detected abnormal behavior.
  • the worm provides a layered attack against Windows operating system; industrial software applications that run on that operating system, and one or more specific PLCs.
  • Stuxnet uses four zero-day attacks (plus the CPLINK vulnerability and a vulnerability used by the Conficker worm). It initially spreads using infected removable drives such as USB flash drives, and then uses peer-to-peer RPC inter alia to infect and update other computers inside private networks not directly connected to the Internet.
  • the Windows component of the malware is promiscuous in that it spreads relatively quickly and indiscriminately.
  • the malware has both user-mode and kernel-mode rootkit capability under Windows, and its device drivers may be digitally signed.
  • the driver signing helps it install kernel-mode rootkit drivers successfully without users being notified, and therefore to remain undetected for a relatively long period of time.
  • Remote websites have been configured as command and control servers for the malware, allowing it to be updated, and for industrial espionage to be conducted by uploading information.
  • Once installed on a Windows system, Stuxnet infects project files and subverts a key communication library, thereby intercepting communications between software running under Windows and the target PLC devices that the software is able to configure and program when the two are connected via a data cable.
  • the malware is able to install itself on PLC devices unnoticed, and subsequently to mask its presence from WinCC if the control software attempts to read an infected block of memory from the PLC system.
  • the malware furthermore uses a zero-day exploit in the WinCC/SCADA database software in the form of a hard-coded database password.
  • Stuxnet's payload targets only those SCADA configurations that meet criteria that it is programmed to identify.
  • Stuxnet requires specific slave variable-frequency drives (frequency converter drives) to be attached to the targeted system and its associated modules, and attacks only those PLC systems with variable-frequency drives from specific vendors. It monitors the frequency of attached motors, and only attacks systems that spin at a predetermined range of frequencies.
  • Stuxnet installs malware into memory block DB890 of the PLC that monitors the Profibus messaging bus of the system. When certain criteria are met, it periodically modifies the frequency, thus affecting operation of the connected motors by changing their rotational speed. It also installs a rootkit that hides the malware on the system and masks the changes in rotational speed from monitoring systems.
  • software components of the present invention including programs and data may, if desired, be implemented in ROM (read only memory) form including CD-ROMs, EPROMs and EEPROMs, or may be stored in any other suitable typically non-transitory computer-readable medium such as but not limited to disks of various kinds, cards of various kinds and RAMs.
  • ROM read only memory
  • EEPROM electrically erasable programmable read-only memory
  • Components described herein as software may, alternatively, be implemented wholly or partly in hardware and/or firmware, if desired, using conventional techniques, and vice-versa. Each module or component may be centralized in a single location or distributed over several locations.
  • Any computer-readable or machine -readable media described herein is intended to include non-transitory computer- or machine- readable media.
  • Any computations or other forms of analysis described herein may be performed by a suitable computerized method. Any step described herein may be computer-implemented.
  • the invention shown and described herein may include (a) using a computerized method to identify a solution to any of the problems or for any of the objectives described herein, the solution optionally includes at least one of a decision, an action, a product, a service or any other information described herein that impacts, in a positive manner, a problem or objectives described herein; and (b) outputting the solution.
  • the system may, if desired, be implemented as a web-based system employing software, computers, routers and telecommunications equipment as appropriate.
  • a server may store certain applications, for download to clients, which are executed at the client side, the server side serving only as a storehouse.
  • Some or all functionalities e.g. software functionalities shown and described herein may be deployed in a cloud environment.
  • Clients e.g. mobile communication devices, such as smartphones, may be operatively associated with, but external to, the cloud.
  • the scope of the present invention is not limited to structures and functions specifically described herein and is also intended to include devices which have the capacity to yield a structure, or perform a function, described herein, such that even though users of the device may not use the capacity, they are, if they so desire, able to modify the device to obtain the structure or function.
  • a system embodiment is intended to include a corresponding process embodiment.
  • each system embodiment is intended to include a server-centered "view" or client-centered "view", or "view" from any other node of the system, of the entire functionality of the system, computer-readable medium, apparatus, including only those functionalities performed at that server or client or node.

Abstract

Computer-security apparatus comprising a sandbox; and an accelerated clock system for identifying malware by artificially hastening activation of malware so the activation occurs during the malware's period of residence in the sandbox even if the malware's latency period is longer than the malware's period of residence in the sandbox.

Description

ACCELERATING A CLOCK SYSTEM TO IDENTIFY MALWARE
REFERENCE TO CO-PENDING APPLICATIONS
Priority is claimed from United States Provisional Patent Application No. 61/803,641, entitled "Cyber metronome apparatus and methods useful in
conjunction therewith" and filed 20 March 2013.
FIELD OF THIS DISCLOSURE
The present invention relates generally to computer security and more particularly to sandboxes for computer security.
BACKGROUND FOR THIS DISCLOSURE
Conventional technology constituting background to certain embodiments of the present invention is described in the following publications inter alia:
Sandbox systems are known in the art e.g. as shown in prior art Fig. 1a.
TOD jumps are known, e.g. at the following http links:
//security.dico.unimi.it/~andrew/archive/multiplexecutionpaths.pdf; and //bitblaze.cs.berkeley.edu/papers/botnet_book-2007.pdf.
Systems for protection against malware are known, e.g.: US2013139264A entitled Application Sandboxing Using A Dynamic Optimization Framework;
US2013160120A entitled Protecting End Users From Malware Using Advertising Virtual Machine; US2012151586A, US2008016339A, entitled Application Sandbox To Detect, Remove, And Prevent Malware; US2006021029A, entitled Method Of Improving Computer Security Through Sandboxing; US8099596B entitled System And Method For Malware Protection Using Virtualization; US2012278892A, entitled System For Malware Normalization And Detection; and US2006161982A, entitled Intrusion Detection System.
The disclosures of all publications and patent documents mentioned in the specification, and of the publications and patent documents cited therein directly or indirectly, are hereby incorporated by reference. Materiality of such publications and patent documents to patentability is not conceded.
SUMMARY OF CERTAIN EMBODIMENTS
Certain embodiments of the present invention seek to provide improved computer security apparatus.
Conventionally, a DOS or other low-level internal system computes the time of day to correspond to the normal pace of time governing human endeavor.
Certain embodiments of the present invention seek to accelerate system time, as opposed to changing or skipping a system time to a different value which is known in the art. At least one of the following may be provided to effect the acceleration: a. external clock which may be a particularly cost effective solution;
b. external interrupt which may be particularly insusceptible to virus;
c. external CPU to compute the time of day which also may be particularly insusceptible to virus.
For example, a processor computes elapsed time by counting ticks; for example, if there is one tick per microsecond, then the processor may count 1000 ticks and then zero the counter to indicate that a millisecond has elapsed. Instead of a RAM determining that 1000 accumulated ticks triggers the zeroing of the counter, an interrupt may determine that 10,000 ticks are required, or conversely that fewer than 1,000 ticks are required.
Typically, the system clock itself is not changed; rather, the time of day is. So, the acceleration may be applied only to the time-of-day, not to elements such as the processor, RAM, or DRAM.
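The tick-counting acceleration just described can be sketched as follows. The class and parameter names are illustrative assumptions; the point, per the text, is that the acceleration factor only changes how many real ticks make up one reported time-of-day unit, while the underlying tick source is untouched.

```python
# Illustrative sketch of accelerated time-of-day computation: an
# acceleration factor lowers the rollover threshold, so the reported
# time of day advances faster than real time.
class AcceleratedTimeOfDay:
    def __init__(self, ticks_per_ms=1000, acceleration=1):
        # With acceleration 10, only 100 real ticks are needed per
        # reported millisecond, so time-of-day runs 10x faster.
        self.threshold = ticks_per_ms // acceleration
        self.counter = 0
        self.elapsed_ms = 0  # the accelerated time of day, in ms

    def tick(self):
        self.counter += 1
        if self.counter >= self.threshold:
            self.counter = 0   # zero the counter: one (accelerated) ms
            self.elapsed_ms += 1

clock = AcceleratedTimeOfDay(ticks_per_ms=1000, acceleration=10)
for _ in range(1000):    # 1000 real ticks = 1 real millisecond
    clock.tick()
print(clock.elapsed_ms)  # 10 -- the sandbox believes 10 ms have passed
```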
The present invention typically includes at least the following embodiments:
Embodiment a: Computer-security providing sandbox apparatus for fast identification of malware, the apparatus comprising:
a sandbox with an accelerated clock system instead of a conventional "time of day" clock system e.g. an oscillator-based PC clock.
Embodiment b. Computer-security providing sandbox apparatus according to Embodiment a wherein the apparatus detects within minutes, or even in real-time, malware which has a latency period of days, weeks, months or years, before launching its zero-day cyber attack. More generally, the apparatus detects the malware within a time period which is significantly less than the malware's actual latency period e.g. by selecting acceleration of the sandbox accelerated clock system and incubation period in the sandbox, such that the incubation period, having been accelerated, is longer than the estimated or known value of the malware's latency period.
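The selection rule of Embodiment b reduces to a simple back-of-the-envelope calculation, sketched below. The function name and the specific figures are illustrative assumptions, not values from the disclosure.

```python
# Sketch of Embodiment b's selection rule: pick a clock acceleration
# factor so that the accelerated incubation period exceeds the malware's
# estimated latency period.
def required_acceleration(latency_days, residence_minutes):
    """Minimum acceleration so the sandbox 'experiences' the whole
    latency period during the malware's residence."""
    latency_minutes = latency_days * 24 * 60
    # +1 makes the accelerated residence strictly longer than the latency
    return latency_minutes // residence_minutes + 1

# Malware with a 30-day latency, inspected in the sandbox for 10 minutes:
print(required_acceleration(30, 10))  # 4321
```

That is, accelerating the sandbox clock roughly 4300-fold lets a 10-minute residence cover a 30-day latency period.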
Embodiment c. Computer-security providing sandbox apparatus according to Embodiment a wherein the accelerated "time of day" clock system is provided by controllable clock acceleration functionality, also termed herein a "cyber metronome".
Embodiment 1. Computer-security apparatus comprising:
a sandbox; and
an accelerated clock system for identifying malware by artificially hastening activation of malware so said activation occurs during the malware's period of residence in the sandbox even if the malware's latency period is longer than the malware's period of residence in the sandbox.
Embodiment 2. Computer-security apparatus according to any of the previous embodiments wherein the apparatus detects, in real-time, malware which has a latency period of at least months before launching its zero-day cyber attack.
Embodiment 3. Computer-security apparatus according to any of the previous embodiments wherein the accelerated "time of day" clock system is provided by controllable clock acceleration functionality.
Embodiment 4. Computer-security apparatus according to any of the previous embodiments and installed within a firewall.
Embodiment 5. Computer-security apparatus according to any of the previous embodiments and residing in a stand-alone device located at an entry-point to secure facilities.
Embodiment 6. A computer-security method comprising:
providing a sandbox, in which malware resides for a pre-designated period, the sandbox having a clock; and
accelerating said clock for artificially hastening activation of at least one malevolent effect by malware whose latency period is longer than said pre-designated period.
Embodiment 7. Computer-security apparatus according to any of the previous embodiments wherein acceleration is achieved by provision of a hardware breakpoint.
Embodiment 8. Computer-security apparatus according to any of the previous embodiments wherein acceleration is achieved by provision of a separate clock.
Embodiment 9. Computer-security apparatus according to any of the previous embodiments wherein acceleration is achieved by provision of a separate Processor running a separate system time.
Embodiment 10. A method according to any of the previous embodiments and also comprising:
assessing a security state;
determining whether there has been a change in the security state on the basis of accelerated system time and if not, returning to said assessing; and
if there has been a change in the security state on the basis of accelerated system time, determining risk level and, if a threshold has not yet been exceeded, returning to said assessing; whereas if the threshold has been exceeded, a suitable action is performed.
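The assess/compare/threshold method of Embodiment 10 can be sketched as a loop, as follows. The state names, risk scores, and threshold are illustrative stand-ins for real detectors, not values from the disclosure.

```python
# Hypothetical sketch of the method of Embodiment 10: repeatedly assess a
# security state under accelerated system time, and act only when a
# change in state pushes the risk level over a threshold.
RISK_THRESHOLD = 3  # assumed value

def monitor(states, risk_of, threshold=RISK_THRESHOLD):
    previous = None
    for state in states:          # each iteration: "assess a security state"
        if state == previous:
            continue              # no change: return to assessing
        previous = state
        if risk_of(state) > threshold:
            return ("act", state) # threshold exceeded: perform action
        # threshold not exceeded: return to assessing
    return ("keep_assessing", previous)

risk_of = {"idle": 0, "registry_write": 2, "disk_wipe_attempt": 5}.get
print(monitor(["idle", "idle", "registry_write", "disk_wipe_attempt"],
              risk_of))  # ('act', 'disk_wipe_attempt')
```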
Embodiment 11. A method according to any of the previous embodiments wherein acceleration is effected by an external clock.
Embodiment 12. A method according to any of the previous embodiments wherein acceleration is effected by an external interrupt.
Embodiment 13. A method according to any of the previous embodiments wherein acceleration is effected by an external CPU operative to compute the time of day without changing the system clock and without affecting elements such as processors, RAMs, or DRAMs.
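The arrangement of Embodiment 13 (depicted in Fig. 3) can be sketched as follows: a separate clock source computes the accelerated time of day and hands the main program fixed 1 msec increments, so the main CPU never derives time of day itself. The queue-based simulation and the acceleration factor are illustrative assumptions.

```python
# Illustrative sketch: an external clock source computes accelerated
# time-of-day increments and feeds them to the main program.
from queue import Queue

def clock_source(out_queue, real_ms, acceleration):
    """Stands in for the separate CPU: for each real millisecond, emit
    `acceleration` one-millisecond increments of accelerated time."""
    for _ in range(real_ms * acceleration):
        out_queue.put(1)  # one accelerated millisecond

def main_program(in_queue):
    """The main program merely accumulates the increments it is handed."""
    time_of_day_ms = 0
    while not in_queue.empty():
        time_of_day_ms += in_queue.get()
    return time_of_day_ms

q = Queue()
clock_source(q, real_ms=5, acceleration=100)
print(main_program(q))  # 500 -- 5 real ms seen as 500 ms of time-of-day
```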
Embodiment 14. A method according to any of the previous embodiments wherein instead of a RAM determining that a predetermined number of accumulated ticks triggers counter-zeroing, an interrupt determines that a different number of ticks is required to trigger counter-zeroing.
Embodiment 15. Computer-security apparatus according to any of the previous embodiments wherein providing the hardware breakpoint comprises programming a watchpoint unit to monitor core busses for an instruction fetch from a specific memory location.
Embodiment 16. Computer-security apparatus according to any of the previous embodiments wherein a hardware breakpoint is established in RAM.
Embodiment 17. Computer-security apparatus according to any of the previous embodiments wherein a hardware breakpoint is established in ROM.
Embodiment 18. Computer-security apparatus according to any of the previous embodiments wherein system time is changed by addressing a breakpoint in a counting program so as to provide faster count parameters using division by a parameter.
Embodiment 19. A computer program product, comprising a non-transitory tangible computer readable medium having computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a computer-security method operative in conjunction with a sandbox, in which malware resides for a pre-designated period, the sandbox having a clock; the method comprising accelerating said clock for artificially hastening activation of at least one malevolent effect by malware whose latency period is longer than said pre-designated period.
Also provided, excluding signals, is a computer program comprising computer program code means for performing any of the methods shown and described herein when said program is run on a computer; and a computer program product, comprising a typically non-transitory computer-usable or -readable medium e.g. non-transitory computer-usable or -readable storage medium, typically tangible, having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement any or all of the methods shown and described herein. It is appreciated that any or all of the computational steps shown and described herein may be computer-implemented. The operations in accordance with the teachings herein may be performed by a computer specially constructed for the desired purposes or by a general purpose computer specially configured for the desired purpose by a computer program stored in a typically non-transitory computer readable storage medium. The term "non-transitory" is used herein to exclude transitory, propagating signals or waves, but to otherwise include any volatile or nonvolatile computer memory technology suitable to the application.
Any suitable processor, display and input means may be used to process, display e.g. on a computer screen or other computer output device, store, and accept information such as information used by or generated by any of the methods and apparatus shown and described herein; the above processor, display and input means including computer programs, in accordance with some or all of the embodiments of the present invention. Any or all functionalities of the invention shown and described herein, such as but not limited to steps of flowcharts, may be performed by a conventional personal computer processor, workstation or other programmable device or computer or electronic computing device or processor, either general-purpose or specifically constructed, used for processing; a computer display screen and/or printer and/or speaker for displaying; machine-readable memory such as optical disks, CDROMs, DVDs, BluRays, magnetic-optical discs or other discs; RAMs, ROMs, EPROMs, EEPROMs, magnetic or optical or other cards, for storing, and keyboard or mouse for accepting. The term "process" as used above is intended to include any type of computation or manipulation or transformation of data represented as physical, e.g. electronic, phenomena which may occur or reside e.g. within registers and /or memories of a computer or processor. The term processor includes a single processing unit or a plurality of distributed or remote such units.
The above devices may communicate via any conventional wired or wireless digital communication means, e.g. via a wired or cellular telephone network or a computer network such as the Internet.
The apparatus of the present invention may include, according to certain embodiments of the invention, machine readable memory containing or otherwise storing a program of instructions which, when executed by the machine, implements some or all of the apparatus, methods, features and functionalities of the invention shown and described herein. Alternatively or in addition, the apparatus of the present invention may include, according to certain embodiments of the invention, a program as above which may be written in any conventional programming language, and optionally a machine for executing the program such as but not limited to a general purpose computer which may optionally be configured or activated in accordance with the teachings of the present invention. Any of the teachings incorporated herein may, wherever suitable, operate on signals representative of physical objects or substances.
The embodiments referred to above, and other embodiments, are described in detail in the next section.
Any trademark occurring in the text or drawings is the property of its owner and occurs herein merely to explain or illustrate one example of how an embodiment of the invention may be implemented. Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions, utilizing terms such as "processing", "computing", "estimating", "selecting", "ranking", "grading", "calculating", "determining", "generating", "reassessing", "classifying", "producing", "stereo-matching", "registering", "detecting", "associating", "superimposing", "obtaining" or the like, refer to the action and/or processes of a computer or computing system, or processor or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories, into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. The term "computer" should be broadly construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, personal computers, servers, computing system, communication devices, processors (e.g. digital signal processor (DSP), microcontrollers, field programmable gate array (FPGA), application specific integrated circuit (ASIC), etc.) and other electronic computing devices.
The present invention may be described, merely for clarity, in terms of terminology specific to particular programming languages, operating systems, browsers, system versions, individual products, and the like. It will be appreciated that this terminology is intended to convey general principles of operation clearly and briefly, by way of example, and is not intended to limit the scope of the invention to any particular programming language, operating system, browser, system version, or individual product.
Elements separately listed herein need not be distinct components and alternatively may be the same structure.
Any suitable input device, such as but not limited to a sensor, may be used to generate or otherwise provide information received by the apparatus and methods shown and described herein. Any suitable output device or display may be used to display or output information generated by the apparatus and methods shown and described herein. Any suitable processor may be employed to compute or generate information as described herein e.g. by providing one or more modules in the processor to perform functionalities described herein. Any suitable computerized data storage e.g. computer memory may be used to store information received by or generated by the systems shown and described herein. Functionalities shown and described herein may be divided between a server computer and a plurality of client computers. These or any other computerized components shown and described herein may communicate between themselves via a suitable computer network.
BRIEF DESCRIPTION OF THE DRAWINGS
Certain embodiments of the present invention are illustrated in the following drawings:
Prior art Fig. 1a is a diagram illustrating sandbox systems known in the art.
Fig. 1b is a simplified flowchart illustration of a method which might not be suitable for protecting against malware with a very long latency period.
Fig. 1c is a simplified flowchart illustration of a method for accelerating time inside a sandbox in accordance with certain embodiments of the present invention e.g. as an example implementation of the method of Fig. 4.
Fig. 2 is a simplified flowchart illustration of a method in which interrupt-driven transfers use an external clock instead of the system clock, to count ticks.
Fig. 3 illustrates a separate processor which may run, separately, any linear or other combination of system time, in accordance with certain embodiments of the present invention. The separate CPU may compute the ticking system time and may provide 1 msec (say) increments to the main program.
Fig. 4 is a more general flowchart illustration of a method for accelerating activation of malware inside a sandbox in accordance with certain embodiments of the present invention.
The methods of the flowcharts each typically comprise some or all of the illustrated steps, suitably ordered e.g. as shown.
Computational components described and illustrated herein can be implemented in various forms, for example, as hardware circuits such as but not limited to custom VLSI circuits or gate arrays or programmable hardware devices such as but not limited to FPGAs, or as software program code stored on at least one tangible or intangible computer readable medium and executable by at least one processor, or any suitable combination thereof. A specific functional component may be formed by one particular sequence of software code, or by a plurality of such, which collectively act or behave as described herein with reference to the functional component in question. For example, the component may be distributed over several code sequences such as but not limited to objects, procedures, functions, routines and programs and may originate from several computer files which typically operate synergistically.
Data can be stored on one or more tangible or intangible computer readable media stored at one or more different locations, different network nodes or different storage devices at a single node or location.
It is appreciated that any computer data storage technology, including any type of storage or memory and any type of computer components and recording media that retain digital data used for computing for an interval of time, and any type of information retention technology, may be used to store the various data provided and employed herein. Suitable computer data storage or information retention apparatus may include apparatus which is primary, secondary, tertiary or off-line; which is of any type or level or amount or category of volatility, differentiation, mutability, accessibility, addressability, capacity, performance and energy use; and which is based on any suitable technologies such as semiconductor, magnetic, optical, paper and others.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
Emails, compact discs, disk-on-keys, social networks and many other computer data sources and vehicles inject data, which may be malware, into an organization's computer network.
Anti-virus software relies on pre-known signatures of malware's SW code, e.g. viruses, to detect cyber-attacks.
Advanced Persistent Threats (APTs) are cyber attacks which may be based on secret, long-term attacks carried out in stages. BOTs may reproduce quietly and be numerous and extensively dispersed by the time they are detected. BOTs initially operate seemingly innocently; only much later, sometimes years later, is a "zero-day attack" perpetrated, which may be time-triggered or actuated responsive to a logical combination (and/or/xor) of triggers which may include time, but not only time. So, for APTs, conventional sandboxes may require months or years to be effective, hence are impractical, or need to be very sophisticated, trying to guess or find possible trigger dates in the code, as is known in the art. Prior art Fig. la is a diagram illustrating sandbox systems known in the art. The flowchart of Fig. lb might not be suitable for protecting against malware with a very long latency period.
Fig. lc is a simplified flowchart illustration of a method for accelerating time inside a sandbox in accordance with certain embodiments of the present invention.
According to one embodiment, computer-security providing sandbox apparatus for fast identification of malware is provided, e.g. as shown generally in Fig. 4, the apparatus comprising:
a sandbox, e.g. a robust-concept sandbox, with an accelerated virtual system time instead of a conventional "system time" or "time of day" clock system, e.g. an oscillator-based PC clock. A robust-concept sandbox may perform a complete scan of dates.
In computer science and computer programming, "system time" represents a computer system's notion of the passing of time. System time is measured by a system clock, which is typically implemented as a simple count of the number of ticks that have occurred starting from an arbitrary instant in time termed the epoch which serves as an origin of a particular era and/or a reference point from which time is measured. The epoch may be constantly saved for next upload.
The system clock is typically implemented as a programmable interval timer that periodically interrupts the CPU. The CPU may then start executing a timer interrupt service routine which typically adds one tick, say, to the system clock. A simple counter may, for example, count the number of 100-nanosecond ticks in a Microsoft OS to get to a 1-msec resolution, and handle other periodic housekeeping tasks, e.g. preemption, before returning to whatever the CPU was doing before the interruption. It is a particular feature of certain embodiments that the acceleration has no impact on the system oscillation clock at which the PC CPU and internal busses are running.
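The tick-counting scheme above may, for illustration, be sketched as follows. This is a minimal simulation, not from the source: the class name, the 1-tick-per-msec resolution and the acceleration factor of 100,000 are assumptions chosen for the example; the point is that the interrupt service routine adds more than one tick per real interrupt, so guest code reading the virtual system time sees time pass faster, while the underlying oscillator is untouched.

```python
# Sketch (hypothetical, for illustration only): a virtual "system time"
# maintained as a tick count, with an acceleration factor applied inside
# the timer-interrupt service routine so guest code sees time pass faster.
EPOCH = 0  # arbitrary origin of the era, in ticks

class VirtualSystemClock:
    """Counts 1-msec ticks; each real interrupt adds `factor` virtual ticks."""
    def __init__(self, acceleration_factor=1):
        self.ticks = EPOCH
        self.factor = acceleration_factor

    def timer_interrupt(self):
        # A real system clock would add one tick per interrupt; the
        # accelerated sandbox clock adds `factor` ticks instead.
        self.ticks += self.factor

    def system_time_ms(self):
        return self.ticks  # one tick == 1 msec in this sketch

real = VirtualSystemClock()
fast = VirtualSystemClock(acceleration_factor=100_000)
for _ in range(600):          # 600 real timer interrupts
    real.timer_interrupt()
    fast.timer_interrupt()
# real clock: 600 msec elapsed; accelerated clock: 60,000 virtual seconds
```

Malware querying the accelerated clock would thus observe roughly 16.7 virtual hours after 0.6 real seconds, without any change to the CPU oscillator itself.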
In contrast to the accelerated virtual clock provided herein, TOD jumps are known.
Optionally, the apparatus detects within minutes, or even in real-time, malware which has a latency period of at least months before launching its zero-day cyber attack.
Optionally, the accelerated "time of day" clock system is provided by controllable clock acceleration functionality, also termed herein a "cyber metronome". It is appreciated that an oscillator may for example be divided in software or in hardware, to achieve a desired level of acceleration.
The following three example methods for running an activity or process e.g. as above, are now described:
HW Breakpoint
Separated Clk
Unique Processor
According to the first, HW Breakpoint method, e.g. as shown in Fig. 2, a deliberate stopping or pausing location is placed in the tick-counting program so as to provide a different number of ticks, e.g. by multiplication, which actually accelerates the system time.
A HW (Hardware) breakpoint may, e.g. as shown in Fig. 2, be set by programming a watchpoint unit to monitor the core busses for an instruction fetch from a specific memory location. HW breakpoints may be established at any suitable location in RAM or ROM. System time which may be implemented as a count of the number of ticks that have occurred since an arbitrary starting date, may be changed accordingly, e.g. by addressing a breakpoint in the counting program so as to provide faster count parameters e.g. by diminishing the counter, e.g. by dividing by a parameter.
According to the second method, a Separated Clk (clock) may address an unbiased clock that may address the system time instead of the programmable ticking counting.
According to the third method, e.g. as shown in Fig. 3, a unique or added or separate processor may run, separately, any linear or other combination of system time. "Unique" is used in the sense of being separate and often dedicated to acceleration; the unique processor typically comprises a separate, typically dedicated, HW CPU e.g. on the same PC motherboard. In this embodiment and others, any suitable acceleration factor can be used, e.g. so as to exceed the predicted worst-case ratio between malware latency time and sandbox quarantine time.
For example, e.g. as shown in Fig. 3, a separated CPU may compute the ticking system time and may provide 1-msec (say) increments to the main program. Then the BOT, when looking at the PC clock (say) to time its zero-day attack, is led to believe that months or years (say) have elapsed whereas in fact the PC clock has been accelerated and only, say, 5 or 10 minutes have actually elapsed.
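The required acceleration factor follows directly from the worst-case ratio mentioned above. A small arithmetic sketch, with illustrative numbers that are not from the source (two years of malware latency, ten minutes of sandbox quarantine):

```python
# Sketch (illustrative numbers, not from the patent): choosing an
# acceleration factor so that a worst-case malware latency elapses
# within the sandbox quarantine time.
def acceleration_factor(latency_seconds, quarantine_seconds):
    """Virtual seconds that must elapse per real second of quarantine."""
    return latency_seconds / quarantine_seconds

TWO_YEARS = 2 * 365 * 24 * 3600      # predicted worst-case latency: 63,072,000 s
TEN_MINUTES = 10 * 60                # acceptable quarantine time: 600 s
factor = acceleration_factor(TWO_YEARS, TEN_MINUTES)
# factor == 105120.0: each real second must advance virtual time ~105,120 s
```

Any trigger date within the two-year window would then be reached, and the zero-day behaviour exposed, within the ten-minute quarantine.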
The botnet may be triggered by the PC Time of Day (System Time). More generally, the apparatus may operate or invoke some or all of the following, or other, automatic triggers whether external or internal:
a. change of user e.g. level of authorization such as administrator vs. non- administrator.
b. change or upgrade of version of operating system or other program.
c. special dates e.g. 9/11.
For example, to implement (a) and/or (b), the sandbox apparatus may have 2 or more conditions and may jump from one to the other.
This is particularly useful for detecting malware triggered as above or triggered by the above in combination with a long elapsed time -period.
The apparatus may be implemented in software, hardware or a combination of both.
The apparatus may be installed within an organization's firewall or may be a stand-alone device located at the entry-point to secure facilities which may, for example, accept and conduct an accelerated sandbox-based security check, as above, on disk-on-keys from visitors.
Any suitable sandbox may be employed or adapted for the purposes of the particular embodiments shown and described herein, for example any sandbox having any combination of the following sandbox characteristics and functionalities which are known in the art, may be combined with any of the features of the present invention shown and described herein:
Any security mechanism for separating running programs, which may for example be used to execute untested code, or untrusted programs from unverified third-parties, suppliers, untrusted users and untrusted websites.
Any tightly controlled set of resources for guest programs to run in, such as scratch space on disk and memory. Network access, the ability to inspect the host system or read from input devices may be disallowed or heavily restricted.
Sandboxes may include some or all of:
Applets or other self-contained programs that run in a virtual machine or scripting language interpreter that does the sandboxing. In application streaming schemes, the applet may be downloaded onto a remote client and may begin executing before it arrives in its entirety. Applets in web browsers may safely execute untrusted code embedded in web pages. Applet implementations such as Adobe Flash, Java applets and Silverlight provide a rectangular window via which to interact with the user and may provide persistent storage.
A jail or set of resource limits imposed on programs by the operating system kernel. May include some or all of: I/O bandwidth caps, disk quotas, network-access restrictions and a restricted filesystem namespace. Jails may be used in virtual hosting.
Rule-based Execution, which gives users control over which processes are started, spawned (by other applications), or allowed to inject code into other apps and/or have access to the net. May control file/registry security (e.g. which programs can read and write to the file system/registry). Examples: the SELinux and AppArmor security frameworks for Linux.
Virtual machines which emulate a complete host computer, on which a conventional operating system may boot and run as on actual hardware. The guest operating system does not function natively on the host and can access host resources only through the emulator.
Sandboxing on native hosts: an environment that mimics or replicates the targeted desktops is created to evaluate how malware infects and compromises a target host.
Capability systems in which programs are given opaque tokens when spawned and can do specific things depending on what tokens they hold. Implementations may work at levels from kernel to user-space. Example: HTML rendering in a Web browser.
Online judge systems operative to test programs in programming contests.
New-generation pastebins which allow users to execute pasted code snippets.
Secure Computing Mode (seccomp), a sandbox built in the Linux kernel which, when activated, allows only write(), read(), exit() and sigreturn() system calls.
"sandbox" attribute for use with iframes [2]
Sandboxed applications for Apple's mobile operating system iOS which are only able to access files inside their own respective storage areas, and are not able to change system settings. Any secure environment to contain untrusted helper applications which may be adversarial, may serve as a sandbox e.g. by restricting a program's access to the underlying operating system. In particular, a sandbox may intercept and filter dangerous system calls e.g. via a Solaris process tracing facility.
The sandbox may allow or deny individual system calls flexibly, perhaps depending on the arguments to the call. For example, the open system call could be allowed or denied, depending on which file the application was trying to open, and whether the file was intended for reading or for writing.
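The open-call example above can be sketched as a small filter. This is a hypothetical illustration, not the patent's implementation: the policy table, helper name and use of shell-style patterns are assumptions; the idea is simply that the verdict depends on both the path being opened and the requested access mode, with denial as the default.

```python
# Sketch (hypothetical policy and helper names): allow or deny an `open`
# system call depending on which file is being opened and whether it is
# being opened for reading or for writing.
import fnmatch

POLICY = [
    # (path pattern, set of allowed access modes)
    ("/tmp/*", {"read", "write"}),
    ("/etc/passwd", {"read"}),
]

def filter_open(path, mode):
    """Return True iff the configured policy allows this open call."""
    for pattern, modes in POLICY:
        if fnmatch.fnmatch(path, pattern) and mode in modes:
            return True
    return False  # default: deny
```

For instance, `filter_open("/tmp/scratch", "write")` is permitted under this policy, while `filter_open("/etc/passwd", "write")` is refused even though reading the same file is allowed.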
The program of the sandbox may include a framework, and dynamic modules, used to implement various aspects of a configurable security policy by filtering relevant system calls. The framework typically reads a configuration file, which can be site-, user-, or application-dependent. This file lists which of the modules should be loaded, and may supply parameters to them. For example, the configuration line "path allow read, write /tmp/*" may load the path module, passing it the parameters "allow read, write /tmp/*" at initialization time. This syntax allows files under /tmp to be opened for reading or writing.
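The configuration syntax quoted above (first word names the module, the remainder of the line is its parameter string) can be parsed as follows. This is a hypothetical sketch; the function name and the handling of blank and comment lines are assumptions, not part of the source:

```python
# Sketch (hypothetical parser) of the configuration syntax quoted above:
# the first word names the module to load; the rest of the line is passed
# to that module as its parameter string at initialization time.
def parse_config(text):
    modules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        name, _, params = line.partition(" ")
        modules.append((name, params.strip()))
    return modules

config = """\
basic
putenv display
path allow read, write /tmp/*
"""
# parse_config(config) yields:
#   [("basic", ""), ("putenv", "display"),
#    ("path", "allow read, write /tmp/*")]
```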
Each module filters out dangerous system call invocations, according to an area of specialization. When the application attempts a system call, the framework dispatches that information to relevant policy modules. Each module reports its opinion on whether the system call may be permitted or quashed, and any necessary action is taken by the framework. Following the Principle of Least Privilege, the operating system may execute a system call only if some module explicitly allows it; the default is for system calls to be denied. This causes the system to err on the side of security in case of an under-specified security policy.
Each module contains a list of system calls to examine and filter. Some system calls may appear in several modules' lists. A module may assign to each system call a function which validates the arguments of the call before the call is executed by the operating system. The function may then use this information to optionally update local state, and then suggest allowing the system call, suggest denying it, or make no comment on the attempted system call.
Modules are typically listed in the configuration file e.g. from most general to most specific, so that the last relevant module for any system call dictates whether the call is to be allowed or denied. For example, a suggestion to allow countermands an earlier denial. Note that a "no comment" response has no effect: in particular, it does not override an earlier "deny" or "allow" response.
Normally, when conflicts arise, earlier modules are overridden by later ones. To escape this behavior, for special circumstances modules may unequivocally allow or deny a system call and explicitly insist that their judgment be considered final. In this case, no further modules are consulted; a "super-allow" or "super-deny" cannot be overridden.
Write access to .rhosts could be super-denied near the top of the configuration file, for example, to provide a safety net in case of accidental miswriting of a subsequent file-access rule.
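The decision rule described over the preceding paragraphs (deny by default, "no comment" has no effect, a later module countermands an earlier one, and a super-allow or super-deny is final) can be sketched compactly. The constant names and function are hypothetical, chosen for the example:

```python
# Sketch (hypothetical values) of the decision rule described above:
# default deny; each module may answer ALLOW, DENY or NO_COMMENT; later
# modules override earlier ones; SUPER_ALLOW / SUPER_DENY are final.
ALLOW, DENY, NO_COMMENT, SUPER_ALLOW, SUPER_DENY = range(5)

def decide(opinions):
    """Given module opinions in configuration-file order (most general
    first), return True iff the system call is to be permitted."""
    verdict = DENY                       # Principle of Least Privilege default
    for op in opinions:
        if op in (SUPER_ALLOW, SUPER_DENY):
            return op == SUPER_ALLOW     # final; no further modules consulted
        if op in (ALLOW, DENY):
            verdict = op                 # later module countermands earlier one
        # NO_COMMENT: no effect on the running verdict
    return verdict == ALLOW
```

So `decide([DENY, ALLOW])` permits the call (the later allow wins), `decide([ALLOW, NO_COMMENT])` still permits it, and `decide([SUPER_DENY, ALLOW])` refuses it regardless of later modules.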
An operating system to support the sandbox may be one which allows one user-level process to watch the system calls executed by another. In addition, a module can assign to a system call a similar function which gets called after the system call has executed, just before control is returned to the helper application. This function can examine the arguments to the system call, as well as the return value, update the module's local state, and control the traced process in various ways, e.g. causing selected system calls to fail.
Operating systems may have a process-tracing facility, intended for debugging. Most operating systems offer a program which can observe the system calls performed by another process as well as their return values. This is often implemented with a special system call which allows the tracer to register a callback that is executed whenever the tracer issues a system call.
Operating systems, such as Solaris 2.4 and OSF/1, offer a process-tracing facility through the /proc virtual filesystem, an interface which allows direct control of the traced process's memory. Furthermore, one can request callbacks on a per-system-call basis.
The policy modules are used to select and implement security policy decisions. They are dynamically loaded at runtime, so that different security policies can be configured for different sites, users, or applications. A set of modules can be used to set up the traced application's environment, and to restrict its ability to read or write files, execute programs, and establish TCP connections. In addition, the traced application is prevented from performing certain system calls, as described below. The provided modules offer considerable flexibility themselves. Configuration may be achieved by editing their parameters in the configuration file.
Policy modules make a decision as to which system calls to allow, which to deny, and for which a function must be called to determine what to do.
Some examples of system calls that are always allowed (in certain modules) are close, exit, fork, and read. Some examples of system calls that are always denied (in certain modules) are ones that would not succeed for an unprivileged process anyway, like setuid and mount, along with chdir, which one may disallow as part of security policy.
System calls for which a function must, in general, be called to determine whether the system call should be allowed or denied typically include system calls such as open, rename, stat, and kill whose arguments must be checked against the configurable security policy specified in the parameters given to the module at load time.
Helper applications may be allowed to fork children, which may be recursively traced. Traced processes can only send signals to themselves or to their children, and never to an untraced application. Environment variables are initially sanitized, and resource usage is carefully limited. In an example policy, access to the filesystem is severely limited. A helper application is placed in a particular directory; it cannot chdir out of this directory. It is given full access to files in or below this directory. The untrusted application is allowed read access to certain carefully controlled files referenced by absolute pathnames, such as shared libraries and global configuration files. One may concentrate all access control in the open system call, and always allow read and write calls, because write is only useful when used on a file descriptor obtained from a system call like open.
Helper applications may require access to network resources e.g. may need to open a window on the X11 display to present document contents. One may allow network connections only to the X display, and this access is allowed only through a safe X proxy.
Re X11, X access control is all-or-nothing. A rogue X client has full access to all other clients on the same server, so an otherwise confined helper application could compromise other applications if it were allowed uncontrolled access to X. There are safe X proxies that understand the X protocol and filter out dangerous requests. Untrusted applications may be securely encapsulated within the child Xnest server and cannot escape from this sandbox display area or affect other normal trusted applications.
A basic module typically supplies defaults for the system calls which are easiest to analyze, and takes no configuration parameters. The putenv module allows one to specify environment variable settings for the traced application via its parameters; those which are not explicitly mentioned are unset. The special parameter display causes the helper application to inherit the parent's DISPLAY.
The tcpconnect module allows us to restrict TCP connections by host and/or port; the default is to disallow all connections. The path module, the most complicated one, lets one allow or deny file accesses according to one or more patterns.
Typically, the framework starts by reading the configuration file, the location of which can be specified on the command line. In this configuration file the first word is the name of the module to load, and the rest of the first line acts as a parameter to the module.
For each module specified in the configuration file, dlopen(3x) is used to dynamically load the module into the framework's address space. The module's init() function is called, if present, with the parameters for the module as its argument. The list of system calls and associated values and functions in the module is then merged into the framework's dispatch table. The dispatch table is an array, indexed by system call number, of linked lists. Each value and function in the module is appended to the list in the dispatch table that is indexed by the system call to which it is associated. The result, after the entire configuration file has been read, is that for each system call, the dispatch table provides a linked list that can be traversed to decide whether to allow or deny a system call.
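The dispatch-table construction just described can be sketched as below. This is an illustrative simplification, not the framework's actual code: modules are modelled as dicts mapping system call numbers to handlers, and the system call numbering is invented for the example.

```python
# Sketch (hypothetical module layout): merging each module's per-system-call
# handlers into a dispatch table, i.e. an array indexed by system call
# number whose entries are lists traversed in module order.
NUM_SYSCALLS = 8
SYS_OPEN, SYS_READ, SYS_WRITE = 0, 1, 2   # illustrative numbering

def build_dispatch_table(modules):
    table = [[] for _ in range(NUM_SYSCALLS)]
    for module in modules:                 # configuration-file order
        for syscall_no, handler in module.items():
            # append so that later (more specific) modules come last
            table[syscall_no].append(handler)
    return table

basic = {SYS_READ: lambda *a: "allow", SYS_WRITE: lambda *a: "allow"}
path  = {SYS_OPEN: lambda p: "allow" if p.startswith("/tmp/") else "deny"}
table = build_dispatch_table([basic, path])
# table[SYS_OPEN] now holds the path module's handler; table[SYS_READ]
# holds the basic module's unconditional "allow" handler.
```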
After the dispatch table is set up, the framework gets ready to run the application that is to be traced: a child process is fork()ed, and the child's state is cleaned up. This includes setting a umask of 077, setting limits on virtual memory use, disabling core dumps, switching to a sandbox directory, and closing unnecessary file descriptors. Modules get a chance to further initialize the child's state; for instance, the putenv module sanitizes the environment variables. The parent process waits for the child to complete this cleanup, and begins to debug the child via the /proc interface. It sets the child process to stop whenever it begins or finishes a system call (only a subset of the system calls are marked in this manner, typically). The child waits until it is being traced, and executes the desired application. In a security policy, the application is typically confined to a sandbox directory. By default, typically, this directory is created in /tmp with a random name, but the SANDBOX DIR environment variable can be used to override this choice.
The application runs until it performs a system call. At this point, it is put to sleep, and the tracing process wakes up. The tracing process determines which system call was attempted, along with the arguments to the call. It then traverses the appropriate linked list in the dispatch table, in order to determine whether to allow or to deny this system call.
If the system call is to be allowed, the tracing process wakes up the application, which proceeds to complete the system call. If, however, the system call is to be denied, the tracing process wakes up the application. This causes the system call to abort immediately, returning a value indicating that the system call failed and setting errno to EINTR. In either case, the tracing process goes back to sleep.
Some applications are coded in such a way that, if they receive an EINTR error from a system call, they will retry the system call. Thus, if such an application tries to execute a system call which is denied by the security policy, it will get stuck in a retry loop. If this occurs, one may assume the traced application is not going to make any further progress, and kill the application entirely, giving an explanatory message to the user.
When a system call completes, the tracing process has the ability to examine the return value if it so wishes. If any module had assigned a function to be executed when this system call completes, as described above, it is executed at this time. This facility is not widely used, except in one special case. When a fork() or vfork() system call completes, the tracing process checks the return value and then fork()s itself. The child of the tracing process then detaches from the application, and begins tracing the application's child. This method safely allows the traced application to spawn a child (as ghostview spawns gs, for example) by ensuring that all children of untrusted applications are traced as well.
Typically, one may apply optimization/s to the system call dispatch table before the untrusted helper application executes. When a module's system call handler always returns the same allow/deny value (and leaves no side effects), this special case allows removing redundant values in the dispatch table.
Certain system calls, such as write, are always allowed, so one need not register a callback with the OS for them. This avoids the extra context switches to and from the tracing process each time the traced application makes such a system call, and thus those system calls can execute at full speed as though there were no tracing or filtering. Eliminating the need to trace common system calls such as read and write provides speed.
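The optimization described in the two preceding paragraphs, folding constant verdicts so that always-allowed system calls are left untraced, can be sketched as follows. Representing handlers as constant strings and the result as an "untraced" marker is a deliberate simplification for illustration:

```python
# Sketch (hypothetical representation): before the untrusted helper runs,
# collapse dispatch-table entries whose handlers are all the constant
# verdict "allow" (and have no side effects), so that no tracing callback
# is registered for those system calls and they run at full speed.
def optimize(dispatch_entry):
    """Return "untraced" if every handler is the constant "allow";
    otherwise return the entry unchanged (it still needs tracing)."""
    if dispatch_entry and all(h == "allow" for h in dispatch_entry):
        return "untraced"          # no callback registered; full speed
    return dispatch_entry
```

Applied per system call, `optimize(["allow", "allow"])` yields the untraced marker (e.g. for read and write), while a mixed entry such as `["allow", "deny"]` is left as-is and remains traced.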
Typically, protection of the invocation of any helper application with such an environment is provided, e.g. by specifying the janus program in a mailcap file. A system administrator could set up the in-house security policy by listing janus in the default global mailcap file; then the secure environment would be transparent to all the users on the system. Users could protect themselves by doing the same to their personal .mailcap file.
Sandboxing was introduced by Wahbe et al. in the context of software fault isolation. They achieved safety for trusted modules running in the same address space as untrusted modules. They also use binary-rewriting technology.
Java provides architecture-independence, while Janus only applies to native code and provides no help with portability.
OmniWare [20] takes advantage of software fault isolation techniques and compiler support to safely execute untrusted code.
securelib is a shared library that replaces the C accept, recvfrom, and recvmsg library calls by a version that performs address-based authentication; it is intended to protect security-critical Unix system daemons. Replacement of dangerous C library calls with a safe wrapper may be insufficient in an extended context of untrusted and possibly hostile applications; a hostile application could bypass this access control by issuing the dangerous system call directly without invoking any library calls.
One may extend the filesystem protection mechanism with per-user access control lists. One may protect against Trojan horses and viruses by limiting filesystem access: their OS extension confines user processes to the minimal filesystem privileges needed, relying on hints from the command line and run-time user input. One may add per-process capabilities support to the filesystem discretionary access controls.
Domain and Type Enforcement (DTE) is a way to extend the OS protection mechanisms to let system administrators specify fine-grained mandatory access controls over the interaction between security-relevant subjects and objects. DTE provides mandatory access control and employs kernel modifications. As a further example, in the HTML5 Sandbox, with the sandbox attribute, the framed content is no longer allowed to perform some or all of:
Instantiate plugins
Execute script
Open popup windows
Submit forms
Access storage (HTML5 localStorage, sessionStorage, cookies, etc.)
Send XMLHttpRequests
Access the parent window's DOM
Use HTCs, binary behaviors, or data binding.
There are times where it might be desirable to allow popups inside the sandbox. IE 10 supports allowing popups for valid cases via the ms-allow-popups token.
Typically, the server is also able to sandbox content. The sandbox attribute restricts content only when within an iframe. If the sandboxed content is able to convince the user to browse to it directly, then the untrusted content would no longer be within the sandboxed iframe and none of the security restrictions would apply. The server could send the untrusted content with a text/html-sandboxed MIME type. An example header one can send to sandbox the content but allow form submission and script execution:
X-Content-Security-Policy: sandbox allow-forms allow-scripts.
A sandbox may allow processes to execute only within a very restrictive environment. The only resources sandboxed processes can freely use may for example be CPU cycles and memory. For example, sandboxed processes might not be able to write to the filesystem e.g. to the disk or display their own windows. What exactly sandboxes can and cannot do may be controlled by any suitable explicit policy. For example, in Windows, code cannot perform any form of I/O (e.g. disk, keyboard, or screen) without making a system call. In most system calls, Windows performs some sort of security check and these, according to the sandbox policy, fail for actions that the sandboxed process is not to perform. In Chromium, for example, the sandbox is such that all access checks fail. Sometimes, certain communication channels are explicitly for sandboxed processes which can then write and read from these channels. A more privileged process (e.g. in Chromium, the browser process) may use certain channels to do certain actions on behalf of a sandboxed process. The apparatus of the present invention may be customized, using methods described above, to combat any sort of malware having any combination of characteristics, such as but not limited to any combination of the following characteristics of Flame and/or of Stuxnet:
Flame is malware which can spread to other systems over a local network (LAN) or via USB stick. It can record audio, screenshots, keyboard activity and network traffic. The program also records Skype conversations and can turn infected computers into Bluetooth beacons which attempt to download contact information from nearby Bluetooth-enabled devices. This data, along with locally stored documents, is sent on to one of several command and control servers that are scattered around the world. The program then awaits further instructions from these servers.
The program allows other attack modules to be loaded after initial infection. The malware uses five different encryption methods and an SQLite database to store structured information. The method used to inject code into various processes is stealthy, in that the malware modules do not appear in a listing of the modules loaded into a process and malware memory pages are protected with READ, WRITE and EXECUTE permissions that make them inaccessible by user-mode applications. The malware determines what antivirus software is installed, then customises its own behaviour (for example, by changing the filename extensions it uses) to reduce the probability of detection by that software. Additional indicators of compromise include mutex and registry activity, such as installation of a fake audio driver which the malware uses to maintain persistence on the compromised system.
Flame is not designed to deactivate automatically, but supports a "kill" function that makes it eliminate all traces of its files and operation from a system on receipt of a module from its controllers.
Stuxnet spreads via Microsoft Windows, and targets specific software and equipment, being operative to spy on and subvert industrial systems, and includes a programmable logic controller (PLC) rootkit.
The worm initially spreads indiscriminately, but includes a highly specialized malware payload that is designed to target only Siemens SCADA systems that are configured to control and monitor specific industrial processes. Stuxnet infects PLCs by subverting the Step-7 software application that is used to reprogram these devices. The worm is promiscuous, makes itself inert if the targeted software is not found on infected computers, and contains safeguards to prevent each infected computer from spreading the worm to more than three others, and to erase itself on a pre-specified date.
For its targets, Stuxnet contains, among other things, code for a man-in-the-middle attack that fakes industrial process control sensor signals so an infected system does not shut down due to detected abnormal behavior. The worm provides a layered attack against the Windows operating system, industrial software applications that run on that operating system, and one or more specific PLCs.
Stuxnet uses four zero-day attacks (plus the CPLINK vulnerability and a vulnerability used by the Conficker worm). It initially spreads using infected removable drives such as USB flash drives, and then uses peer-to-peer RPC inter alia to infect and update other computers inside private networks not directly connected to the Internet. The Windows component of the malware is promiscuous in that it spreads relatively quickly and indiscriminately.
The malware has both user-mode and kernel-mode rootkit capability under Windows, and its device drivers may be digitally signed. The driver signing helps it install kernel-mode rootkit drivers successfully without users being notified, and therefore to remain undetected for a relatively long period of time.
Remote websites have been configured as command and control servers for the malware, allowing it to be updated, and for industrial espionage to be conducted by uploading information.
Once installed on a Windows system, Stuxnet infects project files and subverts a key communication library thereby to intercept communications between software running under Windows and the target PLC devices that the software is able to configure and program when the two are connected via a data cable. In this way, the malware is able to install itself on PLC devices unnoticed, and subsequently to mask its presence from WinCC if the control software attempts to read an infected block of memory from the PLC system. The malware furthermore uses a zero-day exploit in the WinCC/SCADA database software in the form of a hard-coded database password.
Stuxnet's payload targets only those SCADA configurations that meet criteria that it is programmed to identify. Stuxnet requires specific slave variable-frequency drives (frequency converter drives) to be attached to the targeted system and its associated modules, and attacks only those PLC systems with variable-frequency drives from specific vendors. It monitors the frequency of attached motors, and only attacks systems that spin at a predetermined range of frequencies. Stuxnet installs malware into memory block DB890 of the PLC that monitors the Profibus messaging bus of the system. When certain criteria are met, it periodically modifies the frequency, thus affecting operation of the connected motors by changing their rotational speed. It also installs a rootkit that hides the malware on the system and masks the changes in rotational speed from monitoring systems.
Features of the present invention which are described in the context of separate embodiments may also be provided in combination in a single embodiment. Conversely, features of the invention, including method steps, which are described for brevity in the context of a single embodiment or in a certain order may be provided separately or in any suitable subcombination or in a different order.
The above descriptions of possible characteristics and functionalities of malware such as Flame and Stuxnet, and of sandboxes, are culled from or based closely on the relevant technical literature, e.g. as cited in Wikipedia. To the extent possible, the original wording of the literature has been precisely maintained, for clarity.
It is appreciated that terminology such as "mandatory", "required", "need" and "must" refer to implementation choices made within the context of a particular implementation or application described herewithin for clarity and are not intended to be limiting, since, in an alternative implementation, the same elements might be defined as not mandatory and not required, or might even be eliminated altogether.
It is appreciated that software components of the present invention including programs and data may, if desired, be implemented in ROM (read only memory) form including CD-ROMs, EPROMs and EEPROMs, or may be stored in any other suitable typically non-transitory computer-readable medium such as but not limited to disks of various kinds, cards of various kinds and RAMs. Components described herein as software may, alternatively, be implemented wholly or partly in hardware and/or firmware, if desired, using conventional techniques, and vice-versa. Each module or component may be centralized in a single location or distributed over several locations.
Included in the scope of the present invention, inter alia, are electromagnetic signals carrying computer-readable instructions for performing any or all of the steps or operations of any of the methods shown and described herein, in any suitable order including simultaneous performance of suitable groups of steps as appropriate; machine-readable instructions for performing any or all of the steps of any of the methods shown and described herein, in any suitable order; program storage devices readable by machine, tangibly embodying a program of instructions executable by the machine to perform any or all of the steps of any of the methods shown and described herein, in any suitable order; a computer program product comprising a computer useable medium having computer readable program code, such as executable code, embodied therein, and/or including computer readable program code for performing, any or all of the steps of any of the methods shown and described herein, in any suitable order; any technical effects brought about by any or all of the steps of any of the methods shown and described herein, when performed in any suitable order; any suitable apparatus or device or combination of such, programmed to perform, alone or in combination, any or all of the steps of any of the methods shown and described herein, in any suitable order; electronic devices each including a processor and a cooperating input device and/or output device and operative to perform in software any steps shown and described herein; information storage devices or physical records, such as disks or hard drives, causing a computer or other device to be configured so as to carry out any or all of the steps of any of the methods shown and described herein, in any suitable order; a program pre-stored e.g. in memory, or on an information network such as the Internet, before or after being downloaded, which embodies any or all of the steps of any of the methods shown and described herein, in any suitable order, and the method of uploading or downloading such, and a system including server/s and/or client/s for using such; a processor configured to perform any combination of the described steps or to execute any combination of the described modules; and hardware which performs any or all of the steps of any of the methods shown and described herein, in any suitable order, either alone or in conjunction with software. Any computer-readable or machine-readable media described herein is intended to include non-transitory computer- or machine-readable media.
Any computations or other forms of analysis described herein may be performed by a suitable computerized method. Any step described herein may be computer-implemented. The invention shown and described herein may include (a) using a computerized method to identify a solution to any of the problems or for any of the objectives described herein, the solution optionally includes at least one of a decision, an action, a product, a service or any other information described herein that impacts, in a positive manner, a problem or objectives described herein; and (b) outputting the solution.
The system may, if desired, be implemented as a web-based system employing software, computers, routers and telecommunications equipment as appropriate.
Any suitable deployment may be employed to provide functionalities e.g. software functionalities shown and described herein. For example, a server may store certain applications, for download to clients, which are executed at the client side, the server side serving only as a storehouse. Some or all functionalities e.g. software functionalities shown and described herein may be deployed in a cloud environment. Clients e.g. mobile communication devices, such as smartphones, may be operatively associated with, but external to, the cloud.
The scope of the present invention is not limited to structures and functions specifically described herein and is also intended to include devices which have the capacity to yield a structure, or perform a function, described herein, such that even though users of the device may not use the capacity, they are, if they so desire, able to modify the device to obtain the structure or function.
Features of the present invention which are described in the context of separate embodiments may also be provided in combination in a single embodiment.
For example, a system embodiment is intended to include a corresponding process embodiment. Also, each system embodiment is intended to include a server-centered "view" or client-centered "view", or "view" from any other node of the system, of the entire functionality of the system, computer-readable medium, or apparatus, including only those functionalities performed at that server or client or node.
Conversely, features of the invention, including method steps, which are described for brevity in the context of a single embodiment, or in a certain order, may be provided separately or in any suitable subcombination or in a different order. "e.g." is used herein in the sense of a specific example which is not intended to be limiting. Devices, apparatus or systems shown coupled in any of the drawings may in fact be integrated into a single platform in certain embodiments, or may be coupled via any appropriate wired or wireless coupling such as but not limited to optical fiber, Ethernet, Wireless LAN, HomePNA, power line communication, cell phone, PDA, Blackberry GPRS, satellite including GPS, or other mobile delivery. It is appreciated that in the description and drawings shown and described herein, functionalities described or illustrated as systems and sub-units thereof can also be provided as methods and steps therewithin, and functionalities described or illustrated as methods and steps therewithin can also be provided as systems and sub-units thereof. The scale used to illustrate various elements in the drawings is merely exemplary and/or appropriate for clarity of presentation and is not intended to be limiting.
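By way of non-limiting illustration only, the accelerated-clock monitoring loop described herein (accelerate a virtual "time of day" clock so that a long malware latency period elapses during the sample's short residence in the sandbox; assess the security state; on a change of state, compare a risk level against a threshold and, if exceeded, perform a suitable action) may be sketched as follows. All identifiers and parameter values in this sketch (AcceleratedClock, RISK_THRESHOLD, the acceleration factor, the verdict strings, and so on) are hypothetical and are not drawn from the specification or claims.

```python
# Illustrative, non-limiting sketch of an accelerated virtual clock driving a
# security-state monitoring loop. All names and values here are assumptions
# made for the sake of the example.
import time

RISK_THRESHOLD = 3  # assumed risk-score threshold


class AcceleratedClock:
    """Virtual time-of-day clock: one real second equals `factor` virtual seconds."""

    def __init__(self, factor):
        self.factor = factor
        self.start = time.time()

    def now(self):
        # Virtual time advances `factor` times faster than wall-clock time.
        return self.start + (time.time() - self.start) * self.factor


def run_sandbox(sample, clock, assess_security_state, risk_of,
                residence_virtual_seconds):
    """Hold `sample` in the sandbox for a pre-designated (virtual) residence
    period; return a verdict once a change in the security state exceeds the
    risk threshold, else "clean" when the residence period elapses."""
    baseline = assess_security_state(sample, clock.now())
    deadline = clock.now() + residence_virtual_seconds
    while clock.now() < deadline:
        state = assess_security_state(sample, clock.now())
        if state == baseline:
            continue  # no change in security state: keep assessing
        if risk_of(state) > RISK_THRESHOLD:
            return "quarantine"  # threshold exceeded: perform a suitable action
        baseline = state  # change noted but below threshold: keep assessing
    return "clean"  # any latency period outlasted the virtual residence period
```

With an acceleration factor on the order of 10^6 or more, a latency of several months elapses in seconds to minutes of real time, so a delayed activation occurs while the sample still resides in the sandbox.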

Claims

1. Computer-security apparatus comprising:
a sandbox; and
an accelerated clock system for identifying malware by artificially hastening activation of malware so said activation occurs during the malware's period of residence in the sandbox even if the malware's latency period is longer than the malware's period of residence in the sandbox.
2. Computer-security apparatus according to claim 1 wherein the apparatus detects, within minutes, malware operative to launch a zero-day cyber attack following a months-long latency period.
3. Computer-security apparatus according to claim 1 wherein the accelerated "time of day" clock system is provided by controllable system time acceleration functionality.
4. Computer-security apparatus according to claim 1 and installed within a firewall.
5. Computer-security apparatus according to claim 1 and residing in a standalone device located at an entry-point to secure facilities.
6. A computer-security method comprising:
providing a sandbox, in which malware resides for a pre-designated period, the sandbox having a clock; and
accelerating said clock for artificially hastening activation of at least one malevolent effect by malware whose latency period is longer than said pre-designated period.
7. Computer-security apparatus according to claim 3 wherein acceleration is achieved by provision of a hardware Breakpoint.
8. Computer-security apparatus according to claim 3 wherein acceleration is achieved by provision of a separated Clock.
9. Computer-security apparatus according to claim 3 wherein acceleration is achieved by provision of a separate Processor running a separate system time.
10. A method according to claim 6 and also comprising:
assessing a security state;
determining whether there has been a change in the security state on the basis of accelerated system time and if not, returning to said assessing; and
if there has been a change in the security state on the basis of accelerated system time, determining risk level and, if a threshold has not yet been exceeded, returning to said assessing; whereas if the threshold has been exceeded, a suitable action is performed.
11. A method according to claim 6 wherein acceleration is effected by an external clock.
12. A method according to claim 6 wherein acceleration is effected by an external interrupt.
13. A method according to claim 6 wherein acceleration is effected by an external CPU operative to compute the time of day without changing the system clock and without affecting elements such as processors, RAMs, or DRAMs.
14. A method according to claim 12 wherein instead of a RAM determining that a predetermined number of accumulated ticks triggers counter-zeroing, an interrupt determines that a different number of ticks is required to trigger counter-zeroing.
15. Computer-security apparatus according to claim 7 wherein providing the hardware breakpoint comprises programming a watchpoint unit to monitor core busses for an instruction fetch from a specific memory location.
16. Computer-security apparatus according to claim 7 wherein a hardware breakpoint is established in RAM.
17. Computer-security apparatus according to claim 7 wherein a hardware breakpoint is established in ROM.
18. Computer-security apparatus according to claim 7 wherein system time is changed by addressing a breakpoint in a counting program so as to provide faster count parameters using division by a parameter.
19. A computer program product, comprising a non-transitory tangible computer readable medium having computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a computer-security method operative in conjunction with a sandbox, in which malware resides for a pre-designated period, the sandbox having a clock; the method comprising accelerating said clock for artificially hastening activation of at least one malevolent effect by malware whose latency period is longer than said pre-designated period.
PCT/IL2014/050298 2013-03-20 2014-03-18 Accelerating a clock system to identify malware WO2014147618A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361803641P 2013-03-20 2013-03-20
US61/803,641 2013-03-20

Publications (1)

Publication Number Publication Date
WO2014147618A1 true WO2014147618A1 (en) 2014-09-25

Family

ID=51579391

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2014/050298 WO2014147618A1 (en) 2013-03-20 2014-03-18 Accelerating a clock system to identify malware

Country Status (1)

Country Link
WO (1) WO2014147618A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678164A (en) * 2014-11-20 2016-06-15 华为技术有限公司 Method and device for detecting malicious software
CN111368295A (en) * 2018-12-26 2020-07-03 中兴通讯股份有限公司 Malicious sample detection method, device and system and storage medium
US10706149B1 (en) 2015-09-30 2020-07-07 Fireeye, Inc. Detecting delayed activation malware using a primary controller and plural time controllers
US10817606B1 (en) * 2015-09-30 2020-10-27 Fireeye, Inc. Detecting delayed activation malware using a run-time monitoring agent and time-dilation logic

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020035721A1 (en) * 2000-03-02 2002-03-21 Swoboda Gary L. Clock modes for a debug port with on the fly clock switching
US6691230B1 (en) * 1998-10-15 2004-02-10 International Business Machines Corporation Method and system for extending Java applets sand box with public client storage
US20040193957A1 (en) * 1989-07-31 2004-09-30 Swoboda Gary L. Emulation devices, systems and methods utilizing state machines
US20070005323A1 (en) * 2005-06-30 2007-01-04 Patzer Aaron T System and method of automating the addition of programmable breakpoint hardware to design models
US20120060220A1 (en) * 2009-05-15 2012-03-08 Invicta Networks, Inc. Systems and methods for computer security employing virtual computer systems
US20120260342A1 (en) * 2011-04-05 2012-10-11 Government Of The United States, As Represented By The Secretary Of The Air Force Malware Target Recognition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040193957A1 (en) * 1989-07-31 2004-09-30 Swoboda Gary L. Emulation devices, systems and methods utilizing state machines
US6691230B1 (en) * 1998-10-15 2004-02-10 International Business Machines Corporation Method and system for extending Java applets sand box with public client storage
US20020035721A1 (en) * 2000-03-02 2002-03-21 Swoboda Gary L. Clock modes for a debug port with on the fly clock switching
US20070005323A1 (en) * 2005-06-30 2007-01-04 Patzer Aaron T System and method of automating the addition of programmable breakpoint hardware to design models
US20120060220A1 (en) * 2009-05-15 2012-03-08 Invicta Networks, Inc. Systems and methods for computer security employing virtual computer systems
US20120260342A1 (en) * 2011-04-05 2012-10-11 Government Of The United States, As Represented By The Secretary Of The Air Force Malware Target Recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MAVROIDIS, I. ET AL.: "Accelerating Hardware Simulation: Testbench Code Emulation", International Conference on ICECE Technology, December 2008 (2008-12-01), pages 129-136, XP032392750, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4762375> [retrieved on 20140616], DOI: 10.1109/FPT.2008.4762375 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678164A (en) * 2014-11-20 2016-06-15 华为技术有限公司 Method and device for detecting malicious software
EP3196795A4 (en) * 2014-11-20 2017-07-26 Huawei Technologies Co., Ltd. Malware detection method and apparatus
JP2017531257A (en) * 2014-11-20 2017-10-19 華為技術有限公司Huawei Technologies Co.,Ltd. Malware detection method and malware detection device
CN105678164B (en) * 2014-11-20 2018-08-14 华为技术有限公司 Detect the method and device of Malware
US10565371B2 (en) 2014-11-20 2020-02-18 Huawei Technologies Co., Ltd. Malware detection method and malware detection apparatus
US10963558B2 (en) 2014-11-20 2021-03-30 Huawei Technologies Co., Ltd. Malware detection method and malware detection apparatus
US10706149B1 (en) 2015-09-30 2020-07-07 Fireeye, Inc. Detecting delayed activation malware using a primary controller and plural time controllers
US10817606B1 (en) * 2015-09-30 2020-10-27 Fireeye, Inc. Detecting delayed activation malware using a run-time monitoring agent and time-dilation logic
CN111368295A (en) * 2018-12-26 2020-07-03 中兴通讯股份有限公司 Malicious sample detection method, device and system and storage medium

Similar Documents

Publication Publication Date Title
US10642753B1 (en) System and method for protecting a software component running in virtual machine using a virtualization layer
Georgiev et al. Breaking and fixing origin-based access control in hybrid web/mobile application frameworks
US11714884B1 (en) Systems and methods for establishing and managing computer network access privileges
KR101442654B1 (en) Systems and methods for behavioral sandboxing
US10726127B1 (en) System and method for protecting a software component running in a virtual machine through virtual interrupts by the virtualization layer
EP3259697B1 (en) Mining sandboxes
Pék et al. On the feasibility of software attacks on commodity virtual machine monitors via direct device assignment
US11586726B2 (en) Secure web framework
Phung et al. Hybridguard: A principal-based permission and fine-grained policy enforcement framework for web-based mobile applications
WO2014147618A1 (en) Accelerating a clock system to identify malware
Van Ginkel et al. A server-side JavaScript security architecture for secure integration of third-party libraries
Bousquet et al. Mandatory access control for the android dalvik virtual machine
US10372905B1 (en) Preventing unauthorized software execution
Peng et al. μSwitch: Fast Kernel Context Isolation with Implicit Context Switches
Zhu et al. AppArmor Profile Generator as a Cloud Service.
Blanc et al. Mandatory access protection within cloud systems
Gilbert et al. Dymo: Tracking dynamic code identity
Xing et al. A Hybrid System Call Profiling Approach for Container Protection
Singh et al. Discovering persuaded risk of permission in android applications for malicious application detection
Flatley Rootkit Detection Using a Cross-View Clean Boot Method
Cheruvu et al. IoT Software Security Building Blocks
Narvekar et al. Security sandbox model for modern web environment
Schlüter et al. Heckler: Breaking Confidential VMs with Malicious Interrupts
Karlsson et al. Evaluation of Security Mechanisms for an Integrated Automotive System Architecture
US20230244787A1 (en) System and method for detecting exploit including shellcode

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14770809

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14770809

Country of ref document: EP

Kind code of ref document: A1